Hardware support: Abbott Labs kills free tool that lets you own the blood-sugar data from your glucose monitor, saying it violates copyright law
- Abbott Labs kills free tool that lets you own the blood-sugar data from your glucose monitor, saying it violates copyright law
- According to Steam hardware survey, 6-core CPUs overtook dual-cores in December 2019
- Nvidia Ampere to have 50% more performance and halve power consumption
- AMD Ryzen 4800U 8C/16T found on UserBenchmark by TUM-APISAK
- Samsung Unveils New Odyssey Gaming Monitor Line-up at CES 2020
- Nervous system manipulation by electromagnetic fields from monitors
- (ServeTheHome) AMD Ryzen Threadripper 3960X Review 24 Cores of Impressive
- Speculation: The purported 80CU Navi 21 GPU should have around 66% performance uplift over the 5700xt
- Apple could be making a Gaming PC
- An Interconnected Interview with Intel’s Ramune Nagisetty: A Future with Foveros
- How next gen storage works
- XFX's RX 5600 XT THICC II Pro Pictured - Confirms RX 5600 XT Specs
- Introducing BusKill: A Kill Cord for your Laptop
- How architecturally is my DGPU driving my display over USB3?
- How can the vodka inside the 'vodka cooled PC' built by youtuber 'Life of Boris' be more effective than the actual cooling liquid provided by the company?
- Dell’s latest XPS 13 has a new design with a bigger display and Ice Lake chips
Abbott Labs kills free tool that lets you own the blood-sugar data from your glucose monitor, saying it violates copyright law Posted: 03 Jan 2020 11:57 AM PST
According to Steam hardware survey, 6-core CPUs overtook dual-cores in December 2019 Posted: 03 Jan 2020 05:05 AM PST
Interestingly, Intel increased their overall lead (83.92% vs. 16.06% in December, compared with 80.54% vs. 19.45% in November). [link] [comments]
Nvidia Ampere to have 50% more performance and halve power consumption Posted: 03 Jan 2020 08:06 PM PST
AMD Ryzen 4800U 8C/16T found on UserBenchmark by TUM-APISAK Posted: 03 Jan 2020 03:53 AM PST
Samsung Unveils New Odyssey Gaming Monitor Line-up at CES 2020 Posted: 03 Jan 2020 01:15 PM PST
Nervous system manipulation by electromagnetic fields from monitors Posted: 03 Jan 2020 10:44 PM PST
(ServeTheHome) AMD Ryzen Threadripper 3960X Review 24 Cores of Impressive Posted: 03 Jan 2020 10:24 AM PST
Speculation: The purported 80CU Navi 21 GPU should have around 66% performance uplift over the 5700xt Posted: 03 Jan 2020 06:15 AM PST

First of all, what inspired me to do the calculation in this post was this Buildzoid video: https://m.youtube.com/watch?v=eNKybalWKVg

Let's assume we have an 80 CU Navi 21 die with IPC equivalent to Navi 10, built on the TSMC 7nm EUV process, coupled with a 384-bit 16 Gbps GDDR6 memory system and a 300 W TBP, as Buildzoid explores in the video. Going by Buildzoid's 37 W approximation for the non-core/non-VRM power consumption, we have 263 W left for the core plus VRM losses. Assuming 93% VRM efficiency, as he does, we get about 245 W left for the core. This is of course less than the 360 W we'd be looking at if we had 2x 40 CU Navi 10 dies. Going by the 15% power reduction figure for the 7nm+ process as opposed to 7nm, we require a power reduction factor of 245 / (360 / 1.15) ≈ 0.78 to fit the power constraints.

How do we achieve this? Naturally, undervolt and underclock. But how much of a clock hit does a 0.78 factor imply? To answer that question, we need some data fitting and some high-school physics: a way to compute the frequency given the power and vice versa. The quantity linking these two is the voltage. Now, I don't have a 5700xt, so I will be relying on a random 5700xt Wattman voltage-frequency curve I found on the internet: https://www.pcgamesn.com/wp-content/uploads/2019/07/amd-rx-5700-xt-wattman-900x506.jpg

The image from the article seems to imply that the card would have done 800 MHz at 673 mV and (based on some coarse approximation I made going by the grid on the graph) 2150 MHz at 1140 mV. I do not know whether this is typical of other 5700xt users' experience, but I can recalculate later. If we fit an exponential function of the form voltage = A × exp(B × freq) to these data points, we get voltage = 492.466 × exp(3.9 × 10⁻⁴ × freq).

Now, we also have to remember that power = voltage² / resistance for any electrical component. Assuming identical resistances for Navi 10 and Navi 21 (again, no clue if this is reasonable):

P_target / P_doubled = 245 / (360 / 1.15) ≈ 0.78 = V_target² / V_Navi10², so V_target = V_Navi10 × sqrt(0.78).

Plugging Buildzoid's Navi 10 boost clock figure of 1887 MHz into the voltage-frequency fit above gives V_Navi10 ≈ 1028 mV, which implies V_target ≈ 908 mV. Now we use the voltage-frequency fit backwards* to compute the clock from the voltage; some basic algebra shows f_target ≈ 1569 MHz.

Assuming linear scaling with CUs and clock speed, 80/40 × 1569/1887 gives us 1.66, which means a 66% performance uplift compared to the Navi 10 die, which the 71% increase in memory performance that Buildzoid speculates on should be able to handle. What do you guys think?

*I know that Navi 21, being a different die on a different process, would have a different voltage-clock curve, but I chose to gloss over that by simply saying that the 15% power reduction and the P = V² / R relationship would mean a fairly straightforward downward shift of the voltages at identical clocks, so as not to complicate the calculation further.

[link] [comments]
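For anyone who wants to poke at the numbers, here is a minimal Python sketch of the same back-of-envelope calculation. It assumes only the two Wattman data points read off the linked screenshot and Buildzoid's 300 W TBP / 37 W non-core / 93% VRM-efficiency / 360 W "two Navi 10 dies" figures; nothing in it comes from AMD.

```python
import math

# Two points read off the linked 5700 XT Wattman screenshot (approximate): (MHz, mV)
f1, v1 = 800.0, 673.0
f2, v2 = 2150.0, 1140.0

# Fit voltage = A * exp(B * freq) through the two points
B = math.log(v2 / v1) / (f2 - f1)            # ~3.9e-4 per MHz
A = v1 / math.exp(B * f1)                    # ~492 mV

# Power budget: 300 W TBP, ~37 W non-core, ~93% VRM efficiency (Buildzoid's figures)
core_budget = (300 - 37) * 0.93              # ~245 W available for the core

# Two Navi 10 dies would need ~360 W of core power; the 7nm+ node is assumed
# to shave 15% off that at identical clocks and voltages
doubled_core = 360 / 1.15                    # ~313 W

# P = V^2 / R with R held constant, so voltage scales with sqrt of the power ratio
power_ratio = core_budget / doubled_core     # ~0.78
v_navi10 = A * math.exp(B * 1887)            # voltage at the 1887 MHz boost clock, ~1028 mV
v_target = v_navi10 * math.sqrt(power_ratio) # ~908 mV

# Invert the voltage-frequency fit to find the clock that voltage supports
f_target = math.log(v_target / A) / B        # ~1570 MHz

# Linear scaling with CU count and clock speed
uplift = (80 / 40) * (f_target / 1887)
print(f"target clock ~ {f_target:.0f} MHz, uplift ~ {uplift:.2f}x over Navi 10")
```

Using the unrounded fit the script lands a couple of MHz above the post's 1569 MHz figure, but it ends up in the same ~66% ballpark.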
Apple could be making a Gaming PC Posted: 03 Jan 2020 10:52 PM PST
An Interconnected Interview with Intel’s Ramune Nagisetty: A Future with Foveros Posted: 03 Jan 2020 08:51 AM PST
How next gen storage works Posted: 03 Jan 2020 02:16 AM PST

Microsoft's Head of Gaming said:

[link] [comments]
XFX's RX 5600 XT THICC II Pro Pictured - Confirms RX 5600 XT Specs Posted: 03 Jan 2020 04:49 AM PST
Introducing BusKill: A Kill Cord for your Laptop Posted: 03 Jan 2020 05:55 AM PST
How architecturally is my DGPU driving my display over USB3? Posted: 03 Jan 2020 01:38 PM PST

I have an Eluktronics MAG-15 laptop and a ThinkPad USB 3.0 Pro Dock connected to 2 identical Dell UltraSharp U2410f monitors. In DirectX I can see Display 1 appropriately connected to the GTX 1660 and Display 2 appropriately connected to the DisplayLink USB device. What surprises me is that there seems to be no difference in performance between the 2 displays. For instance, I can load up an application like Cemu running Breath of the Wild on Display 2 and it holds a rock-solid 40 FPS. That is a demanding application on both CPU and GPU (my Ryzen + Radeon RX 570 desktop was incapable of running it), and lord knows the integrated Intel GPU can't handle it.

So I can only surmise that somehow the GTX 1660 is doing all the work and then the frame buffer is sent over USB, using alternate mode or something? But how would that work? You invoke your application, the OS hands control to the user-space app running on the CPU, which fetches the shaders from DRAM; those are sent to the GPU to be compiled into primitive instructions and executed, but the app also needs to interface with the driver to do so, so what is going on? According to DirectX it is using dlidusb3.dll and wudfrd.sys for the USB DisplayPort link, and 4 instances of nvldumdx.dll for the 1660. Is the OS somehow tricking the app into using the NVIDIA driver, then overriding all function of the DisplayPort driver and just handing it the frames directly? What would the conceptual block diagram for this look like? Wouldn't the 2 drivers have to communicate, or does the OS override and handle everything?

I'm not a GPU expert, but I didn't think there was traditionally any OS involvement once the GPU compiles and executes its instructions and gets a frame buffer to push to the display. Physically that happens on the GPU hardware, which is wired directly to the pinout for DP, DVI, HDMI, etc. Is it an OS feature or a GPU driver feature that says "hey, let's take those frames and send them over USB to DisplayPort alternate mode"? It doesn't seem like something that would "just work", so either the OS is working some magic or the drivers are.

I suppose it's in NVIDIA's best interest to enable such a feature, but damn, I did not expect it to be so smooth. I just expected I'd only be able to use 1 display for gaming — the one directly connected to the laptop's HDMI port, which is physically wired to the GPU. Now it's working better than I thought, and I'm just confused about why it works so well. I can literally have the Cemu window split between the 2 monitors and it runs flawlessly.

[link] [comments]
How can the vodka inside the 'vodka cooled PC' built by youtuber 'Life of Boris' be more effective than the actual cooling liquid provided by the company? Posted: 03 Jan 2020 11:28 AM PST

So let me get this straight: there's a company selling pricey computer parts, with the ability to tailor-brew any coolant liquid they please (and since the substance is toxic, I assume they did), and despite this, a crazy youtuber making a meme video manages to cool his system more effectively using vodka instead of that tailor-brewed liquid.

The video in question: https://www.youtube.com/watch?v=IYTJfLyo_vE

TL;DW: he modifies the cooling system and replaces the liquid provided by the company with vodka, which lets him reach lower CPU temperatures than the stock setup.

I know this might be a better question to ask a physicist (chemist?), but it's also computer related, so I thought I'd start my investigation here.

[link] [comments]
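Not an answer, just a rough back-of-envelope sketch in Python to frame the question. It uses approximate textbook values for water and ethanol, assumes 40% ABV vodka with ideal (no volume contraction) mixing, and stands in plain water for the unknown stock coolant. On these numbers vodka stores less heat per millilitre than water, which would suggest any advantage seen in the video comes from something other than the liquid's heat capacity (flow rate, pump behaviour, ambient temperature, or run-to-run variance).

```python
# Approximate textbook values at ~20 C (assumed, not measured from the video)
CP_WATER, RHO_WATER = 4.18, 0.998      # J/(g*K), g/mL
CP_ETHANOL, RHO_ETHANOL = 2.44, 0.789  # J/(g*K), g/mL

def volumetric_heat_capacity_vodka(abv=0.40):
    """Rough J/(mL*K) for a water/ethanol mix, ignoring volume contraction."""
    m_eth = abv * RHO_ETHANOL            # grams of ethanol per mL of mix
    m_h2o = (1 - abv) * RHO_WATER        # grams of water per mL of mix
    mass = m_eth + m_h2o
    cp_mix = (m_eth * CP_ETHANOL + m_h2o * CP_WATER) / mass  # mass-weighted cp
    return mass * cp_mix                 # heat stored per mL per kelvin

vodka = volumetric_heat_capacity_vodka()
water = RHO_WATER * CP_WATER
print(f"vodka ~ {vodka:.2f} J/(mL*K) vs water ~ {water:.2f} J/(mL*K) "
      f"({vodka / water:.0%} of water)")
```

This prints roughly 3.3 J/(mL*K) for vodka versus about 4.2 for water, i.e. around three quarters of water's volumetric heat capacity.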
Dell’s latest XPS 13 has a new design with a bigger display and Ice Lake chips Posted: 03 Jan 2020 05:59 AM PST