• Breaking News

    Saturday, January 4, 2020

    Abbott Labs kills free tool that lets you own the blood-sugar data from your glucose monitor, saying it violates copyright law

    Posted: 03 Jan 2020 11:57 AM PST

    According to Steam hardware survey, 6-core CPUs overtook dual-cores in December 2019

    Posted: 03 Jan 2020 05:05 AM PST

    Physical CPUs   AUG      SEP      OCT      NOV      DEC
    2               23.87%   23.88%   23.66%   21.14%   16.87%
    4               53.39%   52.10%   51.86%   52.47%   55.13%
    6               16.67%   17.55%   17.98%   19.44%   21.70%
    8                3.60%    4.00%    4.13%    4.62%    4.42%

    Interestingly, Intel increased its overall lead over AMD (83.92% vs 16.06% in DEC, compared with 80.54% vs 19.45% in NOV).

    submitted by /u/kikimaru024
    [link] [comments]

    Nvidia Ampere to have 50% more performance and halve power consumption

    Posted: 03 Jan 2020 08:06 PM PST

    AMD Ryzen 4800U 8C/16T found on UserBenchmark by TUM-APISAK

    Posted: 03 Jan 2020 03:53 AM PST

    Samsung Unveils New Odyssey Gaming Monitor Line-up at CES 2020

    Posted: 03 Jan 2020 01:15 PM PST

    Nervous system manipulation by electromagnetic fields from monitors

    Posted: 03 Jan 2020 10:44 PM PST

    (ServeTheHome) AMD Ryzen Threadripper 3960X Review: 24 Cores of Impressive

    Posted: 03 Jan 2020 10:24 AM PST

    Speculation: The purported 80CU Navi 21 GPU should have around 66% performance uplift over the 5700xt

    Posted: 03 Jan 2020 06:15 AM PST

    First of all, what inspired me to do the calculation in this post was this Buildzoid video https://m.youtube.com/watch?v=eNKybalWKVg

    Let's assume we have an 80 CU Navi 21 die with equivalent IPC to Navi 10 on the TSMC 7nm EUV process coupled with a 384 bit 16Gbps GDDR6 memory system and 300W TBP as Buildzoid explores in the video.

    Going by Buildzoid's 37W approximation for the non-core/non-VRM power consumption, we have 263W left for the core + VRM inefficiency. Assuming 0.93 VRM efficiency as he says, we get 245W left for the core power. This is of course less than the 360W we'd be looking at if we had 2x 40CU Navi 10 dies. Going by the 15% power reduction figure for the 7nm+ process as opposed to 7nm, we require a power reduction factor of 245/(360/1.15)=0.78 to fit the power constraints.
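
    For anyone who wants to check the arithmetic, here it is as a quick Python sketch (all numbers are the approximations above, nothing measured):

        # Rough sketch of the power budget arithmetic above, using Buildzoid's
        # approximate figures as quoted (not measured values).
        tbp = 300.0             # W, assumed total board power for Navi 21
        non_core = 37.0         # W, non-core / non-VRM consumption
        vrm_efficiency = 0.93   # assumed VRM efficiency
        doubled_navi10 = 360.0  # W, core power of two 40 CU Navi 10 dies
        process_gain = 1.15     # 15% power reduction claimed for 7nm+ vs 7nm

        core_budget = (tbp - non_core) * vrm_efficiency         # ~245 W left for the core
        reduction_factor = core_budget / (doubled_navi10 / process_gain)
        print(round(core_budget), round(reduction_factor, 2))   # 245 0.78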

    How do we achieve this? Naturally, undervolt and underclock. But how much of a clock hit does a 0.78 factor mean? To answer that question, we need some data fitting and some high school physics; we need a way to compute the frequency given the power and vice versa. The quantity linking these two is the voltage. Now, I don't have a 5700xt, so I will be relying on a random 5700xt Wattman voltage-frequency curve I found on the internet: https://www.pcgamesn.com/wp-content/uploads/2019/07/amd-rx-5700-xt-wattman-900x506.jpg

    The image from the article seems to imply that the card would have done 800MHz at 673mV and (based on some coarse approximation I made going by the grid on the graph) 2150MHz at 1140mV. I do not know if this is typical of other 5700xt users' experience, but I can recalculate later. If we fit an exponential function of the form voltage = A * exp(B * freq) to these data points, we get voltage = 492.466 * exp(3.9 * 10^-4 * freq).
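
    Here is that two-point fit as a quick Python sketch (the 800 MHz / 673 mV and 2150 MHz / 1140 mV readings are just my coarse eyeballing of that screenshot):

        import math

        # Fit voltage = A * exp(B * freq) through the two (freq in MHz, voltage in mV)
        # points read coarsely off the Wattman screenshot linked above.
        f1, v1 = 800.0, 673.0
        f2, v2 = 2150.0, 1140.0

        B = math.log(v2 / v1) / (f2 - f1)   # ~3.9e-4
        A = v1 / math.exp(B * f1)           # ~492
        print(A, B)  # ~492.47 and ~3.904e-4, i.e. the 492.466 * exp(3.9e-4 * freq) fit above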

    Now, we also have to remember that power = voltage^2 / resistance for any electrical component. Assuming identical resistances for Navi 10 and Navi 21 (again, no clue if this is reasonable),

    P_target / P_doubled = 245 / (360/1.15) = 0.78 = V_target^2 / V_Navi10^2 => V_target = V_Navi10 * sqrt(0.78)

    Plugging Buildzoid's Navi 10 boost clock figure of 1887MHz into the voltage-frequency curve fit above, V_Navi10 = 1028mV which implies V_target = 908mV. Now, we need to use the voltage-frequency fit backwards* to compute the clock from the voltage. Some basic algebra will show that f_target = 1569MHz.

    Assuming linear scaling with CUs and clock speed, (80/40) * (1569/1887) gives us 1.66, which means a 66% performance uplift compared to the Navi 10 die, which the 71% increase in memory performance that Buildzoid speculates about should be able to handle. What do you guys think?

    *I know that for Navi 21, a different die on a different process would have a different voltage-clock curve, but I chose to gloss over that by simply saying that the 15% power reduction and the P = V^2/R relationship would mean a fairly straightforward downward shift of the voltages at identical clocks, so as not to complicate the calculation further.
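
    For completeness, here is the whole chain from the 0.78 power factor to the 1.66x figure as one Python sketch (same assumptions as above; the tiny differences from the 908 mV / 1569 MHz numbers are just rounding):

        import math

        A, B = 492.466, 3.9e-4    # voltage[mV] = A * exp(B * freq[MHz]), the fit from above

        def voltage(freq):
            return A * math.exp(B * freq)

        def frequency(volt):
            return math.log(volt / A) / B       # the fit inverted

        power_factor = 245 / (360 / 1.15)       # ~0.78, from the power budget above
        v_navi10 = voltage(1887)                # ~1028 mV at the Navi 10 boost clock
        v_target = v_navi10 * math.sqrt(power_factor)   # P ~ V^2 at fixed resistance
        f_target = frequency(v_target)          # ~1573 MHz (rounded to 1569 in the text above)
        uplift = (80 / 40) * (f_target / 1887)  # CU scaling x clock scaling

        print(round(v_target), round(f_target), round(uplift, 2))  # 909 1573 1.67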

    submitted by /u/cherryteastain
    [link] [comments]

    Apple could be making a Gaming PC

    Posted: 03 Jan 2020 10:52 PM PST

    An Interconnected Interview with Intel’s Ramune Nagisetty: A Future with Foveros

    Posted: 03 Jan 2020 08:51 AM PST

    How next gen storage works

    Posted: 03 Jan 2020 02:16 AM PST

    Microsoft's Head of Gaming said:

    Thanks to their speed, developers can now use the SSD practically as virtual RAM. The SSD access times come close to the memory access times of the current console generation. Of course, the OS must allow developers access that goes beyond that of a pure storage medium. Then we will see how the address space will increase immensely - comparable to the change from Win16 to Win32 or in some cases Win64.

    Of course, the SSD will still be slower than the GDDR6 RAM that sits directly on top of the die. But the ability to directly supply data to the CPU and GPU via the SSD will enable game worlds to be created that will not only be richer, but also more seamless. Not only in terms of pure loading times, but also in terrain mapping. A graphic designer no longer has to worry about when GDDR6 ends and when the SSD starts. I like that Mark Cerny and his team at Sony are also investing in an SSD for the PlayStation 5, so the engines and tools can implement corresponding functions. Together we will ensure a larger installed base - and developers will do everything possible to master and support the programming of these hardware capabilities. I don't have a PS5 development kit, I don't even think our Minecraft team does. But it will be exciting to see how the industry will benefit from the comprehensive use of such solutions.

    https://www.google.com/amp/s/wccftech.com/spencer-on-xsx-ssd-can-be-used-as-virtual-ram-i-like-that-sony-is-investing-in-ssd-for-ps5-too/amp/
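
    For a sense of what "SSD as part of the address space" can look like in practice today, here is a minimal memory-mapping sketch (the asset file name is made up); the OS only pages data in from the SSD when it is actually touched:

        import mmap

        # Map a (hypothetical) multi-gigabyte asset file into the process address space.
        # Nothing is read up front; the OS faults pages in from the SSD on first access.
        with open("world_assets.bin", "rb") as f:
            assets = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

            header = assets[:16]                          # touches only the first page
            distant = assets[1 << 30 : (1 << 30) + 4096]  # faults in a 4 KiB slice ~1 GiB into the file

            assets.close()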

    submitted by /u/XVll-L
    [link] [comments]

    XFX's RX 5600 XT THICC II Pro Pictured - Confirms RX 5600 XT Specs

    Posted: 03 Jan 2020 04:49 AM PST

    Introducing BusKill: A Kill Cord for your Laptop

    Posted: 03 Jan 2020 05:55 AM PST

    How architecturally is my DGPU driving my display over USB3?

    Posted: 03 Jan 2020 01:38 PM PST

    I have an eluktronics mag-15 laptop and a ThinkPad USB 3.0 Pro Dock.

    It is connected to 2 identical Dell UltraSharp U2410f monitors:
    Display 1: Connected directly to the laptop using HDMI (which is right next to the GTX 1660 dGPU)
    Display 2: Connected to the dock via DP; the dock is connected to the USB 3.0 port (not even the Thunderbolt one)

    In DirectX I can see Display 1 appropriately connected to the GTX1660 and Display 2 appropriately connected to the DisplayLink USB Device.

    However, what is surprising to me is that there seems to be no difference in performance between the 2 displays. For instance, I can load up an application like Cemu running Breath of the Wild on Display 2 and it is a rock-solid 40 FPS. This is a demanding application for both the CPU and GPU (my Ryzen + Radeon RX 570 desktop was incapable of running it), and lord knows the integrated Intel GPU can't handle it.

    So I can only surmise that somehow the GTX 1660 is doing all the work and then sending the frame buffer over USB, using alternate mode or something? But how would that work? You invoke your application, the OS gives the user-space app control to run on the CPU, it fetches the shaders from DRAM, and those are sent to the GPU to be compiled into primitive instructions and executed, but the app also needs to interface with the driver to do all that, so what is going on?

    According to DirectX, it is using dlidusb3.dll and wudfrd.sys for the USB DisplayPort link, and 4x instances of nvldumdx.dll for the 1660. So is the OS somehow tricking the app into using the Nvidia driver, then overriding all function of the DP driver and just handing it the frames directly? What would the conceptual block diagram for this look like? Wouldn't the two drivers have to communicate, or does the OS override and handle everything?

    I'm not a GPU expert, but I didn't think there was traditionally any OS involvement once the GPU compiles and executes its instructions and gets a frame buffer to push to the display. Physically, that happens on the GPU hardware, which is connected directly to the pinout for DP, DVI, HDMI, etc. So is it an OS feature or a GPU driver feature that says "hey, let's take those frames and send them over USB to DP for alternate mode"? It doesn't seem like something that would "just work", so either the OS is working some magic or the drivers are working some magic. I suppose it's in Nvidia's best interest to enable such a feature, but DAMN, I did not expect it to be so smooth.

    I just expected that I'd only be able to use one display for gaming - the one directly connected to the HDMI port on the laptop that is physically right next to the GPU. It's working better than I thought, and now I'm just confused about why it works so well! I can literally have the Cemu window split between the two monitors and it runs flawlessly?
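
    My best guess at the conceptual block diagram, written out as a purely illustrative sketch (these function names are made up, not real driver APIs): the 1660 renders into an offscreen buffer, the OS/driver copies the finished frame to system RAM, the DisplayLink driver compresses it and pushes it over plain USB 3.0 bulk transfers (not alt mode), and the dock's chip decodes it and drives the DP output.

        # Purely conceptual model of the pipeline; none of these functions
        # correspond to real driver APIs.

        def render_frame_on_dgpu(scene):
            """GTX 1660 renders the frame into an offscreen surface in its own VRAM."""
            return {"pixels": f"rendered frame of {scene}", "location": "GTX 1660 VRAM"}

        def copy_to_system_memory(frame):
            """The OS/driver copies the finished frame across PCIe into system RAM."""
            frame["location"] = "system RAM"
            return frame

        def displaylink_compress_and_send(frame):
            """The DisplayLink driver compresses the frame and ships it as ordinary
            USB 3.0 bulk data (no DisplayPort alt mode involved)."""
            return {"payload": frame["pixels"], "transport": "USB 3.0 bulk"}

        def dock_decode_and_scan_out(packet):
            """The dock's DisplayLink chip decodes the stream and drives the DP output."""
            return f"scanning out '{packet['payload']}' over DisplayPort"

        frame = render_frame_on_dgpu("Cemu / Breath of the Wild")
        frame = copy_to_system_memory(frame)
        packet = displaylink_compress_and_send(frame)
        print(dock_decode_and_scan_out(packet))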

    submitted by /u/corenfro
    [link] [comments]

    How can the vodka inside the 'vodka cooled PC' built by youtuber 'Life of Boris' be more effective than the actual cooling liquid provided by the company?

    Posted: 03 Jan 2020 11:28 AM PST

    So let me get this straight:

    There's a company selling pricey computer parts, with the ability to tailor-brew any coolant liquid they please (and since the substance is toxic, I assume they did), and despite this, a crazy YouTuber making a meme video manages to cool his system more effectively using vodka instead of that tailor-brewed liquid.

    the video in question: https://www.youtube.com/watch?v=IYTJfLyo_vE

    Tl;dw: he modifies the cooling system and replaces the liquid provided by the company with vodka instead. This allows him to achieve CPU temperatures roughly 5-10 °C lower than with the company's liquid, while eliminating some lag spikes in GTA 5 that only occurred with said substance. He used the same home-made mechanical structure to test CPU performance with both liquids, so that can't be the reason.

    I know this might be a better question to ask a physicist (chemist?) but it's also computer related so I thought I'd start my investigation here.

    submitted by /u/TehKingofPrussia
    [link] [comments]

    Dell’s latest XPS 13 has a new design with a bigger display and Ice Lake chips

    Posted: 03 Jan 2020 05:59 AM PST
