• Breaking News

    Tuesday, August 25, 2020

    Hardware support: TSMC Dishes on 5nm, 4nm, and 3nm Process Nodes, Introduces 3DFabric Technology

    TSMC Dishes on 5nm, 4nm, and 3nm Process Nodes, Introduces 3DFabric Technology

    Posted: 24 Aug 2020 12:17 PM PDT

    F1 2020 Adds DLSS For Increased Performance

    Posted: 24 Aug 2020 10:09 PM PDT

    1usmus ClockTuner for Ryzen - Linus Tech Tips

    Posted: 24 Aug 2020 10:26 AM PDT

    [HUB] 5700 XT PCIe 4.0 vs 3.0 + RTX 2080 Ti PCIe Scaling - Very interesting results

    Posted: 24 Aug 2020 05:19 AM PDT

    AMD EPYC from Dell/EMC: Would you like a thousand threads with that?

    Posted: 24 Aug 2020 01:01 PM PDT

    [AnandTech] TSMC Details 3nm Process Technology: Full Node Scaling for 2H22 Volume Production

    Posted: 24 Aug 2020 01:00 PM PDT

    The PCIe scaling discussion, will next-gen cards need 4.0?

    Posted: 24 Aug 2020 09:55 AM PDT

    Hey Guys!

    So the Ampere launch is just around the corner. As an Intel i9 9900K owner who wants to upgrade to Ampere, I didn't plan on upgrading my CPU/mobo, mainly because I game at 4K, and at that resolution I think the 9900K is more than enough not to bottleneck even the 3090. The only thing that worries me is whether the bandwidth of PCIe 3.0 x16 will be enough for these new cards.

    Hardware Unboxed released an interesting video today:

    5700 XT PCIe 4.0 vs 3.0 + RTX 2080 Ti PCIe scaling

    One of the interesting parts of the video is at 10:01, but to make it easier I took a screenshot so you don't have to find it.

    It compares how different GeForce GPUs scale with PCIe 3.0 x16, x8 and x4 (roughly the bandwidth of 3.0 x16, 2.0 x16 and 1.1 x16, respectively), from the GTX 1660 Super up to the 2080 Ti. I compared the numbers to see what the difference looks like in percentages (a quick bandwidth sanity check follows the table):

    GPU               Metric        x16     x8       x4
    RTX 2080 Ti       average FPS   100%    92.8%    75.7%
    RTX 2080 Ti       minimum FPS   100%    92.2%    64.4%
    RTX 2080          average FPS   100%    94.2%    79.0%
    RTX 2080          minimum FPS   100%    95.2%    70.8%
    RTX 2070          average FPS   100%    93.85%   82.0%
    RTX 2070          minimum FPS   100%    94.2%    76.0%
    RTX 2060          average FPS   100%    97.5%    88.8%
    RTX 2060          minimum FPS   100%    95.8%    84.9%
    GTX 1660 Super    average FPS   100%    97.6%    86.6%
    GTX 1660 Super    minimum FPS   100%    99.0%    79.8%
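
    For context on why 3.0 x8 and x4 are roughly the bandwidth of 2.0 x16 and 1.1 x16: here is a minimal back-of-the-envelope check (my own sketch, not from the video), using the published per-lane rates and line encodings from the PCIe specs:

    ```python
    # Approximate one-direction PCIe bandwidth per link configuration.
    # Per-lane raw rate and line encoding, per the PCIe specs:
    #   1.1: 2.5 GT/s with 8b/10b    -> 0.25  GB/s per lane
    #   2.0: 5.0 GT/s with 8b/10b    -> 0.50  GB/s per lane
    #   3.0: 8.0 GT/s with 128b/130b -> ~0.985 GB/s per lane
    GBPS_PER_LANE = {
        "1.1": 2.5 * (8 / 10) / 8,
        "2.0": 5.0 * (8 / 10) / 8,
        "3.0": 8.0 * (128 / 130) / 8,
    }

    def bandwidth_gbps(gen: str, lanes: int) -> float:
        """Theoretical one-direction bandwidth of a PCIe link in GB/s."""
        return GBPS_PER_LANE[gen] * lanes

    for gen, lanes in [("3.0", 16), ("3.0", 8), ("3.0", 4), ("2.0", 16), ("1.1", 16)]:
        print(f"PCIe {gen} x{lanes}: {bandwidth_gbps(gen, lanes):.2f} GB/s")
    # 3.0 x16 ~ 15.75; 3.0 x8 ~ 7.88 (close to 2.0 x16's 8.00);
    # 3.0 x4 ~ 3.94 (close to 1.1 x16's 4.00)
    ```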

    Sadly, this is the only game they tested this way; there are other games that were not affected at all when they used PCIe x8 instead of x16 with the 2080 Ti. But what's interesting about these results is that the way a lot of us thought about PCIe bandwidth is that it's a wall: once a GPU is fast enough to hit the limit, it stops performance scaling dead in its tracks. So if the 2080 Ti is fast enough to overwhelm PCIe 3.0 x8 but only loses a few percentage points of performance, then the 2080 shouldn't lose any performance at all. And if the 2080 is fast enough to overwhelm PCIe 3.0 x8, then the 2080 Ti shouldn't be any faster than the 2080 when both use PCIe 3.0 x8.

    But this is not the case. All GPUs lose some performance using x8 mode and more using x4 in this title, and while slower GPUs lose less performance, the loss is not in line with the performance difference between the cards. For instance, the strongest GPU in the list, the 2080 Ti, loses 7.2% of its average framerate, while the weakest, the 1660 Super, loses 2.4%. The loss is bigger with the 2080 Ti, sure, but the 2080 Ti is around twice as fast as the 1660 Super, so I think the gap should be much larger.

    In fact, the 2080 loses 5.8% of its average framerate at x8, while the weaker 2070 loses 6.15%. The same can be seen when comparing the weaker 1660 Super, losing 2.4%, to the stronger 2060, losing 2.5%. This is all margin-of-error stuff, but it can be concluded that the performance loss from reduced bandwidth does not scale directly with the performance of the card, and even if a weaker GPU is already bottlenecked by PCIe bandwidth, a faster GPU will still deliver an increase in performance. The same can't be said about a CPU bottleneck: if the CPU holds back the GPU, putting a more powerful GPU in your system will not significantly increase performance.
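
    To make the "bandwidth wall" idea concrete, here is a minimal sketch contrasting a hard cap with the soft, per-card penalty the measurements actually show. The native FPS figures are hypothetical placeholders, not numbers from the video; only the x8 retention fractions come from the table above:

    ```python
    # Two models of a PCIe bottleneck, applied to hypothetical native FPS values.
    NATIVE_FPS = {"RTX 2080 Ti": 120.0, "RTX 2080": 95.0}      # hypothetical
    X8_RETENTION = {"RTX 2080 Ti": 0.928, "RTX 2080": 0.942}   # from the table above

    def hard_wall(native_fps: float, cap_fps: float) -> float:
        """Hard-wall model: the link caps FPS at a fixed ceiling."""
        return min(native_fps, cap_fps)

    # If x8 imposed a hard cap, the 2080 Ti's x8 result would BE that cap.
    cap = NATIVE_FPS["RTX 2080 Ti"] * X8_RETENTION["RTX 2080 Ti"]  # 111.4 FPS

    for gpu, native in NATIVE_FPS.items():
        wall = hard_wall(native, cap)
        soft = native * X8_RETENTION[gpu]  # per-card penalty, as measured
        print(f"{gpu}: wall model {wall:.1f} FPS, soft penalty {soft:.1f} FPS")
    # The wall model predicts the 2080 (95.0 < 111.4) loses nothing at x8,
    # yet the measured scaling has it losing ~5.8% as well.
    ```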

    Of course, how much the bottleneck matters depends heavily on the title used for testing.

    In light of this, what do you guys think: how badly will PCIe 3.0 x16 hold back the next generation of high-end GPUs?

    PS: Sorry for the long post.

    submitted by /u/K1llrzzZ

    ClockTuner Unlocks Higher Performance, Lower Power Consumption on AMD Zen 2 CPUs

    Posted: 24 Aug 2020 03:52 PM PDT

    (igor'sLAB) Is AMD’s new and still secret “Lucienne” APU coming exclusively as an 8-core, and only for Google? OPN and first data leaked!

    Posted: 24 Aug 2020 06:10 AM PDT

    Rumor: Pixel 5 is slower than the Pixel 4, has same camera as the Pixel 2

    Posted: 24 Aug 2020 03:44 PM PDT

    An Arm Opportunity with Cloud Service Providers

    Posted: 24 Aug 2020 11:11 AM PDT

    Corsair Gaming is a billion-dollar company, and everything else we spotted in the IPO filing

    Posted: 24 Aug 2020 11:26 PM PDT

    Does RAM speed matter for gaming on AMD Ryzen? Testing memory up to 4000MHz

    Posted: 24 Aug 2020 03:39 AM PDT

    What's in these boxes... oh...

    Posted: 25 Aug 2020 01:10 AM PDT

    Hi guys!

    Some time ago I found two boxes in the depths of my cave: two brand new Cisco AIR AP1832 WLAN access points!

    It seems that nobody wants to buy these, so I was thinking about doing something with them.

    Anyone got any funny ideas on how to torture them?

    submitted by /u/AranoXAustria

    NEC's VersaPro UltraLite Laptop Weighs 1.8lbs, Promises 15-Hour Battery Life

    Posted: 24 Aug 2020 07:18 AM PDT
