• Breaking News

    Sunday, September 6, 2020

    Hardware support: Why does the RTX IO in the Nvidia GeForce RTX 30 Series use any CPU at all even though the information flows from the storage to the GPU?

    Why does the RTX IO in the Nvidia GeForce RTX 30 Series use any CPU at all even though the information flows from the storage to the GPU?

    Posted: 05 Sep 2020 05:13 PM PDT

    I saw in the 2020-09-01 Official Launch Event for the NVIDIA GeForce RTX 30 Series that RTX IO allows the GPU to directly access the storage (SSD) without having to go through the CPU, as shown in the flowcharts below.

    In the Nvidia GeForce RTX 20 Series and before, the information from the storage goes through the CPU before reaching the GPU (flowchart from the Official Launch Event linked above): https://i.stack.imgur.com/HAN4W.png

    In the Nvidia GeForce RTX 30 Series, the information goes straight from the storage to the GPU: https://i.stack.imgur.com/MP6QI.png

    However, I see on the same slide "20x lower CPU utilization": https://i.stack.imgur.com/rv7e2.png

    Why does the RTX IO in the Nvidia GeForce RTX 30 Series use any CPU at all when information flows from the storage to the GPU, since according to the flowchart the information goes straight from the storage to the GPU?

    submitted by /u/Franck_Dernoncourt
    [link] [comments]

    AMD/Intel Mobile CPUs Energy Consumption Benchmark

    Posted: 05 Sep 2020 02:34 PM PDT

    I made a few charts showing how much energy the CPU uses for three very demanding tasks. I used data from the latest Hardware Unboxed video: https://www.youtube.com/watch?v=hFYdHkvRs2c&t=0s

    I can't understand why so few reviewers make this kind of comparison. It shows very well how efficient CPUs are nowadays. It's simple math (Watts × Minutes × 60 = Joules), and it isn't very accurate (total system consumption includes RAM, SSD, and display power, which can differ from laptop to laptop), but it indicates very well what a huge step AMD made with the Ryzen 4000 Mobile series.
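
    For anyone who wants to reproduce the numbers, here is a minimal sketch of that calculation in Python; the wattage, runtime, and battery size below are placeholder values, not figures from the video:

        # Energy used by a task: average power (W) * runtime (min) * 60 = energy (J)
        def energy_joules(avg_watts, minutes):
            return avg_watts * minutes * 60

        # The same energy in watt-hours, convenient for comparing against battery capacity
        def energy_watt_hours(avg_watts, minutes):
            return avg_watts * minutes / 60

        # Hypothetical example: 25 W average draw over a 40-minute render
        joules = energy_joules(25, 40)        # 60,000 J
        wh = energy_watt_hours(25, 40)        # ~16.7 Wh
        battery_wh = 60                       # a common notebook battery size
        print(f"{joules:.0f} J = {wh:.1f} Wh, about {100 * wh / battery_wh:.0f}% of a {battery_wh} Wh battery")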

    Handbrake and Blender 2.80

    Chromium Compile

    I also added a relative comparison against the common and maximum amounts of battery energy found inside notebooks.

    Graph 1

    Chromium Graph

    I hope you find this as interesting as I do. Thanks for reading!

    submitted by /u/Marocco2
    [link] [comments]

    AMD Ryzen 7 4800U Review, Mind Boggling Performance at 15W

    Posted: 05 Sep 2020 04:14 AM PDT

    DirectStorage is coming to PC | DirectX Developer Blog

    Posted: 06 Sep 2020 01:07 AM PDT

    RAID performance penalty on NVMe or Optane?

    Posted: 05 Sep 2020 10:21 PM PDT

    It used to be that RAID with parity had a performance penalty when writing to the storage drives.

    Then there were two types of RAID: software and hardware RAID.

    Hardware RAID is faster.

    Then we got NVMe and Optane storage, which have much faster write speeds and IOPS than hard drives.

    Has there been any public benchmarking/testing done to see what the performance impact of RAID 5 or RAID 6 is on NVMe or Optane drives?
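
    For context while waiting on real benchmarks, the classic rule of thumb is that the parity write penalty comes from the number of I/Os each small random write triggers, regardless of drive technology. A rough sketch, with a purely hypothetical per-drive IOPS figure rather than measured NVMe/Optane data:

        # RAID 5 turns one small random write into ~4 I/Os (read data, read parity,
        # write data, write parity); RAID 6 into ~6. Controller/CPU overhead is ignored.
        def effective_write_iops(drives, iops_per_drive, write_penalty):
            return drives * iops_per_drive / write_penalty

        per_drive_iops = 500_000  # hypothetical random-write IOPS per drive
        for level, penalty in (("RAID 5", 4), ("RAID 6", 6)):
            est = effective_write_iops(4, per_drive_iops, penalty)
            print(f"{level} (4 drives): ~{est:,.0f} random write IOPS")

    Whether software RAID on NVMe/Optane actually reaches numbers like that is exactly what a benchmark would need to show, since at these speeds the parity calculation and the CPU can become the bottleneck instead of the drives.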

    submitted by /u/lintstah1337
    [link] [comments]

    Is it fair to calculate GPU performance by the total number of pixels being rendered? Details in post.

    Posted: 05 Sep 2020 02:25 PM PDT

    I hope this is the right sub to post this in. What I'm curious about is whether you can estimate the rough percentage difference in how a GPU performs from the increase in pixel count.

    1440p (2560x1440) is 3,686,400 pixels; 4K (3840x2160) is 8,294,400 pixels.

    So let's say that 1440p at Ultra is 200 FPS in a given application. To keep our numbers round, let's say 4K is 75 FPS (my rough math). Now, let's increase the resolution to 5120x1440 (7,372,800 pixels). Could we do some rough math on what that performance would be, knowing the relative performance at both 1440p and 4K? The way I'd calculate it is by the difference in pixel count.

    What I'm not sure about is whether GPU performance scales linearly enough with pixel count for this to be a viable way to estimate performance. The ultimate goal is to see how big the difference between a 3090 and a 3080 is at pushing a game to a 3x display setup.
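
    Under the naive assumption that performance scales inversely with pixel count, the estimate could be sketched like this (using the hypothetical 200 FPS / 75 FPS numbers from above, not benchmark results):

        # Naive estimate: FPS scales inversely with the number of pixels rendered
        def estimate_fps(target_pixels, ref_pixels, ref_fps):
            return ref_fps * ref_pixels / target_pixels

        p_1440 = 2560 * 1440     # 3,686,400 pixels
        p_4k = 3840 * 2160       # 8,294,400 pixels
        p_wide = 5120 * 1440     # 7,372,800 pixels

        print(estimate_fps(p_wide, p_4k, 75))      # ~84 FPS, scaling from the 4K point
        print(estimate_fps(p_wide, p_1440, 200))   # ~100 FPS, scaling from the 1440p point

    The fact that the two reference points give noticeably different answers is itself a sign that real scaling is not purely proportional to pixel count (CPU limits, memory bandwidth, and cache behavior all shift with resolution), so this only gives a rough bracket.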

    Appreciate any insight into this.

    submitted by /u/iAtty
    [link] [comments]

    The gradual TDP increase in the past few Nvidia GPU series is quite disappointing in my opinion.

    Posted: 05 Sep 2020 05:12 AM PDT

    UPDATE:

    Thank you for all of your comments and insights. It does seem that the underwhelming performance/$ gain delivered by Turing over Pascal has left some craving more performance/$ even if the TDP number has to be sacrificed in the process. Pascal was indeed an exceptional generation.

    To be frank, I believe that throwing more power into the silicon to uplift performance is likely to be the "easy way out". I had hoped that perhaps Nvidia could continue innovating in the future to increase perf/watt even further.

    I am concerned about perf/watt just as much as perf/$ given that I build SFF PCs: both volume and heat matter. High power consumption is usually not an issue given that high-wattage SFX PSUs are available, but heat dissipation would be a significant problem in small PC cases. Of course, the SFF build community is relatively niche in the grand scheme of things.

    Original post below:

    In the context of Nvidia GPUs, it seems that TDP has risen steadily per tier. I'd argue that boosting performance while throwing the power consumption figure into the bin is a sloppy approach, especially in the mid-tier (x60 and x70) class of cards.

    Remember that the GTX 1060 has a TDP of 120W while performing close to the 165W GTX 980? On the other hand, the RTX 2060's TDP is 160W to match the 180W GTX 1080's performance. I imagine the RTX 3060's TDP would need to be around 200W to match the RTX 2070 Super (215W) (note: please point this out if it could be wrong).

    Notice how the TDP reduction between cards of similar performance across two generations has become smaller over the past 5 years or so.
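
    As a quick check of that trend using the TDP pairings above (with the 200W RTX 3060 figure being my guess rather than an official spec):

        # Iso-performance TDP reduction across generations, from the numbers quoted above
        pairs = [
            ("GTX 1060 vs GTX 980", 120, 165),
            ("RTX 2060 vs GTX 1080", 160, 180),
            ("RTX 3060 (guess) vs RTX 2070 Super", 200, 215),
        ]
        for label, new_tdp, old_tdp in pairs:
            reduction = 100 * (1 - new_tdp / old_tdp)
            print(f"{label}: ~{reduction:.0f}% lower TDP at similar performance")
        # Roughly 27%, 11%, and 7% -- a shrinking gap each generation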

    Is it the RTX features that cause such an increase in power consumption? Is it because we are hitting the limits of semiconductor manufacturing? Or does nobody care because everyone is just getting a 1000W PSU nowadays?

    I kind of miss the ol' times when graphics cards were actually "cards", not a PCB with 2-inch-thick metal fins slapped onto it. I had wished for a sub-150W RTX equivalent of the GTX 1080 Ti this time around, but it probably won't happen in the near future.

    submitted by /u/_GHQ
    [link] [comments]

    What preparations or changes are you making to your setup for RTX 3000?

    Posted: 06 Sep 2020 02:14 AM PDT

    I'm planning on getting the RTX 3080, so I've ordered a new Dell S2721DGF 27" Gaming Monitor for its 1440p 165Hz panel, an upgrade from my 1080p 144Hz display.

    I've also installed a 5-pack of Arctic P12 PWM PST fans in my Meshify C PC case in preparation for a hot, power-hungry card.

    The last thing I've been doing is overclocking my 3800X CPU's fabric clock and adjusting Precision Boost Overdrive settings, with the aim of improving single-core performance in games.

    What changes are you guys making to your systems for the new 3000 cards? Are any of you getting a whole new rig?

    submitted by /u/HydraHead9
    [link] [comments]
