• Breaking News

    Saturday, September 12, 2020

    Hardware support: Nvidia changes their mind: RTX 3080, 3090 FE Cards Will Be Sold In Australia

    Nvidia changes their mind: RTX 3080, 3090 FE Cards Will Be Sold In Australia

    Posted: 11 Sep 2020 08:24 PM PDT

    Heads up: 3080 FE review embargo moved to Sept 16th at 6 a.m. Pacific Time

    Posted: 11 Sep 2020 02:00 PM PDT

    RTX 3070 Releases Oct. 15.

    Posted: 11 Sep 2020 04:39 PM PDT

    (GN) Get Good: AMD Radeon Marketing Friendly Fire & RDNA2 Competing vs. RTX 3000

    Posted: 11 Sep 2020 04:48 PM PDT

    _rogame on Twitter - Possible RDNA 2 Benchmark Spotted in AOTS

    Posted: 11 Sep 2020 08:04 PM PDT

    Digital Foundry's new FS2020 scaling video, for those who were thinking about buying a 3090.

    Posted: 11 Sep 2020 11:17 PM PDT

    [VideoCardz] NVIDIA GeForce RTX 3060 Ti gets 4864 CUDA cores and 8GB GDDR6 memory

    Posted: 11 Sep 2020 04:12 AM PDT

    LG Launches 163-inch 4K Micro LED TV, Revealing Brutal Truth about MicroLED

    Posted: 11 Sep 2020 09:29 AM PDT

    Are there any DisplayPort 2.0 monitors on the horizon?

    Posted: 12 Sep 2020 02:11 AM PDT

    ROG and a new company called Eve are the only two I know of that are supposedly releasing a 4K monitor with HDMI 2.1 input before the year is up...

    The Eve is rated for 144Hz, and I think the ROG will launch with the same specs.

    The problem is HDMI 2.1 is only rated for 4K @ 120Hz... you need DisplayPort 2.0 to achieve true 4K @ 144Hz without any compression.

    I just don't understand why they would even make the monitor go above what the inputs can actually provide... what's the point of bottlenecking your own product right out of the gate?

    If anyone knows of a monitor releasing with DisplayPort 2.0 in the next 3 months or so, please drop that info, thanks!
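
    As a rough sanity check on the bandwidth claim above, the back-of-envelope math is easy to run yourself. The sketch below (plain Python; the ~10% blanking overhead and the effective link rates are stated assumptions, not spec-exact figures) estimates the uncompressed data rate of a 4K mode against HDMI 2.1 and DisplayPort 2.0:

        # Back-of-envelope: uncompressed data rate for a video mode.
        # Link rates and the ~10% blanking overhead are approximations.
        def data_rate_gbps(h, v, hz, bits_per_pixel, blanking=1.10):
            return h * v * hz * bits_per_pixel * blanking / 1e9

        HDMI_2_1 = 48 * 16 / 18    # 48 Gbps raw, 16b/18b FRL coding ~= 42.7 Gbps
        DP_2_0   = 80 * 128 / 132  # 80 Gbps raw (UHBR20), 128b/132b ~= 77.6 Gbps

        for hz in (120, 144):
            need = data_rate_gbps(3840, 2160, hz, 30)  # 10-bit RGB = 30 bpp
            print(f"4K @ {hz}Hz 10-bit: ~{need:.1f} Gbps "
                  f"(HDMI 2.1 ~{HDMI_2_1:.1f}, DP 2.0 ~{DP_2_0:.1f})")

    Under those assumptions, even 4K @ 144Hz at 10-bit lands just under HDMI 2.1's effective rate, so whether DisplayPort 2.0 is strictly required depends on bit depth and blanking; treat the exact cutoffs as something to verify against the specs.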

    submitted by /u/DPisfun0nufispd

    RedGamingTech: RDNA 2 Is Monstrous | Insane Cache System & Performance Info - EXCLUSIVE

    Posted: 11 Sep 2020 12:40 PM PDT

    Why Xbox Series S's Dumb 8+2GB Memory Configuration Isn't As Dumb As You Think

    Posted: 11 Sep 2020 01:37 PM PDT

    I wrote a piece on "Why Xbox Series X's Dumb 10+6GB Memory Configuration Isn't As Dumb As You Think" and noticed that Microsoft has done something similar for the Xbox Series S, so here we go again!

    The Memory Config Is Even Stranger

    A couple things were expected:

    • <16GB Total Capacity

      • No need to match the Series X's 16GB VRAM.
    • 14Gbps GDDR6

      • The most widely manufactured (i.e. cheap) high bandwidth memory on the market today, used in the Series X.
    • 8Gb (1GB) and 16Gb (2GB) GDDR6

      • Used in the Series X, so MS already has supply contracts.
    • MS would probably use separate "fast" and "slow" memory segments to avoid wasting performance on memory used by the OS.

      • It made sense on the Series X, and it still makes sense here.

    A couple things were not expected:

    • There's only a 128-bit bus based on counting four 32-bit GDDR6 chips on the front side of the PCB render in promo images.

    • MS almost certainly clamshelled the memory to get to 10GB on a 128-bit bus.

      • Given MS is already juggling an unprecedented number of memory chips for its consoles (this is a deceptively big deal for supply chains of their scale) and 32Gb GDDR6 doesn't exist, this is the only option.
    • 6/8 chips use 8Gb (1GB) GDDR6 and the remaining 2/8 chips use 16Gb (2GB) GDDR6.

      • This requires the memory to be split into two separate segments: the "first" GB of each of the 8 chips form one 8GB 128-bit 224GB/s segment while the second GB in the 2 16Gb chips form a separate 2GB 32-bit 56GB/s segment.
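
    To make the segment math concrete, here's a minimal sketch (plain Python; the chip list simply mirrors the configuration described above) that derives both segments' capacity and bandwidth from the chip layout:

        # Series S memory as described above: 8 chips clamshelled on a
        # 128-bit bus, each chip driven in x16 mode at 14 Gbps per pin.
        GBPS_PER_PIN  = 14
        PINS_PER_CHIP = 16                   # x16 (clamshell) mode
        chips_gb = [1, 1, 1, 1, 1, 1, 2, 2]  # six 8Gb + two 16Gb chips

        # "Fast" segment: the first GB of every chip, striped across all 128 bits.
        fast_gb = len(chips_gb)
        fast_bw = len(chips_gb) * PINS_PER_CHIP * GBPS_PER_PIN / 8  # GB/s

        # "Slow" segment: the second GB, present only on the two 16Gb chips.
        slow_gb = sum(c - 1 for c in chips_gb)
        slow_bw = 2 * PINS_PER_CHIP * GBPS_PER_PIN / 8

        print(fast_gb, fast_bw)  # 8 GB at 224.0 GB/s
        print(slow_gb, slow_bw)  # 2 GB at 56.0 GB/s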

    Wtf Is Clamshelling?

    Clamshelling is an old technique built into the GDDR spec that allows designers to double memory capacity without extra memory controllers.

    The most recent example is the Nvidia RTX 3090. The 3090 used clamshelling to "double" its VRAM capacity from 12GB to 24GB despite only using a 384-bit bus with 8Gb memory chips.

    • Without clamshelling, the 3090 would have 12GB of VRAM.

      • That's 12 8Gb 32-bit memory chips (384-bit bus).
      • At 19.5Gbps on a 384-bit bus, the non-clamshelled 3090 gets 936GB/s of bandwidth.
    • With clamshelling, the 3090 can have 24GB of VRAM.

      • That's 24 8Gb memory chips in 16-bit mode married up to the same exact six 64-bit memory controllers (384-bit bus).
      • At 19.5Gbps on the same 384-bit bus, the 3090 still gets 936GB/s of bandwidth. Clamshelling does not affect bandwidth.
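
    The same arithmetic applied to the 3090 shows why clamshelling doubles capacity but leaves bandwidth untouched (a sketch of the reasoning above, not an official configuration dump):

        GBPS_PER_PIN = 19.5
        BUS_BITS     = 384

        def config(n_chips, gb_per_chip, pins_per_chip):
            assert n_chips * pins_per_chip == BUS_BITS  # every pin must be driven
            capacity  = n_chips * gb_per_chip           # GB
            bandwidth = BUS_BITS * GBPS_PER_PIN / 8     # GB/s, set by the bus alone
            return capacity, bandwidth

        print(config(12, 1, 32))  # non-clamshelled: (12, 936.0)
        print(config(24, 1, 16))  # clamshelled:     (24, 936.0)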

    Videocardz reported on a couple nice PCB pictures where you can see the spots for the 12 memory chips on the front PCB side and the spots for 12 more memory chips on the back PCB side.

    3090? Clamshelling Sounds Expensive...

    Clamshelling is generally a premium option that you see on workstation cards like the AMD FirePro S9100 that reached 32GB of VRAM on a single GPU all the way back in 2014 (!) despite using the same bog-standard 512-bit Grenada GPU that showed up in the Radeon R9 390 and 390X of the same era.

    General advantages of clamshelling include:

    • Doubled Memory Capacity

    General disadvantages of clamshelling include:

    • Extra Cost

      • The extra memory chips cost something.
      • The extra power delivery for the memory costs something.
      • The more complicated traces for those memory chips will likely require more PCB layers.
      • The extra heat requires a better thermal solution (or better binned GPUs that can achieve the same performance at lower voltages).
    • Extra Power/Heat

      • The extra memory and their power delivery require more power for the GPU.

    Needless to say, clamshelling is expected on >$1000 products like the RTX 3090 or FirePro S9100, but it is unexpected on a commodity like a $300 gaming console.

    So Are You Sure?

    I'm not aware of an official source or teardown to confirm that the Xbox Series S uses clamshelled memory. However, there's no other feasible way to achieve what the Series S has achieved without clamshelling.

    It might be possible that only one of the memory controllers has to run in clamshell mode (or can partially run in clamshell mode) so that MS doesn't have to duplicate all of the memory chips.

    For example, if the GDDR spec permits one of the 64-bit memory controllers to "partially" run in clamshell mode, then the memory config would probably look like:

    • 3 16Gb (2GB) chips running in 32-bit mode "un-clamshelled".

    • 2 16Gb (2GB) chips running in 16-bit mode "clamshelled".

    Surely only worrying about 5 memory devices would be cheaper than 8, but I'm not aware of any historical examples where this "partial clamshell" technique was used. Also, if MS did that, then why wouldn't they put all five memory chips on the front of the PCB? Wouldn't that be easier to cool? As a reminder, the render has only four memory chips on the front of the PCB with an unknown number on the back.
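
    A quick tally shows this hypothetical partial-clamshell layout would still land on the same bus width and capacity (sketch only; again, there's no historical precedent I'm aware of):

        # Hypothetical partial clamshell: (chip count, GB per chip, pins per chip)
        layout = [(3, 2, 32),  # three 16Gb chips, un-clamshelled (x32)
                  (2, 2, 16)]  # two 16Gb chips, clamshelled (x16)

        bus_bits = sum(n * pins for n, _, pins in layout)
        capacity = sum(n * gb for n, gb, _ in layout)
        print(bus_bits, capacity)  # 128-bit bus, 10 GB -- same as the 8-chip config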

    Additionally, if 32Gb (4GB) GDDR6 memory were available, Microsoft could use it to avoid this clamshelling business in the first place. However, I'm not aware of it existing yet, and it would mean MS juggling three separate memory types in its console supply chain. Two types of memory is probably already a low-key nightmare for a sophisticated supply chain like the Xbox's, so I doubt MS would enjoy adding a third to the mix even if it were available.

    Does This Stuff Even Matter?

    This kind of thing isn't important. All that matters is:

    • There's 10GB of VRAM.

    • There's 8GB of "fast" VRAM for the game.

    • There's 2GB of "slow" VRAM for the OS.

    Developers already have to worry about this "fast-v-slow" complexity on the Series X, so it shouldn't be a big deal for the Series S.

    submitted by /u/ImSpartacus811

    3080 Geekbench 5 Vulkan score

    Posted: 11 Sep 2020 11:11 AM PDT

    https://browser.geekbench.com/v5/compute/1477073

    The 2080 Ti scores between 100k and 115k; the average is 102k.

    submitted by /u/cyperalien

    Artificial Intelligence (GPT-3) Explains How RAM Works

    Posted: 11 Sep 2020 09:22 PM PDT

    ELI5 the weird RAM configuration of the Xbox Series S

    Posted: 11 Sep 2020 09:45 AM PDT

    I've been scratching my head for a full day over the RAM configuration of Microsoft's new little next-gen console.

    They gave us the bandwidth figures, and with 224 GB/s on the "fast side" I assume they used the same 14 Gbps GDDR6 modules as the Series X on a 128-bit bus. The "really slow" 2 GB @ 56 GB/s is bothering me. Since we can see only 4 modules around the SoC in the exploded view, I assume a single 14 Gbps module on a 32-bit bus is hidden somewhere.

    Am I correct in my assumption? Thanks for your replies.
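
    The bandwidth arithmetic checks out either way: 14 Gbps per pin × 128 bits ÷ 8 bits-per-byte = 224 GB/s, and 14 × 32 ÷ 8 = 56 GB/s, so the slow segment behaves like one extra 32-bit channel's worth of memory - whether that's a separate hidden module, or (as the clamshell post above argues) the second gigabyte of two 16Gb chips.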

    submitted by /u/Defrag25

    Heatsinks: Do heatpipes soldered to the fins and base matter?

    Posted: 11 Sep 2020 07:48 AM PDT

    As many may know, Noctua has been a dominant heatsink brand for many years. I wondered what made it so, and from a bit of research it seems one of their most distinguishing traits is that they solder their heatpipes to the fins and base of the heatsink. The only other brand I know that does this is Thermalright (who, for some reason, don't market it).

    Supposedly, this is done to improve heat conduction from the base to the pipes to the fins, as well as the heatsink's longevity through all the expansion and contraction cycles of heating and cooling. This might explain why heatsinks with more heatpipes, like the Raijintek Pallas 120 (6), consistently do worse than the Noctua NH-L12S (4).

    I was very sure this was what made them so good, but then the Scythe Big Shuriken 3 comes up. According to many benchmarks, such as Optimum Tech's, it consistently reaches very similar temps to the NH-L12S - in some cases even better. Out of curiosity, I asked a Scythe representative whether they soldered that heatsink: the Big Shuriken 3's heatpipes are not soldered to the base and fins, only press-fitted.

    Which does raise a few questions:

    • Does solder contribute significantly to a heatsink's performance, or is it just a coincidence that Noctua happens to solder theirs? (See the back-of-envelope sketch after this list.)
    • If soldering does increase the longevity of the heatsink, how quickly would a press-fitted heatsink degrade over, let's say, five years? 1-2 degrees Celsius? Or more/less?
    • Noctua has managed to solder their heatsinks without charging an absurd price. Why does almost no one else (except Thermalright) do it?
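
    One way to reason about the first question is a crude series thermal-resistance model. Every number in this sketch is invented for illustration (none are measurements of any product); it only shows how little interface resistance it takes to move load temperatures by a degree or two:

        # Crude steady-state model: delta-T over ambient = power x total resistance.
        # ALL resistance values below are invented for illustration.
        P_WATTS = 150  # assumed CPU heat load

        def load_temp_rise(r_base, r_interface, r_fins_to_air):
            return P_WATTS * (r_base + r_interface + r_fins_to_air)  # kelvin

        soldered = load_temp_rise(0.02, 0.01, 0.15)  # assumed lower joint resistance
        pressfit = load_temp_rise(0.02, 0.02, 0.15)  # assumed slightly higher

        print(f"soldered:  {soldered:.1f} K over ambient")  # 27.0
        print(f"press-fit: {pressfit:.1f} K over ambient")  # 28.5

    Under those made-up numbers the gap is about 1.5°C, which is at least consistent with a well-executed press fit landing within benchmark noise of a soldered design.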

    I've always wondered why Noctua's been so good, and I thought I had found the very thing that made it so... but then this Scythe heatsink comes along with its more ordinary process and somehow comes really close to beating the Noctua. What do you guys think?

    Edit: If anyone's interested, here is a list of some brands that I found use either a soldered base or fins:

    • Noctua
    • Thermalright
    • Gelid
    • Cryorig
    • Gamerstorm?
    submitted by /u/DatGameh

    A set of hypotheses about the mystery of the Navi bus

    Posted: 11 Sep 2020 09:26 AM PDT

    As we all know by now, Kopite ID'd a GPU with what appeared to be a 256-bit bus as Navi 21. This does not comport with the calculated abilities of the arch as we know it from RDNA 1.0, or with Nvidia's behavior in response to it.

    I have found several hypotheses to answer this that have been endorsed via retweets by Kopite and _rogame, probably the most credible silicon leakers there are at the moment.

    https://twitter.com/Locuza_/status/1304407437450137600

    Among other things, it suggests changes to the L2$ tiles, and/or that they may be using HBM for the big boy after all.

    The one I go with

    https://twitter.com/3DCenter_org/status/1304264470508834821

    https://twitter.com/kopite7kimi/status/1304238246797275138

    These conversations boil down to three possibilities: a 512-bit bus like Hawaii's (though the die is a touch small for that), Kopite having called it wrong (he admitted as much was possible by retweeting the first thread), or a doctored test article.

    We may even have fallen for something AMD did to foil industrial espionage - they have been running a tight ship on info, so it would not surprise me if they're doing things to fuck with folks peering in.

    submitted by /u/Jeep-Eep

    When will we likely see graphics cards with HBM2 close to the die, like the Vega series, back on the market?

    Posted: 11 Sep 2020 03:17 AM PDT

    Are there any indications or credible speculations on that kind of design coming back any time "soon"?

    submitted by /u/Liblin

    Is SLI/multi-GPU truly dead for gaming?

    Posted: 11 Sep 2020 05:56 PM PDT

    I'm trying to research the efficacy of SLI/multi-GPU setups with current/next-generation cards, but have had a hard time getting reliable information. Most of the resources I can find are people saying they're either not supported at all or else aren't worth it because scaling is terrible. But those are two very different claims: one is a flat-out prohibition of any use, while the other is highly variable and heavily dependent upon the use case.

    Of the articles I've found that have actual numbers behind them, I've seen both atrocious and excellent scaling on Nvidia's Turing chips. This variability has almost always been explained by resolution: scaling at lower resolutions (1080p/1440p) is awful, while scaling at higher resolutions (particularly above 4K) can achieve an almost linear performance gain (90% for some 8K applications). However, I've also read that SLI support has been totally dropped from some major titles, like Metro: Exodus and Microsoft Flight Simulator. Yet, there are also videos of people claiming to run Metro at 8K with dual RTX Titans. So, it's hard to know what's actually true.

    Personally, I have 3x 4K displays and have been able to drive them in ultra wide Surround with my multi-GPU setup (originally quad Titan-Xs... though a few have died over the years). I've always had great luck with SLI for my use case (11520x2160 is a lot of pixels and scaling's been good) and even modern games, like RDR2 have had seemingly good SLI support. Yet, many sources point to that drying up on future titles.
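
    For a sense of how demanding that setup is, and how scaling efficiency is usually reckoned, here's a small sketch (the FPS figures are hypothetical placeholders, not benchmark results):

        # Pixel load of triple-4K Surround vs a single 4K display.
        surround = 11520 * 2160      # ~24.9 million pixels
        single4k = 3840 * 2160       # ~8.3 million pixels
        print(surround / single4k)   # exactly 3.0x the pixels to drive

        # Multi-GPU "scaling" as usually quoted: extra performance per extra GPU.
        def scaling_pct(fps_single, fps_dual):
            return (fps_dual / fps_single - 1) * 100

        print(scaling_pct(40, 76))   # hypothetical FPS pair -> 90.0 (%)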

    So, does anyone know if SLI is flat-out unsupported in current/future releases, or if it is still supported but just requires a suitably demanding display to show any appreciable benefit?

    Apologies if this is the wrong subreddit for this question, but I'm hoping for a more substantive discussion than "it's dead" and "scaling sucked for me on my 1080p monitor."

    submitted by /u/Merchant__of__Menace

    Xbox Series X vs 2080

    Posted: 11 Sep 2020 03:28 PM PDT

    What are your thoughts on the XBSX being "as good or better" than the 2080? Just curious.

    submitted by /u/Nephilim4826
