• Breaking News

    Saturday, July 11, 2020

    Hardware support: COVID-19 Causes PC Demand in the US to Soar Back to 2009 Levels

    COVID-19 Causes PC Demand in the US to Soar Back to 2009 Levels

    Posted: 10 Jul 2020 07:31 AM PDT

    Diagram of hard disk manufacturers

    Posted: 10 Jul 2020 04:03 AM PDT

    Nintendo Switch Successor may be Powered by Samsung SoC w/ AMD RDNA Graphics

    Posted: 10 Jul 2020 06:31 AM PDT

    [VideoCardz.com] - AMD Ryzen Threadripper PRO 3000 final specifications leaked

    Posted: 10 Jul 2020 09:33 AM PDT

    Some Partial Research on Nvidia GPUs

    Posted: 10 Jul 2020 03:34 PM PDT

    I was thinking about upgrading my hardware, since it has been a while since my last build, and I ended up doing some partial research to help with the decision. Personally, I am excited about the upcoming generation of hardware but find some of the rumors hard to believe, so I thought looking at hard data from past generations might help me adjust my expectations.

    I have compiled the data into two sheets, one for the physical aspect and one for compute:

    [Link to Spreadsheets](https://docs.google.com/spreadsheets/d/1HXDXpSYo8npdFvjUosAUQchi7H6doXA7DVzeswRgNMY/edit?usp=sharing)

    Some notes on my selection of data:

    • Almost all values are obtained from TechPowerUp's GPU database.
    • I started with the GTX 570 because that was the card in my first DIY build, so I saw no value in looking any earlier.
    • Only certain tiers of products are listed because that is where my main interests are and where I generally consider my purchases.
    • For compute data I chose OC numbers because 1) GPU Boost makes it difficult to run at complete stock, 2) people interested in this range of products usually OC, and 3) these numbers are what I consider achievable for most people who attempt to OC; MemClock is basically just stock +200MHz.
    • I excluded dual-GPU cards since my last experience with one (HD 7990) was terrible. They are hard to compare to single-GPU cards anyway.
    • I also excluded Titans because to me they seem more like cheaper Quadros than more expensive GeForces, and those two lines have completely different target audiences.

    With all that said, here are the observations and interpretations I found interesting and wanted to share:

    • The first thing I noticed was how good (or bad, depending on your perspective) Nvidia is at market segmentation, considering this is only a portion of the product stack. From this limited selection, we are looking at 22 products built from 10 GPUs. Just look at the GK104 situation lol. Turing is no better in that regard either.
    • While GPUs have physical modifications for better segmentation, with the right combination of settings, products that share the same die can potentially achieve the same theoretical performance. So, if you have a clear understanding of your computational needs, it can be much easier to navigate Nvidia's product stack, which I assume can be potentially confusing to the general public.
    • To further the above point, if we overly simplify hardware performance in our use cases down to just FP32 compute and memory bandwidth, it becomes apparent that apart from modifications to the dies, Nvidia achieves segmentation through a combination of memory bus width, shader counts and their respective clocks (also memory configuration, but that cannot be changed after you buy the product). In this context, the aforementioned specs alone provide little value; rather, it is meaningful to look at BusWidth(bit)*MemClock(GHz)*0.5 for GDDR5; BusWidth(bit)*MemClock(GHz) for GDDR5X and GDDR6; Shaders*ShaderClock(GHz)*2(IPC) for Fermi FP32; and Shaders*CoreClock(GHz)*2(IPC) for post-Fermi FP32 (a small worked example follows this list).
    • There is a clear divide in VRAM configurations in the middle. Pre-Pascal it was common to have around 2-3GB of memory (GM200 being an outlier); then boom, Pascal hits and suddenly 8GB is common in these tiers of products. While I can see why some would consider Turing "stagnating" in terms of VRAM capacity, I wonder how much incentive there is for Nvidia to actually increase it, considering 1) the introduction of better compression techniques and native FP16 support, and 2) the need to protect other segments of the market where VRAM is the deciding factor for purchases (especially for ML, where some models have VRAM requirements not met by most GeForce products).
    • While transistor density is not a true indicator of performance, in this case it correlates pretty closely with the compute figures, which is a given I guess, since clock frequency dictates how much work gets done and a better process allows higher frequencies. This part is probably the most interesting to me, because we can essentially go from 22 products to 10 GPUs to 3 lithography processes (16/12 counts as one). And to differentiate generations of products within each lithography process, Nvidia either uses more silicon (bigger dies), more power (higher TDP), or a combination of both (treat this as simplified).
    • Something important to note is that up to this point, all compute figures I provided are theoretical peaks rather than actual sustained throughput. I am not sure how close the two are in reality, which is probably one reason you cannot simply use compute figures to predict performance: the achieved numbers may differ from the theoretical ones, and there are data types other than FP32 involved.
    • The bandwidth-to-compute ratio in these tiers of products has dropped over the past decade, because improvements in compute have outpaced improvements in memory speed. I am not sure what kind of impact this has, if any.
    • Also, I am not sure how much texture rate and pixel rate matter in the context of traditional rasterisation. Maybe others more knowledgeable can chime in on these two points.
    • Three interesting stats that stood out:
    1. GP102 has the highest transistor density; TU102 comes in 3rd after TU104 (SMs with RT and Tensor cores seem less "space-efficient"?)
    2. GM200 is the second biggest die after TU102, which is the biggest by far
    3. 1070Ti has the highest Perf/$ (probably targeted specifically at miners during the crypto rush)

    The observation in point 1 could also result from certain on-die functionality not scaling well in size across different lithography processes?

    • One thing these data fail to address is the software stack that interacts with all this hardware, which I think is in some cases the real difference maker.
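
    Since the bandwidth and FP32 formulas above carry most of the compute sheet, here is a minimal sketch of them in Python for anyone who wants to sanity-check their own numbers. The GTX 1080 figures in the example are reference-clock specs from memory rather than the OC values used in the sheet, so treat them as illustrative only.

    ```python
    # Minimal sketch of the bandwidth and FP32 formulas from the post.
    # Units: bus width in bits, clocks in GHz (actual memory clock, not effective).

    def mem_bandwidth_gbps(bus_width_bits: int, mem_clock_ghz: float, mem_type: str) -> float:
        """Theoretical memory bandwidth in GB/s.

        GDDR5 transfers 4 bits per pin per memory clock, GDDR5X/GDDR6 transfer 8,
        which is where the 0.5x vs 1x factors in the post come from.
        """
        factor = 0.5 if mem_type == "GDDR5" else 1.0
        return bus_width_bits * mem_clock_ghz * factor

    def fp32_tflops(shaders: int, clock_ghz: float) -> float:
        """Theoretical peak FP32 in TFLOPS: 2 FLOPs (one FMA) per shader per clock."""
        return shaders * clock_ghz * 2 / 1000

    # Example: GTX 1080 at reference clocks (illustrative, not the OC numbers in the sheet).
    bw = mem_bandwidth_gbps(256, 1.251, "GDDR5X")   # ~320 GB/s
    fp32 = fp32_tflops(2560, 1.733)                 # ~8.9 TFLOPS
    print(f"Bandwidth: {bw:.0f} GB/s, FP32: {fp32:.2f} TFLOPS")
    print(f"Bandwidth-to-compute ratio: {bw / fp32:.1f} GB/s per TFLOP")
    ```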

    Hope this is interesting enough to provoke some discussion. I didn't initially plan on making a post, as I was just trying to do some digging to help with my next upgrade, but some of it turned out to be interesting, so I got curious about the community's observations and interpretations. I try not to speculate on the upcoming hardware since that could lead to unrealistic expectations. Also, feel free to point out any mistakes, as I might not have been thorough in checking some things.

    submitted by /u/stanrofl
    [link] [comments]

    (GN)Lian Li Lancool II Mesh Case Review vs. Phanteks P500A, Original, & More

    Posted: 10 Jul 2020 06:17 AM PDT

    Here is why LPDDR4x-4266 RAM seems considerably faster when coupled with Intel Tiger Lake-U CPUs compared to equivalent AMD Renoir APUs [NotebookCheck]

    Posted: 10 Jul 2020 11:26 AM PDT

    Samsung Galaxy Book S Laptop Review [Lakefield SOC] [notebookcheck German]

    Posted: 10 Jul 2020 01:33 PM PDT

    News Corner | Threadripper "3995WX", Thunderbolt 4 Specs, Zen 3 "In the Labs"

    Posted: 10 Jul 2020 07:41 AM PDT

    Dell's new XPS Desktop fits NVIDIA and AMD GPUs inside a smaller 19L case

    Posted: 10 Jul 2020 05:08 AM PDT

    Air Cooling PERFECTION - Lancool II Mesh Case Review

    Posted: 10 Jul 2020 09:43 AM PDT

    [Tech Yes City] Return of the Xeon - 1680 v2 vs. Ryzen 9 3950X vs. i9-10900k

    Posted: 10 Jul 2020 09:24 PM PDT

    AMD Ryzen 5 3600 CPUs Being Sold in Ryzen 3 3200G Packaging

    Posted: 10 Jul 2020 06:45 AM PDT

    Does higher ray-tracing performance require more PCIe bandwidth?

    Posted: 10 Jul 2020 05:42 AM PDT

    Hi!

    With the upcoming Ampere cards probably supporting PCIe 4.0, many of us who are still on 3.0 are wondering whether the lack of 4.0 support will bottleneck them. If history is any indication, it won't, since PCIe 2.0 (equivalent to 3.0 x8) has only just started to bottleneck the 2080 Ti according to TechPowerUp's testing.

    But ray tracing is a fairly new technology, and Ampere is rumored to have a massive uplift in this area, so I wonder: does higher ray-tracing performance mean a higher PCIe bandwidth requirement?

    submitted by /u/K1llrzzZ
    [link] [comments]

    Keyboardio Atreus Review

    Posted: 10 Jul 2020 08:02 AM PDT

    What upgrades could next-next gen consoles (PS6) have?

    Posted: 10 Jul 2020 04:33 PM PDT

    Besides a better GPU, let's talk about the improvements next-next gen could make. I know it's all speculation, talking not only about unreleased consoles, but ones that are currently only a concept. I ask because I'm thinking the improvements might grind to a halt, based on current tech.

    Take the CPU. The slowdown in IPC gains is well documented. While Moore's Law isn't dead, most of the gains will have to come from parallel processing. You can still game on an i7 2600K. Compared to the R5 3600, more than a console generation newer, it's still in the ballpark at higher resolutions. The R5 isn't even 50% ahead, and the 2600K even reaches around 70-80% of the R5's performance in some 1440p benchmarks I checked. The higher the resolution, the less the CPU matters.

    I don't see more than 8 cores being necessary. Sure, that could become the next "you don't need more than an i5/4 cores", but current games for the first couple years will be optimised around the Jaguar chip. We might not see games taking advantage of the 7 cores/14 threads on offer until the middle/end of next gen.

    Then there's the power envelope. While I don't think they'll reach 5GHz, or even 4.5GHz, if they do, I doubt they could keep to the 65W envelope. So, all things permitting, I could see a PS6 seven years from now with a CPU that, with all the IPC and clock speed improvements, may be at most twice as fast.

    Does this also mean that consoles now will have indefinite backwards compatibility? I could see that from Microsoft, since they supported the original Xbox, but can it be assumed that the PS6, 7 and 8 will play PS4 games?

    Another thing is the controller. Will they keep all the old features like the touch pad, gyro, and the new haptic triggers? The Xbox controller is simple, but could we see a very expensive future PS controller, forced to support a bunch of legacy features only used in a handful of games?

    The SSD was a huge improvement, and a fast one too. Could next-next gen have SSDs fast enough to replace RAM, so you have the game literally running off the drive, talking to the GPU directly? It would basically make the amount of VRAM irrelevant: it's as big as it needs to be. It might be unrelated, but Linus did a test with SSDs and DOOM showing no perceptible difference. If loading screens really do disappear next gen, further SSD speed increases won't matter.

    As advanced as the GPU is, the target will still be the TV. TVs aren't known for pushing refresh rates above 60, and most of their content is at 24fps. Some system sellers like TLOU may again squeeze every last drop of graphical performance at the cost of framerate, but now there's a hard cap of 60. I'd be interested to see whether these games aim for photorealism at 30fps, or whether 60fps becomes the new standard, with the only upgrades coming in resolution.

    I imagine in 7 years 4K will be the norm, but next-next gen will still be chasing it, like current (base) consoles chasing 1080p despite 4K existing when they were released. I think they'll try to beautify 4K 60 rather than chase 5K or higher framerates.

    Of course VR might be ubiquitous by then, which would put the extra horsepower to good use. If we get graphene chips or some other breakthrough, the monumental gaps we see between generations might return. But if not, we might reach a point where a new gen isn't needed. That doesn't mean they won't make them. They need money, after all.

    If next-next gen needs 16 cores, or a new wifi antenna for faster access to cloud computing, sure. I'm not predicting the end of generations, or even their shortening. That was a wrong prediction made about this gen. But I feel like the room for improvement is smaller than it's ever been.

    I don't remember the pre-PS4/Xone console launches, but I do remember the disappointment in this one, with the mid-tier GPUs and especially the low-tier CPUs. There wasn't as much excitement as there is now, and while nobody can predict the future, I wonder whether the excitement of new hardware will be neutered.

    Not just by mid-gen refreshes (in fact the Pro/X apathy might be a sign of things to come), but by an idea that change is "done". The architecture of console gaming is set in stone, and nothing risky or experimental like the CELL can exist. This is just how consoles are from now on.

    submitted by /u/wgolding
    [link] [comments]
