• Breaking News

    Thursday, August 27, 2020

    Hardware support: Analysis of Nvidia GPU prices over time. Or "Why is Turing considered a poor value"

    Analysis of Nvidia GPU prices over time. Or "Why is Turing considered a poor value"

    Posted: 26 Aug 2020 01:14 PM PDT

With the next generation of GPUs on the near horizon, I wanted to take a look at the last 7 years of Nvidia GPUs to see how Turing compares to historical trends of price and performance. Partly this was for my own curiosity, but also to answer the question of why Turing is considered a poor value by so many, and whether that opinion is based on objective data or is simply a repeated meme.

     

One thing that I want to get out of the way before anyone taking Econ 101 decides to show up: GPUs are luxury goods, and are non-essential items. They are not comparable to essential goods like food, water, medicine, etc. An increase in the price of a non-essential good is unfortunate, but it is not going to cause you to starve or otherwise disrupt your standard of living.

     

    With all that said, let's talk about that sweet sweet data.

I've used TechPowerUp's GPU database for this analysis, relying on their release dates, launch MSRPs, and their relative performance calculation to compare GPU to GPU. I've opted not to update MSRPs over each card's lifetime; many models did get price reductions, but it would be a pain to track down exactly when each one was cut and recalculate accordingly. So the high GTX 780 launch price stays as is, but so does the RTX 2060's.

I've opted to use the GTX 760 as the baseline reference for this analysis. When it launched it was a very capable performer at 1920x1080, came highly recommended by multiple reviewers, and was a very affordable $250.

    Table is sorted from lowest Cost per Relative Perf (760) to highest.

GPU | Architecture | Release | MSRP at Launch | Relative Perf (760) | Cost per Relative Perf (760) | Time from 760 (Days) | Reduction in Cost per Year since 760
NVIDIA GeForce GTX 1650 SUPER | Turing | 11/22/2019 | $159 | 201% | $79 | 2341 | -$26
NVIDIA GeForce GTX 1660 SUPER | Turing | 10/29/2019 | $229 | 250% | $92 | 2317 | -$25
NVIDIA GeForce GTX 1660 | Turing | 03/14/2019 | $219 | 223% | $98 | 2088 | -$26
NVIDIA GeForce GTX 1650 | Turing | 04/23/2019 | $149 | 149% | $100 | 2128 | -$26
NVIDIA GeForce GTX 1660 Ti | Turing | 02/22/2019 | $279 | 256% | $109 | 2068 | -$25
NVIDIA GeForce RTX 2060 | Turing | 01/07/2019 | $349 | 301% | $116 | 2022 | -$24
NVIDIA GeForce GTX 1050 Ti | Pascal | 10/25/2016 | $139 | 119% | $117 | 1218 | -$40
NVIDIA GeForce RTX 2060 SUPER | Turing | 07/09/2019 | $399 | 338% | $118 | 2205 | -$22
NVIDIA GeForce RTX 2070 SUPER | Turing | 07/09/2019 | $499 | 388% | $129 | 2205 | -$20
NVIDIA GeForce GTX 1070 Ti | Pascal | 11/02/2017 | $399 | 289% | $138 | 1591 | -$25
NVIDIA GeForce RTX 2070 | Turing | 10/17/2018 | $499 | 351% | $142 | 1940 | -$20
NVIDIA GeForce GTX 1070 | Pascal | 06/10/2016 | $379 | 256% | $148 | 1081 | -$34
NVIDIA GeForce GTX 1060 6 GB | Pascal | 07/19/2016 | $299 | 190% | $157 | 1120 | -$30
NVIDIA GeForce RTX 2080 SUPER | Turing | 07/23/2019 | $699 | 437% | $160 | 2219 | -$15
NVIDIA GeForce RTX 2080 | Turing | 09/20/2018 | $699 | 412% | $170 | 1913 | -$15
NVIDIA GeForce GTX 950 | Maxwell | 08/20/2015 | $159 | 90% | $177 | 786 | -$34
NVIDIA GeForce GTX 960 | Maxwell | 01/22/2015 | $199 | 110% | $181 | 576 | -$43
NVIDIA GeForce GTX 1080 Ti | Pascal | 03/10/2017 | $699 | 384% | $182 | 1354 | -$18
NVIDIA GeForce GTX 970 | Maxwell | 09/19/2014 | $329 | 174% | $189 | 451 | -$48
NVIDIA GeForce GTX 1080 | Pascal | 05/27/2016 | $599 | 301% | $199 | 1067 | -$17
NVIDIA GeForce RTX 2080 Ti | Turing | 09/20/2018 | $999 | 483% | $207 | 1913 | -$8
NVIDIA GeForce GTX 750 Ti | Maxwell | 02/18/2014 | $149 | 71% | $210 | 238 | -$60
NVIDIA GeForce GTX 760 | Kepler | 06/25/2013 | $249 | 100% | $249 | 0 | $0
NVIDIA GeForce GTX 980 | Maxwell | 09/19/2014 | $549 | 199% | $276 | 451 | $22
NVIDIA GeForce GTX 980 Ti | Maxwell | 06/02/2015 | $649 | 229% | $283 | 707 | $18
NVIDIA GeForce GTX 770 | Kepler | 03/30/2013 | $399 | 122% | $327 | 87 | $327
NVIDIA GeForce GTX 780 Ti | Kepler | 11/07/2013 | $699 | 179% | $391 | 135 | $383
NVIDIA GeForce GTX 780 | Kepler | 05/23/2013 | $649 | 144% | $451 | 33 | $2231
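
If you want to sanity-check the derived columns, here is a minimal Python sketch of the math, assuming cost per relative perf is simply MSRP divided by relative performance, and the last column is that figure's change from the 760's $249 spread over the days between launches (this is a reconstruction from the numbers above, not the actual spreadsheet):

```python
from datetime import date

# Baseline: GTX 760, $249 MSRP, 100% relative performance, launched 06/25/2013.
BASE_COST_PER_PERF = 249.0
BASE_LAUNCH = date(2013, 6, 25)

# (name, launch date, MSRP, relative performance vs GTX 760) -- sample rows from the table
cards = [
    ("GTX 1650 SUPER", date(2019, 11, 22), 159, 2.01),
    ("RTX 2080 Ti",    date(2018, 9, 20),  999, 4.83),
    ("GTX 970",        date(2014, 9, 19),  329, 1.74),
    ("GTX 780",        date(2013, 5, 23),  649, 1.44),
]

for name, launch, msrp, rel_perf in cards:
    cost_per_perf = msrp / rel_perf              # $ per 100% of GTX 760 performance
    days = abs((launch - BASE_LAUNCH).days)      # table lists absolute day counts
    years = days / 365.0
    # Negative means the cost per relative perf went down over time.
    delta_per_year = (cost_per_perf - BASE_COST_PER_PERF) / years
    print(f"{name:15s} cost/perf ${cost_per_perf:.0f}  {days} days  {delta_per_year:+.0f} $/year")
```

Running that reproduces the table values, e.g. roughly -$26/year for the 1650 SUPER and +$2231/year for the GTX 780.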

These are my observations and analysis of this data:

Let's start by talking about the very bottom of the list. You'll note that the higher-end models of the Kepler 700 series sit here with high costs per relative perf and extremely high increases in cost per year. This is expected; higher-end models have diminishing returns. The flagship models of each generation have the worst price-to-performance ratios, and this is reflected across all 7 years of data. These Kepler models also came out either shortly before or shortly after the 760, so there was hardly any time for costs to come down.

Moving up just a bit, we can see that the GTX 980 and 980ti actually show an increase in cost per year. This will not be the only time this happens.

The 750ti represents the greatest reduction in cost per year on this entire list, and is a good example of why I didn't use that metric to sort it. In the brief 8-month window from the GTX 760 launch to the 750ti launch, the $149 GPU gets 71% of the performance. It's an excellent example of how, within a generation (even though this is technically a Maxwell arch GPU), the lower-end models often offer the best price-to-performance ratio.

Going up to the first Turing GPU from the bottom of the list, the 2080ti: it's not surprising that it lands where it does. Its high $999 MSRP makes it 4x more expensive than the GTX 760, while the performance is a bit more than 4x as much. It is technically a reduction in cost over time, but I would not call it a good value. Then again, that is not surprising for a flagship GPU.

The GTX 970 was considered an extremely good value proposition for the Maxwell generation, even with the 3.5GB debacle, and that is easy to see here. Compared to the 980 and 980ti, the 970 represents a huge $48 reduction in cost per year relative to the performance of the 760. That is a better reduction in cost per year than even the 950 and 960.

The rest of the chart I want to talk about as a whole, because you'll notice the top of the chart is dominated by Turing architecture GPUs; except for the 1050ti, they take the top 10 spots for lowest cost per relative perf. Many of these are GTX 16 series GPUs, so they lack features like DLSS and RT support that the RTX 20 series has, but they still represent a great value gain for mid-range gamers.

    And the RTX 2060 is still doing very well in this metric for a $349 GPU, averaging 3X the performance of the 760. The 2060 SUPER and 2070 SUPER are not far behind either.

     

    So why then is the Turing generation so often derided as being a poor value for gamers?

I think it's a combination of factors:

1. That RTX 2080ti MSRP did not do Nvidia any favors. The very high $999 price really set people's attitudes against them, regardless of how many people actually buy that class of GPU. As I stated before, it's normal for the flagship GPUs of a generation to be a poor price-to-performance prospect, but the absolute price of the 2080ti was still so high that public opinion swung hard to the negative side.

2. AIB partner cards not selling at MSRP really hurt Turing early on. I'm not sure why, but it was extremely difficult to find AIB partner cards at MSRP for the RTX 2070 and RTX 2080. This, combined with a well-stocked second-hand market, hurt public opinion of Turing's value early on. To be clear, you could find AIB partner cards, but at prices $75 to $150 above MSRP. I'm slowly coming around to the opinion that AIB partners add very little value to these GPUs. Their cooling systems were better than the blower coolers used on older Nvidia reference cards, but with the new axial and crossflow coolers, the AIB coolers are simply on par. And GPU Boost almost completely eliminates any 'factory overclock' advantage they could have.

3. The extra few months between the Pascal launch and the Turing launch played a bigger role than I previously thought. Kepler to Maxwell was only about a year-and-a-half-long generation. Maxwell to Pascal was 2 years. But Pascal to Turing was 2 and a half years. That may not sound like a huge amount of variation, but the effects of those extra 6 months were significant. Not only was it a period where crypto mining was wreaking havoc on the GPU market, it also inflated people's expectations of the performance increases the Turing generation would bring as the hype continued to build.

4. I think memes played a bigger role than anticipated. Even to this day I hear people parrot things to the effect of 'same performance as Pascal at the same price' in reference to Turing. In reality, Turing did represent an increase in performance at the same price points. In the case of the 2080 the improvement was smaller, but the 2070 and 2060 represented a larger decrease in cost per relative performance compared to their Pascal predecessors. The SUPER refresh improved on these to a degree, though it's interesting that the 2060 SUPER actually represents a slight increase in cost per relative perf compared even to the original $349 RTX 2060.

5. Generation-to-generation improvements are not what people think they are. Going back even to the Kepler-Maxwell and Maxwell-Pascal transitions, people think each generation represented a massive decrease in cost for high-end GPU performance. In reality, Turing is pretty much par for the course in terms of generational improvements and GPU costs. A 1080ti was not a good value, and neither was a 980ti or a 780ti. It's only years after the fact that these claims get made, by those who want to convince themselves that their high-end GPU purchase back then was a good value. I remember people saying similar things about their 980tis as people today say about their 1080tis. I've held the opinion for a long time that the two best ways to get the most bang for your buck on GPUs are: A. buy into the mid-range price bracket every generation, sell at the start of the next generation, rinse, repeat; or B. buy a high-end GPU and skip a generation. In this specific case, anyone with a 1080 or 1080ti would not see a huge increase in performance with any Turing GPU, and so there is no incentive to upgrade. But someone with a 980ti would have a definite incentive to upgrade.

6. Second-hand prices, and internet forum advice, have ruined price/performance discussion. Too often I see people talk about getting a 'good deal' on a 1080ti when they are paying only slightly less than the original MSRP, 2 years after the fact. $588 is not a good price for a used 1080ti. And my god, why would you buy a GTX 1080 for $450?

     

In reality, Turing as an architecture, and as a generation of GPUs, continues the steady decline in cost per unit of performance that we have come to expect from each GPU generation. BUT the 2080ti specifically has poisoned the well of public opinion. I don't expect this to change with the next generation of GPUs. The highest-end models will be a poor value proposition, with the mid-range GPUs offering a significantly better price-to-performance ratio and a continued decrease in cost per relative performance, probably to the tune of $25-$30 per year.

    People will make memes about these new GPUs, those memes will be taken as truth, and people will make buying decisions based on that 'truth', and they will end up with a worse GPU because of it.

    EDIT: Typos

    Many people seem to have trouble grokking the table in reddit formatting.

Here is the link to the spreadsheet I created for this post: https://docs.google.com/spreadsheets/d/1haMR3ZDMYNL64QKxT33YxHRh00ZBvGyT95bTy8W40YQ/edit?usp=sharing

Additionally, user u/Integralds has put together this visualization of the data I've collated. Turing GPUs are highlighted in red.

    "Up" and "left" are "good," in that "up" means more performance for the same price and "left" means same performance for less price.

    I was torn on what to do with the Y-axis. In the end I went with a log scale, because you have already converted "raw" performance into "relative" performance, and a log scale shows relative differences more clearly than a level scale. (In relative terms, going from 1 to 2 is the same as going from 2 to 4, for example.)
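
If you want to recreate a chart along those lines yourself, here is a minimal matplotlib sketch (my own rough approximation of that layout, not u/Integralds' actual code), using a handful of rows from the table above: price on the x-axis, relative performance on a log-scaled y-axis, Turing in red.

```python
import matplotlib.pyplot as plt

# (name, launch MSRP, relative perf vs GTX 760, architecture) -- sample rows from the table
cards = [
    ("GTX 760",      249, 1.00, "Kepler"),
    ("GTX 970",      329, 1.74, "Maxwell"),
    ("GTX 1060 6GB", 299, 1.90, "Pascal"),
    ("GTX 1080 Ti",  699, 3.84, "Pascal"),
    ("GTX 1660 Ti",  279, 2.56, "Turing"),
    ("RTX 2070",     499, 3.51, "Turing"),
    ("RTX 2080 Ti",  999, 4.83, "Turing"),
]

fig, ax = plt.subplots()
for name, msrp, perf, arch in cards:
    color = "red" if arch == "Turing" else "gray"
    ax.scatter(msrp, perf, color=color)
    ax.annotate(name, (msrp, perf), textcoords="offset points", xytext=(4, 4), fontsize=8)

ax.set_yscale("log")  # equal performance ratios get equal vertical spacing
ax.set_xlabel("Launch MSRP (USD)")
ax.set_ylabel("Relative performance (GTX 760 = 1.0)")
ax.set_title("Up and left are good: performance vs. price")
plt.show()
```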

    submitted by /u/zyck_titan

    [VideoCardz] Intel teases Tiger Lake launch and company rebranding

    Posted: 27 Aug 2020 12:26 AM PDT

    Taiwan and US make joint declaration to only use 'clean' 5G kit

    Posted: 26 Aug 2020 09:13 PM PDT

    [VideoCardz] Confirmed: NVIDIA GeForce RTX 3090 has 24GB memory, RTX 3080 gets 10GB

    Posted: 26 Aug 2020 02:53 AM PDT

    NVIDIA Confirms 12-pin GPU Power Connector [Anandtech]

    Posted: 26 Aug 2020 10:26 AM PDT

    One of China's flagship 7nm foundries falls in a hole as funding flees

    Posted: 26 Aug 2020 05:10 PM PDT

    [Nvidia] The Remarkable Art & Science of Modern Graphics Card Design

    Posted: 26 Aug 2020 06:00 AM PDT

    "Change the PCB" and "Move The Fans" - Nvidia teases its radical RTX 30 Series Heatsink

    Posted: 26 Aug 2020 06:03 AM PDT

    ThinkPad X1 Fold, the first foldable laptop, has been listed on Lenovo's website

    Posted: 26 Aug 2020 05:59 PM PDT

    NVIDIA Ampere RTX “3080” and “3090” with 12-layer PCB and backdrill, BIOS is already RC2 (release candidate), pilot production is running

    Posted: 27 Aug 2020 02:00 AM PDT

    [Optimum Tech] My Dream CPU Waterblock Now Exists!

    Posted: 26 Aug 2020 08:20 AM PDT

[HWUB] All Boards Tested: B550 Roundup, Part 3, $180 - $300

    Posted: 26 Aug 2020 04:01 AM PDT

    [LTT] There's a REAL Nintendo Wii Packed into this Handheld!!!

    Posted: 26 Aug 2020 12:49 PM PDT

    Why I don't think that GPU prices these days are actually any worse than they were 10 to 15 years ago. A look at cost of GPUs over the years per mm² of die space.

    Posted: 26 Aug 2020 04:44 PM PDT

I made a chart of GPU prices adjusted for inflation: https://i.imgur.com/4lZAiWO.png

I can't seem to find an Nvidia post these days where someone does not mention the crazy prices of high-end GPUs today, and how Nvidia is "ripping everyone off". So I looked into what prices were actually like 15 years ago, accounting for inflation. The claim (from AMD, among others) is that recent die shrinks are to blame for inflated GPU prices. There is definitely some truth to that: R&D costs for new process tech have skyrocketed, but so have demand, mass production, and what the targeted markets are willing to pay.

Historically (before the 2080 ti launch), Nvidia has stuck to roughly 500 mm² dies for the last 14 years or so when it comes to their top-end GPUs, so that's where I tried to select SKUs from.

There were some exceptions. There is no ~500mm² gtx 900 series GPU, so I used the ~400mm² die gtx 980 and the ~600mm² die gtx 980 ti instead for reference.

I used the launch price of the gtx 280, which was $650 ($770 after inflation), though it dropped a few weeks later due to competition from ATI/AMD.

I tried to chart GPUs that were as fully enabled as possible. For example, a gtx 970 uses the same die as the 980, but has a chunk of the chip fused off. I did make the exception of excluding Titan-branded GPUs, though.

I think a lot of people have forgotten how much a top-tier card actually used to cost. Not on this list (because they have roughly ~300mm² dies) are the 6800 Ultra from 2004, which sold for $653 after inflation, and the 7800 GTX, which sold for $768 after inflation. Those dies were almost half the size of an RTX 2080 Super's 545mm² die. Also, the 6800 Ultra was an 81W TDP GPU, so it's not like some overcomplicated cooling or power delivery system was needed. The 16-year-old GeForce 6800 Ultra had a lower TDP than a new gtx 1660 Super and a smaller die area, and yet cost 3x the price.
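
To put rough numbers on that, here is a quick Python sketch of the cost per mm² comparison, using only the inflation-adjusted prices and die sizes mentioned in this post (the ~300mm² figure for the older cards is a ballpark):

```python
# Rough $/mm^2 comparison using the inflation-adjusted prices and die sizes quoted above.
# The ~300 mm^2 value for the older cards is a ballpark estimate.
cards = [
    ("GeForce 6800 Ultra", 653, 300),   # inflation-adjusted launch price, approx die size
    ("GeForce 7800 GTX",   768, 300),
    ("RTX 2080 Ti",       1200, 754),
]

for name, price, die_mm2 in cards:
    print(f"{name:18s} ${price / die_mm2:.2f} per mm^2")
```

By that rough math the older flagships land well above $2 per mm², while the 2080 ti comes in around $1.60 per mm².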

The price of a 754mm² RTX 2080 ti really doesn't look that insane at $1200 anymore if you consider the worse yields at such a massive die size. Outside of the data center it is the biggest die Nvidia has ever made, by a large margin. They essentially introduced a new ultra-high-end tier, and somehow that makes people upset at the price.

    submitted by /u/bubblesort33

    Choosing specific Discrete GPU on multiGpu Systems per App/Program.

    Posted: 26 Aug 2020 01:17 AM PDT

So this is something very interesting I noticed today in the latest preview builds of Windows 10, which solves a long-standing issue I've had.

The latest Windows builds have an option to choose specific dGPUs, not just the iGPU, on desktops or multi-GPU setups, meaning that multiple or spare GPUs in a desktop can now be used to optimize performance or for compatibility purposes.

My current desktop has an RX 480 (spare card) and an RTX 2070 (main card), and I can assign the RX 480 to run the Windows desktop and wallpaper animations while the RTX runs video games and such.

Pretty nifty feature. In case you've updated but the option is not visible, make sure this exists in the registry:

HKEY_USERS\***\Software\Microsoft\DirectX\UserGpuPreferences
Add a DWORD named SpecificGPUOptionApplicable with a value of 1.
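
If you'd rather script the tweak than edit the registry by hand, here is a minimal Python sketch using the standard winreg module. It assumes the key and value names quoted above; HKEY_CURRENT_USER points at the current user's branch under HKEY_USERS, which is what the *** in the path stands in for.

```python
# Minimal sketch: set the SpecificGPUOptionApplicable flag described above.
# Assumes the key/value names from this post apply to your Windows build.
import winreg

def enable_specific_gpu_option():
    # HKEY_CURRENT_USER is the current user's branch of HKEY_USERS.
    key_path = r"Software\Microsoft\DirectX\UserGpuPreferences"
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
        # Add the DWORD flag with a value of 1.
        winreg.SetValueEx(key, "SpecificGPUOptionApplicable", 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    enable_specific_gpu_option()
    print("Flag set; re-open Settings > System > Display > Graphics settings.")
```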

Then, under Settings > System > Display > advanced options > Graphics settings, you will be able to add apps and programs and choose either the low-power GPU, the high-performance GPU (the main Windows DWM GPU), or a specific GPU (from a list of all the GPUs installed in the computer).

    submitted by /u/assasing123
