Hardware support
- SamMobile: "New rumor reveals Samsung's custom AMD GPU is coming soon"
- The pace of progress: CPUs, GPUs, Surveys, Nanometres, and Graphs
- "Imagination's GPU selected by StarFive to create high-performance, small and low-cost BeagleV RISC-V AI single board computer"
- Lenovo Updates ThinkPads: New Intel and AMD Processors, New Displays, New Designs
- Apple M1 Macs appear to be chewing through their SSDs
- [Gamers Nexus] Corsair 5000D Airflow vs. Solid Case Review: Thermals, Noise, Build Quality
- ASGARD announces DDR5-4800 memory for Intel 12th Gen Core "Alder Lake" series - VideoCardz.com
- ScaleFlux: Our computational storage drives are bigger, faster and cheaper than ordinary SSDs
- Is it technically possible to produce an LED panel with a resolution and PPI similar to those of an old CRT tube TV, in order to emulate the look of old video games on new tech?
- TechSpot concludes RTX is to blame for AMD USB issues
SamMobile: "New rumor reveals Samsung's custom AMD GPU is coming soon" Posted: 23 Feb 2021 06:57 AM PST
The pace of progress: CPUs, GPUs, Surveys, Nanometres, and Graphs Posted: 22 Feb 2021 12:12 PM PST

This is a collection of graphs I have made from public data about hardware improvements over time.

Transistor density improvements over time

This data is taken from Wikipedia. Surprisingly to me, we've stuck to the same rate of transistor scaling since 1970, with almost zero meaningful deviation and no slowdown in recent times. The rate here is a doubling every two and a half years, which is a bit slower than the traditional Moore's Law claim, but is still exceptionally fast. GPUs have not followed Moore's Law quite as cleanly, which makes sense as they are comparatively new and faster to hit thermal limits. However, they still track close to the line. Note that Moore's Law as used here is not a measure of performance or frequency.

CPU performance using corrected Geekbench results

These graphs track single-threaded and multi-threaded performance in Geekbench, plus frequency, for Intel, AMD, Arm (Snapdragon) and Apple (A, AX, M) CPUs. While Geekbench is a more reliable benchmark than some of its critics suggest, it aggregates its subscores poorly, and I consider its crypto test meaningless. Here I've taken the median result for each subscore across the benchmark runs, and then taken the geometric mean of those medians to get a single overall score. I've tried to keep trend lines stable by sticking with the same offering within a product line between generations, but there is subjectivity in this choice and I can imagine others might disagree. I have added extra data points where multiple product lines are meaningful, such as with Intel's mobile products. Note that the performance axis is a linear scale. The long, mostly horizontal Intel line shows the disastrous pace of progress of the industry over the last decade, but every other line is on a much faster upward slope.
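The subscore aggregation described above (median per subtest across runs, then a geometric mean of the medians) could be sketched like this. The `aggregate_score` helper and the run data are invented for illustration; the post does not publish its actual script:

```python
from math import prod
from statistics import median

def aggregate_score(runs):
    """Combine several Geekbench runs into one overall score.

    `runs` is a list of dicts mapping subscore name -> result. For each
    subscore, take the median across runs, then combine the medians with
    a geometric mean so no single subtest dominates the total.
    (Hypothetical helper, not the post's actual spreadsheet formula.)
    """
    names = list(runs[0].keys())
    medians = [median(r[name] for r in runs) for name in names]
    return prod(medians) ** (1 / len(medians))

# Made-up runs for one CPU:
runs = [
    {"integer": 1500, "float": 1400, "crypto": 3000},
    {"integer": 1550, "float": 1380, "crypto": 2900},
    {"integer": 1480, "float": 1420, "crypto": 3100},
]
# The author treats the crypto subscore as meaningless, so drop it first.
filtered = [{k: v for k, v in r.items() if k != "crypto"} for r in runs]
print(round(aggregate_score(filtered)))  # geometric mean of 1500 and 1400
```

The geometric mean matters here: an arithmetic mean would let one inflated subscore (like crypto) drag the overall number up far more than it deserves.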
Apple, AMD and Intel are all now in the same ballpark, if you include preliminary results for the 11700K. The estimated 11700K score is an atypically large jump for Intel; I hope they can repeat it, because everyone else is making jumps like it. You can see the stark difference between Arm's and Apple's lineups, but at the same time the similarity of their trajectories, and why even without Apple an Arm product might soon look disruptive. There is a much starker difference between product lines in multicore, as well as larger jumps between generations. It's easy to see the immediate effect Ryzen had on Intel's multicore offerings. You can also see the huge A14-M1 gap, and more easily imagine where an 8+4-core part would land on the graph.

Frequency scaling is much gentler than other performance scaling, but I still found this an interesting graph. The Intel mobile jump, 1065G7 to 1185G7, shows clearly how much of a difference Intel's 10nm fixes made, and how bad the node was before. Phone CPUs are all dancing around the same sort of numbers, much lower than Intel or AMD. Most interesting to me was that AMD's frequency scaling is arrow-straight; if this holds true for the 6800X, it is going to be clocked fast. These numbers are the median of measured results, so this is what you'd expect an average person with these CPUs to see, rather than unrealistic peaks.

Single-threaded performance, frequency-normalized. Multi-threaded performance, frequency-normalized. These graphs are created simply by dividing the unnormalized graphs by the normalized frequency. They are interesting in that they separate out microarchitectural advances, like from the A7 to the A14, from stagnant microarchitectures that improve only through frequency. The normalization done here is crude, so it should not be used to debate small differences, but the larger trends are reliable. Thanks go to /u/dustarma for the suggestion.
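The crude frequency normalization described above (score divided by clock speed, as a rough proxy for per-clock performance) amounts to something like the following. All numbers are made up, not taken from the post's data:

```python
def per_clock(score, freq_ghz):
    """Score per GHz: a crude proxy for per-clock (microarchitectural)
    performance. Hypothetical helper for illustration only."""
    return score / freq_ghz

# Two hypothetical chips with the same raw score but different clocks:
old = per_clock(1600, 5.0)  # high clock doing the heavy lifting
new = per_clock(1600, 3.2)  # same score at a much lower clock
print(round(new / old, 2))  # per-clock advantage of the newer design
```

As the post notes, this division is too crude to settle small differences (real chips don't run at one fixed frequency), but it does separate "faster because the design got better" from "faster because the clock went up".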
Steam Survey: Physical CPUs

This is a simple chart of data from the Steam Survey showing how many people have how many cores. Only core counts that have been >1% of the survey are graphed. There are some artifacts in this data, like a dip at the start of 2018; these should mostly be ignored, as they don't reflect real changes in CPU populations. At the start of the graph, core counts were increasing steadily: 1-cores were dying, 2-cores were steady, and 4-cores were growing. From 2012 to 2017, Intel had an uncontested lead, and progress completely stagnated. Then Ryzen happened, and progress resumed, with 6- and 8-cores taking a 50-50 split of the newcomers. This growth is solidly linear, so unless a stagnation repeats for similar reasons, I feel it's safe to say that 2-cores will be dead by early 2023, and 6-core CPUs will be the new low end by early 2025. (Note that the GPU brand data tab is outdated.)

GPU performance

This data is taken from TechPowerUp, with supplementary material from Wikipedia and usinflationcalculator.com. The charts here are a bit more complicated than normal, but hopefully they'll make sense after an explanation. Color represents the price range; the categories are $0-$200, $201-$400, $401-$600, and $601+, inflation adjusted. Unlike CPUs, GPU performance scales fairly simply with transistor count, so overall generational growth is roughly linear on a log plot, and differences over time tend to smother differences between the brands. NVIDIA has a slightly steeper slope and more top-end cards, but the data is noisy, and good lines of fit for this sort of data are hard to calculate in Google Sheets, so I wouldn't read much into small variations in the trend line.

NVIDIA performance per dollar, inflation adjusted. AMD performance per dollar, inflation adjusted. The trendline here is for cards in the $401-$600 inflation-adjusted price bracket.
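The inflation adjustment and price bracketing described above could be sketched as follows. The CPI figures and the $329 card are invented for illustration; the post uses usinflationcalculator.com rather than raw CPI values:

```python
def adjusted_price(launch_price, launch_cpi, current_cpi):
    """Inflation-adjust a launch price via a CPI ratio.
    (Illustrative method; the post's exact adjustment source differs.)"""
    return launch_price * current_cpi / launch_cpi

def bracket(price):
    """Assign an inflation-adjusted price to one of the post's four
    color-coded categories."""
    if price <= 200:
        return "$0-$200"
    if price <= 400:
        return "$201-$400"
    if price <= 600:
        return "$401-$600"
    return "$601+"

# A hypothetical $329 card launched when CPI was 218, evaluated at CPI 271:
p = adjusted_price(329, 218, 271)
print(round(p), bracket(p))
```

Note how inflation can push a card into a higher bracket than its sticker price suggests, which is exactly why comparing nominal launch prices across a decade is misleading.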
Overall the progress is solid for both brands, this time with a better slope for AMD, though the same caveats apply. IMO this is the line you need to think about when talking about the value these companies are offering; what matters is simply that this trend line continues across the whole product range.

NVIDIA performance per dollar, relative to expected performance per dollar for that year. AMD performance per dollar, relative to expected performance per dollar for that year. These graphs are not directly comparable, since they are normalized by different things. They basically try to remove the year-on-year technology improvements in order to ask: which cards are overpriced for their generation, and how does price affect performance per dollar? You can clearly see a top-left to bottom-right slope in both graphs, representing how more expensive cards have worse performance per dollar. Cards to the right of the trendline, like the 690, 1070, and 9600 GT, are well priced and likely to last longer than average. Cards to the left, like the 260 and 1650, are badly priced and not likely to last. This graph is very sensitive to specifics, so don't read much into small variations.

DRAM has a hard time scaling, so I was surprised to see how well VRAM scaling has held between generations. The current 30-series cards, except for the 3090, are below the trend, but refreshes might bring them back up to their historic expectations. Note that I added together the VRAM of dual-GPU cards, even though it does not act as a unified memory pool of that size. Thanks to /u/Lord_Trollingham for pointing this out.

That's it. Please post corrections if you spot anything I did wrong. Links to the spreadsheets proper are below, if anyone wants to make a copy to play with the data. [link] [comments]
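The "relative to expected performance per dollar for that year" normalization the post describes could be sketched like this. The trend values and card numbers are entirely invented; the real trendline comes from the post's spreadsheets:

```python
def relative_value(perf, adjusted_price, year, trend):
    """Performance per (inflation-adjusted) dollar, divided by the
    expected perf/$ for that year from a fitted trendline.
    Values above 1.0 mean a card is well priced for its generation.
    (Hypothetical helper; `trend` stands in for the post's fit.)"""
    return (perf / adjusted_price) / trend[year]

# Made-up trendline: ~50% better perf/$ every two years.
trend = {2016: 1.0, 2018: 1.5, 2020: 2.25}

# A hypothetical 2018 card: 900 perf units at an adjusted $400.
print(round(relative_value(900, 400, 2018, trend), 2))
```

This is the "remove the year-on-year improvement" step: dividing by the year's expected perf/$ leaves only how a card compares to its own generation, which is what the left/right-of-trendline reading in the post is about.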
"Imagination's GPU selected by StarFive to create high-performance, small and low-cost BeagleV RISC-V AI single board computer" Posted: 23 Feb 2021 03:37 AM PST
Lenovo Updates ThinkPads: New Intel and AMD Processors, New Displays, New Designs Posted: 23 Feb 2021 07:38 AM PST |
Apple M1 Macs appear to be chewing through their SSDs Posted: 23 Feb 2021 08:00 AM PST |
[Gamers Nexus] Corsair 5000D Airflow vs. Solid Case Review: Thermals, Noise, Build Quality Posted: 22 Feb 2021 12:34 PM PST |
ASGARD announces DDR5-4800 memory for Intel 12th Gen Core "Alder Lake" series - VideoCardz.com Posted: 22 Feb 2021 03:08 PM PST |
ScaleFlux: Our computational storage drives are bigger, faster and cheaper than ordinary SSDs Posted: 22 Feb 2021 05:15 PM PST |
Is it technically possible to produce an LED panel with a resolution and PPI similar to those of an old CRT tube TV, in order to emulate the look of old video games on new tech? Posted: 22 Feb 2021 08:49 AM PST
TechSpot concludes RTX is to blame for AMD USB issues Posted: 22 Feb 2021 01:22 PM PST www.techspot.com/amp/news/88695-amd-acknowledges-usb-connectivity-issues-x570-b550-motherboards.html While AMD has only just begun investigating the USB dropout issues reported since the launch of the X570 and B550 platforms, TechSpot seems to have already concluded that the issue lies with Nvidia GPUs. Relevant portion:
Good advice or irresponsible? What do you think? [link] [comments]
You are subscribed to email updates from /r/hardware: a technology subreddit for computer hardware news, reviews and discussion. To stop receiving these emails, you may unsubscribe now. Email delivery powered by Google.
Google, 1600 Amphitheatre Parkway, Mountain View, CA 94043, United States |