Hardware support: GeForce RTX 3060: What is 12 GB good for?
- GeForce RTX 3060: What is 12 GB good for?
- Exclusive: Intel 12th Gen Core "Alder Lake-S" platform detailed - VideoCardz.com
- Other than space saving, what benefits do package-on-package chips have?
- [ServeTheHome] Glorious Complexity of Intel Optane DIMMs and Micron Exiting 3D XPoint
- Xbox Analysis Part 2 - die shot walk-through with I/O focus (TL;DW in the comment section)
- Does L3 cache usually contain a ton of redundant data?
- Are type 1113-x flash chips actually used?
- Ian Interviews #4: Forrest Norrod, AMD SVP Datacenter
- MSI MEG Z590 ACE Review
- RDNA 2 vs. RDNA vs. GCN: IPC and CU scaling put to the test
- Intel introduces Adaptive Boost Technology for Core i9-11900K and Core i9-11900KF
- How much lead time does it take to change the silicon on an integrated circuit?
- Pat Gelsinger Will Get Intel ‘Revved Up Again:’ Michael Dell
GeForce RTX 3060: What is 12 GB good for? Posted: 20 Mar 2021 09:55 PM PDT PC Games Hardware has tried to work out where the GeForce RTX 3060's 12 GB of graphics memory is of practical use today. To that end, they picked game titles that use ray tracing (for heavier use of the graphics memory), but stayed within the resolutions where the card is typically used: Full HD (1080p) and WQHD (1440p), the latter with the help of DLSS. For comparison, the GeForce RTX 2060 with 6 GB and the GeForce RTX 3060 Ti with 8 GB were brought in - the latter being the nominally stronger card, but equipped with the smaller amount of memory. Across the six selected game titles, there is a clear effect in favor of the larger memory configuration: the GeForce RTX 3060 comes out +13% ahead of the (nominally stronger) GeForce RTX 3060 Ti in Full HD, and even +30% ahead in WQHD.
However, this result is driven primarily by two game titles whose sensible use of the extra memory is at least doubtful: under Wolfenstein: Youngblood, the highest texture resolution brings no actual gain in image quality, while under Minecraft RTX more memory is indeed usable for longer view distances. But the latter can be pushed almost indefinitely - no graphics card actually has enough memory for Minecraft RTX in the extreme, so you have to set a limit anyway. Without these two titles, the GeForce RTX 3060 Ti is +25% ahead of the GeForce RTX 3060, at least under Full HD. And since these are all deliberately selected (ray-tracing) titles, the result can also be read the other way: as an indication that, special cases aside, the GeForce RTX 3060 currently does not take advantage of its memory capacity above 8 GB.
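To put rough numbers on the texture question, here is a back-of-the-envelope sketch (my own illustrative arithmetic, not from the article); the per-texel sizes are the standard ones for block-compressed formats, and the texture counts below are made up for illustration:

```c
/* Back-of-the-envelope VRAM cost of one square texture plus its full mip
   chain. Illustrative only - these figures are not from the article. */
#include <stdio.h>

/* bytes_per_texel: 0.5 for BC1, 1.0 for BC7, 4.0 for uncompressed RGBA8 */
static double texture_bytes(int dim, double bytes_per_texel) {
    double total = 0.0;
    for (int d = dim; d >= 1; d /= 2)    /* each mip level halves the edge */
        total += (double)d * d * bytes_per_texel;
    return total;                        /* ~4/3 of the base level's size */
}

int main(void) {
    double mib = 1 << 20;
    printf("2048x2048 BC7: %.1f MiB\n", texture_bytes(2048, 1.0) / mib); /* ~5.3  */
    printf("4096x4096 BC7: %.1f MiB\n", texture_bytes(4096, 1.0) / mib); /* ~21.3 */
    return 0;
}
```

Stepping a hypothetical thousand unique textures from 2K to 4K thus grows this slice of the budget from roughly 5 GiB to over 20 GiB, which is why engines stream textures and why a "higher" texture setting can eat memory without producing visibly sharper output on screen.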
The benchmarks of the GeForce RTX 2060 with only 6 GB of graphics memory then indicate that the aforementioned 8 GB is the current requirement for memory-hungry scenarios even under Full HD. Regardless of the benchmark constellation, the card lands very clearly below its nominal performance - most likely a result of the gap between its existing 6 GB and the required 8 GB. And of course, one can always argue for the GeForce RTX 3060's memory amount on the grounds that you should buy more than is currently necessary, simply for the sake of future-proofing. In this respect, the 12 GB of the GeForce RTX 3060 is never wrong and should serve its buyers well in the long run. The GeForce RTX 3060 Ti and GeForce RTX 3070, on the other hand, will eventually hit a natural, probably memory-induced limit. The only question is whether that point still falls within the card's time with its first user - or whether the card will have been relegated to a second PC with a lighter workload by then.
Sources: 3DCenter.org & PC Games Hardware [link] [comments]
Exclusive: Intel 12th Gen Core "Alder Lake-S" platform detailed - VideoCardz.com Posted: 20 Mar 2021 05:41 AM PDT
Other than space saving, what benefits do package-on-package chips have? Posted: 20 Mar 2021 10:50 PM PDT The processors in the iPhone and most Android phones have the RAM stacked on top of the processor, which makes sense because you have very little space in a phone, but the Raspberry Pi Zero also has this packaging style, as did a few of the regular Raspberry Pi models before they switched to a metal-capped chip. The Raspberry Pi boards are small, but they still have plenty of empty board space, so wouldn't it be a lot easier to just use two separate chips? Also, I thought package-on-package was expensive, so it seems odd that the $5 Pi Zero would use it unless it was absolutely necessary or provided a significant benefit. Is PoP actually not much more expensive compared to 2D routing? Are there more benefits than just space saving? [link] [comments]
[ServeTheHome] Glorious Complexity of Intel Optane DIMMs and Micron Exiting 3D XPoint Posted: 20 Mar 2021 12:09 PM PDT
Xbox Analysis Part 2 - die shot walk-through with I/O focus (TL;DW in the comment section) Posted: 20 Mar 2021 07:38 AM PDT
Does L3 cache usually contain a ton of redundant data? Posted: 20 Mar 2021 06:53 PM PDT The main reason I think that may be the case is that L1 and L2 are usually private to each core, whereas L3 is usually shared. The reasoning goes: say core 1 and core 2 both read some read-only data from RAM and each moves its own copy into its private L1 cache; once that fills up, data gets evicted to L2, and once that fills up, to the shared L3. Since it was read-only data, both copies end up exactly the same and are thus redundant. I wonder if this is the main reason why Threadrippers have such gigantic L3 caches - to compensate for the increased redundancy. As a bonus question: is it actually more optimal to both read and write the same variable, rather than keeping one variable only for reading and one for writing (in cases where that makes sense)? [link] [comments]
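On the bonus question, the relevant mechanism is cache coherence rather than capacity: when one core writes a cache line that another core has cached, the other core's copy is invalidated. A minimal C sketch of the classic "false sharing" effect (my own illustration, not from the post; it assumes 64-byte cache lines, pthreads, and GCC/Clang atomics - compile with `gcc -O2 -pthread`):

```c
/* Two threads each increment their own counter. In the "packed" layout both
   counters almost certainly share one 64-byte cache line, so every write by
   one core invalidates the line in the other core's private cache. The
   "padded" layout gives each counter its own line. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000UL

struct counter {
    _Alignas(64) unsigned long v;   /* one 64-byte cache line per counter */
    char pad[56];
};

static unsigned long packed_ctr[2];  /* adjacent: share a line (false sharing) */
static struct counter padded_ctr[2]; /* padded: one line each */

static void *bump_packed(void *arg) {
    int i = *(int *)arg;
    for (unsigned long n = 0; n < ITERS; n++)
        __atomic_fetch_add(&packed_ctr[i], 1, __ATOMIC_RELAXED);
    return NULL;
}

static void *bump_padded(void *arg) {
    int i = *(int *)arg;
    for (unsigned long n = 0; n < ITERS; n++)
        __atomic_fetch_add(&padded_ctr[i].v, 1, __ATOMIC_RELAXED);
    return NULL;
}

/* Run two threads of fn and return the elapsed wall-clock seconds. */
static double run(void *(*fn)(void *)) {
    pthread_t t[2];
    int idx[2] = {0, 1};
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, fn, &idx[i]);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &b);
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void) {
    printf("packed (shared line): %.2f s\n", run(bump_packed));
    printf("padded (own lines):   %.2f s\n", run(bump_padded));
    return 0;
}
```

On most multi-core machines the padded run finishes noticeably faster. The same coherence traffic is why separating a frequently read variable from a frequently written one can pay off: readers keep a stable copy of the read-only line in their private caches instead of having it invalidated on every write.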
Are type 1113-x flash chips actually used? Posted: 20 Mar 2021 11:49 PM PDT As far as I know, type 1113-x is the smallest standard single-chip SSD, at only 11.5 by 13 mm. I couldn't find anything about it other than one tech news post, and no chips with the keyword "1113-x" at all on Aliexpress or Alibaba. Just curious, is it just a paper standard that isn't used, or are there actually full-blown single-chip SSDs this small in common use? [link] [comments]
Ian Interviews #4: Forrest Norrod, AMD SVP Datacenter Posted: 20 Mar 2021 01:14 PM PDT
MSI MEG Z590 ACE Review Posted: 20 Mar 2021 07:45 PM PDT
RDNA 2 vs. RDNA vs. GCN: IPC and CU scaling put to the test Posted: 20 Mar 2021 03:57 AM PDT
Intel introduces Adaptive Boost Technology for Core i9-11900K and Core i9-11900KF Posted: 20 Mar 2021 02:35 PM PDT
How much lead time does it take to change the silicon on an integrated circuit? Posted: 20 Mar 2021 12:07 PM PDT I was thinking about Nvidia's claims that the cryptocurrency restrictions on some GPUs are implemented at the hardware level, making them unhackable. Wouldn't you have to change the silicon mask pattern and redo all verification testing to make sure nothing else is broken? Doesn't that take months if not years - far too slow to react to rapid market changes? [link] [comments]
Pat Gelsinger Will Get Intel ‘Revved Up Again:’ Michael Dell Posted: 19 Mar 2021 10:08 AM PDT