Hardware support:
- TFTCentral: "HDMI 2.1a Certification Announced Including New Source-Based Tone Mapping (SBTM) Feature"
- VideoCardz: "MSI GeForce RTX 3090 Ti SUPRIM X to launch on January 27th, a leaked document confirms"
- Anandtech: "Intel Alder Lake DDR5 Memory Scaling Analysis With G.Skill Trident Z5"
- Question about cache impact on gaming performance when looking at both intel and amd.
- "Samsung Electronics Delivers Premium HDR Gameplay With HDR10+ GAMING Standard Support for Its New Screens"
TFTCentral: "HDMI 2.1a Certification Announced Including New Source-Based Tone Mapping (SBTM) Feature" Posted: 23 Dec 2021 05:25 AM PST
VideoCardz: "MSI GeForce RTX 3090 Ti SUPRIM X to launch on January 27th, a leaked document confirms" Posted: 23 Dec 2021 06:46 AM PST
Anandtech: "Intel Alder Lake DDR5 Memory Scaling Analysis With G.Skill Trident Z5" Posted: 23 Dec 2021 06:02 AM PST
Question about cache impact on gaming performance when looking at both Intel and AMD. Posted: 23 Dec 2021 07:19 AM PST

Hello there! I've been scratching my head for days about the following:

Looking at benchmarks and tests conducted by Hardware Unboxed, it is shown that cache has a great impact on performance. The tests Steve conducted involved disabling some cores on the higher-end chips to compare performance and observing how it scales on CPUs with more L3 cache. The results look very consistent and indicate that an increase in L3 cache improves gaming performance. There are also articles and presentations about "data-oriented design" for game development which show that optimizing the use of cache results in better performance in game engines. So it is undeniable that more cache (or better use of it) implies more performance.

Then here's my question: why do some AMD processors that have double the amount of cache compared to Intel's counterpart end up performing so similarly? I mean, we are not talking about a small difference; we are talking about 16 vs 32 or 25 vs 64 MB. What's going on? Is it that the CCD organization ends up splitting the cache or something like that? I am guessing that a single core, for instance, does not have access to the full amount of cache. Is that it?
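A rough way to see the working-set effect the question is getting at is a microbenchmark that touches buffers of different sizes in random order: once the buffer outgrows the last-level cache, the average access time rises. This is only an illustrative sketch (the sizes and iteration counts are arbitrary assumptions, and CPython's interpreter overhead hides much of the effect; a compiled language shows it far more clearly):

```python
import time
import random

def time_per_access(size_bytes, n_accesses=200_000, seed=0):
    """Average time per random access into a buffer of the given size.

    Random indices defeat the hardware prefetcher, so the timing
    reflects where the working set lives in the cache hierarchy.
    """
    rng = random.Random(seed)
    buf = bytearray(size_bytes)
    n = len(buf)
    total = 0
    t0 = time.perf_counter()
    for _ in range(n_accesses):
        total += buf[rng.randrange(n)]
    elapsed = time.perf_counter() - t0
    return elapsed / n_accesses

# A working set that fits comfortably in L2 vs. one that spills
# past a typical 32 MB L3 cache.
for size in (256 * 1024, 64 * 1024 * 1024):
    ns = time_per_access(size) * 1e9
    print(f"{size >> 10:>6} KiB: {ns:.1f} ns/access")
```

On most machines the 64 MiB run shows a higher per-access time than the 256 KiB run, but the exact numbers vary widely by CPU, and the gap shrinks when per-access overhead (here, the interpreter) dominates, which is one reason doubling L3 does not double game performance.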
You are subscribed to email updates from /r/hardware: a technology subreddit for computer hardware news, reviews and discussion.