• Breaking News

    Tuesday, December 29, 2020

    (GamersNexus) "Don't Buy a "Gaming Chair" - Office Chair vs. Gaming Chair Round-Up & Review" on YouTube

    Posted: 28 Dec 2020 05:54 PM PST

    (AHOC/Buildzoid) Almost DDR4-6000! on air cooling feat. Crucial Ballistix and MSI

    Posted: 28 Dec 2020 10:16 PM PST

    Intel Xe Graphics Are Looking Great On Linux 5.11 With Nice Performance Uplift

    Posted: 28 Dec 2020 09:01 AM PST

    [VideoCardz] AMD Ryzen 9 5900H is 25% faster than Ryzen 9 4900H in single-core Geekbench benchmark

    Posted: 29 Dec 2020 12:33 AM PST

    GPUs Across Generations & Price Ranges: Comparing Inter & Intra Generational Performance Gains Since 2014

    Posted: 28 Dec 2020 12:06 PM PST

    Hey,

    With today's dramatic GPU market and the ample claims of "best generational advances ever", I wanted to see, in an easy-to-compare manner, whether these claims are exaggerated and, if so, to what degree. Further, as the happy owner of a GTX 1080 Ti, I was intrigued to see just how much of an improvement today's offerings are, and whether they thus make for worthy upgrades.

    To do so, I spent a couple of lazy afternoons plotting three charts (a minimal sketch of the plotting approach follows the lists below), comparing GPUs from the following families:

    • Maxwell
    • Pascal
    • Vega
    • Turing
    • RDNA
    • Ampere, and
    • RDNA2

    across five price categories:

    1. $349-$399
    2. $449-$499
    3. $549-$599
    4. $649-$799, and
    5. $1k+

    and spanning three resolutions:

    1. 1080p
    2. 1440p
    3. 4K
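
    As mentioned above, here is a minimal sketch of how one of these charts could be plotted (in Python with matplotlib); the GPU names and values below are placeholders, not the data behind the linked images:

        # Bar chart of relative performance for one price bracket at one
        # resolution; the numbers are illustrative placeholders only.
        import matplotlib.pyplot as plt

        gpus = ["GTX 970", "GTX 1070", "RTX 2070S", "RTX 3070"]  # one bracket
        relative_perf = [35, 60, 78, 100]  # percent of the bracket's fastest card

        plt.bar(gpus, relative_perf)
        plt.ylabel("Average 1440p performance (%)")
        plt.title("Generational gains in one price bracket (placeholder data)")
        plt.tight_layout()
        plt.savefig("gen_gains_1440p.png")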

    I'd like to share these charts with the community at large to spur discussion and to help whoever finds them useful. Here they are, with some details and my conclusions below:

    Average 1080p Performance: https://imgur.com/11OseY8

    Average 1440p Performance: https://imgur.com/WCNA7DU

    Average 4K Performance: https://imgur.com/C2WCsHa

    All performance figures have been taken from Hardware Unboxed's RX 6900 XT review, which averages GPU performance across 18 games at these resolutions at High settings. For those GPUs not on that list, TechPowerUp's GPU database has been used to estimate their numbers. I've verified that HUB's and TPU's numbers are within the margin of error by cross-referencing the GPUs found in both sources.
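
    A minimal Python sketch of that cross-referencing step, with placeholder relative-performance values rather than HUB's or TPU's actual data:

        # Compare two sources' relative-performance numbers (percent of a
        # reference card) and flag any pair outside a chosen tolerance.
        # All values are illustrative placeholders.
        HUB = {"RTX 3080": 95.0, "RTX 3070": 78.0, "RX 6800 XT": 93.0}
        TPU = {"RTX 3080": 96.0, "RTX 3070": 77.0, "RX 6800 XT": 94.0}

        TOLERANCE = 3.0  # percentage points treated as within margin of error

        for gpu in sorted(HUB.keys() & TPU.keys()):
            delta = abs(HUB[gpu] - TPU[gpu])
            status = "OK" if delta <= TOLERANCE else "MISMATCH"
            print(f"{gpu}: HUB={HUB[gpu]} TPU={TPU[gpu]} delta={delta:.1f} {status}")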

    Conclusions:

    1. Turing was bad. Really bad. Almost 2.5 years after Pascal, it offered barely 15% gains at and below $500 while offering nearly nothing above. To cement its place in GPU history hell, it also introduced the $1000+ MSRP category for non-Titan GPUs.
    2. Turing being so bad played a key role in Ampere's narrative, with many heralding Ampere as God-sent and swallowing Jensen's claims of Ampere being the best as if they were gospel.
    3. This is not to say that Ampere is bad; it's quite good, but certainly not the "greatest generational leap ever".
    4. The charts themselves demonstrate this with Pascal: the gains brought by the GTX 1070 and GTX 1080 Ti over Maxwell consistently outpace their Ampere counterparts' gains over Turing.
    5. Ampere performs better the lower you go in its price stack and the higher you go in resolution: the 3060 Ti's gain over the 2060S is greater than the 3070's gain over the 2070S, which is greater than the 3080's gain over the 2080S, which is greater than the 3090's gain over the 2080 Ti. Similarly, this lead increases with resolution.
    6. While Ampere outperforms its predecessors by more at higher resolutions, it's only just about good enough for 4K gaming, with the 3080 and 3090 just about attaining the 100 FPS mark.
    7. For a GPU with thrice the raw core/shader count, manufactured on half the node (8 nm vs 16 nm) and still guzzling more power, the 3080 is just not very impressive against the 3.5-year-older GTX 1080 Ti, failing to be twice as fast on average at any resolution (see the sketch after this list).
    8. RDNA1 was a respectable gain over Vega, but RDNA2 is really good.
    9. Again, RDNA2 is very impressive, matching Nvidia's performance across every price category it competes in and justifying its price tag.
    10. The only significant advantage Nvidia possesses over RDNA2 at this stage is DLSS, and while AMD has promised its own implementation of such technology soon, do not buy current hardware based on promises of future feature releases: just ask Vega owners how they're enjoying their Draw Stream Binning Rasterizer and Rapid Packed Math advantages!
    11. Nvidia relies heavily on DLSS at 4K, especially for newer titles such as Cyberpunk.
    12. For 16:9 monitors, 2560x1440 is the ideal resolution today, hitting the sweet spot on pricing, size, variety and pixel density.
    13. Ultrawide (21:9) resolutions, specifically 2560x1080 and 3440x1440, are excellent because of their immersive sizes and their intermediate positioning between 1080p and 1440p, and between 1440p and 4K, respectively, providing strong immersion while not being as demanding as 4K.
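
    As referenced in point 7, the gain figures above are plain relative uplifts; a minimal Python sketch, with placeholder FPS values rather than the actual chart data:

        def uplift(new_fps: float, old_fps: float) -> float:
            """Percent performance gain of a newer GPU over its predecessor."""
            return (new_fps / old_fps - 1.0) * 100.0

        # Illustrative placeholders only, not HUB's measurements:
        print(f"3080 vs 1080 Ti: {uplift(98.0, 55.0):.0f}%")  # ~78%, short of 2x
        print(f"3060 Ti vs 2060S: {uplift(75.0, 60.0):.0f}%")  # 25%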

    Anything I missed? Do share your views below.

    submitted by /u/AbheekG
    [link] [comments]

    More evidence of HBM in Sapphire Rapids

    Posted: 28 Dec 2020 11:29 AM PST

    [VideoCardz] NVIDIA AD102 (Lovelace) GPU rumored to offer up to 18432 CUDA cores

    Posted: 28 Dec 2020 02:46 AM PST

    Wouldn't a 128-core Apple GPU become the fastest GPU in 2021?

    Posted: 29 Dec 2020 12:27 AM PST

    By now, we all know about Bloomberg's report that Apple is testing a 128-core GPU for the Mac Pro.

    Simple extrapolation from the M1's 8-core, 2.6 TFLOPS iGPU suggests that a 128-core Apple GPU would reach 41.6 TFLOPS. That would make it faster than the fastest current Nvidia Ampere card.

    This assumes that performance scales close to linearly with GPU core count, which it tends to do.
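
    For reference, the back-of-the-envelope math under that linear-scaling assumption, in Python:

        # Linear extrapolation from the M1's figures (8 cores, 2.6 TFLOPS)
        # to the rumored 128-core part:
        m1_cores, m1_tflops = 8, 2.6
        rumored_cores = 128
        print(rumored_cores / m1_cores * m1_tflops)  # 41.6 TFLOPS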

    Now, Apple will likely make this a dedicated card that goes in the Mac Pro, which likely means Apple can increase the wattage greatly and raise clocks above the M1's. It may be faster than 41.6 TFLOPS if clocks increase. This is all speculation, of course.

    Another caveat: we all know TFLOPS can't really be compared between different architectures, as RDNA2 is much more efficient than GCN at the same TFLOPS. But since we don't have that information for Apple, we can only assume that Apple's TFLOPS are roughly comparable to Nvidia's.

    I wasn't surprised by the M1's CPU performance because we've known for years that Apple's CPU designs are world-class. I'm thoroughly surprised that Apple is being so aggressive with its GPU goals so soon.

    Ok, we know: "closed system", "Apple tax", "iSheep", and all the other labels the DIY community tends to attach to Apple products.

    I hope we can focus the discussion on the potential of Apple's future GPUs themselves. It's exciting that Apple is coming out swinging at Nvidia's very best.

    submitted by /u/senttoschool
    [link] [comments]

    [VideoCardz] Intel Alder Lake-S 16-core and 24-thread CPU appears on Geekbench

    Posted: 29 Dec 2020 02:03 AM PST

    Overview of USB Encoder Cards used for Custom Hardware in Simulators

    Posted: 28 Dec 2020 08:33 AM PST

    Issues with using a Laptop without Battery?

    Posted: 29 Dec 2020 01:45 AM PST

    Hello. I recently removed the battery from my laptop because it was no longer supplying power properly, and I've ordered a new one, which will take a week or two to arrive. In the meantime I need the laptop for a project and some other work, so I'm using it without the battery. So far it's working fine, as I'm only running some coding and internet software for short periods.

    But can I play games on it without the battery? My laptop is a Dell Inspiron 15 series, 2019 model.

    Are there any issues with that?

    submitted by /u/YashuC
    [link] [comments]

    Clock Tuner for Ryzen 2.0 gets Ryzen 5000 support and HYBRID OC mode, available end of January - VideoCardz.com

    Posted: 28 Dec 2020 03:09 AM PST

    Why were Multi-Phase Clocks ditched?

    Posted: 28 Dec 2020 02:05 PM PST

    If we used these multiple pulses to drive each individual CPU core, it would theoretically be easier to improve the performance of single-threaded processes by making each core "take turns": with 4 cores and a 4-phase clock, performance would be about 3.2 times better (theoretically 4x, but closer to 3x in real life).
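
    As a toy model of that "take turns" arithmetic in Python (the ~80% hand-off efficiency is implied by the post's own 3.2x figure, not a measured value):

        # N cores driven by an N-phase clock could theoretically retire N results
        # per base clock period; hand-off overhead erodes the real-world gain.
        def effective_speedup(phases: int, handoff_efficiency: float) -> float:
            return phases * handoff_efficiency

        print(effective_speedup(4, 0.8))  # 3.2, matching the post's estimate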

    I'm talking about this: Wikipedia/Clock signal/Two-phase clock

    submitted by /u/Rudxain
    [link] [comments]

    CIPA's October report shows camera market has mostly recovered from its COVID-19 downturn: Digital Photography Review

    Posted: 28 Dec 2020 06:53 AM PST

    Isro eyeing new chip unit as more firms take to skies

    Posted: 28 Dec 2020 06:40 AM PST

    [Hardware Busters / YouTube] The Building of a Chassis Loader - Making the ultimate tool for Chassis Reviews

    Posted: 28 Dec 2020 08:45 AM PST

    How would you feel if a $750 MacBook were faster than any Windows laptop in the world?

    Posted: 28 Dec 2020 07:33 PM PST

    Reports suggest that Apple will release an "affordable" MacBook in 2022.

    Suppose Apple releases a $750 MacBook SE with its latest SoC (the M2) in 2022. The M2 line is likely to retain Apple's crown for the fastest single-core performance available in a laptop, along with the fastest iGPU, multi-threaded performance competitive with the fastest AMD/Intel laptop chips, a dedicated AI neural engine, dedicated hardware encoders/decoders, and the other accelerators Apple builds into its SoCs.

    How would you feel as a Windows user? Would you feel like you were paying more for less if you bought a Windows laptop instead?

    There's a precedent for this: the iPhone SE has an MSRP of $400, and the A13 chip inside it is still faster than any Android SoC in the world. It's highly likely we'll see a repeat of this in the laptop world, as Apple's entry-level M1 is already faster than the fastest chips AMD/Intel have to offer in laptops.

    submitted by /u/senttoschool
    [link] [comments]
