• Breaking News

    Wednesday, June 9, 2021

    Hardware support: NVIDIA GeForce RTX 3070 Ti Review Megathread


    NVIDIA GeForce RTX 3070 Ti Review Megathread

    Posted: 09 Jun 2021 05:58 AM PDT

    For CUSTOM MODELS, you are free to submit reviews as link posts rather than in this thread.

    Please note that any reviews of the 3070 Ti should be discussed in this thread, barring special cases (please consult the moderators through modmail if you think a review warrants a separate post). This post will be updated periodically over the next 2-3 days.

    Written Reviews:

    BabelTech

    Eurogamer / Digital Foundry

    Hexus

    HotHardware

    Guru3D

    KitGuru

    OC3D

    PCMag

    PC World

    Techpowerup

    Tom's Hardware

    Written reviews in other languages:

    ComputerBase (in German)

    Golem (in German)

    Hardwareluxx (in German)

    Igor's Lab (in German)

    PC Watch (in Japanese)

    Tweakers (in Dutch)

    XFastest (in Traditional Chinese)

    Videos:

    Der8auer

    Eurogamer / Digital Foundry

    Gamers Nexus

    Hardware Unboxed

    Igor's Lab (in German)

    JayzTwoCents

    KitGuru

    LTT

    Paul's Hardware

    Tech Yes City

    submitted by /u/Nekrosmas
    [link] [comments]

    SK Hynix Admits that a Batch of its DRAM Wafers is Defective, Downplays Scale of the Problem

    Posted: 09 Jun 2021 03:35 AM PDT

    Samsung Preps PCIe 4.0 and 5.0 SSDs With 176-Layer V-NAND

    Posted: 09 Jun 2021 08:39 AM PDT

    Anandtech: "Xilinx Expands Versal AI to the Edge: Helping Solve the Silicon Shortage"

    Posted: 09 Jun 2021 07:00 AM PDT

    Faildozer vs. NetBust - two biggest CPU architecture failures analyzed

    Posted: 08 Jun 2021 10:56 PM PDT

    TLDR at bottom!

    Well, we all know the two really big flops in terms of CPU design: AMD's ill-fated Bulldozer module-based design, and Intel's nearly decade-long quest for clockspeed that was P4/NetBurst. But which one was actually worse, versus its competition at the time?

    AnandTech's review of the original Pentium 4 showed the 1.4GHz P4 just barely keeping its nose above Intel's 1GHz P3 in most tests, while AMD's competing 1.2GHz Athlon sat a good way ahead of the 1.5GHz NetBurst-based chip. Not a great start for the Pentium 4. It gained a bit of ground in specialized workloads, like those using SSE2, but on the whole it was just a smidgen ahead of its predecessors and barely competing with AMD's parts. All this while requiring a new motherboard and even a new power supply - remember the P4 connector?

    The architecture itself was slow for two main reasons: a shitty branch prediction scheme and a very long pipeline. That is a terrible combination on many levels.

    A quick explanation of those two things: Branch prediction is essentially the processor trying to guess what the workload is going to throw at it next. It makes a guess and loads up the pipeline with the work it thinks is coming. If the guess is wrong, though, the pipeline has to be flushed - essentially throwing away all the time spent filling it. NetBurst's branch prediction was supposed to be much better than that of its predecessors and the competition, but it clearly wasn't.
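
    If you want to see the effect yourself, here's a minimal sketch in C (the function name sum_over_threshold, the array size, and the repeat count are made up for illustration, and the timings have nothing to do with the article). The same loop runs over shuffled data, where the branch is a coin flip the predictor can't learn, and then over sorted data, where the branch pattern is trivially predictable. On most hardware the sorted pass runs noticeably faster even though the work is identical.

        /* Illustrative only - names and numbers are assumptions, not measurements. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N (1 << 20)

        /* Sum every element at or above the threshold; the if() is the
         * branch the predictor has to guess on every iteration. */
        static long sum_over_threshold(const int *v, int n, int threshold)
        {
            long sum = 0;
            for (int i = 0; i < n; i++)
                if (v[i] >= threshold)
                    sum += v[i];
            return sum;
        }

        static int cmp_int(const void *a, const void *b)
        {
            int x = *(const int *)a, y = *(const int *)b;
            return (x > y) - (x < y);
        }

        int main(void)
        {
            static int data[N];
            for (int i = 0; i < N; i++)
                data[i] = rand() % 256;            /* random data: branch is unpredictable */

            long s1 = 0, s2 = 0;
            clock_t t0 = clock();
            for (int r = 0; r < 100; r++)
                s1 += sum_over_threshold(data, N, 128);
            clock_t t1 = clock();

            qsort(data, N, sizeof(int), cmp_int);  /* sorted data: branch becomes predictable */
            clock_t t2 = clock();
            for (int r = 0; r < 100; r++)
                s2 += sum_over_threshold(data, N, 128);
            clock_t t3 = clock();

            printf("shuffled: %ld clocks (sum %ld)\n", (long)(t1 - t0), s1);
            printf("sorted:   %ld clocks (sum %ld)\n", (long)(t3 - t2), s2);
            return 0;
        }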

    The pipeline is what actually executes the work the CPU needs to do. The longer it is, the higher the clock speed that can be achieved. Intel's P3 pipeline was 10 stages; the P4's was 20. So why not push it to 100, or even 1,000 stages and run at some ridiculous number like a THz? Well, again... if that branch predictor screws up, the pipeline has to be flushed and refilled. That takes time - and the longer the pipeline, the more time it takes. See where this is going?
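
    A back-of-the-envelope model makes the trade-off concrete. The numbers below (20% branches, a 5% miss rate, a flush costing roughly one pipeline refill) are assumptions picked for illustration, not measured P3/P4 figures, and effective_cpi is just a made-up helper:

        /* Toy model: longer pipeline -> bigger refill penalty per mispredict.
         * All inputs are illustrative assumptions. */
        #include <stdio.h>

        static double effective_cpi(double base_cpi, double branch_fraction,
                                    double miss_rate, int refill_cycles)
        {
            /* each mispredicted branch costs roughly one pipeline refill */
            return base_cpi + branch_fraction * miss_rate * refill_cycles;
        }

        int main(void)
        {
            double short_pipe = effective_cpi(1.0, 0.20, 0.05, 10); /* ~P3-length pipeline */
            double long_pipe  = effective_cpi(1.0, 0.20, 0.05, 20); /* ~P4-length pipeline */
            printf("10 stages: %.2f cycles/instr\n", short_pipe);
            printf("20 stages: %.2f cycles/instr\n", long_pipe);
            /* the deeper pipeline needs a higher clock just to break even */
            return 0;
        }

    Even with these gentle assumptions, the 20-stage design gives up nearly 10% of its per-clock throughput to mispredicts alone - which is exactly the gap a higher clock had to buy back.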

    Because of those factors - or maybe the clock-speed chase was the driving reason for those factors in the first place? - clock speed had to be pushed WAY up, at the expense of power consumption. Sound familiar?

    The result was an architecture with the worst relative IPC we'd ever seen. If I had to sum up NetBurst as a person, it would be an extremely fat guy who for some reason showed up to tryouts for the track team. Consumes a ton, sweats even more, and is only slightly faster than an unusually lethargic sloth.

    But what about Bulldozer?

    We're going to use AnandTech here again. Keeping it to the first-generation parts, like we did with the P4: the FX-8150 (a quad-module, eight-core chip running at 3.6GHz) lost badly to Sandy Bridge in single-threaded Cinebench R10, and even dropped a bit behind the lower-clocked Phenom IIs (Bulldozer's predecessor). It clawed some of that back in the multi-threaded benchmark, where it was able to beat the i5-2500K that was its main competition. That continued to be the theme - AMD could compete reasonably in multi-threaded workloads, but at the expense of single-threaded performance that was, to speak candidly, a dumpster fire.

    Diving deeper into the architecture: there was an interesting concept here - sharing lesser-used resources between two cores to pack more of them into one package. It sounded like a great idea at the time, and IMO it's still an interesting concept. AMD technically still uses part of it - the L3 cache is shared between all 8 cores in a Zen 3 CCD. But back to Bulldozer: the FP unit, L2 cache, and fetch/decode were shared within a module, while each integer "core" had its own L1 data cache and four pipelines.
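
    To make the sharing concrete, here is a conceptual sketch of a module as plain C structs. It is only meant to show which resources sat at module level and which were private to each integer core - the struct and field names are my own shorthand, not AMD's terminology:

        /* Conceptual layout only - not a hardware model. */
        struct integer_core {
            unsigned l1d_cache_kb;       /* private L1 data cache */
            unsigned pipelines;          /* four integer pipelines per core */
        };

        struct bulldozer_module {
            /* shared between the two integer cores in a module */
            unsigned fetch_decode_units;
            unsigned fp_unit;            /* one FPU serves both cores */
            unsigned l2_cache_kb;
            struct integer_core cores[2];
        };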

    Back to the pipelines for a moment. Phenom II had 12 stages, a reasonable number. Bulldozer's pipeline had 20 stages. Uh-oh. Isn't that what the P4 had? Yep. Seeing a theme here? Both of these architectures ran a 20-stage pipeline.

    As for branch prediction, that other factor in straight-line IPC, Bulldozer's was actually decent. It was decoupled from the pipeline - meaning that even when the pipeline did need to be flushed, those clock cycles weren't wasted as far as the branch predictor was concerned; it could keep running ahead and looking for the correct target. This would in theory improve IPC, and it undoubtedly did - the single-threaded performance, while bad, could have been much worse.

    AMD was able to keep power consumption relatively in check - 125W wasn't unheard of for enthusiast chips back then. Phenom II burned just a touch less than that, after all, and trailed by around the same margin on average in multi-threaded workloads, so unlike with the P4, efficiency at least didn't go backwards.

    Thermals were also reasonably well controlled. The chip used solder TIM, so that was a plus. The power consumption was higher than Intel's, so the chip was naturally a bit hotter, but it wasn't P4 levels.

    Bulldozer introduced some interesting concepts - notably the module design - but it also introduced a whole host of problems that stopped it from becoming a mainstream success. Finally, the analogy to a person. Since we're now dealing with a multi-core design, I'm going to sum up Bulldozer as a bunch of extremely fat guys showing up for a track meet, all running at around the same pace - again, that of a lethargic sloth. The total distance they cover is greater, sure, but if one of those guys had to go up against a star sprinter, who do you think comes out with the W?

    Conclusion:

    Bulldozer sucks. It really does. But there are more factors to consider here: better thermals, reasonable power consumption, and the ability to use existing AM3+ motherboards. And those who really wanted the most multi-threaded performance for the dollar - people doing rendering at the time, for instance - had a legitimate alternative. The FX-8100 was the cheapest 8-core chip you could get at the time, and someone with a little knowledge and decent cooling could OC it to 4.5GHz or even higher and possibly compete with the much pricier i7-2600K. So yes, these chips, and the architecture as a whole, were failures - hence the name Faildozer - but they had a niche. A small one, but it was there.

    NetBurst, on the other hand, is an absolute clusterf*** of an architecture. Shitty performance was a staple of Bulldozer too, yes, but here you've got other things to consider: extreme power consumption, extreme thermals, and the requirement for an entirely new PSU as well as a new motherboard. Not to mention Intel's shady and anti-competitive business practices at the time. The P4 had a few niche wins - workloads built around SSE2 instructions, for instance - but on the whole, it was a swift kick in the balls to anyone who actually bought one expecting a decent CPU.

    So Pentium 4... Fuck you! You're the winner, or the loser, depending on how one wants to look at it. The worst of the worst. NetBurst will go down in history as the slowest, the hottest, and the hungriest architecture of all time.

    TLDR: NetBust loses. Bulldozer may have been bad, but Pentium 4 was... I'm out of superlatives.

    submitted by /u/rip-droptire
    [link] [comments]

    Before Beta: Sony's 1969 "Camcorder"

    Posted: 09 Jun 2021 01:54 AM PDT

    Ultra-high-density hard drives made with graphene store ten times more data and are more durable

    Posted: 09 Jun 2021 02:16 AM PDT

    What, if anything, do we know about NVIDIA's Lovelace arch?

    Posted: 09 Jun 2021 11:13 AM PDT

    I know that it tops out at 144 SMs, and that apparently the super switch is supposed to be powered by a derivative, but what else?

    submitted by /u/Jeep-Eep
    [link] [comments]

    Some of macOS Monterey’s best features aren’t coming to Intel Macs

    Posted: 09 Jun 2021 10:23 AM PDT
