• Breaking News

    Tuesday, November 17, 2020

    Hardware support: AMD RX 6800 Series Unboxing Thread

    AMD RX 6800 Series Unboxing Thread

    Posted: 16 Nov 2020 09:31 AM PST

    Apologies for being hours late - none of us were free to make the thread earlier.

    Because unboxing videos are usually popular but don't fit our subreddit's theme, this thread is for all things related to the unboxing videos. Please use this thread for discussion of ALL unboxing info - other leaks are of course excluded.

    HardwareCanucks

    HardwareUnboxed

    Jayz2Cents

    Level1Techs

    Optimum Tech (ITX test fit)

    Paul's Hardware

    Short Circuit / LTT

    Tech YES City

    submitted by /u/Nekrosmas
    [link] [comments]

    AMD says Smart Access Memory isn't proprietary, it's just that it only works on AMD hardware right now

    Posted: 16 Nov 2020 08:17 AM PST

    Worn-out NAND flash blamed for Tesla vehicle gremlins, such as rearview cam failures and silenced audio alerts

    Posted: 16 Nov 2020 08:31 PM PST

    AMD Radeon RX 6800 XT & RX 6800 launch stock expected to be almost as bad as RTX 3080

    Posted: 16 Nov 2020 05:50 PM PST

    M1 Mac Benchmarks

    Posted: 16 Nov 2020 01:33 PM PST

    It looks like the first M1 based Macs have been delivered. Here are the benchmarks that I've found.

    Cinebench R23 Single/Multi core (ARM native) (MacBook Pro 13"): 1498/7508

    Geekbench Single/Multi core (ARM native) (MacBook Pro 13"): 1745/7308

    Various FCP benchmarks: English translations

    submitted by /u/AWildDragon
    [link] [comments]

    MSI RTX 3080 Ventus - misleading/false advertising

    Posted: 16 Nov 2020 06:54 PM PST

    [x-posting from /r/nvidia]

    All right, this might sound a bit nitpicky but I believe I have sound arguments.

    The MSI 3080 Ventus page advertises "Core Pipe" cooler technology.

    The MSI 3080 Gaming X Trio advertises the same "Core Pipe" cooler technology with the same description and an additional graphic.

    I believe that these graphics and descriptions would lead a reasonably informed person to assume that both of these heatsinks make use of machined, flat-bottom heatpipes in direct contact with the GPU die.

    Here is the MSI 3080 Gaming X Trio heatsink (taken from TechPowerUp's review). The machined, flat-bottom heatpipes are clearly visible.

    Here is my MSI 3080 Ventus' heatsink. It looks like it has a polished aluminum coldplate with heatpipes running beneath it. I cannot be certain if the heatpipes underneath are machined or not. Inconclusive pictures: 1, 2.

    In the grand scheme of things I'm not sure this matters that much to the average consumer, but I am disappointed because I wanted to try out liquid metal TIM which I can do with machined copper heatpipes, but cannot do with an aluminum coldplate.

    EDIT: From MSI's livestream, the phrase "Core Pipes" directly references the direct-contact heatpipes. Link

    submitted by /u/Last_Jedi
    [link] [comments]

    The fallacy of ‘synthetic benchmarks’

    Posted: 17 Nov 2020 01:25 AM PST

    Preface

    Apple's M1 has caused a lot of people to start talking about and questioning the value of synthetic benchmarks, as well as other (often indirect or badly controlled) information we have about the chip and its predecessors.

    I recently got in a Twitter argument with Hardware Unboxed about this very topic, and given it was Twitter you can imagine why I feel I didn't do a great job explaining my point. This is a genuinely interesting topic with quite a lot of nuance, and the answer is neither 'Geekbench bad' nor 'Geekbench good'.

    Note that people have M1s in hand now, so this isn't a post about the M1 per se (you'll have whatever metric you want soon enough), it's just using this announcement to talk about the relative qualities of benchmarks, in the context of that discussion.

    What makes a benchmark good?

    A benchmark is a measure of a system, the purpose of which is to correlate reliably with actual or perceived performance.
    That's it. Any benchmark which correlates well is Good. Any benchmark that doesn't is Bad.

    There is a common conception that 'real world' benchmarks are Good and 'synthetic' benchmarks are Bad. While there is certainly a grain of truth to this, as a general rule it is wrong. In many respects, as we'll discuss, the dividing line between 'real world' and 'synthetic' is entirely illusory, and good synthetic benchmarks are specifically designed to tease out precisely those factors that correlate with general performance, whereas naïve benchmarking can produce misleading or unrepresentative results even if you are only benchmarking real programs. Most synthetic benchmarks even include what are traditionally considered real-world workloads, like SPEC 2017 including the time it takes for Blender to render a scene.

    As an extreme example, large file copies are a real-world test, but a 'real world' benchmark that consists only of file copies would tell you almost nothing general about CPU performance. Alternatively, a company might know that 90% of their cycles are in a specific 100-line software routine; testing that routine in isolation would be a synthetic test, but it would correlate almost perfectly for them with actual performance.

    On the other hand, it is absolutely true there are well-known and less-well-known issues with many major synthetic benchmarks.

    Boost vs. sustained performance

    Lots of people seem to harbour misunderstandings about instantaneous versus sustained performance.

    Short workloads capture instantaneous performance, where the CPU has the opportunity to boost up to frequencies higher than the cooling can sustain. This is a measure of peak or burst performance, and is affected by boost clocks. In this regime you are measuring the CPU at the absolute fastest it is able to run.

    Peak performance is important for making computers feel 'snappy'. When you click an element or open a web page, the workload takes place over a few seconds or less, and the higher the peak performance, the faster the response.

    Long workloads capture sustained performance, where the CPU is limited by the ability of the cooling to extract and remove the heat it is generating. Almost all the power a CPU uses ends up as heat, so the cooling imposes an almost completely fixed power limit. Given a sustained load, and two CPUs using the same cooling, both of which are hitting the power limit defined by the quality of the cooling, you are measuring performance per watt at that wattage.

    Sustained performance is important for demanding tasks like video games, rendering, or compilation, where the computer is busy over long periods of time.

    Consider two imaginary CPUs; let's call them Biggun and Littlun. Biggun might be faster than Littlun in short workloads, because Biggun has a higher peak performance, but Littlun might then be faster in sustained workloads, because Littlun has better performance per watt. Remember, though, that performance per watt is a curve, and peak power draw also varies by CPU. Maybe Littlun uses only 1 Watt and Biggun uses 100 Watts, so Biggun still wins at 10 Watts of sustained power draw, or maybe Littlun can boost all the way up to 10 Watts, but is especially inefficient when doing so.
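
    As a toy illustration of that last point, here is a minimal sketch with made-up performance/power curves (not measurements of any real CPU) showing how one chip can win the burst comparison yet lose once both chips are capped by the same cooler:

        # Hypothetical perf(power) curves for the two imaginary CPUs above.
        # The numbers are invented purely to illustrate how the winner can flip.

        def littlun_perf(watts):
            # Very efficient at low power, but cannot draw more than 10 W.
            return 100 * min(watts, 10) ** 0.5

        def biggun_perf(watts):
            # Less efficient per watt, but keeps scaling up to 100 W.
            return 40 * min(watts, 100) ** 0.7

        # Burst comparison: each chip at its own maximum power draw.
        print("Peak:", round(littlun_perf(10)), "vs", round(biggun_perf(100)))  # Biggun wins

        # Sustained comparison: both chips limited by the same 10 W cooler.
        print("10 W:", round(littlun_perf(10)), "vs", round(biggun_perf(10)))   # Littlun wins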

    In general, architectures designed for lower base power draw (eg. most Arm CPUs) do better under power-limited scenarios, and therefore do relatively better on sustained performance than they do on short workloads.

    On the Good and Bad of SPEC

    SPEC is an 'industry standard' benchmark. If you're anything like me, you'll notice pretty quickly that this term fits both the 'good' and the 'bad'. On the good side, SPEC is an attempt to satisfy a number of major stakeholders, who have a vested interest in a benchmark that is something they, and researchers generally, can optimize towards. The selection of benchmarks was not arbitrary, and the variety captures a lot of interesting and relevant facets of program execution. Industry still uses the benchmark (and not just for marketing!), as does a lot of unaffiliated research. As such, SPEC has also been well studied.

    SPEC includes many real programs, run over extended periods of time. For example, 400.perlbench runs multiple real Perl programs, 401.bzip2 runs a very popular compression and decompression program, 403.gcc tests compilation speed with a very popular compiler, and 464.h264ref tests a video encoder. Despite SPEC2006 being somewhat aged and a bit light, its performance characteristics are roughly consistent with the updated SPEC2017, so it is not generally valid to call the results irrelevant on account of age, which is a common criticism.

    One major catch with SPEC is that official results often involve shenanigans: compilers have found ways, often very much targeted towards gaming the benchmark, to compile the programs in a way that makes execution significantly easier, at times even by exploiting improperly written programs. 462.libquantum is a particularly broken benchmark. Fortunately, this behaviour can be controlled for, and it does not particularly endanger results from AnandTech, though one should be on the lookout for anomalous jumps in single benchmarks.

    A more concerning catch, in this circumstance, is that some benchmarks are very specific, with most of their runtime in very small loops. The paper Performance Characterization of SPEC CPU2006 Integer Benchmarks on x86-64 Architecture (as one of many) goes over some of these in section IV. For example, most of the time in 456.hmmer is in one function, and 464.h264ref's hottest loop contains many repetitions of the same line. While, certainly, a lot of code contains hot loops, the performance characteristics of those loops are rarely precisely the same as for those in some of the SPEC 2006 benchmarks. A good benchmark should aim for general validity, not specific hotspots, which are liable to be overtuned.

    SPEC2006 includes a lot of workloads that make more sense for supercomputers than for personal computers, including lots of Fortran code and many simulation programs. Because of this, I largely ignore the SPEC floating point suite; there are users for whom it may be relevant, but not me, and probably not you. As another example, SPECfp2006 includes the old rendering program POV-Ray, which is no longer particularly relevant. The integer benchmarks are not immune to this overspecificity; 473.astar is a fairly dated program, IMO. Particularly unfortunate is that many of these workloads are now unrealistically small, and so can almost fit in some of the larger caches.

    SPEC2017 makes the great decision to add Blender, as well as updating several other programs to more relevant modern variants. Again, the two benchmarks still roughly coincide with each other, so SPEC2006 should not be altogether dismissed, but SPEC2017 is certainly better.

    Because SPEC benchmarks include disaggregated scores (as in, scores for individual sub-benchmarks), it is easy to check which scores are favourable. For SPEC2006, I am particularly favourable to 403.gcc, with some appreciation also for 400.perlbench. The M1 results are largely consistent across the board; 456.hmmer is the exception, but the commentary discusses that quirk.

    (and the multicore metric)

    SPEC has a 'multicore' variant, which literally just runs many copies of the single-core test in parallel. How workloads scale to multiple cores is highly test-dependent, and depends a lot on locks, context switching, and cross-core communication, so SPEC's multi-core score should only be taken as a test of how much the chip throttles down in multicore workloads, rather than a true test of multicore performance. However, a test like this can still be useful for some datacentres, where every core is in fact running independently.

    I don't recall AnandTech ever using multicore SPEC for anything, so it's not particularly relevant.

    On the Good and Bad of Geekbench

    Geekbench does some things debatably, some things fairly well, and some things awfully. Let's start with the bad.

    To produce the aggregate scores (the final score at the end), Geekbench does a geometric mean of each of the two benchmark groups, integer and FP, and then does a weighted arithmetic mean of the crypto score with the integer and FP geometric means, with weights 0.05, 0.65, and 0.30. This is mathematical nonsense, and has some really bad ramifications, like hugely exaggerating the weight of the crypto benchmark.

    Secondly, the crypto benchmark is garbage. I don't always agree with his rants, but Linus Torvalds' rant is spot on here: https://www.realworldtech.com/forum/?threadid=196293&curpostid=196506. It matters that CPUs offer AES acceleration, but not whether it's X% faster than someone else's, and this benchmark ignores that Apple has dedicated hardware for IO, which handles crypto anyway. This benchmark is mostly useless, but can be weighted extremely high due to the score aggregation issue.

    Consider the effect on these two results, which were not carefully chosen to be perfectly representative of their classes:

    M1 vs 5900X: single core score 1742 vs 1752

    Note that the M1 has crypto/int/fp subscores of 2777/1591/1895, and the 5900X has subscores of 4219/1493/1903. That's a different picture! The M1 actually looks ahead in general integer workloads, and about par in floating point! If you use a mathematically valid geometric mean (a harmonic mean would also be appropriate for crypto), you get scores of 1724 and 1691; now the M1 is better. If you remove crypto altogether, you get scores of 1681 and 1612, a solid 4% lead for the M1.
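
    If you want to check the arithmetic, here is a minimal sketch that recomputes those aggregates from the quoted subscores (the weights and formula follow the description above; renormalizing the int/FP weights for the no-crypto case is my assumption):

        from math import exp, log

        # Crypto / integer / FP subscores quoted above.
        m1 = {"crypto": 2777, "int": 1591, "fp": 1895}
        r5900x = {"crypto": 4219, "int": 1493, "fp": 1903}
        weights = {"crypto": 0.05, "int": 0.65, "fp": 0.30}

        def weighted_arithmetic(scores, w):
            # The aggregation described above: a weighted arithmetic mean of the subscores.
            return sum(w[k] * scores[k] for k in w)

        def weighted_geometric(scores, w):
            # A weighted geometric mean, with weights renormalized to sum to 1.
            total = sum(w.values())
            return exp(sum(w[k] / total * log(scores[k]) for k in w))

        no_crypto = {k: v for k, v in weights.items() if k != "crypto"}
        for name, s in [("M1", m1), ("5900X", r5900x)]:
            print(name,
                  round(weighted_arithmetic(s, weights)),   # 1742 / 1752
                  round(weighted_geometric(s, weights)),    # 1724 / 1691
                  round(weighted_geometric(s, no_crypto)))  # 1681 / 1612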

    Unfortunately, many of the workloads beyond just AES are pretty questionable, as many are unnaturally simple. It's also hard to characterize what they do well; the SQLite benchmark could be really good, if it was following realistic usage patterns, but I don't think it is. Lots of workloads, like the ray tracing one, are good ideas, but the execution doesn't match what you'd expect of real programs that do that work.

    Note that this is not a criticism of benchmark intensity or length. Geekbench makes a reasonable choice to only benchmark peak performance, by only running quick workloads, with gaps between each bench. This makes sense if you're interested in the performance of the chip, independent of cooling. This is likely why the fanless Macbook Air performs about the same as the 13" Macbook Pro with a fan. Peak performance is just a different measure, not more or less 'correct' than sustained.

    On the good side, Geekbench contains some very sensible workloads, like LZMA compression, JPEG compression, HTML5 parsing, PDF rendering, and compilation with Clang. Because it's a benchmark over a good breadth of programs, many of which are realistic workloads, it tends to capture many of the underlying facets of performance in spite of its flaws. This means it correlates well with, eg., SPEC 2017, even though SPEC 2017 is a sustained benchmark including big 'real world' programs like Blender.

    To make things even better, Geekbench is disaggregated, so you can get past the bad score aggregation and questionable benchmarks just by looking at the disaggregated scores. In the comparison above, if you scroll down you can see individual scores. The M1 wins the majority, including Clang and Ray Tracing, but loses some others like LZMA and JPEG compression. This is what you'd expect given the M1 has the advantage of better speculation (eg. larger ROB) whereas the 5900X has a faster clock.

    (and under Rosetta)

    We also have Geekbench scores under Rosetta. There, one needs to take a little more caution, because translation can sometimes behave worse on larger programs, due to certain inefficiencies, or better when certain APIs are used, or worse if the benchmark includes certain routines (like machine learning) that are hard to translate well. However, I imagine the impact is relatively small overall, given Rosetta uses ahead-of-time translation.

    (and the multicore metric)

    Geekbench doesn't clarify this much, so I can't say much about this. I don't give it much attention.

    (and the GPU compute tests)

    GPU benchmarks are hugely dependent on APIs and OSs, to a degree much larger than for CPUs. Geekbench's GPU scores don't have the mathematical error that the CPU benchmarks do, but that doesn't mean it's easy to compare them. This is especially true given there is only a very limited selection of GPUs with first-party support on iOS.

    None of the GPU benchmarks strike me as particularly good, in the way that benchmarking Clang is easily considered good. Generally, I don't think you should put much stock in Geekbench GPU.

    On the Good and Bad of microarchitectural measures

    AnandTech's article includes some of Andrei's traditional microarchitectural measures, as well as some new ones I helped introduce. Microarchitecture is a bit of an odd point here, in that if you understand how CPUs work well enough, then these measures can tell you quite a lot about how the CPU will perform, and in what circumstances it will do well. For example, Apple's large ROB but lower clock speed is good for programs with a lot of latent but hard-to-reach parallelism, but would fare less well on loops with a single critical path of back-to-back instructions. Andrei has also provided branch prediction numbers for the A12, and again this is useful and interesting for a rough idea.

    However, naturally this cannot tell you performance specifics, and many things can prevent an architecture from living up to its theoretical specifications. It is also difficult for non-experts to make good use of this information. The most clear-cut thing you can do with it is to use it as a means of explanation and sanity-checking. It would be concerning if the M1 performed well on benchmarks with a microarchitecture that did not suggest that level of general performance. However, at every turn the M1's microarchitecture does suggest it, so the performance numbers are more believable for knowing the workings of the core.

    On the Good and Bad of Cinebench

    Cinebench is a real-world workload, in that it's just the time it takes for a program in active use to render a realistic scene. In many ways, this makes the benchmark fairly strong. Cinebench is also sustained, and optimized well for using a huge number of cores.

    However, recall what makes a benchmark good: to correlate reliably with actual or perceived performance. Offline CPU ray tracing (which is very different to the realtime GPU-based ray tracing you see in games) is an extremely important workload for many people doing 3D rendering on the CPU, but is otherwise a very unusual workload in many regards. It has a tight rendering loop with very particular memory requirements, and it is almost perfectly parallel, to a degree that many workloads are not.

    This would still be fine, if not for one major downside: it's only one workload. SPEC2017 contains a Blender run, which is conceptually very similar to Cinebench, but it is not just a Blender run. Unless the work you do is actually offline, CPU based rendering, which for the M1 it probably isn't, Cinebench is not a great general-purpose benchmark.

    (Note that at the time of the Twitter argument, we only had Cinebench results for the A12X.)

    On the Good and Bad of GFXBench

    GFXBench, as far as I can tell, makes very little sense as a benchmark nowadays. Like I said for Geekbench's GPU compute benchmarks, these sort of tests are hugely dependent on APIs and OSs, to a degree much larger than for CPUs. Again, none of the GPU benchmarks strike me as particularly good, and most tests look... not great. This is bad for a benchmark, because they are trying to represent the performance you will see in games, which are clearly optimized to a different degree.

    This is doubly true when Apple GPUs use a significantly different GPU architecture, Tile Based Deferred Rendering, which must be optimized for separately.

    On the Good and Bad of browser benchmarks

    If you look at older phone reviews, you can see runs of the A13 with browser benchmarks.

    Browser benchmark performance is hugely dependent on the browser, and to an extent even the OS. Browser benchmarks in general suck pretty bad, in that they don't capture the main slowness of browser activity. The only thing you can realistically conclude from these browser benchmarks is that browser performance on the M1, when using Safari, will probably be fine. They tell you very little about whether the chip itself is good.

    On the Good and Bad of random application benchmarks

    The Affinity Photo beta comes with a new benchmark, which the M1 does exceptionally well in. We also have a particularly cryptic comment from Blackmagicdesign, about DaVinci Resolve, that the "combination of M1, Metal processing and DaVinci Resolve 17.1 offers up to 5 times better performance".

    Generally speaking, you should be very wary of these sorts of benchmarks. To an extent, these benchmarks are built for the M1, and the generalizability is almost impossible to verify. There's almost no guarantee that Affinity Photo is testing more than a small microbenchmark.

    This is the same for, eg., Intel's 'real-world' application benchmarks. Although it is correct that people care a lot about the responsiveness of Microsoft Word and such, a benchmark that runs a specific subroutine in Word (such as conversion to PDF) can easily be cherry-picked, and is not actually a relevant measure of the slowness felt when using Word!

    This is a case of what are seemingly 'real world' benchmarks being much less reliable than synthetic ones!

    On the Good and Bad of first-party benchmarks

    Of course, then there are Apple's first-party benchmarks. This includes real applications (Final Cut Pro, Adobe Lightroom, Pixelmator Pro and Logic Pro) and various undisclosed benchmark suites (select industry-standard benchmarks, commercial applications, and open source applications).

    I also measured Baldur's Gate 3, in a talk, running at ~23-24 FPS at 1080p Ultra; the segment starts at 7:05.
    https://developer.apple.com/videos/play/tech-talks/10859

    Generally speaking, companies don't just lie in benchmarks. I remember a similar response to NVIDIA's 30 series benchmarks. It turned out they didn't lie. They did, however, cherry-pick, specifically including benchmarks that most favoured the new cards. That's very likely the same here. Apple's numbers are very likely true and real, and what I measured from Baldur's Gate 3 will be too, but that's not to say other, relevant things won't be worse.

    Again, recall what makes a benchmark good: to correlate reliably with actual or perceived performance. A benchmark might be both real-world and honest, but if it's also likely biased, it isn't a good benchmark.

    On the Good and Bad of the Hardware Unboxed benchmark suite

    This isn't about Hardware Unboxed per se, but it did arise from a disagreement I had, so I don't feel it's unfair to illustrate with the issues in Hardware Unboxed's benchmarking. Consider their 3600 review.

    Here are the benchmarks they gave for the 3600, excluding the gaming benchmarks which I take no issue with.

    3D rendering

    • Cinebench (MT+ST)
    • V-Ray Benchmark (MT)
    • Corona 1.3 Benchmark (MT)
    • Blender Open Data (MT)

    Compression and decompression

    • WinRAR (MT)
    • 7Zip File Manager compression (MT)
    • 7Zip File Manager decompression (MT)

    Other

    • Adobe Premiere Pro video encode (MT)

    (NB: Initially I was going to talk about the 5900X review, which has a few more Adobe apps, as well as a crypto benchmark for whatever reason, but I was worried that people would get distracted with the idea that "of course he's running four rendering workloads, it's a 5900X", rather than seeing that this is what happens every time.)

    To have a lineup like this and then complain about the synthetic benchmarks for the M1 and the A14 betrays a total misunderstanding of what benchmarking is. There are a total of three real workloads here, one of which is single-threaded. Further, that one single-threaded workload is one you'll never realistically run single-threaded. As discussed, offline CPU rendering is an atypical and hard-to-generalize workload. Compression and decompression are also very specific sorts of benchmarks, though more readily generalizable. Video encoding is nice, but this still makes for a very thin selection.

    Thus, this lineup does not characterize any realistic single-threaded workloads, nor does it characterize multi-core workloads that aren't massively parallel.

    Contrast this to SPEC2017, which is a 'synthetic benchmark' of the sort Hardware Unboxed was criticizing. SPEC2017 contains a rendering benchmark (526.blender), a compression benchmark (557.xz), and a video encode benchmark (525.x264), but it also contains a suite of other benchmarks, chosen specifically so that they all measure different aspects of the architecture. It includes workloads like Perl, GCC, workloads that stress different aspects of memory, plus extremely branchy searches (eg. a chess engine), image manipulation routines, etc. Geekbench is worse, but as mentioned before, it still correlates with SPEC2017, by virtue of being a general benchmark that captures most aspects of the microarchitecture.

    So then, when SPEC2017 contains your workloads, but also more, and with more balance, how can one realistically dismiss it so easily? And if Geekbench correlates with SPEC2017, then how can you dismiss that, at least given disaggregated metrics?

    In conclusion

    The bias against 'synthetic benchmarks' is understandable, but misplaced. Any benchmark is synthetic, by nature of abstracting speed to a number, and any benchmark is real world, by being a workload you might actually run. What really matters is knowing how well each workload represents your use-case (I care a lot more about compilation, for example), and knowing the issues with each benchmark (eg. Geekbench's bad score aggregation).

    Skepticism is healthy, but skepticism is not about rejecting evidence, it is about finding out the truth. The goal is not to have the benchmarks which get labelled the most Real World™, but about genuinely understanding the performance characteristics of these devices—especially if you're a CPU reviewer. If you're a reviewer who dismisses Geekbench, but you haven't read the Geekbench PDF characterizing the workload, or your explanation stops at 'it's short', or 'it's synthetic', you can do better. The topics I've discussed here are things I would consider foundational, if you want to characterize a CPU's performance. Stretch goals would be to actually read the literature on SPEC, for example, or doing performance counter-aided analysis of the benchmarks you run.

    Normally I do a reread before publishing something like this to clean it up, but I can't be bothered right now, so I hope this is good enough. If I've made glaring mistakes (I might've, I haven't done a second pass), please do point them out.

    submitted by /u/Veedrac
    [link] [comments]

    NVIDIA Announces A100 80GB: Ampere Gets HBM2E Memory Upgrade

    Posted: 16 Nov 2020 07:29 AM PST

    NVIDIA official GeForce RTX 3060 Ti performance leaked - VideoCardz.com

    Posted: 16 Nov 2020 10:21 AM PST

    [VideoCardz] PowerColor (finally) shows off its Radeon RX 6800 XT Red Devil

    Posted: 16 Nov 2020 11:43 PM PST

    (NotebookCheck.net) The XtendTouch Pro is the world's first portable 15.6-inch AMOLED touchscreen monitor promising 10-bit color, full DCI-P3 coverage, and 4K resolution

    Posted: 17 Nov 2020 01:27 AM PST

    AMD Announces World’s Fastest HPC Accelerator for Scientific Research¹ [AMD Instinct MI100]

    Posted: 16 Nov 2020 07:18 AM PST

    Top500: Fugaku Keeps Crown, Nvidia’s Selene Climbs to #5

    Posted: 16 Nov 2020 06:55 PM PST

    AMD Ryzen 5000 IPC Performance Tested

    Posted: 16 Nov 2020 09:27 PM PST

    TSMC 5nm and 7nm processes are fully loaded until H2 2021

    Posted: 16 Nov 2020 06:27 AM PST

    AMD CDNA Whitepaper

    Posted: 16 Nov 2020 07:23 AM PST

    https://www.amd.com/system/files/documents/amd-cdna-whitepaper.pdf

    Summary:

    1. 16-wide SIMD across 4-clock cycles, for Wave64. ("Like Vega"; a toy mapping sketch follows the quote below.)

    2. 128 compute units

    3. FP16, BFloat16, FP32 matrix operations supported with a new "MFMA instruction".

    4. 4 stacks of HBM2 at 1.2 TBps of bandwidth.

    5. No infinity cache: standard 8MB L2 cache like back in Vega.

    6. Oak Ridge National Labs had a test...

    The AMD Instinct™ MI100 GPU is built to accelerate today's most demanding HPC and AI workloads. Oak Ridge National Laboratory tested their exascale science codes on the MI100 as they ramp users to take advantage of the upcoming exascale Frontier system. Some of the performance results ranged from 1.4x faster to 3x faster performance compared to a node with V100. In the case of CHOLLA, an astrophysics application, the code was ported from CUDA to AMD ROCm™ in just an afternoon while enjoying 1.4x performance boost over V100
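
    As referenced in item 1, here is a toy sketch (my own illustration, not from the whitepaper) of how a 64-wide wavefront maps onto a 16-lane SIMD unit over 4 consecutive clock cycles:

        # Toy model of item 1: one Wave64 instruction issued over a 16-lane SIMD
        # unit, 16 work-items per cycle for 4 cycles (the GCN/Vega-style scheme).
        WAVE_SIZE, SIMD_WIDTH = 64, 16

        def execute_wave64(op, registers):
            results = [None] * WAVE_SIZE
            for cycle in range(WAVE_SIZE // SIMD_WIDTH):   # 4 cycles per instruction
                for lane in range(SIMD_WIDTH):             # 16 lanes in lockstep
                    item = cycle * SIMD_WIDTH + lane
                    results[item] = op(registers[item])
            return results

        # One vector multiply applied across a whole wavefront.
        print(execute_wave64(lambda x: x * 2.0, list(range(WAVE_SIZE)))[:4])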

    submitted by /u/dragontamer5788
    [link] [comments]

    A Guide to HDMI 2.1

    Posted: 16 Nov 2020 08:23 AM PST

    Xbox Series S vs Series X Console Review: Can the cut-down console cut it? - Digital Foundry

    Posted: 16 Nov 2020 12:40 PM PST

    ASRock launches Radeon RX 6800 graphics cards - VideoCardz.com

    Posted: 17 Nov 2020 12:15 AM PST

    AMD RDNA™ 2 "Hangar 21" Technology Demo Trailer

    Posted: 16 Nov 2020 12:16 PM PST

    Is there a list of "great/trusted" in-depth reviewers for various components of a PC?

    Posted: 16 Nov 2020 08:02 AM PST

    With the video cards coming out for AMD, basically all the "big" releases have come out. I have been eyeing getting a new computer for some time, and while Logical Increments is an OK starting point, it leaves a lot to be desired.

    I have been following Gamers Nexus and they seem great so far, but as far as I can tell they mainly focus on cases, CPUs, and GPUs.

    I would like to expand my sources for reviews. This is what I have so far, which isn't much at all.

    Cases:
    Gamers Nexus

    GPU:
    Gamers Nexus
    Anandtech
    Hardware Unboxed
    Digital Foundry
    igor'sLAB

    CPU:
    Gamers Nexus
    Anandtech
    Hardware Unboxed
    Digital Foundry
    igor'sLAB

    Motherboard:
    (Often the advice is to get the cheapest one that fits all your components, which seems iffy.)
    Hardware Unboxed (AMD motherboards)
    buildzoid

    PSU:
    JonnyGuru (inactive?)
    Aris Mpitziopoulos (crmaris)

    RAM:
    (Consensus seems to be to get whatever is cheapest for a given frequency?)

    SSD/HDD:
    Anandtech

    Monitor:
    TFTcentral
    Hardware Unboxed
    rtings

    Bit of everything/misc:
    Techpowerup
    Guru3D
    Bit-Tech

    submitted by /u/BiscuitCookie
    [link] [comments]

    Huawei officially announces the sale of Honor smartphone business

    Posted: 17 Nov 2020 02:06 AM PST

    How much space should I leave free on my SSD?

    Posted: 17 Nov 2020 01:59 AM PST

    I have a built-in 512GB SSD in my laptop; right now I have 170GB of space left. Should I leave it be, and how much free space do I need so it doesn't start to slow down my laptop?

    submitted by /u/farruwu
    [link] [comments]

    Ansys® CFX and AMD EPYC™ 7Fx2 Processors: Superior Computational Fluid Dynamics Performance

    Posted: 16 Nov 2020 08:55 AM PST

    Major Galaxy S21 series leak reveals all key specs | Gsmarena

    Posted: 16 Nov 2020 06:30 AM PST
