• Breaking News

    Sunday, May 17, 2020

    Reminder: Nvidia is in the process of adding VESA Adaptive-Sync/Freesync Support to monitors with G-Sync modules

    Posted: 16 May 2020 03:32 PM PDT

    TLDR: It is possible for a monitor to have a hardware G-Sync module, and for Freesync to work with an AMD GPU. This is beginning to emerge and will become more common over time. Feel free to read the below if you want the long-winded version.


    I want to post this in an attempt to alleviate confusion and some outright misinformation that I'm seeing lately.

    Background:

    VESA DisplayPort Adaptive-Sync is the open standard (not open source) that AMD's Freesync is based on. Nvidia's "G-Sync Compatible" uses the same underpinnings. Any monitor that supports DP Adaptive-Sync should work with AMD Freesync and/or G-Sync Compatible even if not certified. Certification from Nvidia and AMD just means that the monitor was tested and confirmed to meet their respective criteria. A lack of certification does not mean it won't work; it just means it isn't guaranteed to work at the quality that Nvidia and AMD require for their respective certifications.

    Nvidia also offers a G-Sync module with its own hardware scaler. This increases the cost for consumers and is not part of the open-standard. However, it generally offers better quality and consistency than monitors without the scaler (GENERALLY, as there are some top-notch Freesync monitors, and some garbage G-Sync ones). Up until recently, these monitors ONLY supported G-Sync with an Nvidia GPU.

    G-Sync Certification Levels

    In early 2019, Nvidia started to expand their offerings with 3 levels of G-Sync certification. They are:

    • G-Sync Ultimate | Uses a higher-end version of the G-Sync module, supports HDR (VESA DisplayHDR 1000) as well as higher resolution/refresh rate combinations (e.g., 4k/144hz, or 3440x1440/200hz).
    • G-Sync | Uses a lower-end (by comparison) G-Sync module that lacks HDR support, and isn't on the bleeding edge in terms of refresh rate and resolution combos, but it's no slouch either.
    • G-Sync Compatible | A driver-based implementation of the VESA DisplayPort Adaptive-Sync open standard. Essentially, this is "Nvidia Freesync." And while G-Sync can now be enabled on monitors that are not certified, the ones that are certified have been tested and confirmed to meet the minimum criteria that Nvidia has set forth.

    Traditionally, the first two categories did not work with Freesync. And THAT is what is currently changing.

    G-Sync Modules with Freesync Support

    In November of 2019, Simon over at TFTCentral reported the following:

    We started to see hints of further change to NVIDIA's approach over the last few months. Firstly in September 2019 the Acer Predator X27P appeared, featuring a minor update to the original X27 model, adding VRR support over HDMI for compatible games consoles. This screen features the v2 G-sync module still, but the addition of HDMI-VRR was a new feature and not something possible on any previous G-sync module screen.

    Ok, so that was only over HDMI, targeting consoles (and yes, it should work with AMD GPUs as well). But there's more:

    Then very recently in November 2019 we saw news of the Acer Predator XB273 X, which in its specs on Taiwanese retail stores suggested that it would support HDMI-VRR (like the X27P advertised previously), and then also adaptive-sync over DisplayPort. We reached out to NVIDIA to understand more about what was happening.

    NVIDIA confirmed for us that future G-sync module screens can be capable of supporting both HDMI-VRR and adaptive-sync for HDMI and DisplayPort, as the XB273 X's specs had suggested. A firmware update is being made to the v1 and v2 G-sync hardware modules for future use which allows these new features.

    So yes, it's true. Monitors with a G-Sync module can support Freesync. But what's the criteria? First, the monitor must use a G-Sync module that has been updated with Nvidia's more recent firmware. Second, the monitor manufacturer has to incorporate this updated module into their monitor. And given that they'd rather you buy new monitors than keep your old one, you can probably guess that firmware updates for existing models aren't going to be very common, if deployed at all. So in reality, it's going to be new monitors going forward (IE, the X27P release instead of a firmware update for X27 users).

    Which monitors support it?

    At this point, the list is so tiny that I only have one model I can 100% confirm supports it (and I'm assuming the two models listed above work too): the Alienware AW3420DW. This is a 34", 3440x1440, 120hz G-Sync display that does use the G-Sync module AND supports Freesync out of the box when paired with an AMD GPU. G-Sync Monitor Listing showing G-Sync Module level certification | RTINGS Review Showing Freesync Support. It should be noted that it technically supports the open standard over DisplayPort and is NOT Freesync certified by AMD.

    The list of monitors will grow, and within a year or two, most likely, the vast majority of new monitors shipping with G-Sync modules will also support Freesync. That means Freesync support with a wider refresh range, a better scaler, and variable overdrive, among other features. You will soon be able to aim for better performance (G-Sync module) or better value (traditional scaler) without having to worry about GPU vendor lock-in.

    submitted by /u/jaykresge

    Ampere gaming is going to be far more different from Ampere HPC than Volta vs Turing

    Posted: 16 May 2020 06:43 AM PDT

    • While Jensen Huang might have implied that Ampere is one unified architecture for gaming and HPC, I believe it's unified in name only.

    One key metric is transistors per stream processor (i.e. CUDA core). From Volta to Ampere, the number of SPs went up from 5376 (full die) to 8192 (full die), a 52% increase in SPs for a massive 2.5x increase in transistors. All things being equal, the increase in transistor count should have been closer to linear. After all, you would a priori need 52% more memory bandwidth and logic, 52% more scheduler/dispatch logic, 52% more cache, 52% more FP64 units, etc.

    All things being equal, a larger Volta would have been about 32B transistors, maybe 10% more to account for some non-linear increase in transistor count.
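    A quick back-of-the-envelope check of those numbers, as a sketch: the full-die SP counts are from the paragraph above, while the 21.1B (GV100) and 54.2B (GA100) transistor counts are my assumption from public figures, not from this post.

        # Rough scaling check: Volta (GV100) vs Ampere (GA100), full dies.
        # Transistor counts are assumed public figures, not taken from this post.
        volta_sps, ampere_sps = 5376, 8192
        volta_transistors = 21.1e9   # assumed GV100 figure
        ampere_transistors = 54.2e9  # assumed GA100 figure

        sp_growth = ampere_sps / volta_sps                           # ~1.52x more SPs
        transistor_growth = ampere_transistors / volta_transistors   # ~2.57x more transistors

        # If everything scaled linearly with SP count, a scaled-up Volta would need roughly:
        linear_estimate = volta_transistors * sp_growth              # ~32B transistors

        print(f"SP growth: {sp_growth:.2f}x, transistor growth: {transistor_growth:.2f}x")
        print(f"Linear estimate for a scaled-up Volta: {linear_estimate / 1e9:.1f}B transistors")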

    This is not what happened. Strangely enough, the number of tensor cores went down from 8 per SM (two per processing unit) to 4 per SM (one per processing unit), and the L2 cache was increased from 6MB to 40MB, a nearly sevenfold increase.

    • What does it mean for Ampere HPC? This series of tweets explains it very well.

    https://twitter.com/ernstdj/status/1260965941720027141

    But in short, the new tensor cores do FP64 matrix math (2.5x the perf), there's also 2.5x the perf for FP16/FP32 FMA, new data formats like TF32 and BF16, and some logic to accelerate the computation of sparse matrices. All of this with half as many tensor cores per SM.

    What we're looking at is a heavily reworked tensor core with hardware dedicated to very specialised workloads. That results in much larger tensor cores, and the huge increase in cache is also likely a result of needing to feed them. Throughput can increase twentyfold when sparsity is involved.

    Not to mention the hardware dedicated to Multi Instance GPU (MIG).

    • What does it mean for Ampere gaming?

    No MIG, so less memory and core isolation hardware. Probably no scenarios with twentyfold throughput gains in specialised deep learning workloads involving sparsity, meaning the cache could be cut down drastically. No need for FP64 either in the SPs or in the tensor cores. Possibly less support needed for the new data formats.

    All in all, I think there's a decent chance that Nvidia will decide to keep the tensor cores from Volta with a few modifications rather than use the full-fat tensor cores from Ampere.

    Ampere HPC is a chip with a lot of transistors dedicated to specialised workloads. Ampere gaming doesn't need to be. For the same number of SMs and SPs, Ampere gaming could be a much, much smaller and leaner chip than Ampere HPC.

    submitted by /u/redsunstar

    The one and only Intel i4 processor

    Posted: 16 May 2020 03:13 PM PDT

    [Gamers Nexus] HW News (05/17/20) - AMD Closes, then Opens "Open Source" Code, 6.5GB/s SSD, & Unpatchable Vulnerability

    Posted: 16 May 2020 10:42 PM PDT

    This CPU specific tech limitation always confused me....

    Posted: 16 May 2020 11:09 AM PDT

    Why is it that CPUs are limited to 2 threads per core?

    We have multi-core CPUs: 4, 8, 16, 32, 64 cores, etc.

    However, threads per core are stuck at 2.

    Why don't we have, say, 4 cores and 12 threads?

    Or 3 cores and 12 threads, or 2 cores and 10 threads, etc.

    Any reason for that?

    submitted by /u/Ahmed360

    120mm AIO CPU coolers - why aren't they better?

    Posted: 16 May 2020 03:58 PM PDT

    It's generally understood that 120mm AIO CPU coolers are not performance- and cost-effective relative to a decent air cooler.

    I used to have a Radeon 295X2. This used a 120mm AIO cooler to cool both GPU dies (the fan on the card cooled RAM and VRMs). That means it was dissipating north of 500W at full load. And it did all that while keeping temperatures around 70C.

    So why do 120mm AIO coolers only achieve around that same temperature when cooling CPUs that draw roughly 1/4 of that wattage?
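    To put a rough number on the gap being described here (a sketch only; the ~25 C ambient is my assumption, and the CPU wattage is just "about a quarter" of the GPU figure as stated above):

        # Implied heat-source-to-ambient thermal resistance (R = dT / P) for both cases.
        # Ambient temperature is an assumption for illustration.
        ambient_c = 25.0            # assumed room temperature
        reported_c = 70.0           # temperature reported in both cases

        gpu_watts = 500.0           # Radeon 295X2 load from the post
        cpu_watts = gpu_watts / 4   # "roughly 1/4 of that wattage"

        r_gpu = (reported_c - ambient_c) / gpu_watts   # ~0.09 K/W
        r_cpu = (reported_c - ambient_c) / cpu_watts   # ~0.36 K/W

        print(f"Implied thermal resistance, GPU case: {r_gpu:.2f} K/W")
        print(f"Implied thermal resistance, CPU case: {r_cpu:.2f} K/W")
        # Same class of radiator, yet roughly 4x worse effective resistance in the
        # CPU case -- which is exactly the puzzle the question is asking about.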

    submitted by /u/Last_Jedi

    Nvidia's Drive Orin presentation could reveal some info on consumer chips

    Posted: 16 May 2020 07:50 AM PDT

    TL;DR: What is probably GA102 is ~780 mm2, 4 HBM stacks, 25 TFLOPS FP32, 800 TOPS int8, ??? TDP. Maybe.

    As we all expected, nvidia's GTC conference was entirely focused on professional applications. Mostly HPC and AI with the A100 announcement, but they had a drive orin video too. And that featured a very interesting slide.

    At 50 seconds into the presentation, they show three different setups for differing capabilities. Or here's a still taken from their blog post on the same matter.

    Pay close attention to the level 5 platform on the right. Here's a zoomed-in version. It's two Orin SoCs with a pair of Ampere GPUs. These are not A100. If you look closely you can see 4 HBM stacks on each GPU - and the layout doesn't match A100 with two stacks removed. In the past, nvidia has used the smaller dies from consumer cards here - most recently TU104 on pegasus.

    Doing some very rough pixel counting (from a very low-resolution source at an isometric angle!) I get an area of about 780 mm2 (plus or minus a lot) based on the die size against the HBM. Even accounting for a big variation, though, I don't see this being anything other than the top-end consumer chip, which will be called GA102 if history is anything to go by.
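    For what it's worth, that estimation method is easy to reproduce. Here's a sketch: the pixel measurements below are made-up placeholders (not measurements from the actual slide), and the ~7.75 x 11.87 mm HBM2 stack footprint is my assumption.

        # Estimate an unknown die's area by scaling pixel measurements against a
        # reference object of known physical size (an HBM2 stack) in the same image.
        # All pixel values are hypothetical placeholders; the HBM2 footprint is an
        # assumed ~7.75 x 11.87 mm.
        hbm_w_mm, hbm_h_mm = 7.75, 11.87

        hbm_w_px, hbm_h_px = 31, 48      # hypothetical on-screen size of one HBM stack
        die_w_px, die_h_px = 108, 117    # hypothetical on-screen size of the GPU die

        mm_per_px_w = hbm_w_mm / hbm_w_px
        mm_per_px_h = hbm_h_mm / hbm_h_px

        die_area_mm2 = (die_w_px * mm_per_px_w) * (die_h_px * mm_per_px_h)
        print(f"Estimated die area: ~{die_area_mm2:.0f} mm^2 (plus or minus a lot)")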

    With the L5 platform providing 2000 TOPS and each Orin unit delivering 200, that works out to 800 TOPS apiece for the mysterious ampere die. Assuming the same ratio as A100, that'd mean 25 TFLOPS fp32. The listed TDP is alarming at 355W apiece, but nvidia has highballed these in the past (and the motherboard will use some, naturally). For instance with pegasus nvidia allocated 220W apiece to what ended up having the same specs as the 75W tesla T4. I don't expect the same ratio this time - ~120W would be far too good to be true - but I'd guess it'll end up at the same 250ish Watts we've seen from their previous high end chips.
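    The arithmetic behind those two numbers, as a sketch: the 2000 and 200 TOPS figures come from the slide as described above, while the A100 reference figures (624 INT8 TOPS dense, 19.5 FP32 TFLOPS) are my assumption from public specs.

        # Back out per-GPU throughput from the platform-level numbers, then scale
        # to FP32 using the A100's INT8-to-FP32 ratio (assumed public specs).
        platform_tops = 2000        # Level 5 platform total
        orin_tops = 200             # per Orin SoC
        num_orin, num_gpu = 2, 2

        gpu_tops = (platform_tops - num_orin * orin_tops) / num_gpu   # 800 TOPS each

        a100_int8_tops = 624        # assumed, dense (no sparsity)
        a100_fp32_tflops = 19.5     # assumed

        gpu_fp32_tflops = gpu_tops * (a100_fp32_tflops / a100_int8_tops)  # ~25 TFLOPS
        print(f"Per GPU: {gpu_tops:.0f} INT8 TOPS, ~{gpu_fp32_tflops:.0f} FP32 TFLOPS")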

    To speculate on the actual specs, I'd start by guessing the same divide we've always seen. Stripped out fp64 (and fp64 tensor now), smaller L1$/shared memory, half the ld/st bandwidth. I suspect tf32 will be gone too. L2$ vastly smaller. Plus RT cores of some description. And, to reach the TFLOPS numbers above, 8 GPCs, 7 TPCs per GPC, ~1744 MHz.
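    Working that last configuration backward (a sketch; the 2 SMs per TPC and 64 FP32 lanes per SM are my assumptions, carried over from recent consumer parts like Turing):

        # Check that the guessed GPC/TPC/clock combination lands on ~25 TFLOPS FP32.
        gpcs = 8
        tpcs_per_gpc = 7
        sms_per_tpc = 2             # assumption (Turing-like)
        fp32_per_sm = 64            # assumption (Turing-like)
        clock_ghz = 1.744

        cuda_cores = gpcs * tpcs_per_gpc * sms_per_tpc * fp32_per_sm   # 7168
        tflops = cuda_cores * 2 * clock_ghz / 1000                     # FMA = 2 FLOPs/clock

        print(f"{cuda_cores} CUDA cores at {clock_ghz} GHz -> {tflops:.1f} TFLOPS FP32")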

    Finally, to give credit where it's due, this was actually first noticed by ToTTenTranz at b3d, though the wild speculation is all me.

    submitted by /u/Qesa

    What resource causes the number of displays limitation?

    Posted: 16 May 2020 02:53 PM PDT

    For example, every pre-Ice Lake Intel GPU can only drive three displays. I wonder what resource shortage causes that and whether it's possible to work around that in software. Here are a few things which are not explanations:

    1. Number of connectors. The same quad port MST hub that runs four 1920 x 1080 @ 60 Hz monitors with an nVidia or AMD GPU won't run four with Intel. And this requires only one DP 1.2 connector on the host.
    2. The number of pixels / display bandwidth. Even if a laptop has a 4K display, two external monitors can still be added. (Coincidentally, 4K is exactly the same number of pixels as four FHD displays; see the quick check below.)
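    A quick check of that parenthetical claim:

        # Verify that a 4K (UHD) panel has the same pixel count as four FHD panels.
        uhd_pixels = 3840 * 2160            # 8,294,400
        four_fhd_pixels = 4 * 1920 * 1080   # 8,294,400
        print(uhd_pixels == four_fhd_pixels)  # True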

    So ... why?

    submitted by /u/chx_

    Mainland Chinese Foundry SMIC Builds Its First 14nm FinFET SoC for Huawei

    Posted: 16 May 2020 03:19 AM PDT

    Best cooling solution in 0 G environment

    Posted: 16 May 2020 07:28 AM PDT

    How would liquid cooling or heatpipe cooling work in, let's say, orbit? Liquid cooling could work pretty well (though I don't know how the pump would like it), but how would heatpipes deal with it?

    submitted by /u/Monabuntur

    ‘More Than Moore’ Reality Check

    Posted: 16 May 2020 09:43 AM PDT
