• Breaking News

    Sunday, October 17, 2021

    Hardware support: Canon sued for disabling scanner when printers run out of ink

    Canon sued for disabling scanner when printers run out of ink

    Posted: 16 Oct 2021 10:49 AM PDT

    Is the performance (and quality) of DLSS limited by the number of Tensor Cores in an RTX GPU?

    Posted: 16 Oct 2021 09:17 PM PDT

    Is the performance and visual fidelity of an image processed with DLSS limited by the number of Tensor Cores (and their performance) in current RTX GPUs?

    For example, Nvidia's performance target for DLSS (2.0) is 2 ms, which places a restriction on how complex the model can be and how long a prediction can take given (current) Tensor performance.

    Is it reasonable to conclude that, with a greater budget to retrieve a prediction (for example, doubling the threshold from 2 ms to 4 ms, or doubling overall Tensor performance), the returned prediction (visual fidelity) could improve significantly?

    Or, in short:

    1. If a larger model/more Tensor Cores (to accelerate prediction) can significantly improve visual fidelity, but (current) Tensor performance doesn't allow for it within a 2 ms threshold, does that mean that DLSS 2.0 has a ceiling on the visual fidelity possible (based on that 2 ms threshold)?
    2. Assuming a larger model/faster prediction does result in increased visual fidelity, is it then reasonable to assume that RTX 4000 and/or future versions of DLSS might increase the model size and/or prediction speed?

    I'm curious about the ceiling for visual fidelity based on predictions, and what a larger model or faster prediction speed might imply; for example, might a future version of DLSS run slower on older generations, or afford greater image fidelity but at reduced performance?
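    The 2 ms budget above invites a quick back-of-the-envelope check. As a sketch (the FLOP count, throughput, and utilization numbers below are illustrative assumptions, not published DLSS figures), inference time scales with model cost divided by effective tensor throughput, so doubling either the time budget or the throughput doubles the affordable model size:

    ```python
    def inference_time_ms(model_gflops, tensor_tflops, utilization=0.5):
        """Estimate inference time for a model costing `model_gflops` GFLOPs
        per frame on hardware with `tensor_tflops` peak tensor throughput,
        assuming only a fraction `utilization` of peak is achieved."""
        # 1 TFLOP/s is exactly 1 GFLOP per millisecond, so effective
        # throughput in GFLOP/ms is just TFLOPS times utilization.
        effective_gflops_per_ms = tensor_tflops * utilization
        return model_gflops / effective_gflops_per_ms

    # Hypothetical 100-GFLOP model on a 100-TFLOPS GPU at 50% utilization:
    print(inference_time_ms(100, 100))  # 2.0 ms -- right at the budget
    # Doubling tensor throughput halves the time, leaving headroom
    # for a larger (potentially higher-fidelity) model:
    print(inference_time_ms(100, 200))  # 1.0 ms
    ```

    Under this simple model, the ceiling in question 1 is real but moves linearly with either the time threshold or the Tensor throughput of the GPU generation.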

    submitted by /u/Shidell
    [link] [comments]

    How expensive would GPUs get if the GDDR6/GDDR6X chips have to be soldered on both sides of the circuit board?

    Posted: 16 Oct 2021 08:22 AM PDT

    One of my friends asked why GPUs don't have more than 512-bit memory interfaces, and why many of them top out at 384-bit instead.

    When I showed a picture of a GPU with a 512-bit memory interface where the GPU core was almost entirely surrounded with memory chips, they asked, "why not on the other side as well?".

    So, how messy would the circuit-board trace routing be for something like a 768-bit memory interface? Also, TFW the backplate is the heatsink for the second side of memory chips.
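    For scale, a rough sketch of what the wider bus would buy: peak bandwidth is just per-pin data rate times bus width. The 384-bit figure below matches an RTX 3090-class card at 19.5 Gbit/s per pin; the 768-bit configuration is hypothetical:

    ```python
    def gddr_bandwidth_gbps(bus_width_bits, data_rate_gbps_per_pin):
        """Peak memory bandwidth in GB/s: bus width in bits times per-pin
        data rate in Gbit/s, divided by 8 bits per byte."""
        return bus_width_bits * data_rate_gbps_per_pin / 8

    # 384-bit GDDR6X at 19.5 Gbit/s per pin (RTX 3090-class):
    print(gddr_bandwidth_gbps(384, 19.5))  # 936.0 GB/s
    # Hypothetical 768-bit bus at the same per-pin rate:
    print(gddr_bandwidth_gbps(768, 19.5))  # 1872.0 GB/s
    # Each GDDR6/GDDR6X chip exposes a 32-bit interface, so 768 bits
    # means 24 chips -- e.g. 12 per side on a double-sided board.
    ```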

    submitted by /u/COMPUTER1313
    [link] [comments]

    TechTechPotato (Dr Ian Cutress): "What is an Intel Confidential CPU Anyway?"

    Posted: 16 Oct 2021 09:28 AM PDT

    Is it possible to have software-based FRC (frame rate control) driven by a GPU on monitors with high refresh rates? Would it damage the monitor?

    Posted: 16 Oct 2021 05:28 PM PDT

    I would like to display 10 bits' worth of color on an 8-bit display. The content would be a static image, and it would be nice if people on 8-bit monitors could see what I see on a 10-bit monitor.

    For a 3D game this would obviously be impractical, but for a static image I don't see why it would be a problem, since the majority of 10-bit monitors achieve their color range by flickering between two different colors (FRC). Most GPUs would have no problem saturating a 144 Hz to 240 Hz refresh rate with static images.

    However, if this isn't an issue, why do applications such as Photoshop not offer it as a feature? Even 60 Hz is capable of FRC, provided the pixel response times are fast enough. Would flickering back and forth between static images induce burn-in or something?
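    The flickering the post describes is temporal dithering: alternate between the two nearest 8-bit levels so their time-average lands on the 10-bit target. A minimal sketch of the idea (the frame ordering here is naive; real FRC hardware spreads the pattern spatially and temporally to hide visible flicker):

    ```python
    def frc_frames(value_10bit, n_frames=4):
        """Temporally dither a 10-bit level (0-1023) into n_frames 8-bit
        levels whose average approximates value_10bit / 4."""
        base = value_10bit // 4  # nearest-below 8-bit level
        frac = value_10bit % 4   # leftover 2 bits, i.e. 0..3 quarters
        # Show the next-brighter level in `frac` out of every 4 frames
        # (clamped at 255 for the very top of the range).
        return [min(base + 1, 255) if i < frac else base
                for i in range(n_frames)]

    print(frc_frames(512))  # [128, 128, 128, 128] -- exact, no dithering needed
    print(frc_frames(513))  # [129, 128, 128, 128] -- averages 128.25
    ```

    At 60 Hz the 4-frame cycle repeats 15 times a second, which is why slow pixel response matters: if the panel can't settle between frames, the average drifts.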

    submitted by /u/Bosphoramus
    [link] [comments]

    Review: Asus GeForce RTX 3070 Noctua Edition OC - Graphics - HEXUS.net

    Posted: 16 Oct 2021 08:44 AM PDT

    CNBC: "Secretive Giant TSMC's $100 Billion Plan To Fix The Chip Shortage"

    Posted: 16 Oct 2021 12:28 PM PDT

    Architecting Interposers- when the interposer is moved inside a package the impact is significant [SemiEngineering]

    Posted: 16 Oct 2021 12:28 PM PDT
