Hardware support: Canon sued for disabling scanner when printers run out of ink
- Canon sued for disabling scanner when printers run out of ink
- Is the performance (and quality) of DLSS limited by the number of Tensor Cores in an RTX GPU?
- How expensive would GPUs get if the GDDR6/GDDR6X chips have to be soldered on both sides of the circuit board?
- TechTechPotato (Dr Ian Cutress): "What is an Intel Confidential CPU Anyway?"
- Is it possible to have software-based FRC (frame rate control) driven by a GPU on monitors with high refresh rates? Would it damage the monitor?
- Review: Asus GeForce RTX 3070 Noctua Edition OC - Graphics - HEXUS.net
- CNBC: "Secretive Giant TSMC's $100 Billion Plan To Fix The Chip Shortage"
- Architecting Interposers: when the interposer is moved inside a package, the impact is significant [SemiEngineering]
Canon sued for disabling scanner when printers run out of ink Posted: 16 Oct 2021 10:49 AM PDT
Is the performance (and quality) of DLSS limited by the number of Tensor Cores in an RTX GPU? Posted: 16 Oct 2021 09:17 PM PDT Is the performance and visual fidelity of an image processed with DLSS limited by the number of Tensor Cores (and their performance) in current RTX GPUs? For example, Nvidia's performance target for DLSS (2.0) is 2 ms, which places a restriction on how complex the model can be and how long a prediction can take given current Tensor Core performance. Is it reasonable to conclude that with a greater budget per prediction (for example, doubling the threshold from 2 ms to 4 ms, or doubling overall Tensor Core performance), the returned prediction (visual fidelity) could improve significantly? In short:
I'm curious about the ceiling on visual fidelity these predictions can reach, and what a larger model or faster inference might imply; for example, a future version of DLSS might run more slowly on older generations, or offer greater image fidelity at reduced performance. [link] [comments]
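A back-of-the-envelope sketch of the budget arithmetic behind the question, assuming a hypothetical GPU with 100 TFLOPS of FP16 tensor throughput and a 50% utilization factor (both numbers are illustrative, not published Nvidia figures):

```python
# How a fixed per-frame inference budget constrains the size of an
# upscaling network. All numbers are illustrative assumptions.

def affordable_gflops(budget_ms, tensor_tflops, utilization=0.5):
    """GFLOPs of network work that fit in the budget at a given
    sustained tensor throughput (TFLOPS) and utilization factor."""
    budget_s = budget_ms / 1000.0
    return tensor_tflops * 1e12 * utilization * budget_s / 1e9

# Hypothetical GPU with 100 TFLOPS of FP16 tensor throughput.
for budget in (2.0, 4.0):
    print(f"{budget} ms budget -> ~{affordable_gflops(budget, 100):.0f} GFLOPs per frame")

# Doubling either the time budget or the tensor throughput doubles the
# affordable model cost, which is the trade-off the question is about:
# a bigger model might look better but eats more of the frame time.
```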
How expensive would GPUs get if the GDDR6/GDDR6X chips have to be soldered on both sides of the circuit board? Posted: 16 Oct 2021 08:22 AM PDT One of my friends asked why GPUs don't have more than 512-bit memory interfaces, and why many of them top out at 384-bit instead. When I showed a picture of a GPU with a 512-bit memory interface, where the GPU core was almost entirely surrounded by memory chips, they asked, "why not on the other side as well?". So, how messy would the circuit board routing be for something like a 768-bit memory interface? Also, TFW the backplate is a heatsink for the second side of memory chips. [link] [comments]
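For a rough sense of what a wider bus implies physically, here is a sketch of the chip-count arithmetic. It assumes 32-bit-wide GDDR6/GDDR6X devices; the per-chip capacity and per-pin speed are illustrative values, not a specific product:

```python
# Rough chip-count math for wider memory buses. GDDR6/GDDR6X devices
# expose a 32-bit interface (two 16-bit channels); in "clamshell" mode
# two devices share one 32-bit channel, which is what puts chips on the
# back of the board. Capacity and speed values below are illustrative.

def bus_config(bus_width_bits, clamshell=False, gbit_per_chip=16, gbps_per_pin=21):
    chips = bus_width_bits // 32 * (2 if clamshell else 1)
    capacity_gb = chips * gbit_per_chip / 8
    bandwidth_gbs = bus_width_bits * gbps_per_pin / 8
    return chips, capacity_gb, bandwidth_gbs

for width in (384, 512, 768):
    chips, cap, bw = bus_config(width)
    print(f"{width}-bit: {chips} chips, {cap:.0f} GB, ~{bw:.0f} GB/s")

# A 768-bit bus needs 24 single-sided chip sites (or 48 in clamshell),
# each wanting short, length-matched traces to the GPU package - the
# routing congestion, rather than the cost of the DRAM itself, is the
# usual limit.
```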
TechTechPotato (Dr Ian Cutress): "What is an Intel Confidential CPU Anyway?" Posted: 16 Oct 2021 09:28 AM PDT
Is it possible to have software-based FRC (frame rate control) driven by a GPU on monitors with high refresh rates? Would it damage the monitor? Posted: 16 Oct 2021 05:28 PM PDT I would like to display 10 bits' worth of color on an 8-bit display. The content would be a static image, and it would be nice if people on 8-bit monitors could see what I would see on a 10-bit monitor. For a 3D game this would obviously be impractical, but for a static image I don't see why it would be a problem, since the majority of 10-bit monitors achieve their color range by flickering between two different colors. Most GPUs would have no problem saturating a 144 Hz to 240 Hz refresh rate with static images. If this isn't an issue, why do applications such as Photoshop not offer it as a feature? Even 60 Hz is capable of FRC, provided the pixel response times are fast enough. Would flickering back and forth between static images induce burn-in or something? [link] [comments]
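A minimal sketch of the temporal dithering the post describes, done in software: a 10-bit channel value is approximated on an 8-bit panel by alternating between the two nearest 8-bit codes with the right duty cycle. The function name and the 4-frame cycle are illustrative choices, and a real implementation would likely add spatial dithering as well:

```python
# Software FRC sketch: approximate a 10-bit value by time-averaging
# 8-bit frames. Cycle length is illustrative.

def frc_frames(value_10bit, num_frames=4):
    """Return num_frames 8-bit codes whose average approximates value_10bit / 4."""
    lo = value_10bit // 4              # nearest 8-bit code below
    hi = min(lo + 1, 255)              # nearest 8-bit code above
    remainder = value_10bit % 4        # 0..3: fraction of frames shown at 'hi'
    return [hi] * remainder + [lo] * (num_frames - remainder)

print(frc_frames(513))   # -> [129, 128, 128, 128], average 128.25 = 513/4
print(frc_frames(515))   # -> [129, 129, 129, 128], average 128.75 = 515/4
```

Conceptually this is the same trick the post attributes to 8-bit+FRC panels, just driven from the GPU side by swapping static frames at the monitor's refresh rate.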
Review: Asus GeForce RTX 3070 Noctua Edition OC - Graphics - HEXUS.net Posted: 16 Oct 2021 08:44 AM PDT
CNBC: "Secretive Giant TSMC's $100 Billion Plan To Fix The Chip Shortage" Posted: 16 Oct 2021 12:28 PM PDT |
Architecting Interposers: when the interposer is moved inside a package, the impact is significant [SemiEngineering] Posted: 16 Oct 2021 12:28 PM PDT