• Breaking News

    Tuesday, January 21, 2020

    Hardware support: Intel Core i9-10980HK Comet Lake CPU For Notebooks Spotted

    Intel Core i9-10980HK Comet Lake CPU For Notebooks Spotted

    Posted: 20 Jan 2020 01:54 PM PST

    The Future of Computing, Moore’s Law is Not Dead, Law of Accelerating Returns, and More from Rock Star CPU Architect Jim Keller (AMD, Tesla, Apple, Intel)

    Posted: 20 Jan 2020 09:36 PM PST

    Future Nvidia GPU Pricing

    Posted: 20 Jan 2020 06:31 PM PST

    So I know it's a very common sentiment to respond to new Nvidia GPU rumors with, "Oh, I'm sure it's gonna cost like $2000 this time" or "We're not gonna see any improvement in performance-per-dollar", and while I can understand a bit of cynicism here given Nvidia's dominant market position, I still think most of this is based on oversimplified reasoning that ignores a lot of factors.

    I should start with the biggest thing: Turing die sizes are enormous, on a completely unprecedented level. People balked at a $1200 top-end GPU, quite understandably, but that GPU was also roughly 750mm², which is just astronomically big, when previous truly mammoth GPUs only edged past the 600mm² mark. And it needs to be remembered that yields drop significantly as die sizes grow. Basically, Turing GPUs were never going to bring us fantastic value.
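
    To put a rough number on the yield point, here's an illustrative sketch using a simple Poisson defect model. The defect density, wafer size, and die areas below are assumptions picked for illustration (roughly TU106/TU104/TU102-class sizes), not TSMC's or Nvidia's actual figures.

        import math

        def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
            # Approximate gross dies per wafer, with a simple edge-loss correction.
            r = wafer_diameter_mm / 2
            return int(math.pi * r**2 / die_area_mm2
                       - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

        def poisson_yield(die_area_mm2, defects_per_cm2=0.1):
            # Fraction of dies with zero defects under a Poisson model (assumed defect density).
            return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

        for area in (445, 545, 754):  # roughly TU106-, TU104-, TU102-class die sizes
            good = dies_per_wafer(area) * poisson_yield(area)
            print(f"{area} mm^2: ~{dies_per_wafer(area)} gross dies, "
                  f"{poisson_yield(area):.0%} yield, ~{good:.0f} good dies per wafer")

    Even with generous assumptions, the biggest die gets hit twice: fewer candidates per wafer and a lower fraction of them working.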

    But it's also important to realize that these higher prices have made the RTX GPUs a disappointment in terms of sales, with a notable drop in Nvidia's gaming-segment revenue. That has to be a concern for them.

    "But Nvidia still own the performance crown so can do whatever they want". Ok, so let's look at a few other factors that should lead Nvidia to think about offering better value:

    - AMD will have a Big Navi product out sometime this year. While it probably won't be cheap by any means, it's still likely to offer better-than-2080 Ti performance at a sub-2080 Ti price. So Nvidia will need to offer some kind of improvement in performance-per-dollar.

    - Next-gen consoles will be coming out soon, with a believable rumored GPU performance range of 9-12 TFLOPS of RDNA power. And keep in mind that Nvidia needs PC gamers to stay on PC, since anybody leaving for consoles is a boost for their competitor. It simply won't be acceptable for Nvidia to only offer GPUs superior to what's in the consoles at $500 and up. That would be embarrassing, and could easily see a lot of PC folks defecting to consoles because upgrading their PCs to keep up with next-gen gaming has become way too expensive.

    - Die sizes should shrink a fair bit. I don't think Nvidia are going to be maxing out 7nm reticle limits by any means; they'll use the density improvements to bring their GPUs back down to more sane sizes (rough sketch below). And they'll be doing so on a *very* mature 7nm process, which should hopefully mean that wafer costs won't offset things too dramatically.
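
    As a back-of-envelope illustration of that last point, here's a sketch of how much a Turing-class die might shrink on 7nm. The 2x logic-density gain and the 70/30 split between logic and everything else are assumptions for illustration only; real scaling depends heavily on how much of the die is SRAM, I/O, and analog, which shrink far less than logic.

        def shrink_estimate(die_mm2, logic_fraction=0.7, density_gain=2.0):
            # Scale only the "logic" share of the die; assume the rest barely shrinks.
            logic = die_mm2 * logic_fraction / density_gain
            rest = die_mm2 * (1 - logic_fraction)
            return logic + rest

        for name, area in (("TU102", 754), ("TU104", 545)):
            print(f"{name}: {area} mm^2 -> roughly {shrink_estimate(area):.0f} mm^2 "
                  f"under these assumptions")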

    Now I'm not saying the new series will mean Nvidia goes all-in on providing super great value. I just don't think the "everything will get more expensive because Nvidia" alarmists will be right. There's every reason for Nvidia to at *least* provide a large leap in performance at the same price points, which is really the most important thing.

    submitted by /u/Seanspeed
    [link] [comments]

    EVGA GeForce RTX 2060 KO Review

    Posted: 20 Jan 2020 06:12 AM PST

    The Strix X570i Buildzoid Edition // ASUS Strix X570i Gaming modded for better memory voltages

    Posted: 20 Jan 2020 12:35 PM PST

    Rumored NVIDIA GeForce RTX 3070, RTX 3080 7nm Ampere GPU Specs Leak

    Posted: 20 Jan 2020 10:37 AM PST

    Zhaoxin 7-Series x86 CPUs Mitigated For Spectre V2 + SWAPGS

    Posted: 20 Jan 2020 12:31 PM PST

    HDMI 2.1 TV as main desktop PC monitor?

    Posted: 21 Jan 2020 12:51 AM PST

    After watching this video from Digital Trends:

    https://www.youtube.com/watch?v=Qu1bH2gGXes

    I've become really interested in using one of the upcoming LG CX OLEDs as my main home PC monitor later this year. I know HDMI 2.1 TVs are not out yet, but does anyone here currently have a similar setup for their home PC? Is anyone else planning to build a setup around such a TV this year?

    What are your thoughts and experiences on the pros and cons? How would you configure your setup? Would you have some smaller secondary monitors to avoid issues such as OLED burn-in? Are there potentially better sets to use than the 48-inch LG CX?

    submitted by /u/JigglymoobsMWO
    [link] [comments]

    Raspberry Pi alternative that runs Windows 10 and has BT and WiFi?

    Posted: 20 Jan 2020 04:11 PM PST

    Apologies if this isn't the right sub. The title says it all; my google-fu turned up no results. Does it exist?

    submitted by /u/2inchesofdoom
    [link] [comments]

    [Theory] DDR5's effect on integrated graphics / APUs

    Posted: 20 Jan 2020 09:23 AM PST

    This may be a long post, but there's a TL;DR at the bottom. I was looking into iGPU/APU performance, specifically comparing Intel's new Gen11 G7 graphics with AMD's 8 CU Vega graphics. It seems that any iGPU much bigger than that is completely ineffective because of RAM bandwidth, and I'm wondering if DDR5 will help fix that.

    Point 1: Memory speed is crucial to modern GPUs.

    For years a huge indicator of GPU performance has been memory bus width. You can put all the cores in the world into a GPU, but if it can't feed them from memory, they're useless. Just look at the new Nvidia Super cards if you want to see the benefit of faster memory on a GPU. IMO the old GTX 1060 often lost to the RX 580 because of the latter's higher memory bandwidth.

    Point 2: Integrated DDR4 can't provide adequate memory bandwidth.

    The GTX 1060 has 192GB/s. The RX 580 has 256GB/s. The 2080 Ti has 616GB/s. The 5700 XT has 448GB/s. Dual-channel DDR4 at 3000MT/s gives 48GB/s, and that's shared with the CPU. That's roughly equivalent to a 9600 GT from 2008.
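
    For anyone who wants to check the 48GB/s figure: peak DDR bandwidth is just transfer rate times bus width (8 bytes per DDR4 channel) times the number of channels. A quick sketch, using the GPU numbers quoted above:

        def ddr_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
            # Peak theoretical bandwidth in GB/s for a DDR-style memory setup.
            return mt_per_s * bus_bytes * channels / 1000

        print(ddr_bandwidth_gbs(3000))      # dual-channel DDR4-3000 -> 48.0 GB/s
        print(ddr_bandwidth_gbs(3000, 1))   # single channel -> 24.0 GB/s

        # Discrete GPU bandwidths from the post, for scale (GB/s):
        dgpu = {"GTX 1060": 192, "RX 580": 256, "RTX 2080 Ti": 616, "RX 5700 XT": 448}
        for name, bw in dgpu.items():
            print(f"{name}: {bw / ddr_bandwidth_gbs(3000):.1f}x dual-channel DDR4-3000")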

    Point 3: Memory bandwidth kills APU/GPU performance.

    Look at the Ryzen 2200G/2400G/3200G/3400G performance with single vs dual-channel memory. When you use single-channel memory (cutting your memory bandwidth in half), GPU performance tanks. Hard. If you're using an AMD APU, you need to be running dual-channel memory, and faster memory also improves APU graphics performance quite a lot.

    Point 4: AMD has done high-CU APUs, but they used HBM2/GDDR5/GDDR6.

    Compared to most APUs with anywhere from 3 to 8 CUs, AMD supplied a 24 CU GPU for Intel's Core i7-8809G hybrid CPU/GPU monstrosity. The difference is that chip used dedicated HBM2, reserving the DDR4 for the CPU alone. Also, both the PS5 and Xbox Series X are using custom APUs, rumored at 36 and 56 CUs respectively. They're also not using DDR4 but GDDR6, with 448GB/s and 560GB/s of bandwidth respectively. (Side note: the 5700 XT uses 40 CUs.)

    Point 5: DDR5 is set to double the memory bandwidth of DDR4.

    DDR4/DDR5 bandwidth is largely a function of transfer rate, and the official DDR5 spec is set to top out at 6400MT/s, double what the official DDR4 spec allows. Give it a couple of years and I think we'll be seeing 8000MT/s DDR5. Either way, that's double what we have today, whether you compare against mainstream speeds or heavy overclocks.

    Point 6: More channels means more memory bandwidth.

    DDR4 already allows single, dual, quad, octa, etc. channel configurations. Normal desktops only use dual channel, while quad is for HEDT and octa is for the server space. I'm wondering if we'll start seeing quad-channel DDR5 APUs. DDR5-6400 in quad channel is over 200GB/s of bandwidth. Even with current GPU/APU designs, that would be enough for a significantly bigger iGPU and a corresponding performance increase. Now imagine some monster Threadripper-sized APU with eight channels: 16c/32t plus a 36 CU GPU under one heatsink, sitting in a tiny SFF case.
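
    Running the same peak-bandwidth math over the configurations imagined above (all of the DDR5 parts here are speculation on my part, not announced products):

        def ddr_bandwidth_gbs(mt_per_s, channels, bus_bytes=8):
            return mt_per_s * bus_bytes * channels / 1000  # GB/s

        scenarios = [
            ("DDR4-3200, dual channel (today)", 3200, 2),
            ("DDR5-6400, dual channel",         6400, 2),
            ("DDR5-6400, quad channel",         6400, 4),
            ("DDR5-8000, octa channel",         8000, 8),
        ]
        for name, rate, channels in scenarios:
            print(f"{name}: {ddr_bandwidth_gbs(rate, channels):.0f} GB/s")

    Quad-channel DDR5-6400 lands right around that 200GB/s mark, comparable to the GTX 1060's 192GB/s.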

    TL;DR

    Current iGPUs/APUs are limited by memory bandwidth, and DDR5 is poised to help fix that. The increased memory bandwidth of DDR5 will allow for faster, bigger iGPUs/APUs. This is not an Intel/AMD issue; it's a Micron/Samsung/Hynix problem.

    submitted by /u/crazyates88
    [link] [comments]

    Imagination Technologies charts its future with new Apple deal and post-MIPS strategy

    Posted: 20 Jan 2020 11:23 AM PST

    Why is DDR still single channel per DIMM?

    Posted: 20 Jan 2020 11:27 AM PST

    I never really understood why DDR is still single channel per DIMM (one read and one write path); why not have 2 channels per DIMM, for 4 in total? It would be a trade-off, since the RAM would probably cost a bit more, but it would have huge benefits. Hell, you could even have the option to run DIMMs in single-channel or dual-channel mode, and I don't see why that's not possible.

    It would hugely benefit bandwidth-intensive workloads like iGPUs, or just high core counts that need the bandwidth.

    submitted by /u/GegaMan
    [link] [comments]

    Would we see enough of a performance boost from 3D-stacking CPUs with a mass storage layer?

    Posted: 20 Jan 2020 10:15 AM PST

    AMD is going to release a 64-core CPU soon. I'm wondering if the next step should be something different instead of a 128-core workstation part. For instance, AMD could take a 16-core Threadripper, delete 8 cores, add a GPU die, and perhaps implement a mass storage layer of, say, 128 or 256GB.

    APUs have existed for quite some time, but they're physically smaller than a TR chip.

    submitted by /u/The_toast_of_Reddit
    [link] [comments]

    Renoir Geekbench 4 result is out.

    Posted: 20 Jan 2020 10:29 AM PST

    https://browser.geekbench.com/v4/cpu/15154654

    Here is a 3700X for comparison: https://browser.geekbench.com/v4/cpu/15153086

    The 3700X has a 10% higher integer score with 5% higher clock speeds. Looks like Renoir is losing IPC.
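
    A quick clock-normalization of that claim (the 10% and 5% ratios are taken from the post itself, not re-derived from the raw Geekbench entries):

        score_ratio = 1.10   # 3700X integer score relative to the Renoir sample
        clock_ratio = 1.05   # 3700X clock speed relative to the Renoir sample
        ipc_gap = score_ratio / clock_ratio - 1
        print(f"Implied per-clock (IPC) deficit for Renoir: ~{ipc_gap:.1%}")  # ~4.8%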

    submitted by /u/tamz_msc
    [link] [comments]
