• Breaking News

    Monday, December 6, 2021

    Anandtech: "Imagination Launches Catapult Family of RISC-V CPU Cores: Breaking Into Heterogeneous SoCs"

    Posted: 06 Dec 2021 08:15 AM PST

    Intel to list shares in self-driving car unit Mobileye - WSJ

    Posted: 06 Dec 2021 05:20 PM PST

    [VideoCardz] AMD 4800S Desktop Kit supports Radeon RX 6600, launches next year

    Posted: 06 Dec 2021 10:17 AM PST

    Reuters: "EU regulators pause investigation into Nvidia, ARM deal"

    Posted: 06 Dec 2021 06:41 AM PST

    AWS Goes Wide And Deep With Graviton3 Server Chip

    Posted: 06 Dec 2021 06:20 AM PST

    Is there a reason why most modern CPUs only have 1 FPU per core, but multiple ALUs?

    Posted: 06 Dec 2021 12:40 AM PST

    I know the exception to this is AMD's FX series, like the original Bulldozer architecture, where one FPU was shared between two cores, along with other resources such as cache. That got AMD into hot water legally, since an "8-core" CPU really only had 4 FPUs in total. But I still wonder whether more shared resources on a CPU is really such a bad idea. I understand the Bulldozer architecture was a mess, but I feel like this approach could be very efficient if done correctly.

    It just seems like such a waste of die space to have a CPU running one or two threads at full load while the resources on six other cores sit below 50% utilization. What if you could push those idle resources to the main thread?

    Would it be possible to design a CPU that goes the opposite way, in a sense? Instead of two cores sharing one FPU and fighting over it like Bulldozer, let's say each core has two small FPUs and is capable of stealing a third one, plus some other resources, from its neighbor. That would essentially turn it into a "big" core and downgrade its neighbor into a "little" core, except much more dynamically than a static design such as Alder Lake: when it needs to, it can go back from the 3-1 configuration to a balanced 2-2 configuration with all cores equal. Essentially a "big.medium.little" design.

    The main reason I ask is because of comments Robert Hallock made in AMD's 5 Years of Ryzen presentation a few months ago. It really doesn't sound like they are taking the big.LITTLE approach in the traditional sense; instead they "are looking at the dynamic range of what those cores can do - from low power to high performance." They are simply expanding their architecture in both directions, but how do you actually do that?
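
    A toy scheduler model can make the trade-off concrete. The sketch below is purely hypothetical (the core counts, FPU throughput, and workload mix are made-up parameters, not any real microarchitecture): it compares the work retired when two neighboring cores keep a fixed 2-2 FPU split versus when an overloaded core may borrow an idle neighbor's FPU, 3-1 style.

        # Toy model: two neighboring cores, each with 2 FPUs by default.
        # Each FPU retires 1 unit of work per cycle. In "dynamic" mode, a
        # core with a backlog borrows one FPU from a neighbor whose queue
        # is empty. All numbers here are illustrative, not real hardware.

        def simulate(demand_a, demand_b, cycles=1000, dynamic=False):
            queue = [0.0, 0.0]   # pending work per core
            done = [0.0, 0.0]    # retired work per core
            for _ in range(cycles):
                queue[0] += demand_a
                queue[1] += demand_b
                fpus = [2, 2]    # static 2-2 split
                if dynamic:
                    if queue[0] > 2 and queue[1] < 1:
                        fpus = [3, 1]   # A steals one FPU from idle B
                    elif queue[1] > 2 and queue[0] < 1:
                        fpus = [1, 3]   # B steals one FPU from idle A
                for core in (0, 1):
                    retired = min(queue[core], fpus[core])
                    queue[core] -= retired
                    done[core] += retired
            return done

        # Core A is oversubscribed (3 units/cycle); core B is mostly idle.
        static = simulate(3.0, 0.5, dynamic=False)
        shared = simulate(3.0, 0.5, dynamic=True)
        print("static 2-2 :", static, "total:", sum(static))
        print("dynamic 3-1:", shared, "total:", sum(shared))

    Under these made-up numbers the dynamic split retires roughly 40% more total work, which is the intuition behind the question. What the model ignores is why real designs avoid this: FPU ports, schedulers, and register files are physically placed per core, so borrowing a neighbor's unit adds wire delay, scheduling complexity, and validation cost.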

    submitted by /u/bubblesort33

    Why did ARM agree to get bought by SoftBank in the 1st place?

    Posted: 06 Dec 2021 12:36 AM PST

    I don't understand why such a big and critical company agreed to be acquired. Did they not have enough money? Was their licensing business model unsustainable? Was it because of the shareholders?

    submitted by /u/Bak840

    Reuters: "U.S. says Nvidia-Arm deal harms market for networking, self-driving car chips"

    Posted: 06 Dec 2021 08:31 PM PST

    XDA Developers: "Chromebooks, Windows and how Mediatek plans to win computing"

    Posted: 05 Dec 2021 06:21 PM PST

    Elegant PCB Business Card made with Inkscape, Kicad and Svg2Shenzen.

    Posted: 06 Dec 2021 06:31 PM PST

    Bosch Gives Go-Ahead For Volume Production Of Silicon Carbide Chips

    Posted: 04 Dec 2021 06:51 PM PST

    Pics of Apple M1 Max Die Hint at Future Chiplet Designs | Tom's Hardware

    Posted: 04 Dec 2021 06:59 PM PST

    Theoretical question about CPU vs. GPU RAM

    Posted: 05 Dec 2021 06:35 PM PST

    I've been thinking about chips and custom implementations of all kinds of stuff recently, and I was wondering about the theory behind CPU RAM being slower than GPU RAM. Would it be viable to build a new-generation shared-memory system where most of the processing is done in the form of shaders (or something similar) on the GPU section, and the CPU is much smaller, handling only small things such as inputs or networking?
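
    A quick back-of-the-envelope calculation shows where the difference actually lives: peak bandwidth is roughly bus width times effective transfer rate, and GPUs buy their advantage with wide buses and high-clocked GDDR, while CPU DDR is tuned for low latency and expandability rather than raw throughput. The sketch below uses commonly quoted configurations (dual-channel DDR5-6000 and a 256-bit GDDR6 card at 16 Gbps per pin) purely as illustrative assumptions.

        # Peak bandwidth (GB/s) ~= bus width in bytes * transfer rate.
        # Configurations below are illustrative assumptions, not a spec.

        def peak_bandwidth_gbs(bus_width_bits, transfer_rate_mts):
            # bus_width_bits: total memory bus width in bits
            # transfer_rate_mts: effective transfer rate in MT/s per pin
            return bus_width_bits / 8 * transfer_rate_mts / 1000

        cpu = peak_bandwidth_gbs(128, 6000)    # dual-channel DDR5-6000
        gpu = peak_bandwidth_gbs(256, 16000)   # 256-bit GDDR6 @ 16 Gbps
        print(f"CPU DDR5 :  {cpu:.0f} GB/s")   # ~96 GB/s
        print(f"GPU GDDR6: {gpu:.0f} GB/s")    # ~512 GB/s

    The flip side is latency: GDDR trades looser timings and a soldered, point-to-point topology for that bandwidth. Unified-memory designs along the lines the question describes do exist (current game consoles put the CPU and GPU on shared GDDR6, and Apple's M-series uses wide LPDDR), and they accept exactly that trade-off.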

    submitted by /u/Impending-Coom
