• Breaking News

    Sunday, September 20, 2020

    Hardware support: Apple Books TSMC’s Entire 5nm Production Capability

    Apple Books TSMC’s Entire 5nm Production Capability

    Posted: 19 Sep 2020 03:35 PM PDT

    GDDR6X at the limit? Over 100 degrees measured inside of the chip with the GeForce RTX 3080 FE! | Investigative

    Posted: 20 Sep 2020 12:28 AM PDT

    Xiaomi takes on Asus with rumored 240 Hz and 360 Hz external gaming monitors with prices from just 999 yuan (US$148)

    Posted: 19 Sep 2020 12:13 PM PDT

    NVIDIA GeForce RTX 3090 gaming performance review leaks out - VideoCardz.com

    Posted: 19 Sep 2020 06:01 AM PDT

    Resellers Used Bots to Dominate the RTX 3080 Launch

    Posted: 19 Sep 2020 09:07 PM PDT

    Gamers Nexus - Custom RTX 3080 Overclocking: EVGA FTW3, ASUS TUF, & Gigabyte Eagle Results (Stream Recap)

    Posted: 19 Sep 2020 07:02 PM PDT

    Netgear Firmware Requires Online Registration

    Posted: 20 Sep 2020 01:57 AM PDT

    What exactly does it mean when Samsung’s 8nm yields are “bad”?

    Posted: 19 Sep 2020 06:23 PM PDT

    I'm a programmer, so I never really got into hardware that much.

    I've heard Samsung's new 8nm process yields are trash.

    What exactly does it mean when chip yields are bad?

    Does it mean that sometimes when you make CPUs, graphics processing chips, etc., they just won't work correctly, so you have to throw them away? Kind of like making a car, but the horsepower is way less than advertised?

    submitted by /u/ineedandlove_acid
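    To put the yield question in concrete terms: fabs commonly model die yield as a function of defect density and die area. Below is a minimal sketch using the classic Poisson yield model, with purely illustrative numbers (not Samsung's actual process data); the point is just that bigger dies and higher defect densities mean more discarded or cut-down chips.

```python
import math

def poisson_die_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Fraction of dies expected to work under the simple Poisson model:
    yield = exp(-D * A). Real fabs use fancier models (Murphy, negative
    binomial), but the intuition is the same."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative numbers only -- not any foundry's real figures.
for defects_per_cm2 in (0.1, 0.3, 0.5):
    for die_area_cm2 in (1.0, 4.0, 6.0):  # a big GPU die is several cm^2
        y = poisson_die_yield(defects_per_cm2, die_area_cm2)
        print(f"D={defects_per_cm2}/cm^2, A={die_area_cm2} cm^2 -> ~{y:.0%} good dies")
```

    So "bad yields" means a large fraction of the dies printed on each wafer are defective and must be scrapped or sold as partially disabled, lower-tier parts, which drives up the effective cost of every working chip.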

    [EuroGamer] This is how Xbox Series S backwards compatibility really works

    Posted: 19 Sep 2020 07:44 AM PDT

    ADATA XPG Launches a PCIe 4.0 x4 NVMe SSD for Notebooks: Gammix S50 Lite

    Posted: 20 Sep 2020 02:23 AM PDT

    MSI RTX 3080 Gaming X Trio Review, Thermals, Overclocking & Gaming Benchmarks

    Posted: 19 Sep 2020 09:35 AM PDT

    RTX 3080 Launch... and discussion of bot countermeasures and impact

    Posted: 19 Sep 2020 01:06 PM PDT

    Intel Core i7 10700K vs AMD Ryzen 5 3600X with NVIDIA RTX 3080 | Gaming Benchmark

    Posted: 20 Sep 2020 02:48 AM PDT

    The Hardware Lottery

    Posted: 19 Sep 2020 04:17 AM PDT

    Huang’s Law Is the New Moore’s Law, and Explains Why Nvidia Wants Arm -- WSJ

    Posted: 19 Sep 2020 06:40 AM PDT

    https://www.wsj.com/articles/huangs-law-is-the-new-moores-law-and-explains-why-nvidia-wants-arm-11600488001

    During modern computing's first epoch, one trend reigned supreme: Moore's Law.

    Actually a prediction by Intel Corp. co-founder Gordon Moore rather than any sort of physical law, Moore's Law held that the number of transistors on a chip doubles roughly every two years. It also meant that the performance of those chips (and the computers they powered) increased by a substantial amount on roughly the same timetable. This formed the industry's core, the glowing crucible from which sprang trillion-dollar technologies that upended almost every aspect of our day-to-day existence.

    As chip makers have reached the limits of atomic-scale circuitry and the physics of electrons, Moore's law has slowed, and some say it's over. But a different law, potentially no less consequential for computing's next half century, has arisen.

    I call it Huang's Law, after Nvidia Corp. chief executive and co-founder Jensen Huang. It describes how the silicon chips that power artificial intelligence more than double in performance every two years. While the increase can be attributed to both hardware and software, its steady progress makes it a unique enabler of everything from autonomous cars, trucks and ships to the face, voice and object recognition in our personal gadgets.

    Power Surge (chart): Nvidia's latest microchip tailored for AI is many times faster and more efficient than it was in 2012. The chart plots the speed and energy efficiency of Nvidia's chips as a multiple of their 2012 performance. Source: the company

    Between November 2012 and this May, performance of Nvidia's chips increased 317 times for an important class of AI calculations, says Bill Dally, chief scientist and senior vice president of research at Nvidia. On average, in other words, the performance of these chips more than doubled every year, a rate of progress that makes Moore's Law pale in comparison.
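    As a quick back-of-the-envelope check on that claim, using only the figures quoted above (a 317-fold increase between November 2012 and May 2020, roughly 7.5 years):

```python
import math

total_gain = 317          # performance multiple quoted by Nvidia's Bill Dally
years = 7.5               # November 2012 to May 2020

annual_factor = total_gain ** (1 / years)               # ~2.2x per year
doubling_time = math.log(2) / math.log(annual_factor)   # ~0.9 years

print(f"implied annual growth: ~{annual_factor:.2f}x")
print(f"implied doubling time: ~{doubling_time:.2f} years (vs ~2 years for Moore's Law)")
```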

    Nvidia's specialty has long been graphics processing units, or GPUs, which operate efficiently when there are many independent tasks to be done simultaneously. Central processing units, or CPUs, the kind Intel specializes in, are by contrast much less efficient at that sort of parallel work but better at executing a single, serial task very quickly. You can't chop up every computing process so that it can be handled efficiently by a GPU, but for the ones you can, including many AI applications, you can perform them many times as fast while expending the same power.
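    A loose illustration of that contrast in plain Python and NumPy (this is ordinary CPU code, not GPU code, but it shows the difference between an explicitly serial chain of steps and a data-parallel formulation of the same arithmetic, which is the style that maps naturally onto a GPU's thousands of cores):

```python
import numpy as np

values = np.random.rand(100_000)

# Serial style: one long dependent chain of small operations on a single
# core -- the kind of work a fast CPU core handles well and a GPU does not.
serial_total = 0.0
for v in values:
    serial_total += v * 0.5 + 0.1

# Data-parallel style: the same arithmetic expressed as one whole-array
# operation. Each element is independent, so the work could be spread
# across many simple cores at once -- which is what GPUs are built for.
parallel_total = float(np.sum(values * 0.5 + 0.1))

assert abs(serial_total - parallel_total) < 1e-6 * parallel_total
```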

    Intel was a primary driver of Moore's Law, but it was hardly the only one. Perpetuating it required tens of thousands of engineers and billions of dollars in investment across hundreds of companies around the globe. Similarly, Nvidia isn't alone in driving Huang's Law—and in fact its own type of AI processing might, in some applications, be losing its appeal. That's probably a major reason it has moved to acquire chip architect Arm Holdings this month, another company key to ongoing improvement in the speed of AI, for $40 billion.

    The pace of improvement in AI-specific hardware will make possible a range of applications both utopian and dystopian, from the end of automobile accidents to ubiquitous surveillance. But it's also enabling, right now, a less fantastical application with huge implications for how we shop and the fate of millions of retail jobs: cashierless checkout.

    Standard's checkout technology tracks customers and the products they pick up using cameras and an Nvidia-powered system in the back of the store that performs tens of trillions of calculations a second. Photo: Standard AI

    San Francisco-based tech company Standard recently announced a deal with Circle K to turn some of its stores into "grab and go" experiences in the mold of Amazon.com Inc.'s Amazon Go stores. The three-year-old startup installs cameras throughout stores, then routes video from them to Nvidia-powered systems in the back, which perform tens of trillions of calculations a second. As shoppers grab objects off store shelves, the system tallies it all, and bills them through their mobile devices as they walk out.
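    For a sense of what the software side of such a system has to do, here is a deliberately simplified, hypothetical sketch of the "grab and go" flow the article describes (camera-fed pickup events feed a running per-shopper tally, which is billed on exit); none of the names below are Standard's actual APIs:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Shopper:
    shopper_id: str
    basket: Counter = field(default_factory=Counter)

class CheckoutFreeStore:
    """Hypothetical sketch: vision models watching the cameras emit
    pickup/putback events; the store keeps a per-shopper tally and
    bills it when the shopper walks out."""

    def __init__(self):
        self.shoppers: dict[str, Shopper] = {}

    def on_pickup(self, shopper_id: str, product: str) -> None:
        # In a real deployment this event would come from inference servers
        # processing the in-store camera feeds, not from a direct call.
        self.shoppers.setdefault(shopper_id, Shopper(shopper_id)).basket[product] += 1

    def on_putback(self, shopper_id: str, product: str) -> None:
        self.shoppers[shopper_id].basket[product] -= 1

    def on_exit(self, shopper_id: str, prices: dict) -> float:
        shopper = self.shoppers.pop(shopper_id)
        return sum(prices[item] * n for item, n in shopper.basket.items() if n > 0)

store = CheckoutFreeStore()
store.on_pickup("s1", "coffee")
store.on_pickup("s1", "sandwich")
store.on_putback("s1", "sandwich")
print(store.on_exit("s1", {"coffee": 2.50, "sandwich": 5.00}))  # 2.5
```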

    For perspective, a system performing this many operations a second is faster than the most powerful supercomputer in the world was as recently as 2012, at least at AI inference tasks.

    "Honestly we could do nothing and just wait and Nvidia will drop our prices every year," says Jordan Fisher, Standard's founder and CEO.

    TuSimple's autonomous truck has some of the latest AI computing power installed in its cab. Photo: TuSimple

    Another category that Huang's Law affects is autonomous vehicles. At San Diego-based TuSimple, a rapidly expanding autonomous-trucking startup, the challenge is making a self-driving system that can fit the power and space limitations of a diesel-powered semi-trailer truck. On a typical TuSimple vehicle, that means cramming the entire system, which can't draw more than 5 kilowatts, into an air-cooled cabinet in the sleeper cab.

    Given such power constraints, what matters most is performance per watt. TuSimple is seeing performance double every year on its Nvidia-powered systems, says Xiaodi Hou, the company's co-founder and chief technology officer.

    Similar boosts in performance have been occurring since the mid-2000s in a very different area of AI: our mobile phones.

    In 2017, Apple introduced the iPhone 8, which included its Neural Engine. Apple designed the chip specifically to run machine-learning tasks, which are important to many kinds of AI. (Its chip-manufacturing partner is Taiwan Semiconductor Manufacturing Co.)

    Apple's decision to make the chip accessible to any app on the phone, as well as the introduction of comparable chips and software on Android phones, allowed for new kinds of AI businesses, says Bruno Fernandez-Ruiz, co-founder and chief technology officer of Nexar, a company that makes AI-powered dashboard cameras for cars. By processing streams of dashboard-camera video directly on users' phones, Nexar's technology can alert drivers to imminent hazards.

    Uses of mobile AI are multiplying, in phones and smart devices ranging from dishwashers to door locks to lightbulbs, as well as the millions of sensors making their way to cities, factories and industrial facilities. And chip designer Arm Holdings—whose patents Apple, among many tech companies large and small, licenses for its iPhone chips—is at the center of this revolution.

    Over the last three to five years, machine-learning networks have been increasing by orders of magnitude in efficiency, says Dennis Laudick, vice president of marketing in Arm's machine-learning group. "Now it's more about making things work in a smaller and smaller environment," he adds. Arm's smallest and most energy-sipping chips, tiny enough to be powered by a watch battery, can now enable cameras to recognize objects in real time.

    This movement of AI processing from the cloud to the "edge," that is, to the devices themselves, explains Nvidia's desire to buy Arm, says Nexar co-founder and CEO Eran Shir. Nvidia has a near monopoly on AI processing in the cloud. But where two years ago Nexar performed 40% of its data processing in the cloud, Arm-based chips have enabled it to do much more of that processing on mobile devices, and faster, since the video doesn't have to be transmitted over the internet first. Today, the cloud is doing only 15% of the work. In addition, some functions, like a vision-based parking assistant, were not even possible until recently, when the chips in phones became much more capable.

    Experts agree that the phenomenon I've labeled Huang's Law is advancing at a blistering pace. However, its exact cadence can be difficult to nail down. The nonprofit OpenAI says that, based on a classic AI image-recognition test, performance doubles roughly every year and a half. But it's been a challenge even to agree on the definition of "performance." A consortium of researchers from Google, Baidu, Harvard, Stanford and practically every other major tech company is collaborating on an effort to measure it better and more objectively.
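    The disagreement over cadence matters because small differences in doubling time compound dramatically. A quick comparison, using only the doubling periods mentioned in this article:

```python
# Compounded gain over a decade for different doubling periods.
for label, doubling_years in [("~1 year (Huang's Law, per Nvidia's figures)", 1.0),
                              ("~1.5 years (OpenAI's estimate)", 1.5),
                              ("~2 years (classic Moore's Law)", 2.0)]:
    gain = 2 ** (10 / doubling_years)
    print(f"doubling every {label}: ~{gain:,.0f}x after 10 years")
```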

    Another caveat for Huang's Law is that it describes processing power that can't be thrown at every application. Even in a stereotypically AI-centric task like autonomous driving, most of the code the system is running requires the CPU, says TuSimple's Mr. Hou. Dr. Dally of Nvidia acknowledges this problem, and says that when engineers radically speed up one part of a calculation, whatever remains that can't be sped up naturally becomes the bottleneck.

    It's also possible that, like Moore's Law before it, Huang's Law will run out of steam. That could happen within a decade, says Steve Roddy, vice president of product marketing in Arm's machine-learning group. But it could enable much in that relatively short time, from driverless cars to factories and homes that sense and respond to their environments.

    Copyright ©2020 Dow Jones & Company, Inc.

    submitted by /u/robmak3
