Christopher Mims reports in the Wall Street Journal:
“Most of the advances come from [chip] design and software.” Modern microchips contain not only CPUs but also two dozen or more separate digital-signal processors, as well as a graphics processing unit. Each element is optimized for a different task, from handling images to listening for the phone’s “wake word.” Microchips illustrate Steve Jobs’s famous quote paraphrasing Alan Kay: “People who are really serious about software should make their own hardware.”
Two of the biggest semiconductor companies made announcements last week that might seem unrelated, but are linked. Intel announced its acquisition of Israeli startup Mobileye, which makes chips and software for self-driving cars. Nvidia announced the latest generation of a system intended to speed up machine learning, which is necessary for artificial intelligence.
Both were driven by “specialized computing,” that is, moving specific software tasks into dedicated silicon chips instead of depending on an ever-faster do-it-all CPU, or central processing unit. The approach has existed in one form or another for decades, but it has lately become the driving force behind pretty much everything cool in technology, from artificial intelligence to self-driving cars. Why? Because those CPUs aren’t getting faster at the pace they once were. Moore’s Law is dying.
Moore’s Law is the notion that, every two years or so, the number of transistors in a chip doubles. Its popular conception is that computers keep getting faster, smaller and more power-efficient. That isn’t happening the way it used to. “It’s not like Moore’s Law is going to hit a brick wall—it’s going to kind of sputter to an end,” says Daniel Reed, chair of computational science and bioinformatics at the University of Iowa.
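Stated as arithmetic, the doubling rule is simple exponential growth: a chip that starts with N0 transistors would hold roughly N0 · 2^(t/2) of them t years later. Here is a minimal sketch of that projection in Python; the starting count and time horizon are invented for illustration, not taken from any particular product.

```python
# Illustration of the classic "doubling every two years" reading of Moore's Law.
# The starting count and horizon below are hypothetical.
def projected_transistors(start_count: float, years: float,
                          doubling_period_years: float = 2.0) -> float:
    """Project a transistor count assuming uninterrupted doubling."""
    return start_count * 2 ** (years / doubling_period_years)

# A hypothetical 1-billion-transistor chip, projected 10 years out: ~32 billion.
print(f"{projected_transistors(1e9, 10):,.0f}")
```

The article’s point is that this curve no longer holds in practice: the doubling period keeps stretching, which is what Reed means by “sputter to an end.”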
As Intel and the other chip manufacturers spend fortunes to keep the wheel turning, chip designers across the industry are finding creative ways to keep device performance improving at the old pace of Moore’s Law, and in many cases even faster.
“Most of the advances today come from [chip] design and software,” says Nvidia chief scientist William Dally. “For us it’s been a challenge because we feel under a lot of pressure to constantly deliver twice the performance per generation,” he adds. So far, Nvidia has accomplished that cadence even when the size of the elements on the chip doesn’t change, and the only thing that does is its design, or “architecture.”
Here’s a less-than-exhaustive list of the applications to which the principle of specialized computing has been applied: Artificial intelligence, image recognition, self-driving cars, virtual reality, bitcoin mining, drones, data centers, even photography. Pretty much every technology company that makes hardware or supplies it—including Apple, Samsung, Amazon, Qualcomm, Nvidia, Broadcom, Intel, Huawei and Xiaomi—is exploiting this phenomenon. Even companies that only produce chips for their own use, including Microsoft, Google, and Facebook, are doing it.
Many years ago, almost all computing was done with the CPU, one thing after another in sequence, says Keith Kressin, a senior vice president at Qualcomm. Gradually, often-used but processor-intensive tasks were diverted to specialized chips. Those tasks were processed in parallel, while the CPU did only what was absolutely required.
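The shift Mr. Kressin describes can be mimicked, very loosely, in software. The sketch below uses a process pool as a stand-in for specialized chips: the main process (playing the role of the CPU) only coordinates, while the heavy, repetitive work runs in parallel elsewhere. The function names and workload are made up for illustration.

```python
# Software analogy only: a process pool stands in for specialized processors.
# The main process (the "CPU") merely dispatches work and collects results.
from concurrent.futures import ProcessPoolExecutor

def enhance_frame(frame: bytes) -> bytes:
    """Placeholder for a processor-intensive chore (e.g., image enhancement)."""
    return bytes(255 - b for b in frame)  # trivial stand-in transformation

def main() -> None:
    frames = [bytes([i % 256]) * 1024 for i in range(8)]  # fake camera frames
    with ProcessPoolExecutor() as workers:                # the "specialized" units
        processed = list(workers.map(enhance_frame, frames))
    print(f"processed {len(processed)} frames off the main path")

if __name__ == "__main__":
    main()
```

The analogy is imperfect, since real DSPs and GPUs gain their advantage from dedicated circuitry rather than extra general-purpose cores, but the division of labor is the same: the CPU does only what is absolutely required.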
These task-focused chips come in a wide variety, reflecting the breadth of their uses, and the lines between them can be blurry. One kind, the graphics processing unit—think Nvidia and gamers—found wider use for tasks to which it’s uniquely suited, including artificial intelligence. Later on, the rise of smartphones created a gigantic need for another type, digital signal processing chips, designed to enhance photography, for example.
“Our goal is to minimize the amount of software running in the CPU,” says Mr. Kressin. As a result, modern microchips like Qualcomm’s Snapdragon, found in dozens of Android smartphones, can contain not only CPUs but also two dozen or more separate digital-signal processors, as well as a graphics processing unit. Each element is optimized for a different task, from handling images to listening for the phone’s “wake word.”
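One way to picture such a chip is as a routing table from task to processing block. The toy model below is illustrative only; the block names are generic stand-ins, not an actual Snapdragon block diagram.

```python
# Toy model of a heterogeneous system-on-chip: each kind of task is sent
# to the block nominally optimized for it. All names here are illustrative.
TASK_TO_BLOCK = {
    "image_enhancement": "camera_dsp",
    "wake_word_detection": "low_power_dsp",
    "3d_graphics": "gpu",
    "neural_inference": "gpu",
}

def dispatch(task: str) -> str:
    """Return the block that would handle a task, defaulting to the CPU."""
    return TASK_TO_BLOCK.get(task, "cpu")

print(dispatch("wake_word_detection"))  # low_power_dsp
print(dispatch("spreadsheet_math"))     # cpu (the general-purpose fallback)
```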
Moving chores like image enhancement to digital signal processors typically speeds them up by a factor of 25, says Mr. Kressin. That’s one reason your smartphone is able to do so many things that your desktop computer can’t, even if the phone’s CPU isn’t as powerful.
In phones, it was Apple that proved the utility of designing chips in-house, optimizing every nanometer of silicon for exactly the tasks it would handle, says Ben Bajarin, an analyst at market-research firm Creative Strategies. Apple’s huge investment in custom silicon is key to the smooth operation of its devices and to new features like the Touch ID fingerprint sensor.
For tasks involving artificial intelligence, switching from CPUs to graphics processors sped them up by anywhere from 10 to 100 times, says Nvidia’s Mr. Dally.
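That gap is easy to probe with the dense linear algebra at the heart of neural networks. The sketch below, assuming PyTorch is installed and a CUDA-capable GPU is present, times one large matrix multiplication on each device; the exact ratio depends entirely on the hardware, so treat the 10-to-100-times figure as Nvidia’s characterization rather than something this snippet guarantees.

```python
# Rough illustration: time a large matrix multiply (the core operation in
# neural-network training) on the CPU and, if available, on the GPU.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()   # make sure setup has finished
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the GPU to actually finish
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
else:
    print("No CUDA GPU available; only the CPU timing applies.")
```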
Similar performance gains for the AI behind self-driving technology are the reason that semiautonomous cars are already on America’s roads, and the reason that Intel bought Mobileye. Intel declined to comment for this piece, but integrating Mobileye’s silicon and software into Intel’s own chip technology would be a classic example of specialized computing.
Nvidia’s move into AI and self-driving tech is analogous, even if the company’s graphics processors are much more capable computers than the kind of ultra-specialized silicon typically used to speed up a process. Mr. Dally says the company has been tuning its graphics processors to be better for AI of every sort since 2010.
There are limits to the advantages gained by focusing on more specialized parts, as recent aviation history illustrates. “A [new Boeing] 777 doesn’t fly any faster than a 707 did at the beginning of the 1960s,” says Prof. Reed at the University of Iowa. But innovations, from lighter materials to computer controls, have yielded real benefits—planes are safer and more fuel-efficient. By the logic of his analogy, while raw chip performance might not improve, enabling a chip to handle dozens of specific tasks makes it more capable overall.
There’s also a problem of scale. Certain chip buyers—for instance, drone manufacturers—have had to make do with existing general-purpose microprocessors until they could demonstrate enough market demand, says Mr. Kressin. Qualcomm now makes Snapdragon chips specialized for drones: They have to crunch sensor information fast enough to keep the little autonomous copters from crashing or falling out of the sky.
The upside of all this specialization is that making faster chips is now primarily dependent on the cleverness of the chip designer, as opposed to the ability of manufacturers to etch ever more minuscule circuits into silicon, Prof. Reed says. As a result, microchips more than ever illustrate Steve Jobs’s famous quote paraphrasing computer scientist Alan Kay: “People who are really serious about software should make their own hardware.”