The prototype’s data processing and memory circuits use less than a tenth as much electricity as any comparable electronic device. Yet despite its small size, researchers designed it to perform many advanced computing feats.
Electronic computing was born in the form of massive machines in air-conditioned rooms, migrated to desktops and laptops, and lives today in tiny devices like watches and smartphones.
But why stop there, the researchers asked. Why not build an entire computer onto a single chip? It could have processing circuits, memory storage, and a power supply to perform a given task, such as measuring moisture in a row of crops. Equipped with machine learning algorithms, the chip could make on-the-spot decisions, such as when to water. And with wireless technology, it could send and receive data over the internet.
Engineers call this vision of ubiquitous computing the Internet of Everything. But to achieve it they’ll need to develop a new class of chips to serve as its foundation. That’s where the new prototype comes in.
“This is what engineers do,” says Subhasish Mitra, a professor of electrical engineering and of computer science at Stanford University who worked on the chip. “We create a whole that is greater than the sum of its parts.”
BETTER MEMORY
The prototype is built around a new data storage technology called RRAM (resistive random access memory), which has features essential for this new class of chips: storage density to pack more data into less space than other forms of memory; energy efficiency that won’t overtax limited power supplies; and the ability to retain data when the chip hibernates, which researchers designed it to do as an energy-saving tactic.
RRAM has another essential advantage. Engineers can build RRAM directly atop a processing circuit to integrate data storage and computation into a single chip. Researchers have pioneered this concept of uniting memory and processing into one chip because it’s faster and more energy efficient than passing data back and forth between separate chips, as is the case today. A team at the CEA-LETI research institute in Grenoble, France, was responsible for grafting the RRAM onto a silicon processor.
To improve the storage capacity of RRAM, the researchers made a number of changes. One was to increase how much information each storage unit, called a cell, can hold. Memory devices typically consist of cells that can store either a zero or a one. The researchers devised a way to pack five values into each memory cell, rather than just the two standard options.
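The density gain from five-level cells follows directly from information theory: a cell with five distinguishable states holds log2(5) ≈ 2.32 bits instead of one. The sketch below is an illustrative model only (the function names and encoding scheme are my own, not the researchers'), showing how data can be mapped onto five-level cells as base-5 digits and how many fewer cells the same payload then needs.

```python
import math

def bits_per_cell(levels):
    """Information capacity of one memory cell with `levels` states."""
    return math.log2(levels)

def cells_needed(num_bits, levels):
    """Cells required to store num_bits of data at the given level count."""
    return math.ceil(num_bits / bits_per_cell(levels))

def encode_base5(value, num_cells):
    """Encode an integer as base-5 digits, one digit per five-level cell."""
    digits = []
    for _ in range(num_cells):
        digits.append(value % 5)   # each digit is one cell's state (0-4)
        value //= 5
    return digits

def decode_base5(digits):
    """Recover the integer from its base-5 cell states."""
    value = 0
    for d in reversed(digits):
        value = value * 5 + d
    return value
```

For example, storing 1,024 bits takes 1,024 conventional binary cells, but only 442 five-level cells, a density improvement of more than 2x.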
A second enhancement improved the endurance of RRAM. Think about data storage from a chip’s point of view: As data is continuously written to a chip’s memory cells, they can become exhausted, scrambling data and causing errors. The researchers developed an algorithm to prevent such exhaustion. They tested the endurance of their prototype and found that it should have a 10-year lifespan.
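The article does not describe the team's endurance algorithm in detail, but the general technique for preventing cell exhaustion is wear leveling: spreading writes across all physical cells so no single cell wears out far ahead of the rest. The toy model below is a hedged sketch of that general idea, with hypothetical names; it is not the prototype's actual algorithm.

```python
class WearLeveledMemory:
    """Toy model of wear leveling: each write goes to the least-worn
    cell, so the whole array wears out together rather than one cell
    failing early. Illustrative only -- not the prototype's algorithm."""

    def __init__(self, num_cells, endurance):
        self.endurance = endurance       # writes a cell tolerates before wearing out
        self.writes = [0] * num_cells    # per-cell write counters
        self.data = [None] * num_cells

    def write(self, value):
        # Greedy leveling: direct the write to the least-worn cell.
        target = min(range(len(self.writes)), key=self.writes.__getitem__)
        if self.writes[target] >= self.endurance:
            raise RuntimeError("memory exhausted")
        self.writes[target] += 1
        self.data[target] = value
        return target
```

With this policy, an array of N cells that each tolerate E writes survives roughly N x E writes in total, instead of failing after the first E writes hammered onto one address.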
THE FUTURE OF COMPUTERS
Mitra says the team’s computer scientists and electrical engineers worked together to integrate many software and hardware technologies on the prototype, which is currently about the diameter of a pencil eraser.
Although that is too large for futuristic Internet of Everything applications, scientists could incorporate the way the prototype combines memory and processing into the chips found in smartphones and other mobile devices.
Chip manufacturers are already showing interest in this new architecture, which was one of the goals of the team. Mitra says experience gained manufacturing one generation of chips fuels efforts to make the next iteration smaller, faster, cheaper, and more capable.
The researchers will unveil the computer-on-a-chip prototype at the International Solid-State Circuits Conference in San Francisco. Additional researchers from Stanford, CEA-LETI, and Nanyang Technological University in Singapore contributed to the work. The Defense Advanced Research Projects Agency, the Stanford SystemX Alliance, the National Science Foundation, the Semiconductor Research Corporation, and the Carnot Chair of Excellence at CEA-LETI supported the research.
Source: Stanford University