Via Andrew Sullivan, I learned of Douglas Fox's article in October's Discover Magazine, covering Stanford Professor Kwabena Boahen's work on a new type of computer chip -- one with an architecture styled after the human brain.

Why do we want a brain-like chip? Well, it turns out that the human brain is not only an amazingly powerful computer, but an amazingly efficient one as well.

"The human brain runs on only about 20 watts of power, equal to the dim light behind the pickle jar in your refrigerator. By contrast, the computer on your desk consumes a million times as much energy per calculation. If you wanted to build a robot with a processor as smart as the human brain, it would require 10 to 20 megawatts of electricity. “Ten megawatts is a small hydroelectric plant,” Boahen says dismissively... Today’s transistors are 1/100,000 the size that they were a half century ago, and computer chips are 10 million times faster—we still have not made meaningful progress on the energy front."

This level of energy consumption precludes not only the creation of human-style robots like Data on Star Trek, but even, as the article explains, 'medical implant[s] to replace just 1 percent of the neurons in the brain, for use in stroke patients'. These devices would simply require too damn much energy.
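
Just to put those figures side by side (using the article's own numbers; any 'per calculation' comparison depends heavily on what you count as a brain calculation), here is a quick back-of-the-envelope check:

```python
# Back-of-the-envelope arithmetic using the figures quoted above.
brain_watts = 20                 # the brain: about the bulb behind the pickle jar
robot_brain_watts = 10_000_000   # low end of the quoted 10-20 megawatt estimate

ratio = robot_brain_watts / brain_watts
print(f"A silicon 'brain' would draw roughly {ratio:,.0f} times the power of the real one")
# -> A silicon 'brain' would draw roughly 500,000 times the power of the real one
```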

Why are our computers such juice hogs? It's all about eliminating the errors that creep in when the signal-to-noise ratio drops too low.

"Traditional digital computers depend on millions of transistors opening and closing with near perfection, making an error less than once per 1 trillion times... Engineers ensure that the millions of transistors on a chip behave reliably by slamming them with high voltages—essentially, pumping up the difference between a 1 and a 0 so that random variations in voltage are less likely to make one look like the other."

And -- voila! -- we have the biggest culprit behind digital computers' energy consumption.
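
To get an intuition for why slamming transistors with higher voltages buys reliability, here is a toy simulation of my own (the voltage swings and noise level are invented for illustration, not real process parameters): widening the gap between the levels that represent a 0 and a 1 makes it dramatically less likely that random noise pushes one across the read threshold -- and that wider swing is precisely what costs energy.

```python
import random

def bit_error_rate(swing_volts, noise_sigma=0.1, trials=1_000_000):
    """Fraction of reads where Gaussian noise flips a stored 0 into a 1.

    A 0 is stored at 0 V, a 1 at `swing_volts`; the read threshold sits halfway.
    All numbers here are illustrative, not real process parameters.
    """
    threshold = swing_volts / 2
    errors = sum(1 for _ in range(trials) if random.gauss(0.0, noise_sigma) > threshold)
    return errors / trials

for swing in (0.2, 0.4, 0.8):
    print(f"{swing:.1f} V swing -> error rate ~ {bit_error_rate(swing):.2e}")

# Each doubling of the swing slashes the error rate, but dynamic power in CMOS
# grows roughly with the square of the voltage -- reliability is bought with energy.
```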

How can we reduce the power consumption of digital computers? The short answer is that we cannot do all that much, because digital computing relies on these high levels of signal accuracy, and no one has found an alternative to power boosting that delivers this level of reliability.

The brain has an alternative method for overcoming the signal-to-noise problem. While digital computing's answer is brute force (power to enhance the accuracy of individual reporters), neural computing's answer is overwhelming force (sheer quantity to enhance the sample size). "The brain manages noise by using large numbers of neurons whenever it can. It makes important decisions ... by having sizable groups of neurons compete with each other—a shouting match ... in which the accidental silence (or spontaneous outburst) of a few nerve cells is overwhelmed by thousands of others. The winners silence the losers so that ambiguous, and possibly misleading, information is not sent to other brain areas."
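
To see the shouting match in action, here is a toy vote of my own devising (not from the article): a single unreliable neuron misreports a noticeable fraction of the time, while a thousand of them voting on the same signal essentially never get it wrong.

```python
import random

def noisy_neuron(true_signal, failure_rate=0.3):
    """Report the signal, but misfire or fall silent some fraction of the time."""
    return true_signal if random.random() > failure_rate else random.choice([0, 1])

def population_vote(true_signal, n_neurons=1000, failure_rate=0.3):
    """The shouting match: whichever answer more neurons report wins."""
    votes = sum(noisy_neuron(true_signal, failure_rate) for _ in range(n_neurons))
    return 1 if votes > n_neurons / 2 else 0

trials = 2_000
single_errors = sum(noisy_neuron(1) != 1 for _ in range(trials))
group_errors = sum(population_vote(1) != 1 for _ in range(trials))

print(f"single neuron wrong:    {single_errors / trials:.1%}")   # roughly 15%
print(f"1000-neuron vote wrong: {group_errors / trials:.1%}")    # effectively 0%
```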

Rather than boosting the accuracy of individual neurons, the brain employs an evaluation layer to weigh incoming messages, which can account for 'fuzziness', ambiguity and errors. For this mechanism to function reliably, you need lots and lots of inputs, which is why we all have so many neurons. And in order to support that many neurons, the brain evolved an architecture that consumes relatively little energy. In fact, recent research suggests that firing neurons accounts for only about 15% of the brain's energy budget, with something like 80% apparently reserved for this evaluation layer.

Boahen's new chip, the Neurogrid, is based on this neural architecture, and would represent nothing less than a true revolution in computing. "Neurogrid’s 1 million neurons are expected to sip less than a watt... Most modern supercomputers are the size of a refrigerator and devour $100,000 to $1 million of electricity per year. Boahen’s Neurogrid will fit in a briefcase, run on the equivalent of a few D batteries."

While the power savings are impressive, honestly, that's just one of the many benefits I see in neural computing, if it can be executed successfully. In particular, the article touches on three other major limitations of digital architecture that neural computing could address.

The first is error handling. Because digital computing relies on models of extreme accuracy, we build systems from, and on, these digital devices that assume perfect accuracy. And when failures occur (as they inevitably do, sometimes often), our digital systems cannot handle them. "A single transistor accidentally flipping can crash a computer or shift a decimal point in your bank account". In comparison, in the brain, "synapses fail to fire 30 percent to 90 percent of the time," and yet most of us do not reboot at such occurrences, nor do we (most of the time) lapse into a nervous breakdown. The structure of our neural computers is stunningly effective at error handling, which could mean much more flexible, stable and accommodating computing experiences.
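
A hypothetical sketch of that contrast (my own toy encoding, not anything Boahen proposes): flip a single bit in a stored number and the value is ruined, but spread the same value across thousands of unreliable synapses and you can lose well over half the messages and still read it back.

```python
import random

# Digital fragility: a single flipped bit in a stored integer is catastrophic.
balance = 1000
corrupted = balance ^ (1 << 20)        # one transistor "accidentally flipping"
print(f"one bit flip: {balance} -> {corrupted}")          # 1000 -> 1049576

# Neural-style robustness: encode the value as a firing rate across many
# synapses, each of which fails to transmit 60% of the time.
def population_estimate(value, n_synapses=10_000, failure_rate=0.6, max_value=2000):
    delivered = sum(
        1 for _ in range(n_synapses)
        if random.random() < value / max_value and random.random() > failure_rate
    )
    # Scale back up, knowing what fraction of messages get through on average.
    return delivered / n_synapses / (1 - failure_rate) * max_value

print(f"population estimate: ~{population_estimate(1000):.0f}")  # ~1000, within a couple percent
```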

The second relevant issue is that digital architecture shows signs of nearing the end of its course. Moore's law is reaching its end. "As transistors shrink to the width of just a few dozen silicon atoms, the problem of noise is increasing." Power boosts will thus cease to be able to overcome the 'errors' inherent in the circuitry, and continued reliance on digital architectures might mean that further advances in computing power and experience could only be achieved through the increased networking of devices -- exploiting Metcalfe's law as we leave Moore's law by the wayside.

The third limitation of digital computing is one that has long intrigued me. Simply stated, computers aren't good at being creative. And, with such a rigid, predictive architecture underpinning all of digital computing, it's not surprising. Neural computing has the potential to address this. As the article tantalizingly alludes, "some scientists even see neural noise as the key to human creativity."

How might this be possible?

Returning to the brain, I would argue that one mechanical model of creativity could be explained as follows (a toy code sketch of the loop follows the list):

- mechanically, humans perceive through the firing of neurons
- these messages from the neurons bubble up to the consciousness
- consciousness continuously seeks patterns and connections from these perceptions
- neurons fire accidentally (on a stunningly frequent basis)
- the mechanical systems of the brain would have no knowledge of whether the firing of the neuron was accidental
- thus, the erroneous message would bubble up to the consciousness, just as a correct one would, for evaluation
- during evaluation, the consciousness can weigh the validity and importance of the message, and it will also scan erroneous messages for patterns and connections
- and thus we have the potential for creativity, with the consciousness conceiving and considering ideas that were never actually present in the perceptions, and that may never have existed before
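
Here is that toy sketch of the loop in code, with the caveat that the 'concepts', the noise rate and the evaluation rule are all invented for illustration -- a cartoon of the bullets above, not a model of real neural machinery.

```python
import random

CONCEPTS = ["rain", "umbrella", "piano", "staircase", "bird"]
# Associations that are actually present in the "world" the senses report on.
REAL_PAIRS = {frozenset({"rain", "umbrella"}), frozenset({"bird", "rain"})}

def perceive(noise_rate=0.1):
    """Bottom-up perception: a genuine pair of co-active concepts, plus the
    occasional neuron firing accidentally and dragging in an unrelated concept."""
    active = set(random.choice(list(REAL_PAIRS)))
    if random.random() < noise_rate:
        active.add(random.choice(CONCEPTS))
    return active

def evaluate(active):
    """The evaluation step: every co-activation is considered as a candidate
    pattern, whether it came from a real percept or from an accidental firing."""
    pairs = {frozenset({a, b}) for a in active for b in active if a != b}
    return pairs - REAL_PAIRS          # keep only associations the world never supplied

novel_ideas = set()
for _ in range(1000):
    novel_ideas |= evaluate(perceive())

print("candidate 'ideas' that no real percept ever contained:")
print(sorted(tuple(sorted(idea)) for idea in novel_ideas))
# e.g. [('bird', 'piano'), ('piano', 'rain'), ('rain', 'staircase'), ...]
```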

If this is a plausible model for the existence of creativity in human intelligence, it seems to me that the same mechanism could be ported to support creativity in artificial intelligence (even if that is not the primary goal of the mechanism). As the article explains, "Neurogrid’s noisy processors will not have anything like a digital computer’s rigorous precision. They may, however, allow us to accomplish everyday miracles that digital computers struggle with, like prancing across a crowded room on two legs or recognizing a face."

Let's use an analogy to state this in different terms. First, let's consider a prison. To the extent prisons operate effectively, they do so because of rules -- predictive indications that determine everything that will happen and when it will happen. Theoretically, I should be able to state where every single prisoner will be at 4:29P tomorrow afternoon. Since deviations from these rules lead to a breakdown in predictability, the system is set up to reduce them (try getting out to the rec yard when you're locked in your cell), which is why deviations are treated so harshly when they do occur.

Now, let's consider an artist's studio. To the extent that an artist's studio operates effectively, it does so regardless of the rules (or, more specifically, standard, global rules). There are some standard rules, of course -- you should always cap the paints when you're done, since not doing so is a silly waste of money. And you have to produce a certain amount of work -- otherwise, why are you there? But, beyond these basic rules (which are intended to provide basic operational stability), the creativity of the artists and the studio in large part depends on the existence of a free, loose and serendipitous environment. Attire is not prescribed -- people wear what they want. The start time might be 'before noon'. Keeping your desk a complete mess is likely tolerated, if not encouraged -- as with drinking beer during the workday. Creativity is fostered by an absence of global rules, relying on the presence of intelligent actors to evaluate circumstances as they occur, applying dynamically created rules in real time -- but damned if I'll be able to tell you where everyone's gonna be at 4:29P tomorrow, or even 15 seconds from now!

In this analogy, the model of digital computing is akin to that of the prison, actively reducing the number of unexpected inputs, whereas neural computing is more like the artist's studio, assuming these variances will occur. One environment is clearly and vastly superior at generating standardization and the other is superior at generating creativity. Digital computing is based on an architecture that ensures standards are adhered to, while neural computing relies on an architecture that assumes these standards will be breached.

To me, this shift in thinking on chip architecture from digital to neural evokes the history of modern physics. Digital computing, just like the elegant and predictive models of Newtonian physics, breaks down at very, very small scales. In physics, we hit that breaking point when our study reached the subatomic level, and we saw the emergence of the probabilistic (not predictive) study of quantum mechanics. And so, in the world of computing, as we rapidly approach the limits of Moore's law, we have to turn to what at first glance might seem like crazy, psychedelic mechanisms to continue explaining and exploiting the world around us.

Share and enjoy!

-r