Monday, August 18, 2014

IBM's "TrueNorth" neural net chips -- increased density and decreased power consumption will lead to surprising applications.

Big Data led to qualitative improvement in search -- will Big Neural Nets lead to qualitative improvement in artificial intelligence?

The field of artificial intelligence (AI) has two historical roots, dating from the 1950s. The first is the information-processing approach, exemplified by the chess-playing program of Newell, Simon and Shaw, who sought to understand how people performed tasks like playing chess by interviewing experts, then programming computers to emulate them.

Herbert Simon and Allen Newell asked chess experts to think out loud while playing and wrote programs that used the same heuristics.
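To make the flavor of that approach concrete, here is a minimal sketch of an expert-style evaluation heuristic. The feature (material balance) and weights are illustrative, not the actual rules from Newell, Simon and Shaw's program:

```python
# A minimal sketch of the "encode expert heuristics" approach.
# The values below are illustrative, not the rules from the
# Newell, Simon and Shaw chess program.

# Standard material values (pawn = 1) -- the kind of heuristic
# experts articulate readily when thinking out loud.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(board):
    """Score a position from White's point of view.

    `board` is a list of (piece, color) tuples, e.g. ("Q", "white").
    Positive scores favor White, negative favor Black.
    """
    score = 0
    for piece, color in board:
        value = PIECE_VALUES.get(piece, 0)
        score += value if color == "white" else -value
    return score

# White has a queen; Black has a rook and a pawn.
board = [("Q", "white"), ("R", "black"), ("P", "black")]
print(evaluate(board))  # 9 - 5 - 1 = 3: White is ahead
```

A program then searches the tree of possible moves, using the heuristic score to pick the move leading to the best reachable position.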

The second branch of AI was exemplified by the self-organizing neural networks of researchers like Clark and Farley, who sought to build programs that could learn to recognize patterns by emulating the neurons of a brain. For example, their programs could learn to discriminate between images of horizontal and vertical lines.
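Here is a toy sketch of that kind of learner -- not Clark and Farley's actual model, but a single linear unit trained with the classic perceptron rule. (Raw pixels alone are famously not linearly separable for this task, so the unit is given two simple summary features.)

```python
import numpy as np

# A toy sketch, not Clark and Farley's actual network: a single
# linear unit learning to tell horizontal from vertical bars in
# 4x4 binary images.

rng = np.random.default_rng(0)

def bar_image(orientation):
    """A 4x4 image containing one full-length bar in a random position."""
    img = np.zeros((4, 4))
    i = rng.integers(4)
    if orientation == "horizontal":
        img[i, :] = 1
    else:
        img[:, i] = 1
    return img

def features(img):
    """Two summary features: largest row sum and largest column sum."""
    return np.array([img.sum(axis=1).max(), img.sum(axis=0).max()])

w = np.zeros(2)
b = 0.0
for _ in range(100):                       # training passes
    orientation = rng.choice(["horizontal", "vertical"])
    x = features(bar_image(orientation))
    target = 1 if orientation == "horizontal" else -1
    if (1 if w @ x + b > 0 else -1) != target:   # learn from mistakes only
        w += target * x
        b += target

# After training, the unit classifies unseen bars correctly.
for orientation in ("horizontal", "vertical"):
    x = features(bar_image(orientation))
    print(orientation, "->", "horizontal" if w @ x + b > 0 else "vertical")
```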


(For a lot more on the history of AI, see The Quest for Artificial Intelligence: A History of Ideas and Achievements by Nils Nilsson.)

We've come a long way since then, and neural nets have been applied to many "subconscious" pattern recognition problems like speech and character recognition, robot control and spotting cats in images. The logical, algorithmic approach to AI has led to expert systems that emulate conscious thinking processes. IBM AI researchers characterize the difference by saying traditional AI programs are left-brained and neural nets are right-brained. Their goal is to create holistic systems that combine both approaches.


Those same IBM researchers have announced a dramatic improvement in neural net hardware -- they have sharply reduced the size and, equally important, the power requirements of simulated neurons and synapses.
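TrueNorth implements digital spiking neurons. As a rough illustration of what a "simulated neuron" means here, this is a minimal leaky integrate-and-fire sketch; the parameter values are illustrative, not TrueNorth's actual configuration:

```python
# A minimal software sketch of a leaky integrate-and-fire neuron,
# a simplified version of the spiking-neuron model that neuromorphic
# chips like TrueNorth implement in hardware. Parameter values are
# illustrative only.

def lif_neuron(input_spikes, weight=0.4, leak=0.1, threshold=1.0):
    """Yield 1 on each time step the neuron fires, else 0."""
    potential = 0.0
    for spike in input_spikes:
        potential += weight * spike              # integrate weighted input
        potential = max(0.0, potential - leak)   # leak toward resting level
        if potential >= threshold:               # fire and reset
            potential = 0.0
            yield 1
        else:
            yield 0

# A steady input stream makes the neuron fire periodically.
inputs = [1] * 20
print(list(lif_neuron(inputs)))  # fires every fourth step
```

Because such neurons do work only when spikes arrive, hardware built around them can sit idle most of the time, which is one reason these chips can run on so little power.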


The IBM researchers are able to tile their "TrueNorth" chips to create larger systems.


Their next goal is a 4,096-chip system with 4 billion neurons and 1 trillion synapses that consumes roughly 4 kW of power.

That is still not comparable to a human brain, which has roughly 86 billion neurons and 10^14 to 10^15 synapses while consuming only 20-40 watts of power. And even if they could build a system of comparable complexity, it would not be a brain -- it would be a system inspired by the architecture of the brain.
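To make the gap concrete, here is the back-of-the-envelope arithmetic, using IBM's published figure of about 1 million neurons and 256 million synapses per TrueNorth chip and the rough brain estimates above:

```python
# Back-of-the-envelope comparison of the planned 4,096-chip system
# with a human brain, using the figures quoted above.

chips = 4096
system_neurons = chips * 1e6          # ~4.1 billion neurons
system_synapses = chips * 256e6       # ~1.0 trillion synapses
system_watts = 4000                   # ~4 kW, per the stated goal

brain_neurons = 86e9
brain_synapses = 1e14                 # low end of the 10^14-10^15 range
brain_watts = 20                      # low end of the 20-40 W range

print(f"neuron gap:  {brain_neurons / system_neurons:,.0f}x")    # ~21x
print(f"synapse gap: {brain_synapses / system_synapses:,.0f}x")  # ~95x
print(f"watts per synapse, system: {system_watts / system_synapses:.1e}")
print(f"watts per synapse, brain:  {brain_watts / brain_synapses:.1e}")
```

Even on these generous assumptions, the brain has roughly 20 times the neurons, nearly 100 times the synapses, and a per-synapse power budget about four orders of magnitude smaller.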

We will not be able to follow the "reasoning" of neural nets as we can with the descendants of the early chess-playing programs, but, if the IBM researchers succeed, we will be surprised by the performance and applications of systems built from massive, low-cost, low-power neural nets.
