Dharmendra Modha has an array of 48 circuit boards, arranged 6 by 8 on a rack, each with its own processor. Modha describes the set-up as a small rodent. More accurately, it is a digital recreation of a small rodent's brain, and one that he wants to put in your smartphone.
Modha works for IBM, where he has led its Cognitive Computing group since 2008, developing the neuromorphic TrueNorth chip; the full 48-board cluster, with its 48 million artificial nerve cells, approaches the scale of a rodent brain. Researchers in Colorado working with the processor have developed software for it that can recognise spoken language and identify images using deep learning algorithms. The project is backed by a $53.5 million grant from DARPA, the US Department of Defense's research arm.
“What does a neuro-synaptic architecture give us? It lets us do things like image classification at a very, very low power consumption,” said Brian Van Essen, a computer scientist at Lawrence Livermore National Laboratory. “It lets us tackle new problems in new environments.”
The TrueNorth chip is a low-power vehicle for the kind of deep learning artificial intelligence being used by Google, Facebook, and Microsoft, which typically runs on far more power-hungry GPUs. That frugal power budget means TrueNorth has the potential to outperform GPU- and FPGA-based alternatives in energy efficiency.
Though TrueNorth cannot yet be described as a digital brain, the rodent-scale, synapse-inspired chip is certainly a step in that direction. “You don’t need to model the fundamental physics and chemistry and biology of the neurons to elicit useful computation,” Modha says. “We want to get as close to the brain as possible while maintaining flexibility.”
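To give a flavour of what Modha means by useful computation without full biophysics: neuromorphic designs typically use highly simplified spiking-neuron models, such as the leaky integrate-and-fire neuron. The sketch below is purely illustrative and not IBM's actual TrueNorth neuron model; the function name and all parameter values are invented for the example.

```python
# Illustrative leaky integrate-and-fire (LIF) neuron: the kind of simplified
# spiking model neuromorphic designs favour over detailed biophysical
# simulation. Parameters here are arbitrary, chosen for demonstration only.

def simulate_lif(input_current, leak=0.9, threshold=1.0):
    """Return a list of 0/1 spikes, one per input time step."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # integrate input, leak charge
        if potential >= threshold:              # fire when threshold is crossed
            spikes.append(1)
            potential = 0.0                     # reset after a spike
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input accumulates until the neuron fires periodically.
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Each "neuron" is just a running sum with decay and a threshold, which is why millions of them can fit on a single low-power chip.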
Thank you, Wired, for providing us with this information.