Energy-Friendly Chip Could Boost Neural Networks

The quest to gain greater insight into artificial intelligence has been exciting and has opened up a range of possibilities, among them “convolutional neural networks”: large networks of simple information-processing units, used for visual tasks and loosely modelled on the anatomy of the human brain.

These networks are typically implemented on graphics processing units (GPUs). A mobile GPU might have as many as 200 cores, or processing units, which makes it well suited to “simulating a network of distributed processors”. Now, a further development in this area could lead to a chip designed for the sole purpose of implementing a neural network.

MIT researchers presented the aforementioned chip at the International Solid-State Circuits Conference in San Francisco. It is claimed to be 10 times more efficient than an average mobile GPU, which could, in theory, allow mobile devices to run powerful artificial-intelligence algorithms locally, rather than relying on the cloud to process data.

The new chip, dubbed “Eyeriss”, could also help expand the Internet of Things, in which everything from a car to a cow (yes, apparently) would carry sensors able to submit real-time data to networked servers. This would open up new horizons for artificial-intelligence algorithms to make those important decisions.

Before I sign off, I wanted to delve further into the workings of a neural network. A network is typically organised into layers, and each layer contains a number of processing nodes. Data is divided up among the nodes in the bottom layer; each node manipulates the data it receives before passing it on to nodes in the next layer. This process repeats until “the output of the final layer yields the solution to a computational problem.” It is certainly fascinating and opens up a world of interesting avenues to explore; when you combine science and tech, the outcome is at the very least educational, with the potential to be life-changing.
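That layer-by-layer flow can be sketched in a few lines of Python with NumPy. This is a minimal illustration, not Eyeriss's actual design: the layer sizes, random weights, and ReLU activation are all assumptions chosen for demonstration.

```python
import numpy as np

def relu(x):
    # A common node "manipulation": keep positive values, zero out the rest
    return np.maximum(0, x)

def forward(data, layers):
    """Pass data through each layer in turn: every layer's nodes
    transform their input before handing it to the next layer."""
    for weights, bias in layers:
        data = relu(data @ weights + bias)
    return data  # the output of the final layer is the "solution"

rng = np.random.default_rng(0)
# Two illustrative layers: 4 inputs -> 3 hidden nodes -> 2 outputs
layers = [
    (rng.standard_normal((4, 3)), np.zeros(3)),
    (rng.standard_normal((3, 2)), np.zeros(2)),
]
output = forward(rng.standard_normal(4), layers)
print(output.shape)  # (2,)
```

A real convolutional network replaces the dense weight matrices with learned convolution filters, but the layered pass-it-forward structure is the same.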

Nvidia Introduces Its Drive PX Self-Driving Car Computer

Nvidia has introduced its Drive PX, a self-driving computer for the automotive industry, which is now available as a developer kit. The device is said to go on sale this May with a price tag of $10,000, which is not a huge price for the car industry. Nvidia’s CEO, Jen-Hsun Huang, believes that this is where the industry is heading.

Nvidia has compared the Drive PX with one of its former DARPA projects called Dave, which was a small self-driving car based on a deep-learning neural network that processed a large number of images in order to learn how to drive and avoid obstacles.

DARPA’s Dave had 3.1 million connections, ran at 12 frames per second, and was capable of 38 million connections a second. Nvidia’s AlexNet deep-learning network can now handle 630 million connections at 184 frames per second, or 116 billion connections a second.
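The throughput figures above are simply connections per frame multiplied by frames per second, which a quick check confirms (Dave's product comes out at about 37 million, which the reported 38 million presumably rounds):

```python
# connections per frame * frames per second = connections per second
dave = 3.1e6 * 12        # DARPA's Dave
alexnet = 630e6 * 184    # Nvidia's AlexNet figure

print(f"Dave:    {dave:.3g} connections/s")     # ~3.72e7, reported as 38 million
print(f"AlexNet: {alexnet:.3g} connections/s")  # ~1.16e11, i.e. 116 billion
```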

The Drive PX comes equipped with 12 camera inputs and 2.3 teraflops of computing power, and runs ADAS (advanced driver-assistance) software that takes in multiple cameras and combines the data from them. Nvidia might be taking the automotive industry to the next level with this new project.

Thank you Fudzilla for providing us with this information