AI Program Beats Average Scores in Japan's College Entrance Exam

Artificial intelligence has been progressing at an impressive rate thanks to technological advancements in cybernetics. The Institute for Ethics and Emerging Technologies suggests AI is a form of biological replication, and that a deeper understanding of the human brain can help create a more advanced reproduction. In the next decade or so, it is perfectly feasible that the robotics industry could create service droids to perform rudimentary tasks. According to a study in Japan, AI is already surpassing the capabilities of the average human being.

Japan’s National Institute of Informatics programmed an AI system to sit a standardized college entrance exam. The system correctly answered 511 questions out of a possible 950. By comparison, the national average score is 416, which means the AI system has an 80% chance of being admitted to the country’s 33 national universities and 441 private colleges.

The test covers five core subjects, including History, Maths, and Physics. As you might expect, the AI scored highly on the Maths questions and retained information extremely well to achieve excellent History results. On the other hand, it struggled with the Physics questions because of the limitations of its language processing. Overall, the test scores illustrate how far artificial intelligence has come, and show that robotics is a field which could revolutionize society.

Image courtesy of TweakTown.

IBM Wants To Teach Robots Some Social Skills

The exploration and development of artificial intelligence is a boundary that is constantly being pushed, and the scientific and academic communities are pursuing their studies into robotic interaction in many different directions. One such path focuses on IBM and its efforts to use “machine learning to teach robots social skills like gestures, eye movements, and voice intonations” through the Watson Project.

During a keynote speech at a conference held in San Jose, California, this week, Robert High, chief technology officer of Watson at IBM, demonstrated techniques his team is working on using a small humanoid robot. During the demonstrations, the machine, a Nao model from the company Aldebaran, appeared to speak successfully with realistic intonation. According to Oxford Dictionaries, “intonation” is defined as “the rise and fall of the voice when speaking”. The robot also managed appropriate hand gestures, along with a touch of impatience and sarcasm, looking at its watch, for example, when asking High to hurry up with his talk.

Unfortunately, these interactions were pre-recorded rather than performed live on stage, owing to the system’s difficulty working in noisy environments. The team behind the R&D has implemented machine-learning algorithms that learn from video footage, with the aim of associating appropriate gestures and intonations with different phrases.
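
The article does not describe IBM's actual model, but as a rough illustration of the general idea (learning to pair phrases with gesture and intonation labels taken from annotated footage), here is a minimal sketch using an off-the-shelf text classifier. The phrases, labels, and mapping below are invented purely for illustration and are not IBM's data or method.

```python
# Minimal sketch (not IBM's method): associate phrases with gesture/intonation
# labels learned from a small, invented set of annotated examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: phrases paired with the gesture/intonation an
# annotator might tag when reviewing video footage.
phrases = [
    "please hurry up",
    "that is wonderful news",
    "i am not sure about that",
    "could you repeat the question",
]
labels = [
    "glance-at-watch / impatient tone",
    "open-palms / rising intonation",
    "head-tilt / hesitant tone",
    "lean-forward / questioning intonation",
]

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(phrases, labels)

# Given a new phrase, predict which gesture/intonation the robot should use.
print(model.predict(["hurry up with your talk"])[0])
```

In practice a system like the one described would learn from far richer signals (video, audio pitch, timing) rather than text alone; the sketch only shows the phrase-to-gesture association step in its simplest supervised form.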

Artificial intelligence is often viewed as a soulless, mechanical entity that cannot be related to in any way. We humans, on the other hand, use subtle cues when we communicate with each other: our voices change pitch, our hands reinforce our points of view, and the muscles in our faces react to a conversation or an emotion. If you could download social skills into a robot, you would have a more believable form, one that tricks our brains into accepting it as a believable norm. This research is still in its early stages; one has to wonder where robots will be in 10, 20, or 50 years’ time. Will there be a situation in my lifetime where a debate centres on a legal definition and acceptance of a robot being classified as he or she?

It makes you contemplate how far AI development can reach, and what the implications are for us.

Thank you to technologyreview for providing us with this information.

Image courtesy of Aldebaran.