IBM Wants To Teach Robots Some Social Skills

The boundaries of artificial intelligence are constantly being pushed, with the scientific and academic communities taking their research into robotic interaction in many different directions. One such path is IBM’s effort to use “machine learning to teach robots social skills like gestures, eye movements, and voice intonations” through the Watson Project.

During a keynote speech at a conference in San Jose, California, this week, Robert High, chief technology officer of Watson at IBM, demonstrated techniques his team is working on using a small humanoid robot. During the demonstrations the machine, a Nao model from the company Aldebaran, appeared to speak with realistic intonation, which Oxford Dictionaries defines as “the rise and fall of the voice when speaking”. The robot also managed appropriate hand gestures and even a little impatience and sarcasm, looking at its watch, for example, when asking High to hurry up with his talk.

Unfortunately, these interactions were pre-recorded rather than performed live on stage, because the system still struggles to work reliably in noisy environments. The team behind the R&D has implemented machine-learning algorithms that learn from video footage, with the aim of associating appropriate gestures and intonations with different phrases.
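IBM hasn’t published the details of its approach, but the basic idea of associating phrases with gestures can be sketched very simply. The toy Python below is purely illustrative, not the Watson Project’s method; every phrase, gesture label, and data point is made up for the example. It counts which gesture most often accompanies each word in hypothetical annotated footage and suggests a gesture for a new phrase.

```python
# Minimal sketch (not IBM's actual method): learning word-to-gesture
# associations from hypothetical (phrase, gesture) pairs that might be
# annotated from video footage.
from collections import Counter, defaultdict

# Hypothetical training data: phrases paired with the gesture a speaker used.
training_pairs = [
    ("please hurry up", "glance_at_watch"),
    ("welcome everyone to the keynote", "open_arms"),
    ("that is a really big number", "wide_hand_spread"),
    ("I am not so sure about that", "head_tilt"),
    ("please hurry up and finish", "glance_at_watch"),
]

# Count how often each word co-occurs with each gesture.
word_gesture_counts = defaultdict(Counter)
for phrase, gesture in training_pairs:
    for word in phrase.lower().split():
        word_gesture_counts[word][gesture] += 1

def suggest_gesture(phrase):
    """Score every known gesture against the words in the phrase."""
    scores = Counter()
    for word in phrase.lower().split():
        scores.update(word_gesture_counts.get(word, Counter()))
    return scores.most_common(1)[0][0] if scores else "neutral_pose"

print(suggest_gesture("could you hurry up please"))  # -> glance_at_watch
```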

Artificial intelligence is often viewed as a soulless, mechanical entity that cannot be related to in any way. We humans, on the other hand, use subtle cues when we communicate with each other: our voices change pitch, our hands reinforce our points of view, and the muscles in our faces react to a conversation or a feeling of emotion. If you could download social skills into a robot, you would have a far more believable form, one that tricks our brains into accepting it as a believable norm. This research is still in its early stages, and one has to wonder where robots will be in 10, 20 or 50 years’ time. Will there be a debate in my lifetime over a legal definition that accepts a robot being classified as he or she?

It makes you contemplate how far AI development can go and the implications it will have for us.

Thank you technologyreview for providing us with this information.

Image courtesy of aldebaran

Hitchhiking Robot Is Vandalised After Just Two Weeks in the US

You did indeed read that correctly: a hitchhiking robot, created by a team of intelligent minds to experiment with artificial intelligence and human interaction, has come unstuck after entering and travelling through the US.

Below is a full-length image of Hitchbot, which sounds like a futuristic Will Smith dating movie. The robot was vandalised in Philadelphia after spending a little over two weeks visiting sites in Boston, Salem, Gloucester, Marblehead, and New York City. Ironically, all went well when the robot previously travelled through Canada; the US, on the other hand, was less kind.

The team behind this experiment has vowed to continue the innovative project and will also analyse the very nature of human and AI interaction; more information on any future plans will be revealed on 5th August 2015.

The design of the robot may look slightly malevolent, but I do think this experiment has so far allowed research to be undertaken into the effect a machine has on the general public. The website for Hitchbot is written in the first person, as if the robot is narrating its own story; this includes referring to the team which created it as its “Family”. As humans we have become accustomed to interacting with technology on a more human level, from Apple’s Siri to battlefield robots which are being developed with the aim of “thinking” for themselves.

It will be thought-provoking, moving forward, when the day arrives that we call a robot “him” or “her” rather than “it”, and to see how we interact with a non-human entity.

Thank you to both Hitchbot and its Instagram account for providing us with this information.

Google Hates Lag as Much as You

Remember that time you wanted to show someone that awesome picture you took of an ice cream, but your phone lagged out and required a hard restart? Or maybe that game which needs immediate feedback just doesn’t read your input when you need it to. Well, you’re not alone: Google also hates input lag and is trying to counteract it in an ingenious way.

The technology is the Chrome TouchBot, “an OptoFidelity-made machine that gauges the touchscreen latency on Android and Chrome OS devices”. The bot uses a compatible stick to interact with the touchscreen device in a series of ways, such as taps and swipes, running through a web-based test rota that can help pinpoint problems in code and/or hardware.
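Neither Google nor OptoFidelity has published the maths behind the TouchBot’s reports, but the core measurement is straightforward: record when a tap happens and when the screen responds, then look at the distribution of the differences. The Python below is a minimal, purely illustrative sketch with hypothetical timestamps, not the rig’s actual software.

```python
# Minimal sketch of the latency arithmetic a rig like this relies on
# (not OptoFidelity's actual software): compare the moment a tap is made
# with the moment the screen visibly responds, then summarise the results.
from statistics import mean, quantiles

# Hypothetical measurements in milliseconds: (touch_time, screen_response_time)
samples = [
    (0.0, 62.1), (500.0, 548.9), (1000.0, 1071.3),
    (1500.0, 1554.0), (2000.0, 2090.7),
]

latencies = [response - touch for touch, response in samples]

print(f"mean latency : {mean(latencies):.1f} ms")
print(f"worst latency: {max(latencies):.1f} ms")
# A high percentile gives a feel for how bad the occasional stutter is.
print(f"p95 latency  : {quantiles(latencies, n=20)[-1]:.1f} ms")
```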

Now, this isn’t the only lag-monitoring device at Google, but it could be the most important given how involved the company is with touchscreen devices and operating systems. With the use of this device, we can hope to see much more interactive and responsive devices in the near future.

Do you own a Google OS-based device? Do you experience any input lag? Let us know in the comments.

Thank you to engadget for providing us with this information.

Oculus Rift Team Working on New Face Tracking Technology

Virtual reality technology hasn’t even hit the market yet, but that won’t stop the Oculus Rift team from developing new technology, it seems. Word is that the Facebook-owned company is now looking into a way to capture and display your facial expressions in real time.

Oculus already provides a way for users to interact with a completely different reality, but this new technology may skyrocket the realism even further by adding a key feature that makes the virtual environment more lifelike.

In other words, picture two avatars inside a virtual world, each with the ability to interact and express their emotions through facial expressions. It’s as cool as it is scary, isn’t it? This may even immerse you so fully in the virtual world that you forget where you actually are.

The Oculus team is working with a team of researchers from the University of California in order to develop this new technology. They apparently came up with two designs, the first involving foam padding that covers the forehead; this design was able to capture brow and some eye muscle movement.

The second design is a bit weirder, involving a short adjustable boom attached to the headset. Both designs are able to capture data and send it to be analysed by software, which in turn transforms it into facial expressions. Though the technology is currently used only for research purposes, it could in theory be adapted into a consumer-ready device. The question is, will users be interested in taking their facial expressions into the virtual world?
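The researchers’ analysis software hasn’t been described in any detail, so the following is nothing more than a toy illustration of the general idea: raw sensor channels from the headset are normalised and the dominant one is mapped to a named expression. Every channel name, range, and threshold here is an assumption invented for the example.

```python
# Purely illustrative sketch (not the researchers' software): turning raw
# sensor readings from a headset into a named facial expression by
# normalising each channel and picking the strongest signal.

RAW_RANGES = {          # hypothetical min/max raw values per sensor channel
    "brow_raise":  (200, 900),
    "brow_furrow": (150, 800),
    "eye_squint":  (100, 700),
}

def classify_expression(raw):
    """Normalise each channel to 0..1 and report the dominant one."""
    normalised = {}
    for channel, value in raw.items():
        low, high = RAW_RANGES[channel]
        normalised[channel] = max(0.0, min(1.0, (value - low) / (high - low)))
    channel, strength = max(normalised.items(), key=lambda kv: kv[1])
    return channel if strength > 0.5 else "neutral"

print(classify_expression({"brow_raise": 820, "brow_furrow": 300, "eye_squint": 180}))
# -> brow_raise
```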

Thank you Phys.org for providing us with this information.

Intel Reveals New Tiny Long-Range RealSense Camera for Smartphones

Intel’s RealSense camera has found its way onto PCs, laptops, tablets and even drones. The company’s technology uses the power of gestures and 3D scanning to improve user interactions.

Smartphones have been a bit tricky to fit with Intel’s tech, but the company finally managed it in the end. Intel’s CEO, Brian Krzanich, revealed the latest addition at IDF in Shenzhen, emphasising that the new module is significantly smaller and slimmer than the previous version, has a lower thermal output, and is claimed to have a longer detection range as well.

Intel also took advantage of the opportunity to announce a partnership with Chinese online retail giant JD in an attempt to help improve its warehouse management. The company showed how a tablet with an integrated RealSense depth camera can quickly measure the required box sizes for products of all shapes and then sum up the space needed for shipment or storage.
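Intel didn’t show the code behind the demo, but once the depth camera has estimated each product’s bounding box, the warehouse side of the problem reduces to simple arithmetic: sum the box volumes to estimate the space needed for shipment or storage. The Python below is a minimal sketch with hypothetical measurements, not Intel’s or JD’s software.

```python
# Minimal sketch of the warehouse arithmetic (not Intel's or JD's software):
# once a depth camera has estimated each product's bounding box, the space
# needed for shipment or storage is just the sum of the box volumes.
# Dimensions below are hypothetical, in centimetres.

boxes = [
    (30.0, 20.0, 15.0),   # width, depth, height of each measured product
    (45.5, 30.0, 25.0),
    (12.0, 12.0, 40.0),
]

def total_volume_litres(dims):
    """Sum w*d*h for every box and convert cm^3 to litres."""
    return sum(w * d * h for w, d, h in dims) / 1000.0

print(f"Space required: {total_volume_litres(boxes):.1f} litres")  # -> 48.9 litres
```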

The new RealSense integration has not been given any detailed specs or an availability date just yet, but Intel is bound to release some information soon.

Thank you Engadget for providing us with this information.