Google Allows Developers to Use Its Machine Learning Platform

Google’s Cloud Machine Learning platform, which is utilized by a number of applications including Google Photos, Translate and Inbox, has been made available for use by developers today. The announcement was made by Alphabet chairman Eric Schmidt at NEXT 2016, where he explained that machine learning was “what’s next”.

This move will give developers the chance to leverage in their own apps the same power that Google makes use of every day. These powerful machine learning tools will be available through a number of easy-to-use REST APIs, according to Fausto Ibarra, Google’s director of product management. He went on to state that “Cloud Machine Learning will take machine learning mainstream, giving data scientists and developers a way to build a new class of intelligent applications.”
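Ibarra’s description suggests access will look much like any other REST service. As a rough illustration only, the endpoint URL, field names and `build_translate_request` helper below are all hypothetical, not Google’s actual API; a client would assemble a JSON request along these lines:

```python
import json

# NOTE: the endpoint URL and request fields below are hypothetical, for
# illustration only; the real Cloud Machine Learning REST APIs define their
# own URLs and payload shapes.
API_ENDPOINT = "https://example.googleapis.com/v1/translate"

def build_translate_request(text, source="en", target="fr", api_key="YOUR_API_KEY"):
    """Assemble the URL, query parameters and JSON body for a REST call."""
    body = {"q": text, "source": source, "target": target}
    params = {"key": api_key}
    return API_ENDPOINT, params, json.dumps(body)

url, params, body = build_translate_request("machine learning")
# An HTTP client such as `requests` would then POST `body` to `url`.
```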

The Cloud Machine Learning platform APIs initially on offer by Google include translation, photo, and speech recognition tools which power Google Now, Google Photos, voice recognition in Google Search and many other systems.

This release makes Google the third major company to roll out a machine learning platform for developers, following Microsoft’s Azure Machine Learning and Amazon Web Services’ machine learning offering, both launched in 2015. With Google declaring that machine learning is set to be the next big thing in computing, it may not be long before our everyday apps get even smarter, as developers benefit from Google’s platform and Google in turn benefits from the usage, furthering development. It may only be in developer preview for now, but I am sure that many app developers, both old and new, will be jumping at the chance to put these cutting-edge tools to use.

Machine Learning Algorithm Can Tell If You Are Drunk Tweeting

So you’ve had a few drinks and decided it would be a great idea to drunk text someone? The next morning you wake up and check your phone to see several responses which make you swear never to drink again (or at least until the next night out with your friends). Even worse? Drunk tweeting.

A drunk text goes to a single person, but a drunk tweet is shared with the entire world for all to see. In response, Nabil Hossain at the University of Rochester, New York, has built a machine learning program that can spot drunk tweets.

After collecting thousands of geotagged Twitter posts from New York state, the team created a machine learning program that could answer three questions.

  1. Does the tweet make any reference to drinking alcoholic beverages?
  2. If so, is the tweet about the tweeter him/herself drinking alcoholic beverages?
  3. If so, is it likely that the tweet was sent at the time and place the tweeter was consuming alcoholic beverages?

By answering these three questions, the program could tell whether a tweet referred to past or present drinking and, if so, estimate the chance that the tweeter was drunk at the time. After training on 80% of the collected tweets, the algorithm was able to correctly classify the remaining 20% as drunk tweets or not 82-92% of the time.
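The cascade of questions and the 80/20 train/test split can be sketched in a toy form. The keyword rules and made-up tweets below stand in for the trained classifiers and crowd-labelled data the study actually used:

```python
import random

# Toy three-stage filter mirroring the three questions above; the real system
# used trained classifiers on labelled tweets, not keyword rules.
ALCOHOL_WORDS = {"beer", "wine", "vodka", "drunk", "drinking"}
FIRST_PERSON = {"i", "i'm", "me", "my"}
PRESENT_TENSE = {"now", "tonight", "drinking"}

def classify(tweet):
    words = set(tweet.lower().split())
    if not words & ALCOHOL_WORDS:          # Q1: any alcohol reference?
        return False
    if not words & FIRST_PERSON:           # Q2: about the tweeter?
        return False
    return bool(words & PRESENT_TENSE)     # Q3: drinking right now?

tweets = [("i'm drinking beer now", True), ("great beer ad", False),
          ("my vodka tonight", True), ("work was long", False)] * 10
random.seed(0)
random.shuffle(tweets)
split = int(0.8 * len(tweets))             # 80% train / 20% held out, as in the study
held_out = tweets[split:]
accuracy = sum(classify(t) == label for t, label in held_out) / len(held_out)
```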

By then adding “home detection” to the equation, Hossain was able to work out, using the geotagging information, who preferred to drink at home and who preferred to drink out in the city. Sadly, the system won’t be able to spot and stop your drunk tweeting the next time you have a glass or two, but maybe it could in the future.

AlphaGo AI Wins Go Series In Third Match

Google’s amazing Go-playing AI, AlphaGo, has convincingly beaten Lee Sedol in the Google DeepMind Challenge series, shutting out the reigning human champion in only the third match and bringing the score to 3-0. The series won’t end here just because AlphaGo has won, though: the remaining two matches will still be played out, even if the chance of Sedol winning even a single one is beginning to look slim.

AlphaGo previously beat Fan Hui, European Go champion and 2-dan master, 5-0, yet despite this Sedol expected that he would be able to win the series 5-0 or 4-1 in his favour. This is the first time that such a high-ranked Go player has taken on an AI, so regardless of the winner, this would have been a historic event for AI. More important than the $1 million prize money, DeepMind’s victory is a landmark for AI development: the complexity of Go was long thought to make it impossible for a machine to play at such high tiers, despite similar breakthroughs in Chess and other games.

Demis Hassabis, founder and CEO of DeepMind, was left “stunned and speechless” by the AI’s performance and felt that Lee Sedol had stretched it to its limits across the three matches. He also reminded people that it was about the bigger picture: the aim was to learn from Lee Sedol’s skill and ingenuity and to see how AlphaGo would learn from him. In the press conference following the match, Sedol apologised for his poor performance, wished that he had been able to put on a stronger showing, and stated that the pressure AlphaGo put on him in the final match was like none he had faced before. Sedol is far from giving up on beating AlphaGo: while it plays a strong game, he believes it is still not on par with the “Divine Gods” of Go, and he intends to give his all in the final matches and urges fans to continue to watch with interest.

It is important to remember that while AlphaGo’s claim to fame may be the game of Go, the methods used in its development are general purpose, meaning that similar AI could be applied to solving key problems for humanity and help to advance numerous scientific fields. The remaining two matches of the series will be played on Sunday and Tuesday, where we will see whether AlphaGo manages a repeat of its past performance or whether Sedol can find a weakness he can use to win the remaining games.

China Looking at Creating “Precrime” System

When people think about digital surveillance and their data stored online, they think about cases such as Apple vs the FBI, where modern technology is used to try to track down criminals or find out more about what could have happened or has happened. China looks to go a step further, creating a “precrime” system to find out about crimes you will commit.

The movie Minority Report posed an interesting question: if you knew that someone was going to commit a crime, would you be able to charge them for it before it even happens? If we knew you were going to pirate a video game when it goes online, does that mean we can charge you for stealing the game before you’ve even done it?

China is looking to do just that by creating a “unified information environment” in which every piece of information about you would tell the authorities just what you normally do. Decide you want to do something different today and it could be taken as an indication that you are about to commit, or have already committed, a criminal activity.

Machine learning and artificial intelligence are at the core of the project, tasked with predicting your activities and flagging anything that “deviates from the norm”, a judgement that can be difficult even for a person to make. When the new system goes live, being flagged to the authorities could be as simple as making a few purchases online, with a phone call then needed to sort out the problem.

Google AI Can Work Out Photo Locations Without Geotags

Google are making AI for all kinds of purposes, including tackling the challenging Chinese game of Go. Now, they have revealed their latest deep-learning program, PlaNet, which is capable of recognizing where an image was taken even without it being geotagged.

PlaNet was trained by Google researchers on 90 million geotagged images from around the globe, taken from the internet. This means PlaNet can easily identify locations with obvious landmarks, such as the Eiffel Tower in Paris or Big Ben in London, a simple task which any human with knowledge of landmarks can manage. PlaNet goes further, though, setting itself apart with its ability to determine the location of a picture that lacks any clear landmarks: the deep learning techniques it uses even let it place pictures of roads and houses with a reasonable level of accuracy.

The PlaNet team, led by software engineer Tobias Weyand, tested the accuracy of the software using 2.3 million geotagged images taken from Flickr, while withholding the geotag data from the platform. It was capable of street-level accuracy 3.6% of the time, city-level accuracy 10.1% of the time, country-level accuracy 28.4% of the time and continent-level accuracy 48% of the time. This may not sound too impressive, but when Weyand and his team challenged 10 “well-travelled humans” to face off against PlaNet, it beat them by a clear margin, winning 28 out of 50 games played with a median localization error of 1131.7 km compared to the humans’ 2320.75 km. Weyand attributed PlaNet’s ability to outmatch its human opponents to the larger range of locations it had “visited” as part of its learning.
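The localization error figures above are great-circle distances between the predicted and true photo coordinates, summarised by the median over all test images. A minimal sketch of that measurement using the haversine formula, with a made-up Paris/Lyon example:

```python
from math import radians, sin, cos, asin, sqrt

# Great-circle distance via the haversine formula; localization error is the
# distance between the predicted and true coordinates of a photo.
def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Illustration: guessing Lyon for a photo actually taken in Paris is an
# error of roughly 390 km.
error = haversine_km(48.8566, 2.3522, 45.7640, 4.8357)
```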

What the plans are for PlaNet going forward is anyone’s guess, with one potential application being to locate pictures that were not geotagged at the time of photography, but it will be interesting to see how the techniques that bring machine learning closer and closer to human ability advance in the future.

Google AI Beats Top Ranking Go Player

Computers have been beating professional chess players for years, but now Google’s DeepMind division has taken on a much bigger challenge: Go. The group developed a system named AlphaGo with the sole purpose of tackling the massive complexity of the classic Chinese game. When pitted against European Go champion Fan Hui, AlphaGo came out on top in all 5 games played.

At its core, Go is a simple game, requiring players to ‘capture’ the other’s stones or surround empty space on the board to occupy territory. Despite these simple rules, Go is incredibly complex computationally, with Google claiming that the number of possible positions in a game of Go exceeds the number of atoms in the universe. This huge variation is what makes the game so challenging for computers to play: the typical approach used by Chess programs, mapping out every possible move, is completely infeasible when faced with Go’s enormous move space.
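A quick back-of-the-envelope calculation shows why exhaustive search fails. The branching factor of roughly 250 moves per turn and a game length of roughly 150 moves are commonly quoted ballpark figures, not exact counts:

```python
# Rough estimate: ~250 legal moves per turn over ~150 turns gives about
# 250**150 possible game sequences, dwarfing the ~10**80 atoms commonly
# estimated for the observable universe.
branching, depth = 250, 150
game_tree = branching ** depth
atoms_in_universe = 10 ** 80
order_of_magnitude = len(str(game_tree)) - 1   # roughly 10**359
```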

The DeepMind team took a different approach to creating AlphaGo than a typical game AI. AlphaGo combines an advanced tree search with deep neural networks. The system takes the Go board as input and filters it through 12 network layers, each containing millions of neuron-like connections. Two of the networks involved are the ‘policy network’, which determines the next move to play, and the ‘value network’, which predicts the winner of the game. These neural networks were trained on over 30 million moves from games played by expert human players, until the system was able to predict the human move 57% of the time. To move past simple mimicry of human players, AlphaGo was then trained to discover strategies of its own, playing thousands of games against itself and adapting its decision-making through a trial-and-error technique known as reinforcement learning.
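In outline, the two networks expose very simple interfaces: the policy network maps a board to a probability distribution over moves, and the value network maps a board to a single win probability. The toy sketch below uses random linear scorers on a small board purely to illustrate those interfaces; AlphaGo’s real networks are deep convolutional nets trained as described above:

```python
import math
import random

# Toy stand-ins for the two network interfaces; the weights are random and
# the "board" is a flat list of -1/0/+1 stone values on a 9x9 board.
random.seed(1)
BOARD_POINTS = 9 * 9
weights = [random.uniform(-1, 1) for _ in range(BOARD_POINTS)]

def policy(board):
    """Return a probability distribution over moves (the 'policy network')."""
    scores = [w * x for w, x in zip(weights, board)]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]   # softmax over board points

def value(board):
    """Return an estimated probability of winning (the 'value network')."""
    s = sum(w * x for w, x in zip(weights, board))
    return 1 / (1 + math.exp(-s))      # squash to (0, 1)

board = [random.choice([-1, 0, 1]) for _ in range(BOARD_POINTS)]
probs = policy(board)
win_p = value(board)
```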

Now that AlphaGo has proven itself against the European champion, DeepMind is setting it the ultimate challenge: a 5-game match in March against Lee Sedol, the top Go player of the last decade. Because AlphaGo is built on general machine learning techniques rather than as a purpose-built Go system, its success means a lot to the field of AI, having tackled one of the greatest challenges posed to it. Google are excited to see what real-world tasks the AlphaGo systems will be able to tackle, and hope that one day its systems could be the basis for AI able to address some of the most pressing issues facing humanity.

Bayesian Program Learning – Teaching a Computer in One Shot

Even in this day and age, computer learning is far behind the learning capability of humans. A team of researchers seeks to shrink the gap, however, having developed a technique called “Bayesian Program Learning” which can teach a computer a new concept from just a single example, instead of the large sample sizes typically required for software to learn.

Detailed in a paper published in the journal Science, the essence of Bayesian Program Learning, or BPL, is to mimic the human learning process. Quite simply, the software observes a sample, generates its own set of examples, then determines which of its examples fits the sample pattern best, using that example to learn.

In order to test BPL, some of its creators – Joshua Tenenbaum of MIT, Brenden Lake of New York University and Ruslan Salakhutdinov of the University of Toronto – attempted to have it learn how to write 1,623 characters from 50 different writing systems. The input data wasn’t font data, however, but a database of handwritten characters, including those from Tibetan and Sanskrit. To learn the characters, the software broke each one down into a set of overlaid strokes and the order of their application, analysing which pattern most closely matched the sample. The researchers even had it create new characters for the writing systems, based on the styles it had learned.
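The generate-and-compare loop at the heart of BPL can be caricatured in a few lines. The stroke names, scoring rule and candidate count here are invented for illustration; real BPL composes stroke primitives probabilistically and scores candidates with a full Bayesian model:

```python
import random

# Toy version of generate-and-compare: propose candidate "programs" (here,
# short stroke sequences), score each against the single observed example,
# and keep the best fit.
random.seed(42)
STROKES = ["down", "up", "left", "right", "arc"]
observed = ["down", "arc", "right"]        # the one training example

def score(candidate, target):
    """Count positions where the candidate matches the observed strokes."""
    return sum(a == b for a, b in zip(candidate, target))

candidates = [[random.choice(STROKES) for _ in range(3)] for _ in range(200)]
best = max(candidates, key=lambda c: score(c, observed))
```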

For the output to be tested, the researchers used a “visual Turing test”, in which human-drawn and computer-drawn characters were put side by side and a panel of judges asked to select which was which. Impressively, no more than 25% of the judges were able to choose with any more accuracy than pure chance, showing the computer’s writing to be almost indistinguishable from a human’s.

BPL still has its limitations: character recognition is a relatively simple task, yet the software sometimes took several minutes to perform the analysis. However, the creators have faith in BPL, with Tenenbaum telling GeekWire, “If you want a system that can learn new words for the first time very quickly … we think you will be best off using the kind of approach we have been developing here.”

It is hoped that once the algorithm is refined, it can be used to assist in tasks like speech recognition for search engines, and the technique could become a staple of future artificial intelligence, where the call will be for it to learn the many tasks that are simple for a human to pick up.

Lie Detection Software Learns from Real Court Testimony

Machine learning is one of the hot computing topics of today, with Google releasing its own open source machine learning tools and both IBM and Intel not wanting to be left out of the party with their own offerings. Most of the uses for these platforms right now are almost entirely academic, with researchers frequently coming up with new and useful ways to employ machine learning in the real world. This has led researchers at the University of Michigan to experiment with using it for lie detection.

In order to test out the system, as well as show its worth in a high-stakes environment, the researchers used footage of testimony from real court cases as their sample, claiming that the software was able to discern a liar with as much as 75% accuracy. Comparatively, humans could only reliably tell the difference between lies and truth 50% of the time.

The software made use of both the words and gestures of the speaker under analysis, using techniques ranging from simply counting certain words and gestures to tracking where the speaker was looking in relation to the questioner and their use of vocal fillers. The ability to employ these techniques potentially makes computers far better lie detectors than humans, according to professor of computer science and engineering Rada Mihalcea.
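A toy version of the counting step might look like the sketch below. The word lists and the `extract_features` helper are illustrative only, not the Michigan team’s actual feature set, which fed far richer linguistic and gesture features into a trained classifier:

```python
# Illustrative feature extraction in the spirit described above: counts of
# first-person words and filler words, plus a simple gaze flag.
FILLERS = {"um", "uh", "like"}
FIRST_PERSON = {"i", "me", "my"}

def extract_features(transcript, looked_away):
    words = transcript.lower().split()
    return {
        "first_person": sum(w in FIRST_PERSON for w in words),
        "fillers": sum(w in FILLERS for w in words),
        "word_count": len(words),
        "looked_away": int(looked_away),
    }

features = extract_features("um I did not um take it", looked_away=True)
```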

“This isn’t the kind of task we’re naturally good at. There are clues that humans give naturally when they are being deceptive, but we’re not paying close enough attention to pick them up. We’re not counting how many times a person says ‘I’ or looks up. We’re focusing on a higher level of communication.”

The team are not planning to stop there, with plans to tie in the subject’s heart rate, respiratory rate and body temperature changes using thermal imaging. They also plan to let the system analyze and classify human gestures on its own, instead of relying on input from the researchers.

Gone could be the days of a suspect being hooked up to a polygraph lie detector that simply relies on the body’s physiological responses and draws the iconic graphs seen in many a movie. Criminals who think they can lie their way out of trouble could find themselves far harder pressed to deceive thanks to this.

TensorFlow – Google’s New Open Source Machine Learning Tool

In recent times, companies have been scrambling to deliver the next pioneering tools in machine learning and AI. And Google is not one to be left behind in this battle, today releasing their own open source offering, TensorFlow.

Released under the Apache 2.0 license, TensorFlow aims to bring Google’s accomplishments in the machine learning field to the masses. The barrier to entry is also low: TensorFlow can run even on a single smartphone. Built upon a previous Google deep-learning tool, DistBelief, TensorFlow is intended to be faster and more flexible, so that it can be adapted for use in future projects and products. Moving the focus away from strictly neural networks allows TensorFlow to enjoy easier code-sharing between researchers, as well as decoupling the system somewhat from Google’s internal infrastructure.

Google hope that by making TensorFlow open source, it will attract a wider array of potential users, from hobbyists to professional researchers. This lets Google and users benefit from one another, with exchanges of ideas and code made easy and Google given opportunities to grow the product using advancements found by users. Additionally, the built-in interface is based upon Python, instantly making it straightforward to use for those familiar with the language, and for newer users it comes bundled with examples and tutorials to help them get started.
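The core idea TensorFlow is built around is a dataflow graph: a computation is first described as a graph of operations, then evaluated. The plain-Python sketch below illustrates that idea only; it is not TensorFlow’s actual API:

```python
# Minimal dataflow-graph sketch: build a graph of operations first, then
# evaluate it on demand. Illustrative only; TensorFlow's real API differs.
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self):
        # Recursively evaluate inputs, then apply this node's operation.
        vals = [n.run() for n in self.inputs]
        return self.op(*vals)

def constant(v):
    return Node(lambda: v)

def add(a, b):
    return Node(lambda x, y: x + y, a, b)

def mul(a, b):
    return Node(lambda x, y: x * y, a, b)

# Graph for (2 + 3) * 4; nothing is computed until run() is called.
graph = mul(add(constant(2), constant(3)), constant(4))
result = graph.run()
```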

Are you excited to see what can be developed when a machine learning tool is made available to the masses, or are you excited to get hands-on yourself? TensorFlow is available here.

Intel Acquires Saffron to Invest in Cognitive Computing

In what seems to be a bid to catch up with IBM’s recent investments in the field of cognitive computing, Intel has announced a deal to acquire Saffron Technology, a cognitive technology startup.

What Saffron has to offer Intel is their Natural Intelligence Platform. According to Saffron’s website, the platform “mimics our natural ability to learn, remember and reason in real-time. This fundamentally different approach to memory and learning helps Saffron shatter the limitations of other computing paradigms”. This means the platform can learn from data rather than just performing according to pre-programmed rules. The result has many applications in the real world as well as the academic, with Saffron’s platform able to do tasks such as predicting part failures in aircraft and identifying insurance fraud.

What this acquisition means for Intel is a potential angle of competition with IBM’s Watson Analytics platform, which markets itself on being able to analyze the reasons for sales over time and how social networks could affect company stocks and even predict the optimal fantasy baseball team.

Intel’s blog on the topic stated “We see an opportunity to apply cognitive computing not only to high-powered servers crunching enterprise data, but also to new consumer devices that need to see, sense and interpret complex information in real time.” And despite being acquired by Intel, Saffron will still be afforded the freedom to continue operation of their existing standalone business, while also contributing to Intel’s ongoing efforts to further the technology.

MarI/O Can Learn and Play Super Mario World by Itself

Super Mario World is a classic game, one that many have played in one form or another. One person has taken this game to the next level by making a program that can learn to play this game, all by itself.

MarI/O is a program made of neural networks and genetic algorithms that kicks butt at Super Mario World; at least, that is the official description of the video. The author was kind enough to provide the source code as well, so you can even run it yourself, if you want to. The video itself already shows the important parts, but for coders the code might be extra interesting. The video has a voice-over in which the author explains how it all works, and it is actually quite interesting.

It took about 40 minutes for the algorithm to work out the best way to beat the level.
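MarI/O specifically uses NEAT, which evolves both the weights and the topology of neural networks. A stripped-down genetic algorithm captures the underlying evolve-select-mutate idea; in this sketch genomes are plain bit strings, the number of 1s stands in for level progress, and all parameters are arbitrary choices for illustration:

```python
import random

# Minimal genetic algorithm: keep the fittest half of the population each
# generation and fill the rest with mutated copies of the survivors.
random.seed(7)
GENES, POP, GENERATIONS = 16, 20, 40

def fitness(genome):
    return sum(genome)                  # stand-in for "distance through level"

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]    # elitism: fittest half survives intact
    children = [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
```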

Now, this automated learning algorithm probably won’t spawn Terminators, but it gives us a great view into how we as humans are built and how it all could be put together, just at a much smaller and simpler scale.

The emulator used was the BizHawk emulator with full rerecording support and Lua scripting.