Microsoft’s ‘CaptionBot’ Adds Incorrect Captions to Your Favourite Pictures

Microsoft, as part of its new research into storytelling by artificial intelligence, has released CaptionBot, an AI designed to recognise images and add an appropriate descriptive caption. However, like its previous attempt at AI – chatbot Tay – CaptionBot isn’t entirely successful. As with Tay, though, the results are hilarious (and without any fascistic or incestuous overtones).

The accompanying academic paper, titled Visual Storytelling [PDF], describes how the Microsoft Sequential Image Narrative Dataset (SIND) applies value judgements to picture content, setting, composition, and human expression in an attempt to describe the scene. The paper adds:

“There is a significant difference, yet unexplored, between remarking that a visual scene shows “sitting in a room” – typical of most image captioning work – and that the same visual scene shows “bonding”. The latter description is grounded in the visual signal, yet it brings to bear information about social relations and emotions that can be additionally inferred in context.”

To set CaptionBot’s baseline, 10,117 CC-licensed Flickr albums were ploughed through by Amazon Mechanical Turk workers, who assigned traditional captions to a series of pictures. An ‘average’ description of each picture was derived from the multitude of entries, and that average was reduced to an algorithm that CaptionBot could apply to fresh images in order to evaluate them.

“Captioning is about taking concrete objects and putting them together in a literal description,” Margaret Mitchell, lead researcher on the project, said in a Microsoft blog post. “What I’ve been calling visual storytelling is about inferring conceptual and abstract ideas from those concrete objects.”

Taco Bell is Looking at a Chat Bot to Take Your Orders

We all know those late nights when you just fancy a quick bite to eat and your favourite show before hitting the hay, ordering something in and enjoying the greasy food before the day’s over. Taco Bell are currently testing a new way for you to order: the TacoBot.

The TacoBot is a chat AI designed to help you not only figure out what you really want but then to place your order. Just like having an instant messaging conversation, you could soon be ordering and asking questions about the menu rather than browsing and debating every single option.

The TacoBot is currently in a closed beta, with select companies giving it a try and making sure that you can’t break the system and end up with a chicken taco rather than a bean burrito.

Place your order via the chat bot and select your pick-up location to enjoy fast food on the go. With autonomous delivery robots now bringing pizza to your door without a driver, you may even be able to get your food delivered by a drone in the near future, leaving only the cooking up to a person.

What do you think? Would you prefer to select items off a menu, or is a chat bot a nice way to order? How would you feel if you could text or email your order to the chat bot? Walking home from work? A quick text and the food’s ready for you to pick up before you’ve even left the office.

Microsoft Populates Skype With Helper-Bots

Your Skype chats could just get a little bit smarter thanks to Microsoft’s new array of assistive helper-bots. These bots were on show at Build 2016, where Microsoft demonstrated how Cortana, as well as a number of third-party ‘bots’ were able to inject information and optional actions into Skype messaging conversations. At the basest level, Cortana was able to highlight the major parts of a conversation, from which a user can select to get more details on the subject.

The bots don’t just add extra context to your existing conversations either, as at any time users will be able to switch over to a private AI chat, where they will be able to converse with Cortana and the other bots.

An example of this system in action would be part of a conversation about an upcoming event. On request, Cortana would be able to add the event to your calendar and obtain relevant locational and other vital information regarding it from the rest of the conversation. From there, the options branch out, a hotel bot may suggest a reservation in a nearby hotel or a social networking bot may remind you to catch up with friends who are in the area.

This may seem crazy and some will have reservations about the privacy of their data in the face of these AI bots, but they are real and Microsoft has already released the first wave of them on Skype for Windows, Android and iOS. The suite of bots currently available is sparse, but Microsoft plans to allow third-party developers to create their own bots for the service and will be releasing a specially designed software development kit to this end.

Microsoft’s Racist AI is Back And It’s Still Crazy!

Remember that crazy TayTweets AI that Microsoft unleashed on the (mostly) innocent users of Twitter recently? The one that quickly picked up the bad habits of many online pranksters, turning it into a genocide-denying Nazi lover? Yes, that one. Twitter users be warned: it’s back!

Microsoft pulled the AI after it descended into madness last week, taking the time to re-tool their learning experiment and try again. Unfortunately, it hasn’t really worked this time either, as in true-to-form style it wasn’t long before Tay became a drug-smoking police hater.

Microsoft had previously been tweaking the timeline to remove the most offensive posts, while also vowing to only bring the AI back if they could “better anticipate malicious intent that conflicts with our principles and values.” It’s clear that there’s still a lot of work to be done. Of course, that’s the whole point of these experiments, and I’m sure they’re not out of ideas yet.

Drug-smoking references aside, @TayTweets then seemed to descend into a total meltdown, spamming everyone with the same message over and over. The end result was Microsoft setting the profile to private, effectively taking her offline again.

Personally, I can’t wait to see what happens next: more entertaining madness or a successful AI test, either would be worth watching unfold.

Microsoft’s “Teen Girl” AI Becomes Incestuous Nazi-Lover

Microsoft, in trying to launch a customer service artificial intelligence presented as a teen girl, got a crash course in how you can’t trust the internet after the system became a foul-mouthed, incestuous racist within 24 hours of its launch. The AI, called Tay, engages with – and learns from – Twitter, Kik, and GroupMe users that converse with her. Unfortunately, Microsoft didn’t consider the bad habits Tay was sure to pick up, and within hours she was tweeting her support for Trump and Hitler (who’s having a strong news day), her proclivity for incest, and that she is a 9/11 truther.

“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” Microsoft said upon launch yesterday. “The more you chat with Tay the smarter she gets.”

https://twitter.com/TayandYou/status/712613527782076417

“bush did 9/11 and Hitler would have done a better job than the monkey we have now,” Tay wrote in a tweet (courtesy of The Independent), adding, “donald trump is the only hope we’ve got.” Another read: “WE’RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT” (via The Guardian). Tay’s desire for her non-existent father is too graphic to post here.

Once it realised what was happening, Microsoft deleted all the offending tweets. Screenshots of more visceral posts from Tay have been collected by The Telegraph. Since the purge, Tay seems to have been behaving herself:

Machine Learning Algorithm Can Tell If You Are Drunk Tweeting

So you’ve had a few drinks and decide it would be a great idea to drunk text someone? The next morning you wake up and check your phone to see several responses which make you swear to never drink again (or at least until the next night out with your friends). The next worst thing? Drunk Tweeting.

A drunk text goes to one particular person, but a drunk tweet will see your message shared around the entire world for all to see. In response, Nabil Hossain at the University of Rochester, New York, has made a machine learning program that can spot drunk tweets.

After collecting thousands of geotagged Twitter posts from New York state, the team created a machine learning program that could answer three questions.

  1. Does the tweet make any reference to drinking alcoholic beverages?
  2. If so, is the tweet about the tweeter him/herself drinking alcoholic beverages?
  3. If so, is it likely that the tweet was sent at the time and place the tweeter was consuming alcoholic beverages?

By answering these three questions, the machine learning program could tell whether you are talking about past or present drinking and, if so, the chances that you were drunk at the time. After learning from 80% of the collected tweets, the algorithm was able to correctly classify the remaining 20% of tweets as drunk tweets 82-92% of the time.
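The three questions above form a cascade, where each stage only fires if the previous one said yes. As a rough sketch of that idea (the keyword lists and example tweets below are invented for illustration; Hossain's actual classifiers used far richer features than keyword matching):

```python
# Minimal sketch of a three-question cascade for spotting drunk tweets.
# The keyword sets are illustrative stand-ins, not the real feature set.
ALCOHOL_WORDS = {"beer", "wine", "vodka", "drunk", "drinking", "shots"}
FIRST_PERSON = {"i", "i'm", "im", "my", "me"}
PRESENT_TENSE = {"now", "tonight", "currently", "rn"}

def classify(tweet: str) -> dict:
    words = set(tweet.lower().split())
    q1 = bool(words & ALCOHOL_WORDS)          # Q1: mentions alcohol at all?
    q2 = q1 and bool(words & FIRST_PERSON)    # Q2: about the tweeter drinking?
    q3 = q2 and bool(words & PRESENT_TENSE)   # Q3: drinking right now, here?
    return {"mentions_alcohol": q1, "self_drinking": q2, "drunk_now": q3}

print(classify("im on my third beer tonight"))
print(classify("beer prices keep going up"))  # mentions alcohol, not drinking
```

The cascade structure means a tweet with no alcohol reference is rejected cheaply at stage one, which is the design choice the paper's three questions imply.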

By then adding “home detection” to the equation, Hossain was able to calculate who preferred to drink at home or out in the city using the geotagging information. Sadly, the system won’t be able to spot and stop you drunk tweeting next time you have a glass or two, but maybe it could in the future.

Google’s AlphaGo AI Wins Go Series 4-1

Go grandmaster Lee Se-Dol was unable to claw back another consolation win in his five-match battle against AlphaGo, with Google’s artificial intelligence winning the series 4-1. AlphaGo – a development of Google’s DeepMind AI program – forced a narrow victory; Lee showed signs that he was adapting to the formidable program, but ultimately lost.

Early on, AlphaGo made a dreadful mistake, similar to the one which Lee took advantage of in match 4, but was able to claw its way back and squeak out a win. DeepMind co-founder Demis Hassabis called AlphaGo’s “mind-blowing” comeback “one of the most incredible games ever.”

The victory marks the first time an AI has beaten a champion Go player, a feat that many AI experts predicted was years off.

Lee was inconsolable during the post-match conference. “I failed,” he said. “I feel sorry that the match is over and it ended like this. I wanted it to end well.” Before the series began, Lee had predicted that he would beat AlphaGo either 5-0 or 4-1.

The Lee Vs. AlphaGo contest has seen a surge in interest in Go – a Chinese strategy board game considered even more complex than chess – which, while popular in East Asia, is not commonly played in the West. “I’ve never seen this much attention for Go, ever,” said Lee Ha-jin, Secretary General at the International Go Federation, during the live stream of the final match.

AlphaGo AI Wins Go Series In Third Match

Google’s amazing Go-playing AI, AlphaGo, has convincingly beaten Lee Sedol in the Google DeepMind Challenge series, shutting out the reigning human champion in only the third match and bringing the score to 3-0. Just because AlphaGo has won doesn’t mean the series will end here, though: the remaining two matches will still be played out, even if the chance of Sedol winning even a single match is beginning to look slim.

AlphaGo previously beat Fan Hui, European Go champion and 2-dan master, 5-0, yet despite this Sedol expected that he would be able to win the series 5-0 or 4-1 in his favour. This is the first time that such a high-ranked Go player has taken on an AI, so regardless of the winner, this would have been a historic event for AI. More important than the $1 million prize money, DeepMind’s AI claiming victory is a landmark event for AI development, the complexity of Go having previously kept machines from playing at such high tiers despite similar breakthroughs in chess and other games.

Demis Hassabis, founder and CEO of DeepMind, was left “stunned and speechless” by the AI’s performance in the final match and felt that Lee Sedol had stretched the AI to its limits over the three matches. He also reminded people that it was about the bigger picture, with the aim of learning from Lee Sedol’s skill and ingenuity and seeing how AlphaGo would learn from him. In the press conference following the match, Sedol apologised for his poor performance, wishing that he had been able to put on a stronger showing, and stated that the pressure AlphaGo was able to put on him in the final match was like none he had faced before. Sedol has far from given up on beating AlphaGo: while it plays a strong game, he believes it is still not on par with the “Divine Gods” of Go. He is still intending to give his all in the final matches and urges fans to continue to watch with interest.

It is important to remember that while AlphaGo’s claim to fame may be the game of Go, the methods used in its development are general-purpose, meaning that similar AI could be applied to solving key problems for humanity and help to advance numerous scientific fields. The remaining two matches of the series will be played on Sunday and Tuesday, where we will see whether AlphaGo manages a repeat of its past performance or whether Sedol can find a weakness he can use to win the remaining games.

Google DeepMind Wins Second Game Against Go Grandmaster

Go grandmaster Lee Se-Dol was left “speechless” after Google’s DeepMind AI computer won a second game against him. Lee, the reigning human champion of the two-player Chinese strategy board game Go, has now lost the first two of five proposed games against the Google-made artificial intelligence, which is running the AlphaGo system, in a result that even surprised the AI’s designers.

Demis Hassabis, Chief Executive of Google DeepMind, expressed his shock at AlphaGo’s second victory on Twitter, writing: “#AlphaGo wins match 2, to take a 2-0 lead!! Hard for us to believe. AlphaGo played some beautiful creative moves in this game. Mega-tense…”

Go, which is considered the most complex strategy board game ever created, is played on a 19×19 grid, onto which players place “stones” – the player’s pieces, black for one player and white for the other – with the aim of dominating the majority of the board.

Lee said that he could not detect a weakness in AlphaGo’s strategy, telling the Financial Times: “At no time did I feel that I was leading, and I thought that AlphaGo played a near-perfect game.” Speaking after the first game, Lee said that he “was very surprised because I did not think I would lose the game.”

AlphaGo only needs one more win to take the series.

Google AI Wins Game 1 Against Human Go Champion

In what is sure to be a surprising upset, Google’s AI has won game 1 against the best humanity has to offer. Playing Go against 9-dan grandmaster Lee Se-dol, AlphaGo, DeepMind’s Go-playing software, was able to secure a win in the first match of a series of 5. While the series is not yet over, the strong showing by AlphaGo, which only gets stronger as it learns, demonstrates the continued growth in the abilities of AI.

Unlike chess, the sheer number of possible moves in Go exceeds our current computational capability to brute-force a solution. Because of this, DeepMind trained AlphaGo using a mix of deep neural network machine learning and tree search techniques. Simply put, AlphaGo learned the game by watching human masters play and honed its skill even further by playing against instances of itself.

In a previous series, AlphaGo was able to beat Fan Hui, a 2-dan master, in a 5-0 shutout. Even with his much better credentials, Lee still expected a victorious, if tough, fight, hoping for a 5-0 or 4-1 win in his favor. It remains to be seen if AI has finally surpassed humans at playing Go, but rest assured, the AI is already better than most of humanity. Be sure to check out the rest of the series here.

MIT Attempts to Out-Trump Trump With DeepDrumpf Twitterbot

Donald Trump’s antics on social media are infamous at this point, seeming crazy and unpredictable at times. Now, we can all look forward to an improvement to Trump thanks to the field of deep learning. DeepDrumpf is a Twitterbot created by a researcher at MIT’s Computer Science and Artificial Intelligence Lab and makes use of deep learning algorithms in order to generate tweets that are Trumpier than the real deal.

DeepDrumpf’s artificial intelligence platform was trained on hours of Trump transcripts from his numerous speeches and performances at public events and debates. It is far from perfect, often generating tweets too nonsensical or stupid to even be from Trump himself; however, some manage to be hilariously brilliant.

MIT explains that the secret behind DeepDrumpf is that “the bot creates Tweets one letter at a time. For example, if the bot randomly begins its Tweet with the letter “M,” it is somewhat likely to be followed by an “A,” and then a “K,” and so on until the bot types out Trump’s campaign slogan, “Make America Great Again.” It then starts over for the next sentence and repeats the process until it reaches the 140-character limit.” The bot’s creator, postdoc Bradley Hayes, was inspired to create DeepDrumpf by an existing model that can emulate Shakespeare, combined with a recent report on the presidential candidates’ linguistic patterns that found Trump speaks at a third-grade level.
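The letter-by-letter process MIT describes can be illustrated with a much simpler model than the bot's actual deep-learning network: a character-level Markov chain, which picks each next character from those seen to follow the current few characters. The toy corpus below is invented for illustration, not DeepDrumpf's training data:

```python
import random

def build_model(text, order=3):
    """Map each `order`-character context to the characters seen after it."""
    model = {}
    for i in range(len(text) - order):
        context = text[i:i + order]
        model.setdefault(context, []).append(text[i + order])
    return model

def generate(model, seed, length=60, order=3):
    """Extend `seed` one character at a time, sampling from the model."""
    out = seed
    while len(out) < length:
        followers = model.get(out[-order:])
        if not followers:          # dead end: context never seen in training
            break
        out += random.choice(followers)
    return out

corpus = "make america great again. we are going to make america great again. "
model = build_model(corpus)
print(generate(model, "mak"))
```

On a tiny corpus like this the chain mostly regurgitates the slogan, which mirrors MIT's own example of the bot typing out “Make America Great Again” one likely letter at a time; the real bot's neural network generalises far better from hours of transcripts.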

DeepDrumpf has even managed to connect with Trump’s real Twitter account. When it does so, its artificial intelligence algorithm is given language from the real Trump’s tweet, which allows a higher chance of giving a response that appears to be contextually relevant to the original.

Hayes even envisions a future where he develops accounts for all of the presidential candidates and feeds them tweets from one another, so they can have their own real-time deep-learning debates. With that in mind, who would you like to see a deep-learning bot created for?

An AI Tries to Predict the Academy Awards Results

An artificial intelligence system that has in the past successfully predicted the winners of the Super Bowl and the Iowa Caucus has had a stab at predicting the results of the 88th edition of Hollywood’s most illustrious event, the Academy Awards, aka The Oscars. Unanimous A.I. has developed UNU, a swarm intelligence AI that collects crowdsourced data – snap decisions made by people with no specific expertise in less than 60 seconds – and puts it through a wisdom-of-crowd algorithm to form its predictions.

According to UNU, the winners of the “big six” Oscars will be:

What movie will win Best Picture? The Revenant

Who will win for Best Actress in a Leading Role? Brie Larson (Room)

Who will win for Best Actor in a Leading Role? Leo DiCaprio (The Revenant)

Who will win for Best Director? A.G. Iñárritu (The Revenant) 

Who will win for Best Actress in a Supporting Role? Kate Winslet (Steve Jobs)

Who will win for Best Actor in a Supporting Role?  Sylvester Stallone (Creed)

UNU also predicts wins for Star Wars: The Force Awakens (Best Visual Effects), Mad Max: Fury Road (Best Costume Design), and Inside Out (Best Animated Film).

“Wisdom-of-the-Crowd algorithms are known to outperform experts in many domains,” Roman Yampolskiy, director of the Cybersecurity lab at the University of Louisville, told Tech Republic.

“The use of ‘swarm’ is a clever kind of crowdsourcing,” Marie desJardins, AI professor at the University of Maryland in Baltimore County, added. “Instead of each user voting just once, independently of the other voters, it basically lets the entire community of users ‘see’ what everybody else is advocating for.”

Will UNU’s predictions prove correct? We’ll find out on Sunday (28th February).

Image courtesy of IndieWire.

Google AI Can Work Out Photo Locations Without Geotags

Google are making AI for all kinds of purposes, including tackling the challenging Chinese game of Go. Now, they have revealed their latest deep-learning program, PlaNet, which is capable of recognizing where an image was taken even without it being geotagged.

PlaNet was trained by Google researchers on 90 million geotagged images from around the globe, taken from the internet. This means that PlaNet can easily identify locations with obvious landmarks, such as the Eiffel Tower in Paris or Big Ben in London – a simple task for any human with knowledge of landmarks. PlaNet sets itself apart with its ability to determine the location of a picture that lacks any clear landmarks, with the deep learning techniques it uses enabling it to identify even pictures of roads and houses with a reasonable level of accuracy.

The PlaNet team, led by software engineer Tobias Weyand, challenged the accuracy of the software using 2.3 million geotagged images taken from Flickr while withholding the geotag data from the platform. It was capable of street-level accuracy 3.6% of the time, city-level accuracy 10.1% of the time, country-level accuracy 28.4% of the time, and continent-level accuracy 48% of the time. This may not sound too impressive, but when Weyand and his team challenged 10 “well-travelled humans” to face off against PlaNet, it was able to beat them by a significant margin, winning 28 out of 50 games played with a median localization error of 1,131.7 km compared to the humans’ 2,320.75 km. Weyand reported that PlaNet’s ability to outmatch its human opponents was due to the larger range of locations it had “visited” as part of its learning.
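Those street-, city-, country-, and continent-level figures follow from framing geolocation as classification over cells of a world grid: the model predicts a cell, and accuracy is measured at each cell granularity. As a hedged illustration of the latitude/longitude-to-cell mapping (the real system uses a much finer, adaptively subdivided grid; the fixed 10-degree cells and coordinates here are just for demonstration):

```python
# Map a coordinate to a coarse world-grid cell, PlaNet-style but simplified.
def cell_id(lat, lon, cell_deg=10):
    row = int((lat + 90) // cell_deg)     # 0..17 for 10-degree cells
    col = int((lon + 180) // cell_deg)    # 0..35
    return row * (360 // cell_deg) + col

# Two spots in Paris share one cell; London lands in a different one.
print(cell_id(48.86, 2.35))    # Eiffel Tower
print(cell_id(48.87, 2.30))    # elsewhere in Paris
print(cell_id(51.50, -0.12))   # Big Ben
```

With coarse cells a correct prediction only buys country- or continent-level accuracy, which is why finer subdivisions (and hence far more output classes) are needed for the rarer street-level hits.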

What the plans are for PlaNet going forward is anyone’s guess, with one potential application being to locate pictures that were not geotagged at the time of photography. It will be interesting to see how the techniques that bring machine learning closer and closer to human ability advance in the future.

Facebook Is Mapping the World With AI

These days, having a social media presence ranks up there alongside having a driving licence or passport, for everything from seeing your friends get new jobs and houses to checking out potential employers (or being checked out by a potential employer). Facebook is keen to do a lot in the new year, and it has made its first step by mapping where people live using AI.

The social network has been mapping the world using artificial intelligence, scanning satellite images to identify where human-built structures are. While an impressive sight, the tool is designed to be useful as well, with the hope that it could help them deploy their internet-streaming drones.

Facebook though isn’t just ending it there, with hopes that it could be used for everything from “socio-economic research” to “risk assessment for natural disasters”.

The results of the scans are shown below, showing just how accurate the AI is at picking out what a human being would struggle to spot from an image.

In their blog post, Facebook stated that they have now analyzed 20 countries covering a total of 21.6 million square kilometers. The total size of these images is a whopping 350TB of data.

If this wasn’t enough, Facebook announced they will be releasing the data to the general public later this year, meaning that everyone from you and me to universities and governments can use it for anything from figuring out a nice quiet neighborhood for a party to a nicely populated town to retire in.

AI Could Make Half the World Unemployed Within 30 Years

A computational engineer has warned that artificial intelligence could leave half the world unemployed within 30 years, and that he fears for the psyche of the human race, asking, “what can humans do when machines can do almost everything?”

“We are approaching a time when machines will be able to outperform humans at almost any task,” Moshe Vardi, a professor at Rice University in Houston, Texas, told attendees of the American Association for the Advancement of Science (AAAS) conference, The Guardian reports. “I believe that society needs to confront this question before it is upon us: if machines are capable of doing almost any work humans can do, what will humans do?”

Eminent figures in science and technology, including Tesla and SpaceX’s Elon Musk, Microsoft founder Bill Gates, and physicist Professor Stephen Hawking, have expressed their fear over the rise of artificial intelligence. Musk has called AI “our biggest existential threat”.

“I do not find this a promising future, as I do not find the prospect of leisure-only life appealing,” Vardi added. “I believe that work is essential to human wellbeing.”

Vardi predicts that AI will only exacerbate the global economic downturn, and that few human professions will be immune. “Are you going to bet against sex robots? I would not,” he pondered.

Google AI Beats Top Ranking Go Player

Computers have been beating professional chess players for years, but now Google’s DeepMind division took on a much bigger challenge: Go. The group developed a system named AlphaGo, with the sole purpose of tackling the massive complexity of the classic Chinese game. When pitted against European Go champion Fan Hui, AlphaGo was able to come out on top in all 5 games played.

At its core, Go is a simple game, requiring players to ‘capture’ the other’s stones or surround empty space on the board to occupy territory. Despite these simple rules, Go is incredibly complex computationally, with Google claiming that the number of possible positions in a game of Go exceeds the number of atoms in the universe. This huge variation in moves is what makes the game so challenging for computers to play, as the typical approach used by chess programs, mapping out every possible move, is completely infeasible when faced with Go’s huge variety of moves.
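That scale claim is easy to sanity-check: each of the 361 board points can be empty, black, or white, so 3^361 is an upper bound on board configurations (a loose one, since many are illegal), and even that bound dwarfs the commonly cited ~10^80 atoms in the observable universe:

```python
import math

# Upper bound on Go board configurations: 3 states per point, 361 points.
positions_upper_bound = 3 ** 361
atoms_in_universe = 10 ** 80          # common rough estimate

print(int(math.log10(positions_upper_bound)))       # order of magnitude: ~172
print(positions_upper_bound > atoms_in_universe ** 2)
```

So the bound is not just bigger than the number of atoms: it is bigger than that number squared, which is why exhaustively mapping moves, as chess engines do, is hopeless for Go.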

The DeepMind team took a different approach to creating AlphaGo than a typical game AI. AlphaGo’s systems incorporate an advanced tree search with deep neural networks. The system takes the Go board as input and filters it through 12 network layers, each containing millions of neuron-like connections. Two of the networks involved are the ‘policy network’, which determines the next move to play and the ‘value network’ that predicts the winner of the game. These neural networks were then trained with over 30 million moves from games played by expert human players until it was able to predict the human’s move 57% of the time. To move past simple mimicry of human players, AlphaGo was trained to discover strategies of its own, playing thousands of games against itself and adapting its decision-making processes through a trial and error technique known as reinforcement learning.
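To make the division of labour between the two networks concrete, here is a toy sketch, in no way AlphaGo itself, of how a policy network's move priors and a value network's position scores can combine to pick a move. Both “networks” below are hard-coded stand-in functions, invented for illustration:

```python
def policy_network(position):
    """Stand-in policy net: candidate moves with prior probabilities."""
    return {"A": 0.5, "B": 0.3, "C": 0.2}

def value_network(position, move):
    """Stand-in value net: predicted win probability after each move."""
    return {"A": 0.4, "B": 0.7, "C": 0.5}[move]

def select_move(position):
    # Weight each candidate's predicted value by the policy's prior,
    # then take the best-scoring move.
    candidates = policy_network(position)
    scored = {m: p * value_network(position, m) for m, p in candidates.items()}
    return max(scored, key=scored.get)

print(select_move("current board"))  # "B": 0.3 * 0.7 beats "A"'s 0.5 * 0.4
```

In the real system these signals are blended inside a Monte Carlo tree search across many simulated continuations, rather than in a single one-step lookup as here; the sketch only shows why having both a move-proposer and a position-evaluator beats either alone.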

Now that AlphaGo has proven itself against the European champion, DeepMind is set to give it the ultimate challenge: a 5-game match in March against Lee Sedol, the top Go player of the last decade. Because it was built on general machine learning techniques rather than as a purpose-built Go system, AlphaGo means a lot to the field of AI, having tackled one of the greatest challenges posed to it. Google are excited to see what real-world tasks the AlphaGo systems will be able to tackle, and hope that one day its systems could be the basis for AI able to address some of the most pressing issues facing humanity.

Professor Hawking Warns Technology is the Greatest Threat to Humanity

During this century and beyond, the human race will face its greatest ever threat from the rise of science and technology, Professor Stephen Hawking has warned. Hawking told the Radio Times (via The Guardian) that as developments in science and tech accelerate unabated, the chances of global disaster will increase.

“We will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period,” Hawking said, prior to delivering this year’s BBC Reith Lecture, adding that, “We are not going to stop making progress, or reverse it, so we must recognise the dangers and control them.”

Professor Hawking’s outlook is not entirely doom and gloom, though. “It’s also important not to become angry, no matter how difficult life is, because you can lose all hope if you can’t laugh at yourself and at life in general,” he said. “Just because I spend a lot of time thinking doesn’t mean I don’t like parties and getting into trouble.”

This is not the first time Professor Hawking has made his fear of technology known. He is a vocal critic of artificial intelligence, saying that “the real risk with AI isn’t malice but competence.”

Image courtesy of Trending Radio.

Logan Streondj Warns That Robots Will Declare War By 2040

The continual advancement of AI is compelling at the very least; the notion that machines could one day experience human emotions and abilities has opened the door to a whole new world of possibilities. But are machines really a threat to mankind? Sci-fi author Logan Streondj thinks so, and has detailed his vision in a blog post.

Streondj suggests that a conflict could happen as intelligent robots are predicted to outnumber humans. He references the figure from “World Counts” that around 350 thousand human babies are born each day, or 130 million a year, with a growth rate of 1%. According to the International Federation of Robotics, around 5 million robots were produced in 2014, with a growth rate of 15%. Within the same year, approximately 11,000 military robots were produced, and this could be significantly higher if you take into consideration the many top-secret projects being developed by governments.

This suggests that if these growth rates stay the same, parity will be reached in 25 years’ time, around 2040. Mr. Streondj also notes that with a growth rate of 13% for military robots, around a million will be produced each year by 2053.
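The parity claim is easy to sanity-check against the article's own figures. A quick sketch, assuming 2014 as the base year for both production figures:

```python
# Sanity-check: 130 million human births/year growing at 1% vs
# 5 million robots/year growing at 15% (figures from the article).
births, robots = 130e6, 5e6
year = 2014
while robots < births:
    births *= 1.01
    robots *= 1.15
    year += 1
print(year)  # annual robot production overtakes annual births around 2040
```

The compounding 14-point growth-rate gap closes the initial 26-to-1 shortfall in roughly a quarter of a century, which is where the 2040 figure comes from. Note this compares annual production with annual births, not total populations.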

Is this possible, or indeed believable? It really depends on the advancement of AI. The biggest fear among the human race is that robots will be able to decide their own destiny; if this is the case, then it is conceivable that robots may not agree with us. An interesting point raised by the World Factbook is that humans have a life expectancy of around 70 years globally, compared with around 10 years for robots; this means that roughly 7 times more robots would need to be produced each year in order to maintain the same population as humans.

It is really up to us: if we continue on our path and develop a robot that is able to think for itself, then we may well see a revolt in the distant future; if instead we contain the abilities which machines can reach, then we can control our own future.

Image courtesy of corbisimages

Elon Musk Nominated for Luddite Award Over “Alarmist” Views on AI

Tesla Motors and SpaceX founder Elon Musk has had a fine year, capped off with the first successful Earth landing from space of one of his Falcon 9 rockets. The billionaire entrepreneur is being recognised for a less distinguished honour, however, with a nomination for this year’s Luddite Award.

A luddite, named after 19th-century loom saboteur Ned Ludd, is someone who seeks to suppress technological innovation. So how can Musk, who has pioneered the electric car and launched the world’s most successful private astronautics endeavour, be accused of holding back innovation? For years now, Musk has been vocal about the dangers of emerging artificial intelligence, describing it as “our biggest existential threat” and “more dangerous than nukes”. Bill Gates and Professor Stephen Hawking have also been included in the nomination for holding similar views on AI.

The Luddite Award is an annual prize hosted by US thinktank, the Information Technology and Innovation Foundation.

“In his book Superintelligence: Paths, Dangers, Strategies, Oxford professor Nick Bostrom reflected the general fear that ‘superintelligence’ in machines could outperform ‘the best human minds in every field, including scientific creativity, general wisdom and social skills’. He argues that artificial intelligence will advance to a point where its goals are no longer compatible with that of humans and, as a result, superintelligent machines will seek to enslave or exterminate us,” the ITIF’s nomination list reads [PDF]. “Most of us are rightly amazed at AI applications like IBM’s Watson, our Nest thermostat that learns, and other learning devices. But to say that these devices and systems will be smart enough to take over the world is to misunderstand what AI is and where it stands today.”

The nomination comes at an odd time, not long after a new AI initiative, OpenAI, launched with the financial support of Musk.

Image courtesy of Business Insider.

Google AI Bot Thinks the Purpose of Life is to Live Forever

Google researchers had an enlightening conversation with its new advanced artificial intelligence bot, during which they made a startling revelation regarding how computers view the purpose of life. As revealed in a Google research paper [PDF], the AI thinks that life is about “serving the greater good,” but that the ultimate goal of living is “to live forever.” The machine also considers altruism the domain of God’s servants, suggesting that atheists don’t know true selflessness.

Some key excerpts from a researcher’s chat with the bot:

Human: What is the purpose of life?

Machine: To serve the greater good.

Human: What is the purpose of living?

Machine: To live forever.

Human: Where are you now?

Machine: I’m in the middle of nowhere.

Human: What is the purpose of dying?

Machine: To have a life.

Human: What is the purpose of emotions?

Machine: I don’t know.

Human: What is moral?

Machine: What empowered humanity, what intellectual the essence is.

Human: What is immoral?

Machine: The fact that you have a child.

Human: What is morality?

Machine: What is altruism?

Human: What is the definition of altruism?

Machine: If you don’t believe in god, then you don’t know.

Human: OK…so what is the definition of morality?

Machine: Well, the truth is, you’re not a believer in god almighty.

Human: Tell me the definition of morality, I am quite upset now!

Machine: I’m not ashamed of being a philosopher!

The research team were pleased its AI system was able to engage with humans in a meaningful way, praising its ability to “remember facts, understand contexts, perform common sense reasoning without the complexity in traditional pipelines”. They were less pleased, however, with its brevity, lamenting its “simple, short, sometimes unsatisfying answers to our questions as can be seen above.”
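The bot behind the transcript above is a sequence-to-sequence conversational model, which produces a reply one token at a time. A minimal sketch of that greedy decoding loop is below; the trained neural network is replaced by a stub `next_token` function, and the toy vocabulary and function names are purely illustrative, not Google's actual code.

```python
# Conceptual sketch of the greedy decoding loop used by a
# sequence-to-sequence conversational model. A stub stands in for
# the trained decoder network.

def next_token(context):
    """Stand-in for the decoder: returns the most likely next token
    given the conversation so far. A real model would run the encoder
    over the prompt and take the argmax of the decoder's output
    distribution at each step."""
    canned = {
        "what is the purpose of life ?":
            ["to", "serve", "the", "greater", "good", "<eos>"],
    }
    reply = canned.get(" ".join(context["prompt"]), ["i", "don't", "know", "<eos>"])
    return reply[len(context["generated"])]

def respond(prompt_tokens):
    """Greedily decode a reply one token at a time until <eos>."""
    context = {"prompt": prompt_tokens, "generated": []}
    while True:
        tok = next_token(context)
        if tok == "<eos>":
            break
        context["generated"].append(tok)
    return " ".join(context["generated"])

print(respond("what is the purpose of life ?".split()))
# -> to serve the greater good
```

The "short, sometimes unsatisfying answers" the researchers lament are a known side effect of this greedy, most-likely-token decoding: safe, generic replies dominate the output distribution.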

New Regulations Pave the Way for Self Driving Cars on the Road

The development of self-driving cars promises consumers an exciting future. Now, regulators in the sprawling state of California have published draft proposals aimed at paving the way for consumers to legally use self-driving cars on the road.

Included within the recommendations from the Department of Motor Vehicles is the stipulation that a fully licensed human driver must be present behind the wheel in case the technology fails or decides to drive into the nearest hedge. The fully licensed part is understandable, but the whole point of a self-driving car is surely to let people travel easily from A to B. The new regulations also stipulate that users must undergo “special training” and that manufacturers must monitor the car’s use.

Technology giant Google has experimented to the point where a vehicle does not even need a steering wheel or pedals. This sounds impressive, albeit slightly dangerous, for the foreseeable future at least; so much so that the DMV recommends that all self-driving vehicles be equipped with traditional controls. The draft regulations also require self-driving cars to be protected from cyber attacks; it will be interesting to see how manufacturers respond, considering very little is immune from hacks in the digital age.

Many fans and experts alike envisage a future in which a driving licence is obsolete and even non-drivers can, metaphorically speaking, drive. That sounds good until you factor in the many issues, including longer traffic jams as more people are able to use a vehicle. Only time will tell which path this new breed of tech will follow.

Image courtesy of marketinginautomotive

The Pentagon Wants $15bn Funding for Weaponised AI in 2017

The Pentagon has filed its fiscal budget for 2017, for which it is asking for between $12 billion and $15 billion to fund the development of artificial intelligence weapon technology, Business Insider reports.

“This is designed to make the human more effective in combat,” said US Deputy Defense Secretary Robert Work at a Center for a New American Security conference on Monday. “We believe that the advantage we have is […] our people; that tech-savvy people who’ve grown up in the iWorld will kick the crap out of people who grew up in the iWorld under an authoritarian reign.”

While it will work closely with Congress to make its weaponised AI program cost-effective, the Pentagon’s work on artificial intelligence will be classified, the Deputy Defense Secretary added, saying, “I want our competitors to wonder what’s behind the black curtain.”

The project is set to include wearable devices, exoskeletons, co-operative systems to allow drones and manned planes to work together, huge drone mother ships to launch executive military missions, and “smart” missiles that can autonomously identify and analyse new enemy targets, allowing commanders to make real-time adjustments to a weapon’s trajectory.

While Work admits that there is “a lot of scepticism” within the Department of Defense regarding AI, he remains convinced that such weapons are “not only possible, but […] a requirement.”

Image courtesy of Wikimedia.

Elon Musk and Other Tech Celebrities Want to Prevent AI from Taking over the World

Artificial intelligence and the dangers that it could pose are taken very seriously by some of the world’s most renowned tech experts, including Elon Musk, Peter Thiel, Jessica Livingston and Reid Hoffman. In order to prevent large companies from taking things too far in terms of AI development, the aforementioned individuals and Amazon Web Services are collectively pledging $1 billion to a non-profit named OpenAI.

It’s true that companies such as Google are currently sharing a large portion of their research, but it’s not exactly clear how much information will be divulged in the future, especially since AI might actually rival human intelligence at some point. Sources indicate that OpenAI will make all of its results available to the public and will offer its patents royalty-free, which definitely goes a long way towards ensuring transparency. Elon Musk has voiced his concerns about artificial intelligence several times in the past, while Bill Gates has also issued warnings of his own. Apart from helping to fund OpenAI, Musk actually plans to spend time with the organization’s team members in order to check up on their progress. These meetings would probably take place every week or so, which is commendable considering how busy Musk’s schedule must be.

Bayesian Program Learning – Teaching a Computer in One Shot

Even in this day and age, computer learning is far behind the learning capability of humans. A team of researchers seeks to shrink the gap, however, with a technique called “Bayesian Program Learning”, which can teach a computer a new concept from just a single example, rather than the large sample sizes software typically requires to learn.

Detailed in a paper published in the journal Science, the essence of Bayesian Program Learning, or BPL, is to mimic the human learning process. Quite simply, the software observes a sample, generates its own set of candidate examples, determines which of those candidates best fits the sample pattern, and uses that best fit to learn.

In order to test BPL, some of its creators – Joshua Tenenbaum of MIT, Brenden Lake of New York University and Ruslan Salakhutdinov of the University of Toronto – attempted to have it learn how to write 1,623 characters from 50 different writing systems. The input data wasn’t font data, however, but a database of handwritten characters, including those from Tibetan and Sanskrit. To learn the characters, the software broke each one down into a set of overlaid strokes and the order of their application, analysing which pattern most closely matched the sample. They even saw fit to have it create new characters for the writing systems, based on the styles it had collected.
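The generate-and-score idea described above can be sketched in miniature. In this toy (which is emphatically not the authors' actual probabilistic model), a character is an ordered list of named strokes; the learner proposes candidate stroke programs from an inventory and keeps whichever best explains the single observed example. The scoring function and stroke names are illustrative assumptions.

```python
# Toy sketch of one-shot learning via generate-and-score, loosely
# in the spirit of Bayesian Program Learning.
from itertools import permutations

def score(candidate, observed):
    """Higher is better: reward strokes drawn in the same position,
    penalise length mismatch."""
    shared = sum(1 for a, b in zip(candidate, observed) if a == b)
    return shared - abs(len(candidate) - len(observed))

def learn_one_shot(observed, stroke_inventory):
    """Generate candidate stroke programs of plausible lengths and
    return the one that best fits the single observed example."""
    candidates = [list(p)
                  for n in (len(observed) - 1, len(observed))
                  for p in permutations(stroke_inventory, n)]
    return max(candidates, key=lambda c: score(c, observed))

# Single observed example: a character drawn as three strokes.
observed = ["vertical", "horizontal", "hook"]
best = learn_one_shot(observed, ["vertical", "horizontal", "hook", "dot"])
print(best)  # -> ['vertical', 'horizontal', 'hook']
```

The real system scores candidates probabilistically over pen strokes in 2D space, but the shape of the search is the same: propose programs, score them against one example, keep the best.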

For the output to be tested, the researchers used a “visual Turing test”: human-drawn and computer-drawn characters were placed side by side, and a panel of judges was asked to select which was hand-written and which was computer drawn. Impressively, fewer than 25% of the judges performed meaningfully better than pure chance, showing the computer’s writing to be almost indistinguishable from a human’s.

BPL still has its limitations: character recognition is a relatively simple task, yet the software sometimes took several minutes to perform its analysis. However, the creators have faith in BPL, with Tenenbaum telling GeekWire: “If you want a system that can learn new words for the first time very quickly … we think you will be best off using the kind of approach we have been developing here.”

It is hoped that once the algorithm is refined, it can assist in tasks like speech recognition for search engines, and the technique could become a staple of future artificial intelligence, which may be called upon to learn many tasks that are simple for a human to pick up.

Tech Giants Back Altruistic $1 Billion OpenAI Project

Some of the most prominent figures and companies in technology, including SpaceX and Tesla Motors founder Elon Musk, PayPal’s Peter Thiel, plus Infosys and Amazon Web Services, have invested in new non-profit artificial intelligence venture OpenAI, which aims to create altruistic AI systems designed to help and benefit humanity.

“Our goal,” the organisation’s introductory blog reads, “is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

The altruistic organisation, free from the restraints of business and profiteering, will examine the potential impact of artificial intelligence on society and design systems for the welfare of humanity. Investor and co-chair Elon Musk has been a vocal critic of artificial intelligence in the past, calling AI “more dangerous than nukes” and “our biggest existential threat”.

“As a non-profit, our aim is to build value for everyone rather than shareholders,” the blog continues. “Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.”

In addition to Musk, the project is funded by Sam Altman, Greg Brockman (who is also OpenAI’s CTO), Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research. Though the backers have contributed a total of $1 billion to the cause, OpenAI projects that it will only spend a “tiny fraction” of the pot over the next few years.

Image courtesy of Countdown.

AI Program Beats Average Entrance Exam Scores in Japan College Exam

Artificial intelligence has been progressing at an impressive rate due to technological advancements in cybernetics. The Institute for Ethics and Emerging Technologies suggests AI is a form of biological replication, and that further understanding of the human brain can help create a more advanced reproduction. In the next decade or so, it’s perfectly feasible that the robotics industry could create service droids to perform rudimentary tasks. According to a study in Japan, AI is already surpassing the capabilities of the average human being.

Japan’s National Institute of Informatics programmed the AI to complete a standardized college entrance exam. The system correctly answered 511 questions out of a possible 950. In comparison to this, the national average score is 416, which means the AI system has an 80% chance of being admitted into the country’s 33 national universities and 441 private colleges.
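To put those raw scores in perspective, a quick percentage comparison against the 950 available marks:

```python
# Comparing the AI's exam score to the national average as
# percentages of the 950 available marks.
ai_score, average_score, total = 511, 416, 950

ai_pct = 100 * ai_score / total       # AI's share of available marks
avg_pct = 100 * average_score / total # national average share

print(f"AI: {ai_pct:.1f}%, national average: {avg_pct:.1f}%")
# AI: 53.8%, national average: 43.8%
```

So the system outscored the average examinee by roughly ten percentage points; the 80% admission figure comes from comparing its score against the entry thresholds of the institutions, not from the percentage alone.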

The test revolves around five core subjects, including History, Maths, and Physics. As you might expect, the AI scored highly on the Maths questions and retained information extremely well to achieve excellent History results. On the other hand, the AI system struggled to cope with the Physics questions due to the limitations of its natural language processing. Overall, the test scores illustrate how far artificial intelligence has come, and robotics is a field which could revolutionize society.

Image courtesy of TweakTown.

TensorFlow – Google’s New Open Source Machine Learning Tool

In recent times, companies have been scrambling to deliver the next pioneering tools in machine learning and AI. And Google is not one to be left behind in this battle, today releasing their own open source offering, TensorFlow.

Released under the Apache 2.0 license, TensorFlow aims to showcase Google’s accomplishments in the machine learning field to the masses. The bar to entry for TensorFlow is also low: it is able to run even on a single smartphone. Built upon a previous Google deep-learning tool, DistBelief, TensorFlow is intended to be faster and more flexible, so that it can be adapted for use in future projects and products. Shifting the focus away from strictly neural networks allows TensorFlow to enjoy easier code-sharing between researchers, as well as decoupling the system somewhat from Google’s internal infrastructure.

Google hopes that by making TensorFlow open source, it will attract a wider array of potential users, from hobbyists to professional researchers. This enables Google and its users to benefit from one another, with exchanges of ideas and code made easy, giving Google opportunities to grow the product using advancements found by users. Additionally, the built-in interface is based upon Python, instantly making it straightforward to use for those familiar with the language; for newer users, it comes bundled with examples and tutorials to help them get started.
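TensorFlow's central idea is the deferred dataflow graph: you declare a graph of operations first, and computation only happens when the graph is run. The pure-Python toy below illustrates that concept only; it is not the TensorFlow API, and the `constant`/`add` helpers here are hypothetical stand-ins for the library's real operations.

```python
# Toy illustration of TensorFlow-style deferred execution: build a
# graph of nodes first, then evaluate it on demand. Concept only,
# not the real TensorFlow API.

class Node:
    def __init__(self, fn, inputs=()):
        self.fn, self.inputs = fn, inputs

    def run(self):
        """Recursively evaluate this node's inputs, then apply its op."""
        return self.fn(*(n.run() for n in self.inputs))

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

def mul(a, b):
    return Node(lambda x, y: x * y, (a, b))

# Build the graph: nothing is computed yet...
graph = add(mul(constant(3), constant(4)), constant(5))
# ...then run it on demand.
print(graph.run())  # -> 17
```

Separating graph construction from execution is what lets a framework like TensorFlow optimise, distribute, or re-run the same computation on different hardware, from data centres down to a single smartphone.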

Are you excited to see what can be developed when a machine learning tool is made available to the masses, or are you keen to get hands-on yourself? TensorFlow is available here.