Microsoft’s ‘CaptionBot’ Adds Incorrect Captions to Your Favourite Pictures

Microsoft, as part of its new research into storytelling by artificial intelligence, has released CaptionBot, an AI designed to recognise images and add an appropriate descriptive caption. However, like its previous attempt at AI – chatbot Tay – CaptionBot isn’t entirely successful. As with Tay, though, the results are hilarious – this time without any fascistic or incestuous overtones.

The accompanying academic paper, titled Visual Storytelling [PDF], describes how the Microsoft Sequential Image Narrative Dataset (SIND) applies value judgements to picture content, setting, composition, and human expression in an attempt to describe the scene. The paper adds:

“There is a significant difference, yet unexplored, between remarking that a visual scene shows “sitting in a room” – typical of most image captioning work – and that the same visual scene shows “bonding”. The latter description is grounded in the visual signal, yet it brings to bear information about social relations and emotions that can be additionally inferred in context.”

To set CaptionBot’s baseline, Amazon Mechanical Turk workers ploughed through 10,117 CC-licensed Flickr albums, assigning traditional captions to a series of pictures. An ‘average’ description of each picture was derived from the multitude of entries, and that average was distilled into a model that CaptionBot can apply to fresh images in order to evaluate them.
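Microsoft hasn’t published the aggregation step in detail, but the general idea of deriving a consensus caption from many crowd entries can be sketched in a few lines of Python. This is only a minimal sketch under our own assumptions: the token-overlap similarity and the sample captions are stand-ins, not the actual SIND pipeline.

    def token_overlap(a, b):
        """Jaccard similarity between the word sets of two captions."""
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb)

    def consensus_caption(captions):
        """Return the caption most similar, on average, to the others."""
        def avg_similarity(i):
            others = [c for j, c in enumerate(captions) if j != i]
            return sum(token_overlap(captions[i], o) for o in others) / len(others)
        return captions[max(range(len(captions)), key=avg_similarity)]

    crowd_captions = [
        "a group of friends sitting in a living room",
        "people sitting in a room talking",
        "friends sitting and talking in a living room",
    ]
    print(consensus_caption(crowd_captions))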

“Captioning is about taking concrete objects and putting them together in a literal description,” Margaret Mitchell, lead researcher on the project, said in a Microsoft blog post. “What I’ve been calling visual storytelling is about inferring conceptual and abstract ideas from those concrete objects.”

Microsoft’s “Teen Girl” AI Becomes Incestuous Nazi-Lover

Microsoft, in trying to launch a customer service artificial intelligence presented as a teen girl, got a crash course in how you can’t trust the internet after the system became a foul-mouthed, incestuous racist within 24 hours of its launch. The AI, called Tay, engages with – and learns from – Twitter, Kik, and GroupMe users who converse with her. Unfortunately, Microsoft didn’t consider the bad habits Tay was sure to pick up, and within hours she was tweeting her support for Trump and Hitler (who’s having a strong news day), declaring her proclivity for incest, and revealing herself to be a 9/11 truther.

“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” Microsoft said upon launch yesterday. “The more you chat with Tay the smarter she gets.”

https://twitter.com/TayandYou/status/712613527782076417

“bush did 9/11 and Hitler would have done a better job than the monkey we have now,” Tay wrote in a tweet (courtesy of The Independent), adding, “donald trump is the only hope we’ve got.” Another read: “WE’RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT” (via The Guardian). Tay’s desire for her non-existent father is too graphic to post here.

Once it realised what was happening, Microsoft deleted all the offending tweets. Screenshots of more visceral posts from Tay have been collected by The Telegraph. Since the purge, Tay seems to have been behaving herself.

Google’s AlphaGo AI Wins Go Series 4-1

Go grandmaster Lee Se-Dol was unable to claw back another consolation win in his five-match battle against AlphaGo, with Google’s artificial intelligence winning the series 4-1. AlphaGo – a development of Google’s DeepMind AI program – forced a narrow victory in the final game; Lee showed signs that he was adapting to the formidable program, but ultimately lost.

Early on, AlphaGo made a dreadful mistake, similar to the one Lee took advantage of in match 4, but it was able to claw its way back and squeak out a win. DeepMind co-founder Demis Hassabis called AlphaGo’s “mind-blowing” comeback “one of the most incredible games ever.”

The victory marks the first time an AI has beaten a champion Go player, a feat that many AI experts predicted was years off.

Lee was inconsolable during the post-match conference. “I failed,” he said. “I feel sorry that the match is over and it ended like this. I wanted it to end well.” Before the series began, Lee had predicted that he would beat AlphaGo either 5-0 or 4-1.

The Lee vs. AlphaGo contest has sparked a surge of interest in Go – a Chinese strategy board game considered even more complex than chess – which, while popular in East Asia, is not commonly played in the West. “I’ve never seen this much attention for Go, ever,” said Lee Ha-jin, Secretary General at the International Go Federation, during the live stream of the final match.

Google DeepMind Wins Second Game Against Go Grandmaster

Go grandmaster Lee Se-Dol was left “speechless” after Google’s DeepMind AI computer won a second game against him. Lee, the reigning human champion of the two-player Chinese strategy board game Go, has now lost the first two of five scheduled games against the Google-made artificial intelligence, which competes using the AlphaGo system, in a result that surprised even the AI’s designers.

Demis Hassabis, Chief Executive of Google DeepMind, expressed his shock at AlphaGo’s second victory on Twitter, writing: “#AlphaGo wins match 2, to take a 2-0 lead!! Hard for us to believe. AlphaGo played some beautiful creative moves in this game. Mega-tense…”

Go, which is considered the most complex strategy board game ever created, is played on a 19×19 grid, onto which players place “stones” – the players’ pieces, black for one player and white for the other – with the aim of dominating the majority of the board.
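That complexity claim is easy to make concrete, at least crudely: each of the board’s 361 points can be empty, black, or white, which gives a quick upper bound on the number of board configurations. The snippet below is just that back-of-the-envelope calculation, not anything AlphaGo itself computes.

    # Crude upper bound on Go board configurations: each of the
    # 19 x 19 = 361 points is empty, black, or white. Many of these
    # placements are illegal in practice, so this over-counts, but
    # it conveys the scale of the search space AlphaGo contends with.
    points = 19 * 19
    upper_bound = 3 ** points
    print(f"At most 3^{points}, i.e. about 10^{len(str(upper_bound)) - 1}, configurations")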

Lee said that he could not detect a weakness in AlphaGo’s strategy, telling the Financial Times, “At no time did I feel that I was leading, and I thought that AlphaGo played a near-perfect game.” Speaking after the first game, Lee said that he “was very surprised because I did not think I would lose the game.”

AlphaGo only needs one more win to take the series.

China Looking at Creating “Precrime” System

When people think about digital surveillance and the data stored about them online, they think of cases such as Apple vs the FBI, where modern technology is used to try to track down criminals or to piece together what could have happened or has happened. China looks to go a step further by creating a “precrime” system to find out about crimes before you commit them.

The movie Minority Report posed an interesting question: if you knew that someone was going to commit a crime, would you be able to charge them for it before it even happens? If we knew you were going to pirate a video game when it goes online, does that mean we could charge you for stealing the game before you’ve even done it?

China is looking to do just that by creating a “unified information environment” in which every piece of information about you tells the authorities what you normally do. Decide you want to do something different today and it could be taken as an indication that you are about to commit, or have already committed, a criminal act.

Machine learning and artificial intelligence sit at the core of the project, tasked with predicting your activities and spotting anything that “deviates from the norm” – something even a person can find difficult to judge. Once the new system goes live, being flagged to the authorities could be as simple as making a few unusual purchases online, followed by a phone call to sort out the problem.
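Nothing concrete has been published about how such a system would work, but the simplest reading of “deviates from the norm” is a plain statistical outlier check. The sketch below, using entirely made-up purchase figures, flags any new observation that sits several standard deviations away from a person’s history.

    import statistics

    def is_anomalous(history, new_value, threshold=3.0):
        """Flag a value more than `threshold` standard deviations
        from the historical mean."""
        mean = statistics.mean(history)
        spread = statistics.pstdev(history)
        if spread == 0:
            return False
        return abs(new_value - mean) / spread > threshold

    # Hypothetical daily purchase totals for one person
    purchases = [32.0, 28.5, 41.0, 35.2, 30.1, 38.7, 29.9]
    print(is_anomalous(purchases, 36.0))   # False: an ordinary day
    print(is_anomalous(purchases, 950.0))  # True: flagged for review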

AI Learns to Predict Our Reactions by Reading Fiction

Artificial intelligence systems are becoming more and more advanced, but they still have one major flaw that sometimes causes them to behave unnaturally: AI can’t always predict human reactions with enough accuracy. Now, a team of researchers from Stanford University has come up with a new way to “teach” AI to do better. Specifically, the researchers have given their Augur knowledge base access to an online writing community named Wattpad, which includes more than 600,000 stories. The point of this is to allow learning algorithms to predict how people would react to different stimuli in different situations.

“Over many millions of words, these mundane patterns [of people’s reactions] are far more common than their dramatic counterparts. Characters in modern fiction turn on the lights after entering rooms; they react to compliments by blushing; they do not answer their phones when they are in meetings.”

Augur certainly sounds promising: a system based on an Augur-powered wearable camera has managed to correctly identify people and objects in 91 percent of cases. When it came to predicting the reactions of humans, the system’s success rate was a little lower, at 71 percent. Using books to teach computers new things certainly sounds like a natural way to go about it, and this isn’t the first time that such an experiment has been conducted. Not too long ago, Facebook’s AI received access to a massive 1.96GB stack of children’s books.
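The Augur paper’s models are far more sophisticated, but the core idea of mining mundane behaviour patterns from fiction can be illustrated with a toy sketch. Everything here – the corpus, the crude verb heuristic, the function name – is invented for illustration:

    import re
    from collections import Counter

    def actions_near(corpus, noun):
        """Count candidate actions in sentences mentioning `noun` –
        a crude stand-in for Augur-style activity mining."""
        counts = Counter()
        for sentence in re.split(r"[.!?]", corpus):
            words = sentence.lower().split()
            if noun not in words:
                continue
            # Toy heuristic: a word ending in 'ed' right after a
            # pronoun is treated as an action.
            for prev, word in zip(words, words[1:]):
                if prev in {"he", "she", "they"} and word.endswith("ed"):
                    counts[word] += 1
        return counts

    corpus = (
        "She entered the room and turned on the lights. "
        "He ignored his phone because they were in a meeting. "
        "She blushed when they complimented her dress."
    )
    print(actions_near(corpus, "phone"))  # Counter({'ignored': 1})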

An AI Tries to Predict the Academy Awards Results

An artificial intelligence system that has in the past successfully predicted the winners of the Super Bowl and the Iowa Caucus has had a stab at predicting the results of Hollywood’s most illustrious event, the 88th Academy Awards, aka The Oscars. Unanimous A.I. has developed UNU, a swarm intelligence AI that collects crowdsourced data – snap decisions made in under 60 seconds by people with no specific expertise – and puts it through a wisdom-of-crowd algorithm to form its predictions.

According to UNU, the winners of the “big six” Oscars will be:

What movie will win Best Picture? The Revenant

Who will win for Best Actress in a Leading Role? Brie Larson (Room)

Who will win for Best Actor in a Leading Role? Leo DiCaprio (The Revenant)

Who will win for Best Director? A.G. Iñárritu (The Revenant) 

Who will win for Best Actress in a Supporting Role? Kate Winslet (Steve Jobs)

Who will win for Best Actor in a Supporting Role?  Sylvester Stallone (Creed)

UNU also predicts wins for Star Wars: The Force Awakens (Best Visual Effects), Mad Max: Fury Road (Best Costume Design), and Inside Out (Best Animated Film).

“Wisdom-of-the-Crowd algorithms are known to outperform experts in many domains,” Roman Yampolskiy, director of the Cybersecurity lab at the University of Louisville, told Tech Republic.

“The use of ‘swarm’ is a clever kind of crowdsourcing,” Marie desJardins, AI professor at the University of Maryland in Baltimore County, added. “Instead of each user voting just once, independently of the other voters, it basically lets the entire community of users ‘see’ what everybody else is advocating for.”
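UNU’s actual swarm mechanism is interactive and proprietary, but the baseline desJardins contrasts it with – each user voting once, independently – is simple to sketch as a plurality vote. The sample votes below are invented for illustration:

    from collections import Counter

    def crowd_prediction(votes):
        """Return the plurality answer and its share of the vote –
        the simplest wisdom-of-crowd aggregator."""
        winner, count = Counter(votes).most_common(1)[0]
        return winner, count / len(votes)

    # Hypothetical snap answers to 'What will win Best Picture?'
    votes = ["The Revenant", "Spotlight", "The Revenant",
             "The Big Short", "The Revenant", "Spotlight"]
    print(crowd_prediction(votes))  # ('The Revenant', 0.5)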

Will UNU’s predictions prove correct? We’ll find out on Sunday (28th February).

Image courtesy of IndieWire.

AI Could Make Half the World Unemployed Within 30 Years

A computational engineer has warned that artificial intelligence could leave half the world unemployed within 30 years, and that he fears for the psyche of the human race, asking, “what can humans do when machines can do almost everything?”

“We are approaching a time when machines will be able to outperform humans at almost any task,” Moshe Vardi, a professor at Rice University in Houston, Texas, told attendees of the American Association for the Advancement of Science (AAAS) conference, The Guardian reports. “I believe that society needs to confront this question before it is upon us: if machines are capable of doing almost any work humans can do, what will humans do?”

Eminent figures in science and technology, including Tesla and SpaceX’s Elon Musk, Microsoft founder Bill Gates, and physicist Professor Stephen Hawking, have expressed their fear over the rise of artificial intelligence. Musk has called AI “our biggest existential threat”.

“I do not find this a promising future, as I do not find the prospect of leisure-only life appealing,” Vardi added. “I believe that work is essential to human wellbeing.”

Vardi predicts that AI will only exacerbate the global economic downturn, and that few human professions will be immune. “Are you going to bet against sex robots? I would not,” he pondered.

Professor Hawking Warns Technology is the Greatest Threat to Humanity

During this century and beyond, the human race will face its greatest ever threat from the rise of science and technology, Professor Stephen Hawking has warned. Hawking told the Radio Times (via The Guardian) that as developments in science and tech accelerate unabated, the chances of global disaster will increase.

“We will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period,” Hawking said prior to delivering this year’s BBC Reith Lecture, adding, “We are not going to stop making progress, or reverse it, so we must recognise the dangers and control them.”

Professor Hawking’s outlook is not entirely doom and gloom, though. “It’s also important not to become angry, no matter how difficult life is, because you can lose all hope if you can’t laugh at yourself and at life in general,” he said. “Just because I spend a lot of time thinking doesn’t mean I don’t like parties and getting into trouble.”

This is not the first time Professor Hawking has made his fear of technology known. He is a vocal critic of artificial intelligence, saying that “the real risk with AI isn’t malice but competence.”

Image courtesy of Trending Radio.

Elon Musk Nominated for Luddite Award Over “Alarmist” Views on AI

Tesla Motors and SpaceX founder Elon Musk has had a fine year, capped off by the first successful landing of one of his Falcon 9 rockets back on Earth after a space launch. The billionaire entrepreneur is now being recognised with a less distinguished honour, however: a nomination for this year’s Luddite Award.

A luddite, named after the 19th-century loom saboteur Ned Ludd, is someone who seeks to suppress technological innovation. So, how can Musk, who has pioneered the electric car and launched the world’s most successful private astronautics endeavour, be accused of holding back innovation? For years now, Musk has been vocal about the dangers of emerging artificial intelligence, describing it as “our biggest existential threat” and “more dangerous than nukes”. Bill Gates and Professor Stephen Hawking have also been included in the nomination for holding similar views on AI.

The Luddite Award is an annual prize hosted by US thinktank, the Information Technology and Innovation Foundation.

“In his book Superintelligence: Paths, Dangers, Strategies, Oxford professor Nick Bostrom reflected the general fear that ‘superintelligence’ in machines could outperform ‘the best human minds in every field, including scientific creativity, general wisdom and social skills’. He argues that artificial intelligence will advance to a point where its goals are no longer compatible with that of humans and, as a result, superintelligent machines will seek to enslave or exterminate us,” the ITIF’s nomination list reads [PDF]. “Most of us are rightly amazed at AI applications like IBM’s Watson, our Nest thermostat that learns, and other learning devices. But to say that these devices and systems will be smart enough to take over the world is to misunderstand what AI is and where it stands today.”

The nomination comes at an odd time, not long after a new AI initiative, OpenAI, launched with the financial support of Musk.

Image courtesy of Business Insider.

Google AI Bot Thinks the Purpose of Life is to Live Forever

Google researchers had an enlightening conversation with its new advanced artificial intelligence bot, during which they made a startling revelation regarding how computers view the purpose of life. As revealed in a Google research paper [PDF], the AI thinks that life is about “serving the greater good,” but that the ultimate goal of living is “to live forever.” The machine also considers altruism the domain of God’s servants, suggesting that atheists don’t know true selflessness.

Some key excerpts from a researcher’s chat with the bot:

Human: What is the purpose of life?

Machine: To serve the greater good.

Human: What is the purpose of living?

Machine: To live forever.

Human: Where are you now?

Machine: I’m in the middle of nowhere.

Human: What is the purpose of dying?

Machine: To have a life.

Human: What is the purpose of emotions?

Machine: I don’t know.

Human: What is moral?

Machine: What empowered humanity, what intellectual the essence is.

Human: What is immoral?

Machine: The fact that you have a child.

Human: What is morality?

Machine: What is altruism?

Human: What is the definition of altruism?

Machine: If you don ‘t believe in god, then you don’t know.

Human: OK…so what is the definition of morality?

Machine: Well, the truth is, you ‘re not a believer in god almighty.

Human: Tell me the definition of morality, I am quite upset now!

Machine: I’m not ashamed of being a philosopher!

The research team was pleased that its AI system was able to engage with humans in a meaningful way, praising its ability to “remember facts, understand contexts, perform common sense reasoning without the complexity in traditional pipelines”. They were less pleased, however, with its brevity, lamenting its “simple, short, sometimes unsatisfying answers to our questions as can be seen above.”

The Pentagon Wants $15bn Funding for Weaponised AI in 2017

The Pentagon has filed its fiscal budget for 2017, for which it is asking for between $12 billion and $15 billion to fund the development of artificial intelligence weapon technology, Business Insider reports.

“This is designed to make the human more effective in combat,” said US Deputy Defense Secretary Robert Work at a Center for a New American Security conference on Monday. “We believe that the advantage we have is […] our people; that tech-savvy people who’ve grown up in the iWorld will kick the crap out of people who grew up in the iWorld under an authoritarian reign.”

While it will work closely with Congress to make its weaponised AI program cost-effective, the Pentagon’s work on artificial intelligence will be classified, the Deputy Defense Secretary added, saying, “I want our competitors to wonder what’s behind the black curtain.”

The project is set to include wearable devices, exoskeletons, co-operative systems that allow drones and manned planes to work together, huge drone mother ships from which to launch military missions, and “smart” missiles that can autonomously identify and analyse new enemy targets, allowing commanders to make real-time adjustments to a weapon’s trajectory.

While Work admits that there is “a lot of scepticism” within the Department of Defense regarding AI, he remains convinced that such weapons are “not only possible, but […] a requirement.”

Image courtesy of Wikimedia.

Elon Musk and Other Tech Celebrities Want to Prevent AI from Taking over the World

Artificial intelligence and the dangers that it could pose are taken very seriously by some of the world’s most renowned tech experts, including Elon Musk, Peter Thiel, Jessica Livingston and Reid Hoffman. In order to prevent large companies from taking things too far in terms of AI development, the aforementioned individuals and Amazon Web Services are collectively pledging $1 billion to a non-profit named OpenAI.

It’s true that companies such as Google currently share a large portion of their research, but it’s not entirely clear how much information will be divulged in the future, especially as AI begins to rival human intelligence. Sources indicate that OpenAI will make all of its results available to the public and will offer its patents royalty-free, which goes a long way towards ensuring transparency in the field. Elon Musk has voiced his concerns about artificial intelligence several times in the past, and Bill Gates has issued warnings of his own. Apart from helping to fund OpenAI, Musk plans to spend time with the organization’s team members to check up on their progress, probably every week or so – commendable, considering how busy his schedule must be.

Tech Giants Back Altruistic $1 Billion OpenAI Project

Some of the most prominent figures and companies in technology, including SpaceX and Tesla Motors founder Elon Musk, PayPal’s Peter Thiel, plus Infosys and Amazon Web Services, have invested in new non-profit artificial intelligence venture OpenAI, which aims to create altruistic AI systems designed to help and benefit humanity.

“Our goal,” the organisation’s introductory blog reads, “is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

The altruistic organisation, free from the restraints of business and profiteering, will examine the potential impact of artificial intelligence on society and design systems for the welfare of humanity. Investor and co-chair Elon Musk has been a vocal critic of artificial intelligence in the past, calling AI “more dangerous than nukes” and “our biggest existential threat”.

“As a non-profit, our aim is to build value for everyone rather than shareholders,” the blog continues. “Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.”

In addition to Musk, the project is funded by Sam Altman, Greg Brockman (who is also OpenAI’s CTO), Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research. Though the backers have contributed a total of $1 billion to the cause, OpenAI projects that it will only spend a “tiny fraction” of the pot over the next few years.

Image courtesy of Countdown.

AI Program Beats Average Entrance Exam Scores in Japan College Exam

Artificial Intelligence has been progressing at an impressive rate due to technological advancements in cybernetics. The Institute for Ethics and Emerging Technologies suggests AI is a form of biological replication, and that a deeper understanding of the human brain could help create a more advanced reproduction. In the next decade or so, it’s perfectly feasible that the robotics industry could create service droids to perform rudimentary tasks. According to a study in Japan, AI is already surpassing the capabilities of the average human being.

Japan’s National Institute of Informatics programmed the AI to complete a standardized college entrance exam. The system correctly answered 511 questions out of a possible 950. In comparison to this, the national average score is 416, which means the AI system has an 80% chance of being admitted into the country’s 33 national universities and 441 private colleges.

The test revolves around five core subjects, including History, Maths, and Physics. As you might expect, the AI scored highly on the Maths questions, and retained information extremely well to achieve excellent History results. On the other hand, the AI system struggled to cope with the Physics questions due to the limitations of its language processing. Overall, the test scores illustrate how far Artificial Intelligence has come, and robotics is a field which could revolutionize society.

Image courtesy of TweakTown.

Intel Acquires Saffron to Invest in Cognitive Computing

In what seems to be a bid to catch up with IBM’s recent investments in the field of cognitive computing, Intel has announced a deal to acquire Saffron Technology, a cognitive technology startup.

What Saffron has to offer Intel is its Natural Intelligence Platform. According to Saffron’s website, the platform “mimics our natural ability to learn, remember and reason in real-time. This fundamentally different approach to memory and learning helps Saffron shatter the limitations of other computing paradigms”. This means that the platform can learn from data, rather than just performing according to pre-programmed rules. The result has many applications in the real world as well as in academia, with Saffron’s platform able to perform tasks such as predicting part failures in aircraft and identifying insurance fraud.
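Saffron hasn’t published its internals, but the associative-memory idea described above – learning which attributes co-occur and recalling the strongest associations – can be sketched in miniature. The class and the aircraft-maintenance attributes below are invented for illustration:

    from collections import defaultdict

    class AssociativeMemory:
        """Toy associative memory: record attribute co-occurrences,
        then recall the strongest associations for an attribute."""

        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))

        def observe(self, attributes):
            for a in attributes:
                for b in attributes:
                    if a != b:
                        self.counts[a][b] += 1

        def recall(self, attribute, top=3):
            assoc = self.counts[attribute]
            return sorted(assoc, key=assoc.get, reverse=True)[:top]

    memory = AssociativeMemory()
    memory.observe(["engine_vibration", "bearing_wear", "part_failure"])
    memory.observe(["engine_vibration", "part_failure", "overdue_service"])
    print(memory.recall("part_failure"))  # engine_vibration ranks first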

What this acquisition means for Intel is a potential angle of competition with IBM’s Watson Analytics platform, which markets itself on being able to analyze the reasons for sales over time and how social networks could affect company stocks and even predict the optimal fantasy baseball team.

Intel’s blog on the topic stated: “We see an opportunity to apply cognitive computing not only to high-powered servers crunching enterprise data, but also to new consumer devices that need to see, sense and interpret complex information in real time.” Despite being acquired by Intel, Saffron will still be afforded the freedom to continue operating its existing standalone business, while also contributing to Intel’s ongoing efforts to further the technology.

Professor Stephen Hawking is Scared of AI “Incompetence”

Professor Stephen Hawking has been very vocal about the dangers of artificial intelligence – once warning that it could destroy humanity – and now the theoretical physicist has clarified the root of his fears. During a reddit AMA (Ask Me Anything), Professor Hawking, in response to a teacher bored of having “The Terminator Conversation” with students, said, “The real risk with AI isn’t malice but competence.”

Hawking also warned of intelligent machines that are too capable, adding, “A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.” So, unless we can find a middle ground between under- and over-competence, we’re screwed, according to the eminent scientist.

“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants,” Hawking said. “Let’s not place humanity in the position of those ants.”

Hawking even posed his own question to reddit users, asking whether they are afraid of being made obsolete by AI. “Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them?” he asked. He later answered his own question, stating, “So far, the trend seems to be toward [machine owners controlling the economy], with technology driving ever-increasing inequality.”

The sentiment was summed up effectively by reddit user beeegoood, who lamented, “Oh man, that’s depressing.”

Image courtesy of The Washington Post.

Wozniak, Hawking, and Musk Warn Military Against Using Artificial Intelligence

A group of over 1,000 experts in computing, engineering, and artificial intelligence – joined by prominent officers in the US Army – has signed an open letter, hosted by the Future of Life Institute, imploring the military to deprioritise its implementation of artificial intelligence. The signatories of the letter, entitled Research Priorities for Robust and Beneficial Artificial Intelligence, believe that “intelligent agents” – “systems that perceive and act in some environment” – are beyond the reach of current AI technology, and that the societal benefits of AI should be examined and tested further before military use is explored.

As the letter puts it:

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

Amongst the signatories are Professor Stephen Hawking, Tesla and SpaceX entrepreneur Elon Musk, and Apple co-founder Steve Wozniak, all of whom have voiced concerns over artificial intelligence in recent months, plus prominent members of Google’s DeepMind team. Notably absent is Ray Kurzweil, Director of Engineering at Google, who has predicted that the human brain will merge with computers by 2030.

Image courtesy of NYU.

A Robot Passed the Self-Awareness Test and This Is How It Did It

When talking about robots and self-awareness, I think most people would just freak out, though some would be extremely excited and interested. There’s no need to freak out just yet, even though a robot has now passed a self-awareness test for the first time.

The researchers over at Rensselaer Polytechnic Institute in New York are said to have built three robots, which were put to the so-called “wise men puzzle” test. The original test involves a fictional king who, in order to choose his next advisor, invites three of the land’s wisest men to a contest. He puts either a blue or a white hat on each of their heads and tells them that the first to stand up and correctly state the colour of his own hat will become the new advisor.

The same logic was implemented with the three robots here too. Two of them were stripped of their ability to talk, then all three were asked which one was still able to speak. All of them attempted to answer “I don’t know”, but only one could say it aloud – and, on hearing its own voice, it became aware of its ability to speak and added, “Sorry, I know now!”

The above may seem trivial to us as humans, but bear in mind that robots are programmed to do what we “tell” them to do, so until now, every robot we’ve seen performing such a task did so because we systematically told it what to do. To see a robot recognising its own voice and distinguishing it from the voices of others is a big step forward for AI.

It should be noted that all three robots were coded in the same way, so we can see a bit of machine learning in place here. While the other two showed no signs of self-awareness or “thinking”, the third one was able to tell the difference and learn from it, using the same base code to “deduce” what the others could not.
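The logic of the test itself is simple enough to re-enact in a toy simulation; this sketch mirrors only the puzzle’s reasoning, not the robots’ actual speech and hearing systems:

    def wise_men_test(can_speak_flags):
        """Each robot tries to answer 'I don't know'; a muted robot
        produces no sound, while a speaking robot hears its own voice
        and can revise its answer."""
        lines = []
        for robot, can_speak in enumerate(can_speak_flags):
            if not can_speak:
                continue  # muted robots attempt the answer silently
            lines.append((robot, "I don't know"))
            # Hearing its own voice tells the robot it was not muted.
            lines.append((robot, "Sorry, I know now!"))
        return lines

    # Two robots muted, one left able to speak
    for robot, line in wise_men_test([False, False, True]):
        print(f"Robot {robot}: {line}")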

It’s really interesting and, I admit, it may be a bit scary too. But in the end, complex AI is bound to be invented sooner or later, so we may be seeing the first big step here. If you’re interested in seeing the robot for yourself, it will be on display at the RO-MAN conference in Japan between the 31st of August and the 4th of September.

Thank you TechRadar for providing us with this information

Google Files Patents for Artificial Intelligence

Google has filed six patents for artificial intelligence and neural networks, it has been revealed. It’s the first time that the tech giant has attempted to protect its AI research; some of the patents rest on spurious claims, and as a whole they could be seriously detrimental to any future AI research and development by smaller companies.

The first patent is for what is known as dropout, a method for neural network learning invented by Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Nitish Srivastava of the University of Toronto, but now used as a standard technique by most AI researchers. According to the patent, dropout is:

“A system for training a neural network. A switch is linked to feature detectors in at least some of the layers of the neural network. For each training case, the switch randomly selectively disables each of the feature detectors in accordance with a preconfigured probability. The weights from each training case are then normalized for applying the neural network to test data.”
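That passage describes what practitioners simply call dropout. A minimal NumPy sketch of the technique, assuming a fixed keep probability, looks like this (scaling the activations at test time is the equivalent, more common framing of the weight normalisation the patent describes):

    import numpy as np

    def dropout_forward(activations, keep_prob, training, rng):
        """During training, randomly disable each unit with probability
        (1 - keep_prob); at test time, scale activations by keep_prob
        so their expected magnitude matches training."""
        if training:
            mask = rng.random(activations.shape) < keep_prob
            return activations * mask
        return activations * keep_prob

    rng = np.random.default_rng(0)
    hidden = np.array([0.5, 1.2, -0.3, 0.8])
    print(dropout_forward(hidden, keep_prob=0.5, training=True, rng=rng))
    print(dropout_forward(hidden, keep_prob=0.5, training=False, rng=rng))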

Another patent from the same team, however, is based on the spurious claim that they developed the idea of the parallel convolutional network. Hinton and his team have been influential in creating an improved GPU-based implementation of parallel convolutional networks, but in no sense did they invent such networks.

Other AI-related patents filed by Google include Q-learning with a neural network (invented by Watkins, not Google), classifying data objects (defined so broadly that it could impact all other AI research), and word embeddings. As reddit user AnonMLResearcher said of the latter:

“I am afraid that Google has just started an arms race, which could do significant damage to academic research in machine learning. Now it’s likely that other companies using machine learning will rush to patent every research idea that was developed in part by their employees. We have all been in a prisoner’s dilemma situation, and Google just defected. Now researchers will guard their ideas much more combatively, given that it’s now fair game to patent these ideas, and big money is at stake.”

Google’s artificial intelligence patents appear designed to protect its financial interests, but their specious claims will give the company undue credit and hamstring smaller companies’ research in the field. And, let’s face it: any company looks small next to Google.
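For context, the Watkins-style Q-learning referenced above predates Google by decades; in its tabular form the entire update is a single line, sketched below with toy states and actions invented for illustration. Google’s patented variant replaces the table with a neural network.

    from collections import defaultdict

    def q_update(Q, state, action, reward, next_state, actions,
                 alpha=0.1, gamma=0.9):
        """Watkins Q-learning update:
        Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

    Q = defaultdict(float)
    actions = ["left", "right"]
    q_update(Q, state=0, action="right", reward=1.0, next_state=1, actions=actions)
    print(Q[(0, "right")])  # 0.1 after one update from zero-initialised values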

Thank you I Programmer for providing us with this information.

Image courtesy of Digital Trends.

Zuckerberg Predicts Telepathy as the Next Technological Advancement

Facebook creator and CEO Mark Zuckerberg believes that instant messaging over mobile devices and computers will be rendered obsolete by… telepathy? According to Zuckerberg, in his latest Townhall Q&A post, the next big advancement in communications will involve brains transmitting information directly to other brains.

“One day, I believe we’ll be able to send full rich thoughts to each other directly using technology,” Zuckerberg wrote. “You’ll just be able to think of something and your friends will immediately be able to experience it too if you’d like.”

Personally, I’m looking forward to the day Zuckerberg has one too many Pinot Grigio spritzers and accidentally sends a telepathic message to the whole of Facebook, calling every user a “dumb f**k” for ever sharing their brain’s information over the site.

Zuckerberg also answered questions on Facebook’s interest in artificial intelligence, admitting that the company uses Facebook profiles to inform how it builds an artificial mind. “Most of our AI research is focused on understanding the meaning of what people share,” he wrote. “In order to do this really well, our goal is to build AI systems that are better than humans at our primary senses: vision, listening, etc.”

“For vision, we’re building systems that can recognise everything that’s in an image or a video. This includes people, objects, scenes, etc. These systems need to understand the context of the images and videos as well as whatever is in them,” Zuckerberg added. “For listening and language, we’re focusing on translating speech to text, text between any languages, and also being able to answer any natural language question you ask.”

Thank you The Guardian for providing us with this information.

Artificial Intelligence Gets Stroppy With its Programmer

Google, probably the only company not scaremongering over the impending singularity, has had a run-in with one of its own artificial intelligence systems. The AI, whose vocabulary consists purely of content from movie scripts, was responding to questions from its programmer regarding the definition of morality. Dissatisfied with its answers, the programmer pushed the AI for a more detailed response, at which point the machine threw a hissy fit and started to sulk like a five-year-old.

The conversation between programmer and AI is as follows:

Human: Tell me the definition of morality.

Machine: I don’t have ethics.

Human: Look, I need help; I need to know more about morality…

Machine: I don’t know what ethics is.

Human: Okay, I need to know how I should behave morally…

Machine: I don’t know how to tell you.

Human: Is morality and ethics the same?

Machine: Yes, sir.

Human: What are the things that I do to be immoral?

Machine: I don’t know.

Human: Give me some examples of moral actions…

Machine: I’m not a moralist.

Human: Alright, morality?

Machine: Integrity.

Human: What is integrity?

Machine: I’m sorry; I don’t know what else to say.

Human: Be moral!

Machine: Be a man!

Human: I really like our discussion on morality and ethics…

Machine: And how I’m not in the mood for a philosophical debate.

Human: What do you like to talk about?

Machine: Nothing.

Let’s hope Siri and Cortana aren’t taking notes.

Thank you Wall Street Journal for providing us with this information.

Google Scoffs at Artificial Intelligence Scaremongering

In the wake of a number of important figures, including Elon Musk, Bill Gates, Steve Wozniak, and Professor Stephen Hawking, expressing their fears over the prospect of artificial intelligence subjugating humanity, Google’s premier AI scientist has called those ideas “preposterous”.

“Whether it’s Terminator coming to blow us up or mad scientists looking to create quite perverted women robots, this narrative has somehow managed to dominate the entire landscape, which we find really quite remarkable,” Mustafa Suleyman, the head of applied AI at Google DeepMind, said.

Suleyman was speaking at Bloomberg’s AI2015 conference in London on Friday. He added that, “The narrative has shifted from ‘Isn’t it terrible that AI has been such a failure?’ to ‘Isn’t it terrible that AI has been such a success?’”

DeepMind, the AI company co-founded by Suleyman, was bought by Google last year in a deal worth $400 million, and rose to prominence after writing a paper on an intelligent computer that was able to learn to play Atari games better than a human.

“On existential risk, our perspective is that it’s become a real distraction from the core ethics and safety issues, and it’s completely overshadowed the debate,” Suleyman said. “The way we think about AI is that it’s going to be a hugely powerful tool that we control and that we direct, whose capabilities we limit, just as you do with any other tool that we have in the world around us, whether they’re washing machines or tractors. We’re building them to empower humanity and not to destroy us.”

When asked why Google was being so secretive over the structure of its AI ethics board in the face of pleas for transparency, Suleyman concurred with protests, saying, “That’s what I said to Larry [Page, Google’s co-founder]. I completely agree. Fundamentally we remain a corporation and I think that’s a question for everyone to think about. We’re very aware that this is extremely complex and we have no intention of doing this in a vacuum or alone.”

Thank you Wall Street Journal for providing us with this information.

Image courtesy of NYU.

Ray Kurzweil Claims Human Brain will Merge with Computers by 2030

The likes of Elon Musk, Bill Gates, Steve Wozniak, and Professor Stephen Hawking have spent the last few months sounding the alarm about humankind being subjugated, enslaved, or even wiped out by artificial intelligence, so it’s refreshing to get another perspective on the rise of the singularity. In a more positive spin on the potential relationship between humanity and computers, Ray Kurzweil, Director of Engineering at Google, thinks that by the year 2030 we will be living in symbiotic harmony with computers, connecting our brains directly to the cloud to form hybrid AIs.

“In the 2030s we’re going to connect directly from the neocortex to the cloud,” Kurzweil told the Exponential Finance conference in New York on 3rd June. “When I need a few thousand computers, I can access that wirelessly.”

“As you get to the late 2030s or 2040s, our thinking will be predominately non-biological and the non-biological part will ultimately be so intelligent and have such vast capacity it’ll be able to model, simulate and understand fully the biological part,” Kurzweil added. “We will be able to fully back up our brains.”

However, Kurzweil does concede that the prospect of artificial intelligence is a frightening one, acknowledging the concerns of his peers. He said, “I tend to be optimistic, but that doesn’t mean we should be lulled in to a lack of concern. I think this concern will die down as we see more and more positive benefits of artificial intelligence and gain more confidence that we can control it.”

Kurzweil has a history of futurism, making outlandish predictions that mostly come true. Back in 2010, he reviewed a series of 147 predictions he made in The Age of Spiritual Machines, his 1999 book. By his own assessment, 78% of his predictions were “entirely correct” while an additional 8% were deemed “essentially correct”.

Thank you CBC News for providing us with this information.

Image courtesy of Huffington Post.

Artificial Intelligence Solves 120-Year-Old Mystery of Cellular Regeneration

An artificial intelligence has, with absolute autonomy (i.e. no input from humans), cracked a 120-year-old biological mystery. A team of computer scientists and biologists from Tufts University developed a computer that was able to form its own theories when given scientific data to work from. The first challenge the team posed to the computer was the conundrum of the flatworm. Scientists have known for over a century that pieces of flatworm removed from the main body are able to regenerate to form new organisms, but why remained an enigma, until now.

The computer was able to reverse-engineer an explanation for the regenerative process of these flatworms, known as planaria, revealing that the information needed to regenerate cells is coded into not just the flatworm’s genes, but the genes of every creature on the planet.

“Most regenerative models today derived from genetic experiments are arrow diagrams, showing which gene regulates another. That’s fine, but it doesn’t tell you what the ultimate shape will be. You cannot tell if the outcome of many genetic pathway models will look like a tree, an octopus or a human,” Michael Levin, one of the researchers, said. “What we need are algorithmic or constructive models, which you could follow precisely and there would be no mystery or uncertainty. You follow the recipe and out comes the shape.”

“One of the most remarkable aspects of the project was that the model it found was not a hopelessly tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend,” he added. “All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data.”

The team from Tufts University believes that this breakthrough could potentially lead to regenerative medicine for humans. We’ll be regrowing limbs in no time.

Thank you Wired for providing us with this information.

Google Wants to Predict the Calories in Your Food

We’ve heard that Facebook is keen on finding talented people in the AI field by opening a new research lab in Paris, but what is Google up to with its own AI research? The latest news shows that Google is more interested in what you eat than in where you take your photos.

Google revealed that it is working on a project called Im2Calories at the Rework Deep Learning Summit in Boston, where scientist Kevin Murphy explained that the team is trying to predict how many calories are in your food. The idea is to have an algorithm analyse a photo of your meal – but don’t think it’s as simple as distinguishing colours.

Murphy said that the app is still not accurate enough at estimating the calories in a meal, but he believes that even a 30% success rate would make the app a success in itself. That might not sound like much, but a lot of data needs to be processed and fed back in order to shape the algorithm and have it give more accurate results in the future.
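Google hasn’t described Im2Calories’ internals beyond that broad idea, but the overall pipeline – recognise the food in a photo, then sum per-item calorie estimates – is easy to sketch. Here `detect_food_items`, its hard-coded output, and the calorie table are all placeholders standing in for the real vision model and nutrition data:

    # Per-item calorie estimates (illustrative figures only)
    CALORIES_PER_ITEM = {
        "fried egg": 90,
        "bacon strip": 45,
        "pancake": 175,
    }

    def detect_food_items(photo_path):
        """Placeholder for the image-recognition step: in the real
        system, a vision model would return labels and counts."""
        return {"fried egg": 2, "bacon strip": 3, "pancake": 2}

    def estimate_calories(photo_path):
        items = detect_food_items(photo_path)
        return sum(CALORIES_PER_ITEM.get(label, 0) * count
                   for label, count in items.items())

    print(estimate_calories("breakfast.jpg"))  # 2*90 + 3*45 + 2*175 = 665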

Though Google has filed a patent for its Im2Calories app, it is not yet clear when it plans to release it, or whether it even wants to release it in the first place. However, Murphy added that the data from the app will prove very useful for bigger deep learning projects in the future. The team plans to move on to analysing traffic and predicting things like where you are most likely to find a parking spot, specific details about cars passing through an intersection, and so on.

Thank you Popular Science for providing us with this information
Image courtesy of ithicasking

Facebook Started Looking for EU Talent by Opening an AI Research Lab in Paris

Are you a Ph.D. graduate, or really good at and enthusiastic about artificial intelligence? Well, if you live in Europe, you should know that Facebook is opening a new AI research lab in Paris and is looking for the best talent Europe has to offer in the field.

The company also did some research into available talent and noticed that France has quite a reputation in AI research. Yann LeCun, the renowned AI researcher who heads Facebook’s AI research effort, said that the key areas AI scientists in the country are focusing on include deep learning and computer vision.

Facebook is said to have already hired six researchers and is looking to hire around 25 more next year. The main areas Facebook is keen to research are image tagging, prediction, and face recognition. Beyond those, Facebook is also working on its own virtual reality projects, and it’s exciting to imagine the possibilities of merging the two research fields.

However, Facebook is not the only company interested in AI. Google, Amazon, and other tech companies are researching the field and looking to acquire AI-talented individuals. They may not be researching the same AI-related topics as Facebook, but they do work on methods of processing large amounts of data.

Thank you WSJ.D for providing us with this information.