Yuri Milner and Stephen Hawking Unveil $100m Voyage to the Stars

Eminent astrophysicist Professor Stephen Hawking has joined forces with science investor and philanthropist Yuri Milner to launch a revolutionary $100m “moonshot” which aims to send a miniature spacecraft hurtling across the galaxy, propelled by lasers. Facebook CEO Mark Zuckerberg has also joined the board of the organisation, known as Breakthrough Starshot.

“Breakthrough Starshot is a $100 million research and engineering program aiming to demonstrate proof of concept for light-propelled nanocrafts,” the official press release reads. “These could fly at 20 percent of light speed and capture images of possible planets and other scientific data in our nearest star system, Alpha Centauri, just over 20 years after their launch.”

Breakthrough Starshot aims to send nanocrafts – tiny robotic spacecraft with gram-scale mass and miniature lightsails, propelled by a 100 billion-watt laser – on a twenty-year journey, at one-fifth the speed of light, to the Alpha Centauri star system, 25 trillion miles (4.37 light years) away.
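The quoted timeline holds up to simple arithmetic: at one-fifth of light speed, covering 4.37 light years takes just under 22 years. A minimal sketch of that back-of-the-envelope check (the 0.2c cruise speed and 4.37-light-year distance come from the article; a constant speed with no acceleration or deceleration phase is a simplifying assumption):

```python
# Back-of-the-envelope travel time for Breakthrough Starshot's nanocraft.
# Assumes a constant cruise speed with no acceleration/deceleration phase.

DISTANCE_LY = 4.37  # distance to Alpha Centauri, in light years (from the article)
SPEED_C = 0.2       # cruise speed as a fraction of the speed of light

# A light year is the distance light covers in one year, so travel time in
# years is simply distance (in ly) divided by speed (as a fraction of c).
travel_years = DISTANCE_LY / SPEED_C

print(f"Travel time at {SPEED_C:.0%} of light speed: {travel_years:.2f} years")
```

This gives 21.85 years, consistent with the press release's "just over 20 years after their launch".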

The announcement coincides with the 55th anniversary of the first orbit of the Earth by Russian cosmonaut Yuri Gagarin, after whom Milner is named.

“The human story is one of great leaps,” Milner said. “55 years ago today, Yuri Gagarin became the first human in space. Today, we are preparing for the next great leap – to the stars.”

“Earth is a wonderful place, but it might not last forever,” added Stephen Hawking. “Sooner or later, we must look to the stars. Breakthrough Starshot is a very exciting first step on that journey.”

“We take inspiration from Vostok, Voyager, Apollo and the other great missions,” said Pete Worden, former director of NASA Ames Research Center and advisor to Breakthrough Starshot. “It’s time to open the era of interstellar flight, but we need to keep our feet on the ground to achieve this.”

Image courtesy of The Guardian.

AI Could Make Half the World Unemployed Within 30 Years

A computational engineer has warned that artificial intelligence could leave half the world unemployed within 30 years, and that he fears for the psyche of the human race, asking, “what can humans do when machines can do almost everything?”

“We are approaching a time when machines will be able to outperform humans at almost any task,” Moshe Vardi, a professor at Rice University in Houston, Texas, told attendees of the American Association for the Advancement of Science (AAAS) conference, The Guardian reports. “I believe that society needs to confront this question before it is upon us: if machines are capable of doing almost any work humans can do, what will humans do?”

Eminent figures in science and technology, including Tesla and SpaceX’s Elon Musk, Microsoft founder Bill Gates, and physicist Professor Stephen Hawking, have expressed their fear over the rise of artificial intelligence. Musk has called AI “our biggest existential threat”.

“I do not find this a promising future, as I do not find the prospect of leisure-only life appealing,” Vardi added. “I believe that work is essential to human wellbeing.”

Vardi predicts that AI will only exacerbate the global economic downturn, and that few human professions will be immune. “Are you going to bet against sex robots? I would not,” he pondered.

Professor Hawking Warns Technology is the Greatest Threat to Humanity

During this century and beyond, the rise of science and technology will pose the greatest threat the human race has ever faced, Professor Stephen Hawking has warned. Hawking told the Radio Times (via The Guardian) that as developments in science and technology accelerate unabated, the chances of global disaster will increase.

“We will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period,” Hawking said ahead of his lecture at this year’s BBC Reith Lectures, adding, “We are not going to stop making progress, or reverse it, so we must recognise the dangers and control them.”

Professor Hawking’s outlook is not entirely doom and gloom, though. “It’s also important not to become angry, no matter how difficult life is, because you can lose all hope if you can’t laugh at yourself and at life in general,” he said. “Just because I spend a lot of time thinking doesn’t mean I don’t like parties and getting into trouble.”

This is not the first time Professor Hawking has made his fear of technology known. He is a vocal critic of artificial intelligence, saying that “the real risk with AI isn’t malice but competence.”

Image courtesy of Trending Radio.

Professor Stephen Hawking is Scared of AI “Competence”

Professor Stephen Hawking has been very vocal about the dangers of artificial intelligence – once warning that it could destroy humanity – and now the theoretical physicist has clarified the root of his fears. During a reddit AMA (Ask Me Anything), Professor Hawking, in response to a teacher bored of having “The Terminator Conversation” with students, said, “The real risk with AI isn’t malice but competence.”

Hawking explained that the danger lies in intelligent machines that are too good at their jobs, adding, “A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.” So, unless we can ensure an AI’s goals align with our own, we’re screwed, according to the eminent scientist.

“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants,” Hawking said. “Let’s not place humanity in the position of those ants.”

Hawking even posed his own question to reddit users, asking people if they are afraid of being made obsolete by AI. “Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them?” he asked. He later answered his own question, stating, “So far, the trend seems to be toward [machine owners controlling the economy], with technology driving ever-increasing inequality.”

The sentiment was summed up effectively by reddit user beeegoood, who lamented, “Oh man, that’s depressing.”

Image courtesy of The Washington Post.

Wozniak, Hawking, and Musk Warn Military Against Using Artificial Intelligence

A group of more than 1,000 experts in the fields of computing, engineering, and artificial intelligence – including prominent officers in the US Army – have signed an open letter, hosted by the Future of Life Institute, imploring the military to deprioritise its implementation of artificial intelligence. The signatories of the letter, entitled Research Priorities for Robust and Beneficial Artificial Intelligence, argue that “intelligent agents” – “systems that perceive and act in some environment” – are not yet mature enough for safe deployment, and that the social benefits of AI should be examined and tested further before military use is explored.

As the letter puts it:

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

Amongst the signatories are Professor Stephen Hawking, Tesla and SpaceX entrepreneur Elon Musk, and Apple co-founder Steve Wozniak, all of whom have voiced concerns over artificial intelligence in recent months, plus prominent members of Google’s DeepMind team. Notably absent is Ray Kurzweil, Director of Engineering at Google, who has predicted that the human brain will merge with computers by 2030.

Image courtesy of NYU.

Interstellar Travel and Time Travel? Never! Says Killjoy Hawking

The major tropes upon which most science fiction is based, time travel and visiting other worlds, will never be achieved by humankind, according to professional bonfire urinator (and astrophysicist) Professor Stephen Hawking.

In a BBC interview with comedian (and physicist) Dara Ó Briain, Professor Hawking said that travelling back in time is theoretically impossible, and that mankind will not leave the solar system in our lifetime, if ever. Professor Hawking told Ó Briain, “I still believe that time travel to the past is not possible for macroscopic effects,” adding, “You can’t send a message back in time.” The ‘Grandfather Paradox’? Stop wasting your bloody time, Hawking says.

Regarding interstellar travel, he says, “The present breed of humans won’t reach the stars. The distances are too great. The radiation exposure would be too severe.” In an uncharacteristic spirit of optimism, however, Hawking does concede that it could be possible if we were to “genetically engineer humans or send machines.”

Oh, and in case you were wondering, there is no God – “I think the afterlife is a fairy story for people afraid of the dark” – and entering a black hole will kill you – “If you jump in a black hole, you will meet an unpleasant fate” – so, if you were thinking of doing that, don’t.

Ray Kurzweil Claims Human Brain will Merge with Computers by 2030

The likes of Elon Musk, Bill Gates, Steve Wozniak, and Professor Stephen Hawking have been agitating over the last few months about humankind being subjugated, enslaved, or even wiped out by artificial intelligence, so it’s refreshing to get another perspective on the rise of the singularity. In a more positive spin on the potential relationship between humanity and computers, Ray Kurzweil, Director of Engineering at Google, thinks that we will be living in symbiotic harmony with computers, connecting our brains directly to the cloud to form hybrid AIs, by the year 2030.

“In the 2030s we’re going to connect directly from the neocortex to the cloud,” Kurzweil told the Exponential Finance conference in New York on 3rd June. “When I need a few thousand computers, I can access that wirelessly.”

“As you get to the late 2030s or 2040s, our thinking will be predominately non-biological and the non-biological part will ultimately be so intelligent and have such vast capacity it’ll be able to model, simulate and understand fully the biological part,” Kurzweil added. “We will be able to fully back up our brains.”

However, Kurzweil does concede that the prospect of artificial intelligence is a frightening one, acknowledging the concerns of his peers. He said, “I tend to be optimistic, but that doesn’t mean we should be lulled into a lack of concern. I think this concern will die down as we see more and more positive benefits of artificial intelligence and gain more confidence that we can control it.”

Kurzweil has a history of futurism, making outlandish predictions that have mostly come true. Back in 2010, he reviewed the 147 predictions he made in The Age of Spiritual Machines, his 1999 book. By his own assessment, 78% of his predictions were “entirely correct” while a further 8% were deemed “essentially correct”.

Thank you CBC News for providing us with this information.

Image courtesy of Huffington Post.

Steve Wozniak Thinks Future of Artificial Intelligence “is Scary and Very Bad for People”

Steve Wozniak, co-founder of Apple, has thrown his hat into the artificial intelligence scaremongering ring – along with Professor Stephen Hawking, Microsoft founder Bill Gates, and SpaceX and Tesla founder Elon Musk – by claiming that the future of AI is “scary and very bad for people,” presumably because a computer could choose more evocative synonyms than “scary” and “bad” to describe the singularity’s threat to humanity.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people,” Wozniak told the Australian Financial Review. “If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”

Wozniak added, cranking up the hysteria, “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that […] But when I got that thinking in my head about if I’m going to be treated in the future as a pet to these smart machines […] well I’m going to treat my own pet dog really nice.”

In an effort to appease any AIs that might have been listening – “Computers are going to take over from humans, no question,” he claims – Wozniak welcomes our new AI overlords, à la Kent Brockman, saying, “I hope it does come, and we should pursue it because it is about scientific exploring. But in the end we just may have created the species that is above us.”

Wozniak’s recent apocalyptic alarmism marks a U-turn; he had previously dismissed Ray Kurzweil’s prophecies regarding super machines destined to control the world, but now admits that, after seeing the progress for himself, he is a believer. Let’s just hope Woz never stumbles upon the ramblings of David Icke.

Source: Washington Post

Artificial Intelligence Could Destroy Humanity Warns Stephen Hawking

Crush. Kill. Destroy. The singularity – the super-evolution of sentient machines – is coming, according to Professor Stephen Hawking, and we should all be afraid.

“The development of full artificial intelligence could spell the end of the human race,” Professor Hawking said to the BBC. “It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

“We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it,” Hawking warns. This echoes the opinion of Elon Musk, Chief Executive of SpaceX, who voiced his fear during the summer, calling advanced AI “more dangerous than nukes” and “our biggest existential threat”.

Source: BBC

Stephen Hawking’s Speech System Made Open Source

A new speech system, created for famed physicist Professor Stephen Hawking, has been made available free and open source, its creator Intel has announced. The project, called ACAT (Assistive Context Aware Toolkit), has been three years in the making, a collaboration between Hawking and Intel.

Hawking said of the system, at a press conference in London, “We are pushing the boundaries of what is possible with technology, without it I would not be able to speak to you today.” He added, “the development of this system has the potential to greatly improve the lives of disabled people all over the world”.

Hawking, who is almost entirely paralysed by motor neurone disease (a condition related to ALS), praised the efficiency of ACAT, since it has doubled his typing rate – SwiftKey integration means he types 20% fewer characters than before – and improved common tasks by a factor of ten.
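The two quoted gains compound: if SwiftKey prediction cuts the keystrokes needed by 20% and the typing rate itself doubles, the time to produce the same message falls to 0.8 / 2 = 0.4 of what it was, a 2.5× overall speedup. A minimal illustration (the 20% and 2× figures come from the article; treating the two effects as independent and multiplicative is an assumption):

```python
# Illustrative combination of ACAT's quoted gains, assuming the two effects
# are independent and multiply: fewer keystrokes needed AND a faster rate.

KEYSTROKE_REDUCTION = 0.20  # SwiftKey integration: 20% fewer characters typed
RATE_MULTIPLIER = 2.0       # typing rate doubled

# Time per message scales with keystrokes needed, inversely with typing rate.
time_factor = (1 - KEYSTROKE_REDUCTION) / RATE_MULTIPLIER  # 0.8 / 2 = 0.4
overall_speedup = 1 / time_factor                          # 2.5x faster

print(f"Time per message: {time_factor:.0%} of before "
      f"({overall_speedup:.1f}x overall speedup)")
```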

Intel and Hawking hope the open source software will help millions of disabled people to communicate more easily.

Source: Wired