Donald Trump’s antics on social media are infamous at this point, often seeming erratic and unpredictable. Now, we can all look forward to an improvement on Trump thanks to the field of deep learning. DeepDrumpf is a Twitter bot created by a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory that uses deep learning algorithms to generate tweets that are Trumpier than the real deal.
DeepDrumpf’s artificial intelligence platform was trained on hours of Trump transcripts from his numerous speeches and appearances at public events and debates. The results are far from perfect, and many tweets are too nonsensical to pass for Trump himself, but some manage to be hilariously brilliant.
MIT explains that the secret behind DeepDrumpf is that “the bot creates Tweets one letter at a time. For example, if the bot randomly begins its Tweet with the letter ‘M,’ it is somewhat likely to be followed by an ‘A,’ and then a ‘K,’ and so on until the bot types out Trump’s campaign slogan, ‘Make America Great Again.’ It then starts over for the next sentence and repeats the process until it reaches the 140-character limit.” The bot’s creator, postdoc Bradley Hayes, was inspired to create DeepDrumpf by an existing model that can emulate Shakespeare, combined with a recent report on the presidential candidates’ linguistic patterns that found Trump speaks at a third-grade level.
[Romney is] a tool. I want to tell you this. They're probably the last thing we need in a leader, We can't do that.
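The letter-by-letter process MIT describes is what's known as a character-level language model: each new character is sampled based on the characters that came before it. DeepDrumpf uses a deep neural network for this, but the core idea can be sketched with a toy character-level model (the corpus, function names, and context length below are purely illustrative, not DeepDrumpf's actual code):

```python
import random
from collections import defaultdict

def train_char_model(text, order=3):
    """Map each context of `order` characters to the characters seen next in the corpus."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=60, rng=random):
    """Emit one character at a time, each conditioned on the preceding context."""
    order = len(seed)
    out = seed
    while len(out) < length:
        nxt = model.get(out[-order:])
        if not nxt:  # dead end: this context never appeared in training
            break
        out += rng.choice(nxt)
    return out

# A tiny, repetitive "transcript" makes the generation deterministic and easy to follow.
corpus = "make america great again. make america great again. "
model = train_char_model(corpus, order=3)
print(generate(model, "mak", length=40))
```

A real model replaces the lookup table with a neural network that generalizes to contexts it has never seen, which is what lets DeepDrumpf produce novel (if nonsensical) sentences rather than parroting the transcripts.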
DeepDrumpf has even managed to connect with Trump’s real Twitter account. When it does, its artificial intelligence algorithm is seeded with language from the real Trump’s tweet, which increases the chance that its response appears contextually relevant to the original.
@realDonaldTrump They're going to be paying right now, and like, absolutely. I’m really rich. Oh I want to support and have them.
Hayes even envisions a future where he creates accounts for all of the presidential candidates and feeds them tweets from one another, so they can hold their own real-time deep-learning debates. With that in mind, who would you like to see a deep-learning bot created for?
Google is building AI for all kinds of purposes, including tackling the challenging Chinese game of Go. Now, the company has revealed its latest deep-learning program, PlaNet, which is capable of recognizing where an image was taken even when it has not been geotagged.
PlaNet was trained by Google researchers on 90 million geotagged images from around the globe, taken from the internet. This means PlaNet can easily identify locations with obvious landmarks, such as the Eiffel Tower in Paris or Big Ben in London, a simple task for any human familiar with those landmarks. PlaNet goes further, though, setting itself apart with its ability to determine the location of a picture that lacks any clear landmarks; its deep learning techniques let it identify even pictures of roads and houses with a reasonable level of accuracy.
The PlaNet team, led by software engineer Tobias Weyand, challenged the accuracy of the software using 2.3 million geotagged images taken from Flickr while withholding the geotag data from the platform. It managed street-level accuracy 3.6% of the time, city-level accuracy 10.1% of the time, country-level accuracy 28.4% of the time, and continent-level accuracy 48% of the time. That may not sound too impressive, but when Weyand and his team pitted 10 “well-travelled humans” against PlaNet, it beat them by a comfortable margin, winning 28 of the 50 games played with a median localization error of 1,131.7 km compared to the humans’ 2,320.75 km. Weyand attributed PlaNet’s ability to outmatch its human opponents to the larger range of locations it had “visited” as part of its training.
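Under the hood, PlaNet treats geolocation as a classification problem: the globe is carved into cells, and the network predicts which cell a photo belongs to, with the error measured as the distance from the true spot. A toy sketch of that framing (the fixed 10-degree grid and the example coordinates are simplifications for illustration, not Google's actual adaptive partitioning):

```python
import math

def cell_id(lat, lon, cell_deg=10.0):
    """Assign a coordinate to a coarse lat/lon grid cell (toy stand-in for PlaNet's cells)."""
    row = int((lat + 90) // cell_deg)
    col = int((lon + 180) // cell_deg)
    return row, col

def cell_center(cell, cell_deg=10.0):
    """Center of a grid cell; a classifier's prediction resolves to this point."""
    row, col = cell
    return (row * cell_deg - 90 + cell_deg / 2,
            col * cell_deg - 180 + cell_deg / 2)

def haversine_km(a, b):
    """Great-circle distance in km, used to score localization error."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

# "Predicting" a photo's cell and scoring the error against the true location:
eiffel = (48.8584, 2.2945)                   # illustrative ground truth
predicted = cell_center(cell_id(*eiffel))    # a classifier would output this cell
error = haversine_km(predicted, eiffel)
print(f"cell {cell_id(*eiffel)}, error {error:.0f} km")
```

With cells this coarse, even a perfect classification leaves a few hundred kilometres of error, which is why numbers like PlaNet's 1,131.7 km median are best read against the cell granularity rather than as street addresses.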
What the plans are for PlaNet going forward is anyone’s guess; one potential application is locating pictures that were not geotagged at the time they were taken. Either way, it will be interesting to see how the techniques that bring machine learning closer and closer to human ability advance in the future.
Deep learning is redefining what a computer can do with the information it is provided. It is, however, a very compute-intensive task, and it requires specialized hardware to get optimal performance. It is also the technology that will one day make an AI possible. Nvidia’s Tesla M40 is the fastest deep learning accelerator, and it significantly reduces training time. GIGABYTE is the first server maker to have its lineup certified for these new NVIDIA cards. While certification isn’t strictly necessary, it is one of those guidelines you shouldn’t overlook.
Right now you are most likely wondering what deep learning is, and I could go into a lot of detail about its origins and progress, but I doubt anyone would read all that here. Wikipedia’s definition probably sums it up best. In very basic terms, it allows software to draw its own conclusions based on what it already knows.
The Wikipedia definition reads: “Deep learning (deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures, or otherwise composed of multiple non-linear transformations.”
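In concrete terms, “multiple processing layers” with “non-linear transformations” just means stacking simple transformations so that the output of one layer feeds the next. A minimal sketch (the weights and inputs here are arbitrary numbers chosen only to show the structure; a real network learns them from data):

```python
import math

def layer(inputs, weights, biases):
    """One processing layer: weighted sums followed by a non-linear activation (tanh)."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Two stacked layers: the output of one becomes the input of the next,
# which is what makes the learning "deep".
x = [0.5, -1.0]
hidden = layer(x, weights=[[0.8, -0.2], [0.4, 0.9]], biases=[0.1, -0.1])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.0])
print(output)
```

The non-linearity (here `tanh`) is what lets stacked layers model the “high-level abstractions” the definition mentions; stacking purely linear layers would collapse into a single linear transformation.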
NVIDIA’s Tesla M40 is quite an impressive card, with its 3072 CUDA cores, 12GB of GDDR5 memory with 288GB/s of bandwidth, and a single-precision peak performance of 7 TFLOPS. That is from just one card, and we need to keep in mind that some of GIGABYTE’s servers can handle up to eight graphics cards each. That adds up to a lot of performance.
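The arithmetic is straightforward, keeping in mind that these are theoretical peak numbers, not what real workloads will sustain:

```python
tflops_per_card = 7.0    # Tesla M40 single-precision peak
cards_per_server = 8     # maximum GPUs in some GIGABYTE servers
peak = tflops_per_card * cards_per_server
print(f"{peak:.0f} TFLOPS aggregate peak per server")
```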
If you already have a GIGABYTE server or plan to purchase one, then you’ll most likely know the model number already. I’ve added a screenshot of the official compatibility list below, which should save you the trip. We see that only the R280-G20 isn’t certified for the M40, and that is because the system has a different field of operation than the rest.
So GIGABYTE has you well covered when it comes to NVIDIA’s impressive Tesla M40 deep learning GPU.
A new study suggests that the human brain may be capable of storing as much as 1 petabyte of data, and yes, that is a lot: ten times more information than was previously believed. With so much data, it’s no wonder I struggle to find the bit of information in my brain that knows where my car keys are, but that’s a whole different story.
“This is a real bombshell in the field of neuroscience,” Salk Institute for Biological Studies researcher Terry Sejnowski said. “Our new measurements of the brain’s memory capacity increase conservative estimates by a factor of 10 to at least a petabyte, in the same ballpark as the World Wide Web.”
The team reconstructed a rat’s hippocampus in 3D, allowing them to study the memory center of the brain. Through this process, they realised that brain synapses are capable of changing dimensions, altering their memory capacity. They also found that some synapses were duplicated, which allowed them to reconstruct the connectivity, shapes, and volumes of the brain tissue. This led to the idea that there may be as many as 26 categories of synapses, far more than previously thought.
“This is roughly an order of magnitude of precision more than anyone has ever imagined,” Sejnowski said. “The implications of what we found are far-reaching. Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us.”
The research can now help advance deep learning and neural network computing techniques, as we discover how the brain processes information with unmatched ability while consuming just 20 watts of power. With a petabyte, and maybe even more, at its disposal, the human brain is an amazing thing. If you can’t grasp just how much data that is, just imagine downloading the entire internet, literally all of it, and storing it in your head with room left over! Although I can’t imagine how big the piracy fine would be for doing so.
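For a rough sense of scale (the 5 GB-per-movie figure below is just a ballpark assumption for an HD film):

```python
petabyte = 10 ** 15                       # bytes in a petabyte (decimal definition)
as_gb = petabyte / 10 ** 9                # that's a million gigabytes
as_hd_movies = petabyte // (5 * 10 ** 9)  # assuming roughly 5 GB per HD movie
print(f"{as_gb:,.0f} GB, or roughly {as_hd_movies:,} HD movies")
```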
We’ve heard that Facebook is keen on finding talented people in the AI field by opening a new research lab in Paris, but what is Google up to with its own AI research? The latest news shows that Google is more interested in what you eat than in where you take your photos.
Google revealed at the Rework Deep Learning Summit in Boston that it is working on a project called Im2Calories, with scientist Kevin Murphy explaining that the goal is to predict how many calories are in your food. The idea is to have an algorithm analyse a photo of your meal, but don’t think it’s as simple as distinguishing colours.
Murphy said that the app is still not accurate enough at estimating the calories in meals, but he believes that even a 30% success rate would make the app a success. That might not sound like much, but a lot of data needs to be processed and adapted in order to shape the algorithm and have it give more accurate results in the future.
Though Google filed a patent for its Im2Calories app, it is not yet clear when they plan on releasing it, or whether they want to release it at all. However, Murphy added that the data from the app will prove very useful for bigger deep learning projects in the future. They plan on moving on to analysing traffic and predicting things like where the most likely parking spot is, specific details of cars that pass through an intersection, and so on.
Are you a Ph.D. graduate, or just really good at and enthusiastic about artificial intelligence? Well, if you live in Europe, you should know that Facebook is opening a new AI research lab in Paris and is looking for the best Europe can offer in that area of expertise.
The company also did some research into available talent and noticed that France has quite a reputation in AI research. Yann LeCun, a renowned AI researcher, said that the key areas AI scientists in the country are focusing on include deep learning and computer vision.
Facebook is said to have already hired six researchers and is looking to hire around 25 more next year. The main areas Facebook is keen on researching include image tagging, prediction, and face recognition. Beyond those, Facebook is also working on its own virtual reality projects, and the possibilities of merging the two research fields are genuinely exciting.
However, Facebook is not the only company interested in AI. Google, Amazon, and other tech companies are doing their own research and looking to acquire talented AI individuals. They may not be researching the same AI-related topics as Facebook, but they do work on methods of processing large amounts of data.
Thank you, WSJ.D, for providing us with this information.