Functional Human Hearts Generated From Skin Cells

Doctors are constantly being helped by their friends in the technology industry. From 3D-printed ears to blood vessels spun in a cotton candy machine, people can now begin replacing damaged parts of themselves with items created from their own genetic make-up. This technology may have gone one step further, with a research group claiming to have created functional human hearts.

The new technique could see people avoid waiting lists and the risk of their immune system rejecting the new organ. Because the tissue is grown from the patient’s own cells, the risk of an immune response is low, which could mean far fewer rejected transplants.

By using skin cells from a patient, the team were able to generate the cardiac muscle cells found in a heart. To turn these into a transplantable heart, they needed a structure, something that would take time to develop. Using 73 donor hearts that were considered unsuitable for transplantation, the team removed the living cells, leaving only the underlying scaffold required to support the new heart tissue.

With the ability to replace body parts with artificially grown organs advancing ever more quickly, it may not be long before we can repair defects in body parts and ensure that people who suffer injuries to their organs can have them repaired as easily as a cut on their arm.

We Can Now Create 3D Printed Organs!

3D printing is not a new area for doctors and surgeons: they have already managed to 3D print new ribs and a sternum for a cancer patient, and that was only the start. 3D printing has come a long way, being able to produce everything from a bike to a supercar, a PC case or even a house. The problem is that these are all solid, inanimate objects, items we use on occasion; organs, on the other hand, are used every day and need to be kept alive. Previous attempts to grow human organs have struggled with that last part, as it has proved difficult to give a growing organ what it needs to survive. This has now come one step closer to being solved thanks to 3D printing.

Published in Nature Biotechnology, the recent advancement means that not only can organs be “printed”, they can also be kept alive and retain their strength long after creation. This is done by creating a lattice of layers with channels running throughout the organ, so that while it is still developing it can absorb the nutrients and chemicals it needs, filling out and retaining its strength as it takes in oxygen.

This solution is a step forward, being described as the “goose that lays the golden egg”, and certainly seems more in line with common ideas than using a candy floss machine to create blood vessels.

Analysis Suggests Your Brain Could Be 30 Times Faster Than A Supercomputer

The human brain is fascinating, exciting and full of possibilities; the ability to create, to form an opinion and to challenge the environment in which we live is truly exceptional. We may now be able to find answers as to just how powerful the human brain is, thanks to a project designed to compare a supercomputer with the brain.

An artificial intelligence project devised by two PhD students from the University of California, Berkeley and Carnegie Mellon University will be the first of its kind to compare the human brain with the world’s best supercomputers. The AI Impacts project aims to determine how fast the human brain sends signals through its internal network compared to a supercomputer.

The scholars compared the power of our brains with that of IBM’s Sequoia supercomputer, which sits in the top three of the world’s most powerful supercomputers. Sequoia has a TEPS (Traversed Edges Per Second) benchmark of 2.3 × 10^13 TEPS. AI Impacts estimates that the human brain should be at least as powerful as Sequoia at the lower bound, while at the upper estimate of 6.4 × 10^14 TEPS the brain could surpass Sequoia’s speed by roughly 30 times.
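As a rough sanity check on those figures, the ratio works out at just under 28, which is where the “30 times” claim comes from. A quick calculation in Python, using only the numbers quoted above (this is our own arithmetic, not part of the AI Impacts study):

# Rough sanity check of the quoted figures (not part of the AI Impacts study itself).
sequoia_teps = 2.3e13      # IBM Sequoia's TEPS benchmark
brain_upper_teps = 6.4e14  # AI Impacts' upper estimate for the human brain

ratio = brain_upper_teps / sequoia_teps
print(f"Upper estimate is about {ratio:.0f}x Sequoia's speed")  # prints ~28, i.e. roughly 30 times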

That is a lot to take in, but it is also potentially incredible: evolution has formed a quite amazing instrument, and it begs the question of what else we will find as research and technology advance with the aim of exploring ourselves. It would also be interesting to know whether the wiring of a genius brain, think Stephen Hawking, differs from that of an average mind, whether the best sportspeople evolved with more advanced genes, or whether we are all capable of adapting to anything given enough time spent learning a skill. It’s compelling nonetheless.

Thank you aiimpacts for providing us with this information.

Image courtesy of fossbytes

L’Oreal Teaming Up To 3D Print Skin

“Because you’re worth it.” The catchphrase of L’Oreal, a world-famous cosmetics company, is known to many. But did you know that the company grows skin from donated samples? Or that it wants to start 3D printing human skin?

Teaming up with the start-up company Organovo, L’Oreal hopes to be able to use the created skin in its product tests. Organovo is no stranger to this area, having already claimed that it can 3D print human liver tissue that lasts for up to 40 days.

The partnership is said to be in the early research stages, and experts are divided about how it would work. Many believe the science behind it is plausible and that it is possible to 3D print skin and other parts using human cells. One possible application would be to help burn and trauma patients, replacing damaged skin and creating skin grafts on site in each hospital using specialist 3D printers.

A major advantage would be with regard to animal testing: the ability to test cosmetics on human tissue would give better results and more in-depth knowledge about side effects, without the need for animal testing or harm to a person undergoing the testing.

With more humane testing methods, the ability to help repair the damage suffered by burn victims, and untold further potential, it will be interesting to follow how L’Oreal and Organovo use this technology and research.

Thank you BBC and Organovo for providing us with this information.

OkCupid Follows in Facebook’s Footsteps, Admits to Experimenting on Humans

After the Facebook fiasco and its little study of human behaviour, it seems it’s time for OkCupid, the online dating service, to do the same. The service has admitted that it too manipulated what it shows users in order to see what would happen.

Three examples of the experiments were posted by OkCupid’s co-founder, Christian Rudder, in an article entitled “We Experiment On Human Beings!”, which can be viewed over at the site’s OkTrends blog.

“OkCupid doesn’t really know what it’s doing,” said Rudder. “Neither does any other website. It’s not like people have been building these things for very long, or you can go look up a blueprint or something. Most ideas are bad. Even good ideas could be better. Experiments are how you sort all this out.”

When talking about Facebook and its experiment, which involved manipulating users’ news feeds in order to study their real-life reactions, Rudder stated the following:

“Guess what, everybody,” he says, “if you use the internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”

The first experiment is said to have happened in January 2013, when the website removed all user images and called it “Love is Blind Day”. The user count was low that day, but those who were online responded to messages 44% more often.

The second one involved the user’s rating, a score given to them by other users. Rudder attempted to see how much a user’s profile image counts when rating someone by presenting a small subset of users with their profile text hidden. He found that only about 10% of a typical user’s score is based on what they write about themselves, while roughly 90% is based on the profile image.

The final experiment is said to have been more “controversial”, with OkCupid tweaking users’ “match” rating, the score used to show people’s “compatibility” based on the information they provide.

“In the back of our minds, there’s always been the possibility: maybe it works just because we tell people it does. Maybe people just like each other because they think they’re supposed to? Like how Jay-Z still sells albums?” Rudder stated.

OkCupid then tweaked the compatibility ratings shown to some of its users and observed how many single messages led to a full conversation. The experiment showed that most users do not talk to each other when their displayed compatibility is low, for example 20% or 30%; change that to 90% and, it seems, “weird things happen”.
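For readers curious how such a measurement might look in practice, below is a minimal, entirely hypothetical sketch in Python that compares conversation rates between users shown different match scores. The records, field names and numbers are invented for illustration; this is not OkCupid’s actual analysis.

from collections import defaultdict

# Hypothetical sketch: compare conversation rates for users shown different
# displayed match scores. All records here are invented for illustration;
# this is not OkCupid's actual data or analysis.
message_log = [
    (30, False), (30, False), (30, True),
    (90, True), (90, True), (90, False), (90, True),
]  # (displayed_match_score, conversation_started)

totals = defaultdict(lambda: [0, 0])  # score -> [conversations, messages]
for score, became_conversation in message_log:
    totals[score][1] += 1
    if became_conversation:
        totals[score][0] += 1

for score, (convos, msgs) in sorted(totals.items()):
    print(f"Displayed {score}% match: {convos}/{msgs} messages became conversations")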

All in all, what Facebook and OkCupid did is far from ethical, but it underlines a solid truth all the same: when information is available on the internet, we tend to trust it more than we trust ourselves. Do we really need a webpage or app to tell us who to love or what to believe in?

Thank you The Guardian for providing us with this information
Image courtesy of The Guardian

Numenta Releases Grok, The First App to Work Like a Human Brain

A Redwood City, California-based startup company by the name of Numenta has held a conference to show off its achievement: a piece of software that mimics the way a human brain processes information.

The company, started by Jeff Hawkins and Donna Dubinsky nine years ago, set out to build an algorithm that processes information like a human brain. It has already shipped its first product, Grok, a piece of software that detects unusual patterns in information technology systems. By detecting these anomalies on a computer server early, the company says, it helps avoid bigger problems while also saving much of the time spent manually finding and fixing them.

Their first application may seem strange at first, but it fits the description of what the human brain is good at: pattern recognition. The company built its architecture on Hawkins’ theory of Hierarchical Temporal Memory, which holds that the brain stores data in time sequences, something easily noticed in how quickly we remember the words and music of a song. This theory has become the foundation for Numenta’s code base, named the Cortical Learning Algorithm (CLA), which the company intends to use in future applications as well.
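To give a feel for the kind of task Grok performs, here is a minimal streaming anomaly detector based on a rolling mean and standard deviation. This is only an illustrative sketch of flagging unusual server metrics; it is not Numenta’s CLA, which learns temporal patterns rather than applying a simple threshold.

# Minimal streaming anomaly detector for a server metric (e.g. CPU load).
# This is a simple rolling z-score illustration, NOT Numenta's Cortical
# Learning Algorithm, which models temporal sequences very differently.
from collections import deque
import statistics

def detect_anomalies(stream, window=60, threshold=3.0):
    """Yield (index, value) for readings far outside the recent norm."""
    history = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(history) == window:
            mean = statistics.mean(history)
            stdev = statistics.pstdev(history) or 1e-9
            if abs(value - mean) / stdev > threshold:
                yield i, value
        history.append(value)

# Example: a steady metric with one sudden spike at reading 150.
readings = [50.0] * 150 + [500.0] + [50.0] * 50
print(list(detect_anomalies(readings)))  # flags the spike at index 150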

Hawkins and Dubinsky stated at the company’s conference that they are even more excited about new applications based on the CLA code, having already started ‘deeper’ conversations with potential partners about how to use the technology. What should we expect from such code in the future? Your guess is as good as ours.

Thank you Venture Beat for providing us with this information
Image courtesy of Venture Beat

Android Newscasters Hitting the Scene in Japan

Japan recently hosted the ‘Android: What is Human?’ exhibition, which brought to the table some exciting new advancements in lifelike technology.

In the future, it is quite possible the news will be read to us by Android robots closely resembling those pictured above. Not only do these ‘robots’ provide a pretty face, they can interact with humans, read news headlines, tell jokes and read out Tweets – basically anything a regular news presenter is tasked with doing.

The two Androids shown on Tuesday are called Kodomoroid and Otonaroid; they are designed to mimic human gestures and movements while maintaining a lifelike appearance.

[youtube]https://www.youtube.com/watch?v=tGaBQZ195tA[/youtube]

To show off the full capabilities of their creations, the scientists interacted with the androids and demonstrated their news-reading functionality. Kodomoroid was heard making fun of Hiroshi Ishiguro, stating “You’re starting to look like a robot!” after she had delivered her news piece to the audience, as seen in the video above. Otonaroid unfortunately caught stage fright and, after a quick reboot, said “I’m a little bit nervous”, which is understandable in front of such a large crowd of stunned journalists.

These Androids will be housed at Tokyo’s National Museum of Emerging Science and Innovation, where they will interact with visitors and help Ishiguro collect research data on human reactions to his creations.

Ishiguro also proudly stated:

“You can take my androids on planes. The torso in the suitcase and the head in carry-on.” (via Breitbart)

Will we see more of these in the future? The answer is likely yes. Should we be worried about losing all of our jobs to ‘the machine’? That is for you to decide!

Image courtesy of Breitbart

Complex Algorithm To Accurately Identify Objects Including Human Faces

Computers that can identify objects seem like a thing from the future. Apparently, it is closer to reality than any of us think. Brigham Young University in Provo, US, has found a way to make computers identify objects without the need for a helping human hand.

According to Dah-Jye Lee, a BYU engineer, algorithms have become so advanced that a piece of software can identify objects by itself in images and even videos. Lee created the algorithm, and as he describes it, the computer makes decisions on its own based on the shapes identified in the images or videos it analyses.

“In most cases, people are in charge of deciding what features to focus on and they then write the algorithm based off that,” said Lee, a professor of electrical and computer engineering. “With our algorithm, we give it a set of images and let the computer decide which features are important.”

Lee’s algorithm is said to learn on its own, just as a child learns to distinguish a cat from a dog. He explains that instead of teaching a child the difference between the two, we are better off showing the two images and letting the child distinguish them on his or her own. Just like a child, the algorithm was shown four image datasets from Caltech, namely motorbikes, faces, airplanes and cars, and it returned 100% accurate results on the motorbikes, airplanes and cars datasets. Human faces proved slightly harder, with the algorithm accurately distinguishing 99.4% of them, but that is still a better result than other object recognition systems achieve.

“It’s very comparable to other object recognition algorithms for accuracy, but, we don’t need humans to be involved,” Lee said. “You don’t have to reinvent the wheel each time. You just run it.”
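To illustrate the general idea of letting the computer decide which features matter, here is a minimal sketch using an off-the-shelf classifier on raw pixel values from a small public digits dataset. This is not Lee’s BYU algorithm; the dataset and library choices are our own, and it simply shows a learner being handed raw images with no hand-crafted features.

# A minimal sketch of learned features vs hand-crafted features.
# This is NOT Lee's BYU algorithm; it just illustrates handing raw images
# to a learner and letting it decide which features matter.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

digits = load_digits()                              # small 8x8 digit images
X = digits.images.reshape(len(digits.images), -1)   # raw pixels, no hand-crafted features
y = digits.target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)                         # the model decides which pixels/splits matter

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))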

Professor Lee mentioned that the highly complicated algorithm could be used in a variety of tasks, from detecting invasive fish species to identifying flaws in produce such as apples on a production line. However, its applications could go well beyond that.

Thank you Brigham Young University for providing us with this information
Images courtesy of Brigham Young University

BlizzCon: About the Warcraft Movie

Blizzard has announced that they finally have a release date for the film, believed to be 18 December 2015, even though they have not yet started filming. They also spoke a little about the film covering Orcs vs Humans and the first contact between Lothar and Durotan.

When I first heard about the Warcraft movie, I pictured something very cartoony, but this film will not be cartoony at all; it will be photo-realistic, using a half-human cast while the other half is computer generated. They are planning to do some really great things with this film, though we won’t see it for at least another two years.

It is unclear what parts of Azeroth we will be seeing, but we will see some really fine details, and everything will look and feel real, across different zones and different cities. Even if you don’t know anything about Warcraft you will be able to watch and enjoy the film, but if you are a fan, you will notice plenty of things that match up very well with the games.

As much as you may feel each character within the world plays a vital role, the most important role is the world itself. We will see a number of the different races from both the Horde and the Alliance.

I would love to see the lore and how it plays out, giving us a history of the game, the instances and even the zones of Azeroth. Blizzard doesn’t seem to be taking the approach of multiple feature films, but rather one film that covers everything. I feel that Blizzard could easily make a film on each zone and each city, as well as the different instances, giving us a ton of the history and lore of Azeroth.

I found it extremely frustrating that the panel didn’t want to reveal very much about the film, so we will just have to wait until the movie moves further along in production.