Nintendo NX May Contain Kinect-Like Features

Rumours galore surround the Nintendo NX, the successor to the Wii U. Now another one has surfaced, this time thanks to a patent showing off what appear to be Kinect-like features.

Nintendo took the console market by storm when it released the Wii back in 2006. Featuring motion control through the Wii Remotes, the console saw entire families playing bowling together in their sitting room and racing around tracks with Luigi and Mario against the grandparents. Since then, several consoles have used similar methods to achieve motion-capture gameplay; from light orbs to accelerometers in controllers, consoles have experienced an influx of movement-related gameplay features. The Kinect took it to a whole new level, and it looks like a similar method could be used to take the NX that little bit further.

The Kinect used cameras not only to recognise who was playing the game but also to track their movement and distance, something that proved difficult for other consoles. The methods listed in the patent for depth tracking include everything from a distance-measuring laser to using a person's thermal signature to determine how close to or far from the camera they are.
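To give a rough sense of how a distance-measuring laser could gauge depth, here is a minimal time-of-flight sketch. The patent does not confirm this particular method; the approach and the numbers below are assumptions used purely for illustration.

# Time-of-flight sketch: distance = (speed of light x round-trip time) / 2.
# This is an illustrative assumption, not a detail taken from Nintendo's patent.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Estimate the distance to a player from a laser pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

# A pulse that returns after roughly 20 nanoseconds puts the player about 3 m away.
print(f"{distance_from_round_trip(20e-9):.2f} m")  # ~3.00 m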

Combined with the depth-measuring technology, there also appears to be gesture recognition, something which could see you swirling your arms around like Iron Man as you power up and play your favourite games.

Are you excited for the NX? We are excited to see how it will turn out and which of these rumours make it into the latest console from Nintendo!

Facebook Facial Recognition Software Proven to be More Accurate than FBI Counterpart

There has been a lot of debate when it comes to facial recognition, with the FBI scaring people off with its Next Generation Identification project and its intention to gather millions of photos into a federal database.

However, the FBI's system has been shown to be inaccurate, even as the EFF raises concerns about people's privacy and points out that innocent people might end up in the 'pool' of photos. The NGI is said to return a ranked list of 50 possibilities, with only an 85 percent chance that the suspect's name appears in that list. This means that roughly one search in seven will fail to surface the suspect at all, and nobody can do anything about it.

Comparing the FBI's project to Facebook's DeepFace system, revealed at the IEEE Computer Vision conference, could make the law enforcement agency look like a little kid playing with toy blocks. DeepFace is said to be able to tell whether two pictures show the same person with 97 percent accuracy, comparable to having a human witness identify a suspect. Nonetheless, both the social media giant and the authorities are still far away from true facial-recognition capabilities.

Shahar Belkin, CTO of FST Biometrics, explains that for facial recognition software to work, a person currently needs to look into the camera from no more than 15 degrees off the center axis. Even then, the camera or photograph needs to offer a high pixel density and resolution; in other words, it has to be a high-quality picture. This is why Belkin states that we are still far away from face-recognition software that truly works. Street cameras and even surveillance cameras are not suited to facial-recognition technology due to their poor image quality and angles.
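As a rough illustration of the constraints Belkin describes, a system might gate captures before even attempting a match. The function and threshold values below are hypothetical assumptions, not part of FST Biometrics' product.

# Hypothetical pre-check reflecting the constraints described above.
# The names and threshold values are illustrative assumptions only.
MAX_OFF_AXIS_DEGREES = 15   # subject must face within ~15 degrees of the center axis
MIN_FACE_WIDTH_PX = 120     # assumed minimum face width in pixels for a usable match

def usable_for_matching(off_axis_degrees: float, face_width_px: int) -> bool:
    """Return True if a captured face is worth passing to a matcher."""
    return abs(off_axis_degrees) <= MAX_OFF_AXIS_DEGREES and face_width_px >= MIN_FACE_WIDTH_PX

print(usable_for_matching(off_axis_degrees=40, face_width_px=60))   # typical street camera: False
print(usable_for_matching(off_axis_degrees=8, face_width_px=240))   # cooperative subject: True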

This does not mean that your privacy is secure, though. Facebook may be winning at facial recognition, but that presents an opportunity the FBI could take advantage of. While the law enforcement agency cannot field a fully working facial recognition system just yet, it can still use court orders against the social media giant to gain access to its database. It is just a matter of time until a fully working facial recognition system emerges.

Thank you The Verge for providing us with this information
Image courtesy of The Verge

$5,000 Can Nab You a Copycat of Your Ex-Partner Thanks to Match.com

Do you have a hard time letting go? Possible attachment issues? Match.com has the answer for you!

Thanks to a new advancement in its technology, Match.com now gives users the option to search for a new relationship candidate based purely on facial structure. Match.com is reportedly running this service through Los Angeles-based facial recognition experts Three Day Rule, as reported by Mashable.com.

Taking a more scientific and positive approach, Three Day Rule’s founder Talia Goldstein stated:

“People have a type and it’s not necessarily about height or race or hair color, but a lot of it is about face shape” Mashable.com

As with all new technology, however, this comes with a hefty price tag of $5,000, which covers a six-month total package. Not only do you get access to the facial recognition technology, but you also receive a personal ‘dating guidance counsellor’ (matchmaker) who will meet with you to determine your wants and needs, sift through potential applicants and even go on pre-dates to assess the candidates’ potential.

To help determine the type you’re after, this matchmaker will process photos of your ex-partners and use Three Day Rule’s software to help determine some potential matches.

Goldstein also stated:

“I’ve noticed over my years in matchmaking that people have types. I always ask my clients to send me photos of their exes. They say that they don’t have a type, but when I see the photos, to me they look very similar. The exes may be different ethnicities, or have different hair color, but their facial structures are the same.” Mashable.com

For those desperately seeking love, what have you got to lose? This service can be found on Match.com.

Photo courtesy of cngl

Complex Algorithm To Accurately Identify Objects Including Human Faces

Computers that can identify objects seem like a thing from the future. Apparently, it is closer to reality than any of us think. Brigham Young University in Provo, US, has found a way to make computers identify objects without the need for a human helping hand.

According to Dah-Jye Lee, a BYU engineer, algorithms have become so advanced that they can make a piece of software identify objects by itself in images and even videos. Lee is the creator of this algorithm and, from what he describes, it works by having the computer make decisions on its own based on the shapes identified in the images or videos it analyses.

“In most cases, people are in charge of deciding what features to focus on and they then write the algorithm based off that,” said Lee, a professor of electrical and computer engineering. “With our algorithm, we give it a set of images and let the computer decide which features are important.”

Lee’s algorithm is said to learn on its own, just as a child learns to distinguish a cat from a dog. He explains that instead of teaching a child the difference between the two, we are better off showing the child both images and letting him or her tell them apart. Just like a child, the algorithm was shown four image datasets from CalTech, namely motorbikes, faces, airplanes and cars, and it returned results of up to 100% accuracy on these datasets. Human faces proved the hardest, with the algorithm accurately distinguishing 99.4% of them, but it still gave a better result than other object recognition systems.

“It’s very comparable to other object recognition algorithms for accuracy, but, we don’t need humans to be involved,” Lee said. “You don’t have to reinvent the wheel each time. You just run it.”
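To make the idea of learned rather than hand-picked features concrete, here is a minimal sketch in Python. It is not Lee’s BYU algorithm; it simply lets an unsupervised step (PCA) decide which image features matter, using a stand-in digits dataset, before a classifier works with them.

# Minimal sketch of the general idea (learned features instead of hand-engineered
# ones); this is NOT the BYU algorithm, just an illustration of the principle.
from sklearn.datasets import load_digits           # stand-in dataset, not CalTech
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

images, labels = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0)

# No human decides which pixels matter: PCA learns 40 components from the
# training images, and the classifier only ever sees those learned features.
model = make_pipeline(PCA(n_components=40), LinearSVC(dual=False))
model.fit(X_train, y_train)
print(f"accuracy: {model.score(X_test, y_test):.3f}")

Lee’s approach works on raw images and videos rather than this toy dataset, but the division of labour is the same: the machine, not the engineer, chooses the features.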

Professor Lee mentioned that the highly complex algorithm may be used for a variety of tasks, from detecting invasive fish species to identifying flaws in produce, such as apples on a production line. However, its potential uses go well beyond that.

Thank you Brigham Young University for providing us with this information
Images courtesy of Brigham Young University