We all use WiFi at some point, be it at work or at home. We rely on the technology to avoid the miles and miles of cables that we would otherwise have to plug and unplug every time we wanted to grab a drink or watch a movie on Netflix. Now researchers may have developed a way to accurately calculate distance through WiFi, a feature that could make wireless communications more secure and controlled.
Researchers from MIT’s CSAIL managed the feat using just a single router, by measuring the “time of flight” of the WiFi signals between the transmitting and receiving components with a margin of error of just 0.5 nanoseconds, 20 times more accurate than other systems. Once the time was calculated, they multiplied it by the speed of light, yielding the distance between people and their wireless routers.
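To get a feel for the arithmetic involved, here is a small illustrative sketch (not MIT’s actual system) of turning a measured flight time into a distance; the numbers also show what a 0.5-nanosecond error means in practice:

```python
# Toy illustration (not MIT's actual system): turning a measured WiFi
# "time of flight" into a distance estimate.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_time_of_flight(tof_seconds: float) -> float:
    """Distance a signal travels in the given one-way flight time."""
    return SPEED_OF_LIGHT * tof_seconds

# The reported 0.5 ns margin of error corresponds to roughly 15 cm.
print(f"{distance_from_time_of_flight(0.5e-9):.3f} m")  # → 0.150 m
```

In other words, a timing error of half a nanosecond translates into only about 15 centimetres of distance error, which is why the system can tell which room you are in.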
Using a four-room apartment as an example, the researchers managed to locate the correct room for a user 94% of the time. Not stopping there, they took the technology to a cafe and determined whether someone was inside with 97% accuracy. Nor did they stop at wireless routers: the technique was then applied to a drone, restricting its distance from the operator with an error margin of just two inches.
With the ability to limit or restrict access to a network based on a user’s distance, public networks and drones could be made more secure, with greater control over who can access the systems, and from where.
The researchers over at MIT have been keeping themselves quite busy as of late, especially as far as drone research is concerned. We’ve recently stumbled upon some news regarding a very interesting drone developed by MIT’s Fluid Interfaces Group, and what makes this particular device special is the fact that it can actually mimic the movements of a human hand and sketch out what it sees on a blank canvas. Admittedly, the drone is not exactly an expert at reproducing human-made drawings right now, but that’s because it uses a software algorithm as well as aerodynamics in order to add its own unique touch to each drawing.
At this point, you might be wondering what good is this drone if it can’t reproduce exact drawings? Well, even though the technology still needs some refinement, all it takes is a bit of imagination to realize that it could potentially make our lives so much easier in the future. For example, artists with disabilities could use similar drones to sketch out their paintings and drawings without having to leave their beds, while workers who need to paint interior murals or ceilings could also have a drone mimic their hand movements and paint in places that would be difficult to reach otherwise.
Sadly, the experience on some websites these days can very quickly be summed up by the word “loading”. We like our pictures, our videos, and some even like ads; the problem is that everything you view on the internet has to come from somewhere, and that is where the loading comes in. MIT and Harvard want to give you a hand and speed up your browsing online.
The plan is to open-source Polaris, the page-loading framework the two institutions have developed, meaning you could soon find it in every site and browser you use. With it showing reductions of up to 34% in website loading times, you can get one more cat video in on your lunch break.
Donald Trump’s antics on social media are infamous at this point, seeming crazy and unpredictable at times. Now, we can all look forward to an improvement on Trump thanks to the field of deep learning. DeepDrumpf is a Twitterbot created by a researcher at MIT’s Computer Science and Artificial Intelligence Lab, and it makes use of deep learning algorithms to generate tweets that are Trumpier than the real deal.
DeepDrumpf’s artificial intelligence platform was trained on hours of Trump transcripts from his numerous speeches and performances at public events and debates. It is far from perfect, often generating tweets too nonsensical or stupid to have come even from Trump himself, but some manage to be hilariously brilliant.
MIT explains that the secret behind DeepDrumpf is that “the bot creates Tweets one letter at a time. For example, if the bot randomly begins its Tweet with the letter “M,” it is somewhat likely to be followed by an “A,” and then a “K,” and so on until the bot types out Trump’s campaign slogan, “Make America Great Again.” It then starts over for the next sentence and repeats the process until it reaches the 140-character limit.” The bot’s creator, postdoc Bradley Hayes, was inspired to create DeepDrumpf by an existing model that can emulate Shakespeare, combined with a recent report on the presidential candidates’ linguistic patterns that found Trump speaks at a third-grade level.
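For a sense of how that letter-by-letter generation works, here is a toy sketch in Python. It uses a simple character-level Markov chain rather than the recurrent neural network DeepDrumpf actually relies on, but the principle of picking each next character based on what tends to follow the current one is the same:

```python
# A much simpler cousin of the letter-by-letter approach MIT describes:
# a character-level Markov chain. (DeepDrumpf itself uses a recurrent
# neural network; this toy model only looks one character back.)
import random
from collections import defaultdict

def train(text):
    """Record which characters follow which in the training text."""
    followers = defaultdict(list)
    for current, nxt in zip(text, text[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, seed, length=40):
    """Grow a string one character at a time from the recorded counts."""
    out = seed
    while len(out) < length and out[-1] in followers:
        out += random.choice(followers[out[-1]])
    return out

model = train("make america great again. " * 5)
print(generate(model, "m"))
```

Train it on enough transcripts and the most probable continuations start to sound like the speaker, which is exactly the effect the bot exploits.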
[Romney is ]a tool. I want to tell you this. They're probably the last thing we need in a leader, We can't do that.
DeepDrumpf has even managed to connect with Trump’s real Twitter account. When it does so, its artificial intelligence algorithm is fed language from the real Trump’s tweet, which raises the chance of a response that appears contextually relevant to the original.
@realDonaldTrump They're going to be paying right now, and like, absolutely. I’m really rich. Oh I want to support and have them.
Hayes even envisions a future where he develops accounts for all of the presidential candidates and feeds them tweets from one another, so they can have their own real-time deep-learning debates. With that in mind, who would you like to see a deep-learning bot created for?
The quest to gain greater insight into artificial intelligence has been exciting and has opened up a range of possibilities, including “convolutional neural networks”: large virtual networks of simple information-processing units, loosely modelled on the anatomy of the human brain.
These networks are typically implemented using graphics processing units (GPUs). A mobile GPU might have as many as 200 cores, or processing units, which makes it well suited to simulating a network of distributed processors. Now, a further development in this area could lead to a chip designed with the sole purpose of implementing a neural network.
MIT researchers presented the aforementioned chip at the International Solid-State Circuits Conference in San Francisco. Its advantages include being 10 times more efficient than an average mobile GPU, which could, in theory, allow mobile devices to run powerful artificial intelligence algorithms locally rather than relying on the cloud to process data.
The new chip, coined “Eyeriss”, could lead to an expansion of capabilities that includes the Internet of Things, or put simply, a world where everything from a car to a cow (yes, apparently) would have sensors able to submit real-time data to networked servers. This would then open up horizons for artificial intelligence algorithms to make those important decisions.
Before I sign off, I wanted to delve further into the workings of a neural network. It is typically organised into layers, each containing processing nodes. Data is divided up among the nodes in the bottom layer; each node manipulates the data it receives before passing it on to nodes in the next layer. The process repeats until “the output of the final layer yields the solution to a computational problem.” It is certainly fascinating and opens up a world of interesting avenues to explore; when you combine science and tech, the outcome is at the very least educational, with the potential to be life-changing.
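To make that layered description concrete, here is a minimal sketch of a feedforward pass in Python; the network shape and weights are made up purely for illustration:

```python
# A minimal sketch of the layered structure just described: data enters
# the bottom layer, each node transforms what it receives, and results
# flow upward until the final layer yields the answer.

def neuron(inputs, weights, bias):
    """One processing node: weighted sum passed through a ReLU activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)

def layer(inputs, weight_rows, biases):
    """Every node in a layer sees the full output of the layer below."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

def forward(x, network):
    """Pass data through each layer in turn; the last output is the result."""
    for weight_rows, biases in network:
        x = layer(x, weight_rows, biases)
    return x

# Hypothetical two-layer network with made-up weights.
net = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]),  # hidden layer, two nodes
    ([[1.0, 1.0]], [0.0]),                    # output layer, one node
]
print(forward([2.0, 1.0], net))  # → [2.6]
```

A chip like Eyeriss is, in essence, silicon dedicated to doing millions of those weighted sums in parallel.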
SpaceX just staged a three-day event at Texas A&M University, where the pioneering space company brought together teams of engineering students from around the world to compete for the chance to have their pod designs built and tested on SpaceX founder Elon Musk’s proposed Hyperloop transportation system. Musk himself even made a surprise appearance on stage during the event, where he was met with whoops, cheers and clapping from a crowd that may not have been expecting the chance to meet their inspirational icon.
“I’m starting to think that this is really going to happen,” said Musk as he took the stage, with many of the teams in attendance holding their hands up in groups in the hope of drawing the SpaceX founder’s attention during his Q&A session. Musk went on to say, “the work that you guys are doing is going to blow people’s minds.”
The rest of the SpaceX Hyperloop Pod Competition Design Weekend pitted over 1,000 students, representing 120 colleges and 3 high schools worldwide, against each other in the design competition. This stage was intended to create a shortlist of at least 22 teams, which may be invited to SpaceX’s Californian headquarters this summer to build and test their designs. Judges from Musk’s SpaceX and Tesla companies, as well as university professors, were on hand to assess the teams’ 20-minute pitches and grill them with 10 minutes of questions on their designs. The contest challenged the students not just as engineers but also on their business and marketing skills, with many presenting business cards, prototype models and high-quality marketing videos, making it a good chance for enterprising engineers to network with their peers.
By the end of the weekend, the team from the Massachusetts Institute of Technology was deemed the winner, with the Delft University of Technology from the Netherlands finishing second and the University of Wisconsin, Virginia Tech and the University of California filling out the rest of the top five.
It is always great to see Elon Musk continuing to engage with rising engineering stars and to see his positive effect on the field. It really pays off too: with both Tesla and SpaceX already performing feats beyond many of their rivals, and with the Hyperloop on the horizon, Musk’s legacy will only continue to grow.
From smartwatches to fitness trackers, the number of wearable electronics in our lives is only set to increase, but they all have one limiting factor: power. What if all of these wearables could power themselves from your movements, just like a self-winding mechanical watch? MIT may just have the answer, announcing this week, in a paper submitted to the journal Nature Communications, a way of transforming small bending motions into electrical energy.
The technology involves a central polymer separator, soaked in an electrolyte, sandwiched between two identical electrodes. Bending the stack creates a chemical potential difference between the two electrodes, which in turn produces a voltage and an electric current that can power a connected device. Attached to a small weight, the material could bend under simple, ordinary movements, much like an automatic watch. The one foreseeable flaw in this method is the degradation of power generated over repeated use: the current prototype mostly maintained its performance over 1,500 bend cycles, but because each cycle slightly damages the metal electrodes, there was some fall-off.
Most traditional methods of motion-based power generation rely on the triboelectric effect (based on friction) or piezoelectrics (crystals that produce a small voltage when bent or compressed). These technologies typically have high bending rigidity and rely on high-frequency sources of motion, which makes them ill-suited to gathering energy from natural human movement. By comparison, the new technology is flexible, simple and cheap, as well as being based on similar technology and materials to existing lithium-ion batteries.
This discovery, if its flaws can be overcome and it can be mass-produced, has the potential to make fitness trackers and other body-worn electronics vastly more convenient in daily life, rather than a useless piece of plastic, metal and glass should you forget to charge them. It’s even possible that a self-charging smartwatch might be enough to make me give up my automatic analogue watch.
A group of researchers from the University of Colorado Boulder, the University of California, Berkeley, and the Massachusetts Institute of Technology has created a CPU that eschews electricity for transferring data in favour of light, operating at astronomical speeds while using a fraction of the energy required to run a standard processor. The remarkable photonic chip was revealed in a new paper published in the academic journal Nature.
“Light based integrated circuits could lead to radical changes in computing and network chip architecture in applications ranging from smartphones to supercomputers to large data centers, something computer architects have already begun work on in anticipation of the arrival of this technology,” Miloš Popović, Assistant Professor at CU-Boulder’s Department of Electrical, Computer, and Energy Engineering and a co-corresponding author of the study, told CU News Center.
Measuring 3mm by 6mm, the photonic CPU operates at a bandwidth density of 300 gigabits per second per square millimetre, a rate up to 50 times higher than that of the conventional electrical microprocessors on the current market. The chip uses 850 optical input/output (I/O) components to transmit data at superfast speeds.
“One advantage of light based communication is that multiple parallel data streams encoded on different colors of light can be sent over one and the same medium – in this case, an optical wire waveguide on a chip, or an off-chip optical fiber of the same kind as those that form the Internet backbone,” said Popović. “Another advantage is that the infrared light that we use – and that TV remotes also use – has a physical wavelength shorter than 1 micron, about one-hundredth of the thickness of a human hair. This enables very dense packing of light communication ports on a chip, enabling huge total bandwidth.”
There are lots of ways people try to protect their privacy in the modern world, where techniques like encryption are under fire. While hiding message content can be effective, the mass collection of metadata can be just as invasive to your privacy if a company, government body or nefarious element were able to learn when, where and with whom you communicated. A team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has come up with a system named “Vuvuzela”, after the popular (and annoying) plastic horn, that adds noise to any messages sent, rendering them untraceable.
Vuvuzela relies on a number of nodes to function, similar to the Tor network for internet traffic, though it relies on fewer nodes and more traffic. A sender deposits an encrypted message in a secure “dead drop” server, from which it can be retrieved by its receiver. On top of that, traffic is not controlled by the user sending a message; instead, message circulation takes place over 10-20 seconds, so attackers cannot detect and track messages as they are sent. A user stopping sending or joining a chat might also let attackers trace activity based on the number of messages sent, and this is where the noise comes into effect. All of the server nodes that are part of Vuvuzela send junk messages to random inboxes at the same time that real messages propagate, hiding the activity of normal users. The system is even resilient against a server being compromised or knocked offline, as the noise can be enough to obfuscate messages with only a few nodes remaining. As a result, the only data Vuvuzela exposes is the number of nodes engaged in a chat.
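The cover-traffic idea can be sketched in a few lines of Python. This is a toy simulation of the concept, not Vuvuzela’s real protocol; the dead-drop count and junk-message numbers are made up for illustration:

```python
# A toy sketch of the idea behind Vuvuzela's cover traffic (not the real
# protocol): each round, real messages land in "dead drop" slots, and
# every server adds junk messages to random slots so that an observer
# watching traffic volume cannot tell who is actually talking.
import random

NUM_DEAD_DROPS = 100

def run_round(real_messages, num_servers=3, junk_per_server=50):
    """Mix real messages with server-generated noise for one round."""
    traffic = []
    for drop_id, ciphertext in real_messages:
        traffic.append((drop_id, ciphertext))
    for _ in range(num_servers):
        for _ in range(junk_per_server):
            traffic.append((random.randrange(NUM_DEAD_DROPS), b"junk"))
    random.shuffle(traffic)  # the observer sees only the mixed batch
    return traffic

batch = run_round([(42, b"encrypted hello")])
print(len(batch))  # one real message hidden among 150 junk ones
```

Because the volume of noise stays the same whether one person is chatting or a hundred, watching the wire tells an attacker almost nothing.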
It may seem like the holy grail of privacy at this point, but the assurance of hidden data comes at a price, namely speed. Vuvuzela, while still in early development, is incredibly slow due to the timed sending of messages. In a test, the researchers at MIT simulated 1 million users generating 15,000 messages per second. At this volume, the average time for a message to be delivered was 44 seconds, which many would consider unacceptable for everyday or commercial use. For those in high-risk situations where communication privacy is paramount, however, a small delay is not a massive trade-off.
Even in this day and age, computer learning is far behind the learning capability of humans. A team of researchers seeks to shrink the gap, however, having developed a technique called “Bayesian Program Learning” that can learn a new concept from just a single example, instead of the large sample sizes typically required for software to learn.
Detailed in a paper published in the journal Science, the essence of Bayesian Program Learning, or BPL, is to mimic the human learning process. Quite simply, the software observes a sample, generates its own set of examples, determines which of those examples fits the sample pattern best, and uses that example to learn.
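As a rough illustration of that generate-and-compare loop, here is a toy sketch in Python. Real BPL composes probabilistic programs over pen strokes; this version merely perturbs a made-up binary-grid “character” and keeps whichever candidate best matches the sample, purely to show the shape of the idea:

```python
# A heavily simplified sketch of the observe/generate/compare loop just
# described. Real BPL builds probabilistic programs over pen strokes;
# here we perturb a made-up binary-grid "character" and keep the
# candidate that best matches the sample.
import random

def similarity(a, b):
    """Fraction of grid cells two binary 'characters' share."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def learn_from_one_example(sample, candidates=20, noise=0.1):
    """Generate noisy variants of the sample and keep the best fit."""
    best, best_score = None, -1.0
    for _ in range(candidates):
        variant = [bit if random.random() > noise else 1 - bit
                   for bit in sample]
        score = similarity(variant, sample)
        if score > best_score:
            best, best_score = variant, score
    return best, best_score

sample = [1, 0, 1, 1, 0, 0, 1, 0, 1]  # a made-up 3x3 "character"
learned, score = learn_from_one_example(sample)
print(score)
```

The point is that the learner only ever sees one sample, then does its exploring internally, which is what lets BPL get by without the huge training sets most machine learning needs.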
To test BPL, some of its creators – Joshua Tenenbaum of MIT, Brenden Lake of New York University and Ruslan Salakhutdinov of the University of Toronto – attempted to have it learn to write 1,623 characters from 50 different writing systems. The input wasn’t font data, however, but a database of handwritten characters, including some from Tibetan and Sanskrit. To learn the characters, the software broke each one down into sets of overlaid strokes and the order of their application, analysing which pattern most closely matched the sample. The researchers even had it create new characters for the writing systems, based on the styles it had collected.
To test the output, the researchers used a “visual Turing test”, in which human-drawn and computer-drawn characters were put side by side and a set of judges asked to select which was which. Impressively, no more than 25% of the judges could choose with any more accuracy than pure chance, showing the computer to be almost indistinguishable from a human in its writing ability.
BPL still has its limitations: character recognition is a relatively simple task, yet the software sometimes took several minutes to perform its analysis. However, the creators have faith in BPL, with Tenenbaum telling GeekWire, “If you want a system that can learn new words for the first time very quickly … we think you will be best off using the kind of approach we have been developing here.”
It is hoped that once the algorithm is refined it can assist in tasks like speech recognition for search engines, and the technique could become a staple of future artificial intelligence, where systems may be called upon to learn the many tasks that come easily to humans.
Toyota has a history of investing in research, having already committed $50 million this year. Now Akio Toyoda, president of Toyota Motor Corp, has announced that the company will set aside $1 billion to establish a new R&D operation, known as the Toyota Research Institute.
Based close to Stanford University in Silicon Valley, with a second site close to MIT, the Toyota Research Institute aims to work alongside researchers from the universities to further development in the fields of AI and robotics. The immediate assumption would be that a motor company’s stake in such an investment is about furthering the development of self-driving cars, something many car companies have striven to perfect in recent times. But Toyota is thinking on a wider scale, the aim being, in the words of the new institute’s leader Gill Pratt, to “bridge the gap between fundamental research and product development, particularly of life saving and life improving technologies.”
This leads to three goals: safety, accessibility and robotics. Safety, as expected, involves increasing the safety of those on the road, preventing accidents as much as possible. Accessibility is about bringing the freedom of travel that a car provides to those who are unable to drive for any reason. Lastly robotics, the most general of the three, aims to increase the quality of life for everyone, especially the old and infirm.
Toyota does not expect any instant returns on this investment; with the money set aside for at least the next five years, however, we could see new technologies from Toyota in more than just our cars. And even if all this sounds a lot like the work already underway at companies like Google, Toyota remains undeterred: despite others having a head start, the race for such technologies is just getting started. A fitting mindset for a company with a long history in motorsport.
Neil deGrasse Tyson is known for a lot of things. He has advertised science and technology to thousands and even found Krypton (okay, he found a planet roughly where Krypton would be and got it named after Superman’s home planet). This week, though, he presented a session at the Clinton Global Initiative’s annual meeting. He was joined by two speakers: Massachusetts Institute of Technology (MIT) professor and biomedical engineer Sangeeta Bhatia, and the founder and CEO of Code to Inspire, Fereshteh Forough. Among the topics they discussed was a school that is set to open in Afghanistan with a purpose.
Forough explained that they plan to open a programming lab targeted at women aged between 15 and 25, with the hope that it can be used to teach women in the Middle East to code and program in a safe place.
She hopes the school will be the first of many in Middle Eastern countries, while Bhatia suggested that changes closer to home could help increase the number of women taking part in computer science programs. This comes in the same week that Stanford reported it has 214 female students in its computer science major, a figure that would make it the university’s most popular major among women.
With more and more people feeling safe and confident in Computer Science, the number of people taking up the subject could soon see an even greater boost as more governments and schools make programming a part of their standard curriculum.
Science and tech complement each other when it comes to developing new ideas for a variety of applications. This is certainly evident in the health sector, which has seen a wide range of innovations that have in turn been implemented to save lives.
Portability is essential, and Harvard researchers are actively developing a machine that can filter pathogens from the blood, a newly proposed technique that could offer hope for faster and more effective treatment of sepsis. The machine is nearing the point at which it could be clinically tested on human control groups, a step crucial to the operation and further development of the device.
A prototype of the device has been tested on rats under lab conditions, and the results have so far been rather encouraging. Below is the current understanding of the machine:
“It has been found the device, which works in a similar way to the dialysis machines already used to filter the blood of patients with kidney failure, not only efficiently removes pathogenic material from the bloodstream but also works in concert with antibiotics to prevent a harmful immune response that can lead to organ malfunction and even death.”
The project, led by researchers at Harvard University’s Wyss Institute for Biologically Inspired Engineering, is part of an effort by the U.S. defence department to design a portable machine for treating soldiers in the field.
Sepsis is an incredibly dangerous, life-threatening condition triggered by an infection. There is currently no effective therapy, and the disorder kills millions of people around the world every year.
The device is potentially an exciting breakthrough in the search for a treatment for sepsis. What’s more exciting, and potentially revolutionary, is that it removes pathogens regardless of their identity. It does this by using a genetically engineered blood protein that can bind to more than 90 varieties of harmful microorganisms, including bacteria, fungi, viruses, and parasites.
Let’s hope the machine can be successfully developed with the aim of rolling it out to patients, not stockpiling it for US defence use only. These are exciting times to watch from afar as the boundaries of human health treatments are pushed to a whole new technical level.
I recently wrote an article about a new technique of using a 3D printer to build up layer upon layer of pre-existing materials to create “glass” based objects. The accompanying video looked stunning and the potential applications seemed endless. Well, now a team of MIT researchers has opened up a new frontier within 3D printing, expanding on the premise with new details concerning the ability to print optically transparent glass objects.
Printing glass objects is extremely complex and has been attempted by other research groups; the problem lies with the extremely high temperature required to melt the material. Quite a few development teams have used tiny particles of glass fused together at a lower temperature in a technique called sintering. Unfortunately, this technique renders such objects structurally weak and optically cloudy, eliminating two of glass’s most desirable attributes: strength and transparency.
MIT has therefore developed its own process, which retains those properties and produces printed glass objects that are both strong and fully transparent to light. The device used to print such objects relies on computer-assisted design software similar to that used by current 3D printers. The result is a machine that can print objects with little human interaction or indeed intervention; it’s stunning to imagine an autonomous production line in your living room.
In the present incarnation, molten glass is gathered from a conventional glassblowing kiln and loaded into a hopper in the top of the device. When completed, the finished piece must be cut away from the moving platform on which it is assembled; throughout, the glass is kept at around 1,900 degrees Fahrenheit, approximately 1,038 degrees Celsius.
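As a quick sanity check on that figure, the standard Fahrenheit-to-Celsius conversion gives:

```python
# Converting the kiln temperature quoted above from Fahrenheit to Celsius.
def fahrenheit_to_celsius(f: float) -> float:
    return (f - 32) * 5 / 9

print(round(fahrenheit_to_celsius(1900)))  # → 1038
```

That is well above the working range of the plastics used in ordinary desktop 3D printers, which is why glass printing needed its own purpose-built machine.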
The potential uses for such a technique are mind-blowing. Neri Oxman, an associate professor at the MIT Media Lab, envisions a future in which it would be possible to “consider the integration of structural and environmental building performance within a single integrated skin.” This notion could completely transform the manufacturing process.
A further expansion of this technique would be to add pressure to the system, either through a mechanical plunger or compressed gas, in the hope of producing a more uniform flow and thus a more uniform width to the extruded filament of glass.
There is a potential downside to such a revolutionary direction: in a world where houses are printed on an industrial scale and goods are quickly printed, the number of workers needed in production would ultimately fall. AI and new techniques are slowly making people redundant within an ever-expanding population; the quote below emphasises this further.
“Boston Consulting Group predicts that by 2025, up to a quarter of jobs will be replaced by either smart software or robots, while a study from Oxford University has suggested that 35% of existing UK jobs is at risk of automation in the next 20 years”.
What future will be printed for us humans?
Thank you MIT and BBC for providing us with this information.
The Massachusetts Institute of Technology (MIT) is known throughout the world for its technological prowess and skills. Producing proud graduates, it is known for being at the forefront of the information technology we all use on an everyday basis. Once again it has come first; this time, however, that is not good news.
Security Scorecard, an information security assessment company, conducted an assessment of several high-value universities and nearly gave MIT a failing grade. MIT scored low in several areas, including: hacker chatter (the number of times the school was mentioned in online forums used by hackers, and the amount of user details revealed on those forums), patching cadence (how quickly patches were applied to deal with the vulnerabilities reported during the scan’s period) and IP reputation (the amount of malware communication coming from IPs registered to the school).
MIT did score high in several areas, though, such as its web application security, the health of its DNS records and the quality of security at its endpoints. As with all things, security is not something that can be considered fixed and left alone; it should always be reviewed and updated.
Among the many annoyances of a tech lover’s life, which include overheating, constant patching, hacking and dropped connections, there is the matter of battery life, or the lack of it, considering your average smartphone is dead by the end of each day. Don’t get me started on your run-of-the-mill double-A battery; it was fine for a Game Boy, until you had to unreel a long wire with a plug on the end to continue playing, but not for today’s hi-tech toys.
Hopefully, an evolution is on the horizon after researchers at MIT and Samsung developed a new approach to one of the three basic components of batteries: the electrolyte. The premise involves developing a solid electrolyte instead of the liquid used in today’s most common rechargeables. Current batteries use a liquid organic solvent whose function is to transport charged particles from one of a battery’s two electrodes to the other during charging and discharging, and this liquid has been responsible for the overheating and fires that have caused high-profile disruption.
Another advantage of a solid-state electrolyte is the ability to limit degradation to near zero; such batteries could therefore last for “hundreds of thousands of cycles.” The researchers also state that these batteries provide a 20 to 30 percent improvement in power density, meaning the amount of power that can be stored in a given amount of space is increased.
By reducing these factors, the researchers are hopeful the technique will improve the efficiency and reduce the waste of the common battery, which in turn will benefit consumers. On a side note, it will be interesting to see how this plays out in practice and whether these batteries really do last for hundreds of thousands of cycles. Indefinite lifetimes in theory; let’s see what a Galaxy S6 makes of that.
Thank you MIT for providing us with this information.
Recent years have seen the technique of 3D printing evolve from a niche concept to a mainstream phenomenon, which in turn has opened up a whole new horizon for product manufacturing. If you thought this was exciting, then be prepared to be blown away as a new development centres on glass 3D printing.
MIT’s Mediated Matter Group has unveiled a first-of-its-kind optically transparent glass printing process which goes by the name of G3DP; if you are wondering, it stands for “Glass 3D Printing”. To make the process a reality, an additive manufacturing platform is used with dual heated chambers. The first, or upper, chamber is a “Kiln Cartridge” that operates at an intense 1900°F, while the lower chamber heats and then gradually cools the glass as it is printed.
This technique does not create glass but rather builds layer upon layer with pre-existing materials. Below is a video of the process in action; as you can see, it is compelling, mind-blowing and quite relaxing to watch, the building up of an object looking similar to the lava lamps that used to be popular.
The consistency looks like incredibly hot syrup being drizzled onto a sugary treat. Yep, I know, perhaps a poor observation, but I have included a screenshot below which kind of backs it up, sort of.
It’s intricate and opens up a whole new set of possibilities for everyday applications in the near future. For now, if you’re feeling stressed and would like a few moments to relax, then by all means watch the video; aside from being pretty amazing to view, it might also soothe you.
Thank You to Gizmodo for providing us with this information.
A Boston startup has developed a new device to help police and first responders assess a potentially dangerous area before having to enter. Bounce Imaging, founded by MIT alumni, has designed and built the tactical sphere – branded the Explorer – which is a soft ball containing six digital cameras and LED lights that, when rolled into a room or other enclosed space, can be used to scope out the location for prospective threats. The technology could be especially useful in hostage situations, allowing law enforcement to locate gunmen prior to entering.
According to MIT News, “When activated, the camera snaps photos from all lenses, a few times every second. Software uploads these disparate images to a mobile device and stitches them together rapidly into full panoramic images.”
“It basically gives a quick assessment of a dangerous situation,” said Francisco Aguilar, CEO of Bounce Imaging. The first run of Explorers was tested by police, and their feedback informed the direction of future models. “You want to make sure you deliver well for your first customer, so they recommend you to others,” Aguilar added.
Within its thick rubber exterior, the Explorer also features built-in temperature and carbon monoxide sensors, and it doubles as a Wi-Fi hotspot. Live footage from an Explorer can be viewed through a smartphone app.
A whole new level can be explored of what could be achievable when both science and tech meet; this time around, the geniuses at MIT have developed a way to fix bugs in source code by using a system to import functionality from other programs.
The system is called CodePhage and it works by analysing an application’s execution; by undertaking these procedures, it is able to characterise the types of security checks the application performs. As a result, the system can import checks from applications written in programming languages that differ from the one in which the program it’s repairing was written.
Once the fix has been imported, CodePhage performs a further layer of analysis to guarantee that the bug has been repaired. If that were not impressive enough, the system was tested on seven common open-source programs identified as having bugs, and CodePhage was able to patch the vulnerable code in an estimated two to 10 minutes per repair.
The ability to borrow code from one application in order to fix another could be revolutionary, and the time it takes in testing is phenomenal. Further experimentation and development is required, but it’s certainly an impressive start with the potential for a wide range of real-world applications.
Thank you to MIT for providing us with this information.
During our lifetime, we are exposed to dozens of viruses, but luckily our bodies are well equipped to deal with them. Our immune system produces antibodies to deal with foreign agents and make us well again. However, the antibodies tailored to these viruses tend to linger in our bodies for prolonged periods of time.
A team of researchers from Harvard, MIT and the Howard Hughes Medical Institute had the brilliant idea of making use of these lingering antibodies and came up with a new method of revealing which viruses your body has been exposed to. The method is called VirScan and it can reveal your entire viral history from just a drop of blood.
The method involves mixing a patient’s blood with a set of known human viruses. Each virus carries a unique protein signature that the antibodies are trained to identify and attack. This means that dropping a blood sample filled with antibodies in a pool of viruses will ‘activate’ the antibodies and tell the researchers which virus strains were targeted.
To test their theory, the researchers performed tests on 569 patients. The results revealed that, on average, we are exposed to 10 viral species, with some patients having been exposed to as many as 84. VirScan has also proved to be a cheap testing method, allowing doctors to perform a variety of tests at once for about $25.
The researchers say that VirScan is not only about identifying your past viruses. The method can also be used for early detection of viruses such as hepatitis C and HIV, as well as give doctors more insight into some viruses we don’t quite understand yet.
Thank you to Newsweek for providing us with this information.
Exploring the vastness of the planet’s oceans is definitely no easy task, but the experts over at the Massachusetts Institute of Technology are always trying to find easier and more efficient ways of getting the job done. In order to be able to map the ocean floor or keep tabs on ocean inhabitants, scientists use autonomous underwater vehicles, some of which will soon be able to plan their own missions without requiring any external input.
This goal is based on a system called Enterprise, which can be used to assign general objectives to AUVs, such as exploring certain locations within a given time frame. Once the objective is in place, the vehicles calculate a route and make a plan in order to fulfill the mission as quickly and as efficiently as possible. So what happens when the AUV cannot fulfill its goal? Well, a series of backup programs come into play, which help draw up new plans and even repair hardware failures.
Back in March, the Enterprise system was tested using an underwater glider that was sent off the coast of Australia. The test was definitely a success, as the glider managed to avoid unexpected collisions and accomplished its objective without issues. Aptly named after the famous Star Trek starship, Enterprise is based on three individual components, namely a captain, a navigator and an engineer. The system will be presented during the International Conference on Automated Planning and Scheduling event, which is set to take place in Israel in June.
What do you think about this innovative technology?
An extensive study by the Massachusetts Institute of Technology suggests that, despite unevidenced claims to the contrary, current solar panel technology is capable of delivering all the electricity a modern household could need. According to the 356-page report – The Future of Solar Energy – solar panels could, with the proper investment, deliver terawatts of electricity by 2050. MIT maintains that it is not the technology that is holding solar power back, but the investment, with researchers calling for increased funding from the US government.
“The recent shift of federal dollars for solar R&D away from fundamental research of this sort to focus on near-term cost reductions in c-Si technology should be reversed,” the report reads.
Richard Schmalensee, Professor Emeritus of Economics and Management at the MIT Sloan School of Management, said, “What the study shows is that our focus needs to shift toward new technologies and policies that have the potential to make solar a compelling economic option.”
“Massive expansion of solar generation worldwide by mid-century is likely a necessary component of any serious strategy to mitigate climate change,” reads the conclusion of the study. “Fortunately, the solar resource dwarfs current and projected future electricity demand. In recent years, solar costs have fallen substantially and installed capacity has grown very rapidly.”
Tesla CEO and SpaceX founder Elon Musk believes artificial intelligence (AI) is like summoning the demon, and that humanity should tread with caution. We all know the apocalyptic AI novels and movies, with robots and artificial life seizing power and dominating life on the planet. During a Q&A session at an MIT Aeronautics and Astronautics Centennial Symposium, Musk made it clear he wasn’t keen on leading us down such a path for artificial intelligence, warning us of the dangers AI could pose. We need to make sure “we don’t do something very foolish,” Musk said. “If I were to guess like what our biggest existential threat is, it’s probably that.” Musk continued, stating that in “all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”
One thing is for certain: Musk is a firm believer that AI is a real and potential threat, having warned us earlier in the year on Twitter that “we need to be super careful with AI. Potentially more dangerous than nukes.” Musk was so caught up in the question of AI that he actually failed to take in the next audience member’s question. He apologised with “sorry can you repeat the question, I was just sort of thinking about the AI thing for a second.” eTeknix readers, where do you stand on the topic of artificial intelligence?
Thanks to MIT for providing us with this information.
Remember the multi-talented Atlas robot created by Boston Dynamics? Well, it looks like it’s still being improved upon since its 2013 outing at the DARPA Robotics Challenge. While it remains one of the most capable humanoid robots on the planet, it’s still picking up new tricks, and after some recent upgrades a team at MIT has taught Atlas to carry objects of different weights in each hand.
Carrying two objects of different weights and/or sizes doesn’t sound overly impressive, but the way different objects behave has profound effects on movement, balance and more. Having this capability gives the Atlas robot a much wider range of real-world applications, such as carrying a heavy shield in one hand and a sawn-off shotgun in the other… oh wait, I mean things like construction or clearing out dangerous debris in disaster zones that would otherwise be unsafe for humans.
In the video below you can see Atlas dragging around a sizeable aluminium pillar with one hand. Feel free to mute the video if you’re not a fan of annoying high-pitched noises.
Nuclear power is, in theory, one of the safest methods of power generation. In the real world, however, it looks rather different, especially when the structures aren’t maintained or natural disasters hit, or both at once, as happened in Japan. A more immediate problem is the waste generated by these reactors and the thousands of years it takes to break down and stop being hazardous.
As it is now, we bury our nuclear waste underground, in mountains and deep under the sea, which isn’t very smart; it isn’t a solution that is bearable in the long run in any way. On a personal level, I’d really like to see them all shut down once and for all. We also hear one report after another about leaks in the storage facilities and radioactive material seeping into our water and food supplies.
To make this situation a bit more manageable, Hitachi, in partnership with MIT, the University of Michigan, and the University of California, Berkeley, is working on new reactor designs that use transuranic nuclear waste as fuel, leaving behind only short-lived radioactive elements.
Most people believe radioactive waste to be some green glowing goo, but that is far from reality. The real problem isn’t the “hot” stuff, as that will burn itself out in a matter of minutes, or even seconds. It’s the mildly radioactive elements with an atomic number greater than 92. These elements, such as plutonium, have half-lives measured in tens of thousands or even millions of years. That makes storing them a very long-term problem, and it is a particular difficulty in countries like the United States that don’t recycle transuranium elements through fuel reprocessing or fast-breeder reactors.
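To put those timescales in perspective, radioactive decay follows a simple half-life law: the fraction of a sample remaining after time t is 0.5 raised to the power t divided by the half-life. A quick sketch, using plutonium-239’s roughly 24,100-year half-life as the example, shows just how slowly transuranic waste fades:

```python
# Radioactive decay: the fraction of a sample remaining after t years
# is N(t)/N0 = 0.5 ** (t / half_life).
def fraction_remaining(t_years, half_life_years):
    return 0.5 ** (t_years / half_life_years)

PU239_HALF_LIFE = 24_100  # years, plutonium-239

# After one half-life, exactly half the sample is left.
print(fraction_remaining(24_100, PU239_HALF_LIFE))   # 0.5

# Even after 100,000 years, several percent of the plutonium remains,
# which is why storage has to be planned on geological timescales.
print(fraction_remaining(100_000, PU239_HALF_LIFE))
```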
What Hitachi and its partners are trying to do is to find ways to design next-generation reactors that can use the low-level transuranium elements as fuel, only leaving the high-level elements to quickly (relatively speaking) burn themselves out in no more than a century or so. The idea in itself isn’t new and some modular nuclear reactors already use nuclear waste as fuel. But what sets Hitachi apart is that it’s looking into designs based on current boiling-water reactors that are known as Resource-renewable Boiling Water Reactors (RBWR) and are being developed by Hitachi and Hitachi GE Nuclear Energy Ltd.
The idea is to develop a new fuel element design using refined nuclear waste products along with uranium that can be installed in a standard boiling water reactor. This would not only make such reactors more economical to build, but would also use decades of safety and operations experience to achieve efficient nuclear fission in transuranium elements. Hitachi says that it’s already carried out joint research with its partners starting in 2007 and is now concentrating on the next phase, which deals with more accurate analysis methods, as well as reactor safety and performance, with an eye toward practical application of what’s been learned.
Thank you to Hitachi for providing us with this information.
Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analysing minute vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass.
In other experiments, they extracted useful audio signals from videos of aluminium foil, the surface of a glass of water, and even the leaves of a potted plant. The researchers will present their findings in a paper at this year’s Siggraph, the premier computer graphics conference.
“When sound hits an object, it causes the object to vibrate,” says Abe Davis, a graduate student in electrical engineering and computer science at MIT and first author on the new paper. “The motion of this vibration creates a very subtle visual signal that’s usually invisible to the naked eye. People didn’t realize that this information was there.”
Joining Davis on the Siggraph paper are Frédo Durand and Bill Freeman, both MIT professors of computer science and engineering; Neal Wadhwa, a graduate student in Freeman’s group; Michael Rubinstein of Microsoft Research, who did his PhD with Freeman; and Gautham Mysore of Adobe Research.
Reconstructing audio from video requires that the frequency of the video samples, the number of frames of video captured per second, be higher than the frequency of the audio signal. In some of their experiments, the researchers used a high-speed camera that captured 2,000 to 6,000 frames per second. That’s much faster than the 60 frames per second possible with some smartphones, but well below the frame rates of the best commercial high-speed cameras, which can top 100,000 frames per second.
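This sampling requirement is just the Nyquist criterion: a camera recording F frames per second can only capture vibration frequencies up to F divided by 2. A quick back-of-the-envelope check with the frame rates mentioned above:

```python
# Nyquist criterion: a signal sampled at F samples (frames) per second
# can only directly represent frequencies up to F / 2.
def max_recoverable_hz(frames_per_second):
    return frames_per_second / 2

# The high-speed camera at 6,000 fps covers frequencies up to 3 kHz,
# which spans most of the energy in human speech.
print(max_recoverable_hz(6000))  # 3000.0

# A standard 60 fps camera nominally tops out at 30 Hz, far below the
# speech band, which is why a sensor quirk is needed to go further.
print(max_recoverable_hz(60))    # 30.0
```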
In other experiments, however, they used an ordinary digital camera. Because of a quirk in the design of most cameras’ sensors, the researchers were able to infer information about high-frequency vibrations even from video recorded at a standard 60 frames per second. While this audio reconstruction wasn’t as faithful as that with the high-speed camera, it may still be good enough to identify the gender of a speaker in a room; the number of speakers; and even, given accurate enough information about the acoustic properties of speakers’ voices, their identities.
The researchers’ technique has obvious applications in law enforcement and forensics, but Davis is more enthusiastic about the possibility of what he describes as a “new kind of imaging.”
“We’re recovering sounds from objects,” he says. “That gives us a lot of information about the sound that’s going on around the object, but it also gives us a lot of information about the object itself, because different objects are going to respond to sound in different ways.”
In ongoing work, the researchers have begun trying to determine the material and structural properties of objects from their visible response to short bursts of sound. In the experiments reported in the Siggraph paper, the researchers also measured the mechanical properties of the objects they were filming and determined that the motions they were measuring were about a tenth of a micrometer. That corresponds to five thousandths of a pixel in a close-up image, but from the change of a single pixel’s colour value over time, it’s possible to infer motions smaller than a pixel.
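As a rough illustration of why sub-pixel motion is recoverable at all (this is a simplified sketch, not the authors’ actual algorithm): near an intensity edge with gradient g, measured in intensity units per pixel, a shift of dx pixels changes a pixel’s value by roughly dI = g × dx, so the shift can be estimated as dx = dI / g.

```python
# Simplified sub-pixel motion estimate: near an intensity edge, a tiny
# spatial shift dx changes a pixel's value by approximately
# delta_intensity = gradient * dx, so we can invert the relation.
def subpixel_shift(delta_intensity, gradient):
    return delta_intensity / gradient

# A hypothetical gradient of 100 intensity units per pixel and a
# 0.5-unit change in one pixel's value implies a shift of 0.005 pixels,
# the same scale as the five-thousandths-of-a-pixel motions above.
print(subpixel_shift(0.5, 100))  # 0.005
```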
“This is new and refreshing. It’s the kind of stuff that no other group would do right now,” says Alexei Efros, an associate professor of electrical engineering and computer science at the University of California at Berkeley. “We’re scientists, and sometimes we watch these movies, like James Bond, and we think, ‘This is Hollywood theatrics. It’s not possible to do that. This is ridiculous.’ And suddenly, there you have it. This is totally out of some Hollywood thriller. You know that the killer has admitted his guilt because there’s surveillance footage of his potato chip bag vibrating.”
The results are certainly impressive and a little scary. In one example shown in a compilation video, a bag of chips is filmed from 15 feet away, through sound-proof glass. The reconstructed audio of someone reciting “Mary Had a Little Lamb” in the same room as the chips isn’t crystal clear. But the words being said are possible to decipher.
With all the cheap 3D printing methods out there, anyone can make anything with a little bit of imagination. This is how a few researchers from Harvard and MIT have created cheap, self-assembling complex robots that can transform from something that looks like a sheet of paper into a fully fledged walking robot.
The scientists have said that they got their inspiration from the ancient Japanese art of origami. This, combined with some children’s toy ideas and a Transformers fan’s imagination, allowed them to create robots that can build themselves.
The robots are said to be built out of hobby shop materials that cost around $100, with tiny motors and batteries powering them up. With this, a paper robot is said to transform into a four-legged automaton in just four minutes.
The self-assembling robots are said to benefit space exploration missions, dangerous environments, search-and-rescue missions and much more. Sam Felton, a robotics researcher at Harvard, stated that this technology could be as transformative as the 3D printer. Having robots which can be built in your home for as little as $100 is everyone’s geeky dream come true.
Thank you to Yahoo for providing us with this information. Video courtesy of YouTube.