Knights Landing Supercomputing Chipset Featured in Ninja Desktops

New hardware always seems to arrive with a cool codename, and Intel's latest Xeon Phi chips don't disappoint with the name Knights Landing (any Game of Thrones fans spot the possible reference?). While the chips weren't designed for desktops, the next generation of Colfax's Ninja desktops will make use of these supercomputing parts.

The new Knights Landing chips from Intel feature 72 cores (remember when you were excited to get a dual-core processor?). Intel is open in saying that only a limited number of workstations with the chip will become available this year; the chip was originally designed to boost servers and supercomputers around the world, but it could now be powering your full gaming experience.

Be warned, the extra power comes at a cost: prices on Colfax's website start at $4,983 (around £3,508) for the base configuration. Featuring a 240GB SSD, a 4TB hard drive and a staggering 96GB of DDR4 memory, the base machine could easily handle your daily YouTube and emailing, while loading it up with two 1.6TB SSDs and two 6TB hard drives jumps the price to $7,577. With everything liquid cooled and two gigabit Ethernet ports, you don't need to worry about overheating or slow network traffic.

Workstations are typically used for graphically intense work such as film editing, graphics manipulation or engineering applications, but with processing-heavy software such as virtual and augmented reality now arriving, people are looking to workstation-class computing power for everyday use.

Protein-Powered Biocomputer is Only as Big as a Book

Supercomputers are incredibly important, especially when it comes to research and performing extremely complex calculations. However, they are notorious for using a lot of energy and taking up a lot of space, not to mention the fact that they're very expensive. Now, sources indicate that a multi-university team of researchers has managed to create a powerful protein-powered biocomputer that is only about as big as a book. According to Lund University, this computer could become very useful for cryptography and "mathematical optimization," because biocomputers operate in parallel, as opposed to traditional computers, which work in sequence.
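To give a sense of why parallel exploration matters for this class of machine, here is a toy sketch in Python (purely illustrative, and not the researchers' method): a brute-force subset-sum search whose candidate combinations grow exponentially with problem size. This is exactly the kind of workload where exploring every path simultaneously pays off. The problem instance below is made up for illustration.

```python
from itertools import combinations

def subset_sums(values):
    """Enumerate every subset of `values` and record its sum.

    A sequential computer must walk these 2**n candidates one after
    another; a parallel (bio)computer can, in principle, explore each
    candidate path at the same time.
    """
    sums = {}
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            sums.setdefault(sum(combo), []).append(combo)
    return sums

# Hypothetical toy instance: which targets are reachable as subset sums?
values = [2, 5, 9]                      # 2**3 = 8 candidate subsets
table = subset_sums(values)
for target in range(17):
    hits = table.get(target, [])
    print(f"target {target:2d}: {'reachable ' + str(hits) if hits else 'not reachable'}")
```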

When it comes to power, this product needs less than one percent of the power used by a traditional transistor in order to do a calculation step. What’s arguably even more impressive is the fact that this product is so much smaller when compared to regular supercomputers such as IBM’s Watson, which includes 90 server modules. Even though its problem-solving capabilities are rather limited for now, the ATP-powered biocomputer has great potential for scalability, and I wouldn’t be surprised to see it perform much more complex tasks in the near future.

“Our approach has the potential to be general and to be developed further to enable the efficient encoding and solving of a wide range of large-scale problems.”

Intel to Deliver 72-Core Supercomputer Chip to Workstations

In an attempt to change the workstation game entirely, Intel intends to release its new 72-core supercomputer chip, code-named "Knights Landing", to workstation machines as well as supercomputers.

Larger and more expensive than a typical desktop, workstations are generally used for processing-intensive tasks such as high-quality computer-generated graphics, film editing, and computation and modelling in the science and engineering fields. Due to these professional applications, they also require more processing power, often using high-end desktop chips or even server chips like Intel's Xeon. The new Knights Landing chip is based on the Xeon Phi architecture, the current generation of which is used in systems such as Tianhe-2, the world's most powerful supercomputer.

The aim of bringing these powerful chips to workstations as well as supercomputers is an experiment in making supercomputing available to researchers without access to a full-scale supercomputer, and in allowing code intended for Xeon Phi-based supercomputers to be written and tested before deployment to the supercomputer itself. And while current workstations make use of discrete coprocessors alongside production CPUs in order to supplement their power, Knights Landing will run both the main processor and coprocessing units on one chip; working in concert, the system will be able to provide over 3 teraflops at peak.
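As a rough sanity check on that 3-teraflop figure, peak throughput for a chip like this is usually estimated as cores × clock speed × floating-point operations per core per cycle. The clock speed and per-core width below are assumptions chosen purely for illustration (Intel had not confirmed final clocks at the time), but they show how 72 cores plausibly reach that ballpark.

```python
# Back-of-the-envelope peak FLOPS estimate for a many-core chip.
# The clock and per-core FLOPs/cycle figures are assumptions for
# illustration only, not confirmed Knights Landing specifications.
cores = 72
clock_hz = 1.4e9                 # assumed ~1.4 GHz base clock
flops_per_core_per_cycle = 32    # assumed: two 512-bit FMA units on 64-bit floats

peak_flops = cores * clock_hz * flops_per_core_per_cycle
print(f"Estimated peak: {peak_flops / 1e12:.2f} teraflops (double precision)")
# With these assumptions: 72 * 1.4e9 * 32 ≈ 3.2 TFLOPS, in line with
# the "over 3 teraflops" figure quoted for the chip.
```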

While the idea of 72 cores on a processor may boggle the mind of most PC users used to between 2 and 8 cores, Knights Landing runs more like a modern graphics card, the top-end examples of which have many thousands of simpler, single-purpose cores. Further, Knights Landing packs 16GB of MCDRAM, which Intel claims has five times the bandwidth of consumer DDR4 RAM as well as lower power draw and higher density than GDDR5.

Intel will be handling initial distribution of these new workstations itself, hoping to extend sales of the workstations, and maybe even desktop variants, through other partner companies. These machines will be far more limited than typical PCs, however, due to the chip being highly integrated into the rest of the system and the OS and other tools being pre-loaded by Intel. And while this seems like it could bring a new face to desktop computing, Intel says that for now the rollout is more of an experiment than anything else. The ambition is certainly there, though, even after the company dropped its Larrabee chip back in 2010, and this is just the start: Intel already has plans for the successor to Knights Landing, Knights Hill.

Could a supercomputer in all of our homes be closer than we think? If Knights Landing succeeds in its experimental release, we could be seeing chips of this calibre on the consumer market all too soon; an exciting idea for sure!

Analysis Suggests Your Brain Could Be 30 Times Faster Than a Supercomputer

The human brain is fascinating, exciting and full of possibilities; its ability to create, form opinions and challenge the environment in which we live is truly exceptional. We might now be able to find answers as to just how powerful it is, thanks to a project designed to compare the brain with a supercomputer.

An artificial intelligence project devised by two PhD students, from the University of California, Berkeley and Carnegie Mellon University, will be the first of its kind to compare the human brain with the world's best supercomputers. The AI Impacts project aims to determine how fast the human brain sends signals through its internal network compared to a supercomputer.

The scholars compared the power of our brains with that of IBM's Sequoia supercomputer, which sits in the top three of the world's most powerful supercomputers. Sequoia has a benchmark result of 2.3 x 10^13 TEPS (Traversed Edges Per Second). AI Impacts estimates that the human brain should be at least as powerful as Sequoia at the lower limit, while at the upper estimate of 6.4 x 10^14 TEPS the brain could surpass Sequoia's speed by around 30 times.
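A quick bit of arithmetic (illustrative only) shows where the "30 times" figure comes from: dividing the upper estimate for the brain by Sequoia's benchmark gives a ratio of roughly 28, which rounds to the headline factor of 30.

```python
# Rough check of the "30x faster than Sequoia" claim using the quoted figures.
sequoia_teps = 2.3e13        # IBM Sequoia benchmark, traversed edges per second
brain_upper_teps = 6.4e14    # AI Impacts' upper estimate for the human brain

ratio = brain_upper_teps / sequoia_teps
print(f"Upper-estimate brain / Sequoia = {ratio:.1f}x")  # ≈ 27.8x, i.e. roughly 30 times
```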

It is a lot to take in, but equally it is potentially incredible: evolution has formed a quite amazing instrument, and it begs the question of what else we will find as research and technology advance with the aim of exploring ourselves. It is also interesting to wonder whether the wiring of a genius brain, think Stephen Hawking, is different to that of an average mind, whether the best sportspeople evolved differently with more advanced genes, or whether we are all capable of the same things given enough time spent learning a skill. It's compelling nonetheless.

Thank you aiimpacts for providing us with this information.

Image courtesy of fossbytes

BBC and Met Office Partnership Experiences a Stormy End

After the poor weather we've had recently, it seems that everyone's attention has turned to weather forecasts, scoping for a break of sunshine or even just a stop to the rain and thunderstorms. Today marks a day in history: the BBC is leaving the Met Office's weather forecasting services for an overseas option, either MeteoGroup from the Netherlands or MetService from New Zealand.

The BBC has been using the information supplied by the Met Office for 93 years, ever since the first bulletin back on November 12, 1922. The Met Office says it will still provide severe weather reports to the BBC, but all other weather information will be provided by another service.

It's not entirely clear why the split has come to pass, but it could be down to constant monetary restraints, with the BBC forced to source a cheaper provider to cut its overall spend. Losing the contract would be a devastating blow for the Met Office, which is currently undergoing a massive venture to produce one of the world's most powerful weather forecasting supercomputers, a build expected to be capable of around 16 petaflops and costing a grand total of £96 million.

Two statements have been released, one from each organisation, which show differing feelings towards the split:

BBC Spokesperson, “Our viewers get the highest standard of weather service and that won’t change. We are legally required to go through an open tender process and take forward the strongest bids to make sure we secure both the best possible service and value for money for the licence fee payer.”

Met Office operations director Steve Noyes, "Nobody knows Britain's weather better and, during our long relationship with the BBC, we've revolutionised weather communication to make it an integral part of British daily life. This is disappointing news, but we will be working to make sure that vital Met Office advice continues to be a part of BBC output. Ranked No 1 in the world for forecast accuracy, people trust our forecasts and warnings."

This could deal a massive blow to the Met Office and seems to be another version of UK outsourcing for cost effectiveness. What do you think of the split? Will a foreign company be able to accurately monitor the UK as well as the Met Office has? Let us know in the comments.

Thanks to ArsTechnica for providing this information.

China Bans Supercomputer and UAV Exports Based on National Security Concerns

The Chinese superpower seems to be a bit concerned about its latest tech getting into the wrong hands and has banned the export of unlicensed supercomputers and some UAV models.

The ban seems to forbid any company from exporting machines capable of more than eight teraflops of computing performance or more than 2 Gbps of network bandwidth. Taking a look at the Top 500 list of supercomputers, we see China's Tianhe-2 at the top, while the US occupies 2nd and 3rd place.

The UAV ban follows news of an Indian drone, suspected of using Chinese tech, being shot down in Pakistan. Pakistan has close ties with the US, and we all know how keen the US is on getting its hands on Chinese technology, so word of the drone seems to have worried some high-ranking officials enough to ban UAV exports from China too.

However, the UAV ban seems to affect only aircraft capable of flying for more than an hour or reaching altitudes of 50,000 feet, so there aren't many UAVs boasting those kinds of specs outside of military use.

There has been no official reason given for the ban in question, but speculation points to it being a response to the US blocking Intel's export of high-end x86 chips to China. The race for the best tech between the US and China has been running for ages now, and signs like this just keep cropping up. So where is all of this heading? It's anyone's guess, but we'd love to hear yours!

Thank you The Register for providing us with this information

Concept Turns Old Smartphone Parts into High Spec PC

Many of us now upgrade our phones at increasingly frequent intervals, often once every year or two. While many of us sell our old phones, the buyers of secondhand devices don't keep them for long either, and quite often they end up in landfill. So what do we do to solve this? Should we all buy modular phones like the concepts proposed by Google and others? Well, one company in Finland thinks that is indeed the way to go, but wants to take the idea even further.

The Puzzlecluster is a concept put forward by Circular Devices, a Finnish company that is developing the Puzzlephone, a modular smartphone similar to Google’s Project Ara. Their cluster concept would utilise the discarded modules used in the phones to make high spec PCs and as time progresses, supercomputers.

The idea is beautifully simple – the Puzzlecluster is essentially a chassis that can take old components from modular smartphones. It lets you keep adding old CPU modules, building up the power of the system with each one: a few modules make a usable PC, more turn it into a high-spec machine, and eventually enough of them could form a server or mini supercomputer.
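To illustrate the scaling idea (purely a sketch of the general principle, not Circular Devices' design; the function names below are made up), the snippet splits an embarrassingly parallel workload across however many CPU "modules" are available. Doubling the worker count roughly halves the wall-clock time for this kind of job.

```python
from multiprocessing import Pool
import time

def crunch(chunk):
    """Stand-in for a compute-heavy task handed to one CPU module."""
    return sum(i * i for i in range(chunk[0], chunk[1]))

def run_on_modules(n_modules, total=8_000_000):
    """Split one big job across n_modules workers and time it."""
    step = total // n_modules
    chunks = [(i * step, (i + 1) * step) for i in range(n_modules)]
    start = time.perf_counter()
    with Pool(n_modules) as pool:
        result = sum(pool.map(crunch, chunks))
    return result, time.perf_counter() - start

if __name__ == "__main__":
    # Pretend each extra smartphone module adds one more worker.
    for modules in (1, 2, 4):
        total, elapsed = run_on_modules(modules)
        print(f"{modules} module(s): result={total}, {elapsed:.2f}s")
```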

It’s an interesting concept, one that will hopefully someday come to fruition.

Source: The Verge

Access IBM’s Watson Supercomputer for Free

IBM has opened up its Watson supercomputing platform to everybody for free. The decision to open up a public beta for the data analytics platform means that we now all have partial access to a supercomputer, anytime, anywhere.

Using what is described as “the most powerful natural-language supercomputer in the world”, you can upload a dataset and let Watson analyse it all in incredibly accurate detail – producing correlations, predictive analyses, graphs, charts and even infographics that represent your data.
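For a flavour of the kind of output described, here is a tiny, generic sketch of computing correlations on an uploaded dataset using pandas. This has nothing to do with Watson's actual interface (which you use through IBM's website), and the dataset is invented; Watson's service performs far richer, natural-language-driven versions of this sort of analysis.

```python
import pandas as pd

# Hypothetical dataset a user might upload for analysis.
data = pd.DataFrame({
    "ad_spend":    [1.0, 2.5, 3.0, 4.2, 5.1, 6.3],
    "site_visits": [110, 260, 300, 415, 500, 640],
    "sales":       [12, 30, 33, 48, 55, 70],
})

# Pairwise correlations: the sort of relationship-spotting a data
# analytics platform surfaces automatically from an uploaded table.
print(data.corr().round(2))
```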

It’s a very interesting concept and is probably the first time anybody and everybody has been able to access a supercomputer for free. You can access Watson at IBM’s website here, where you will be required to set up a free account.

I know what some of you are wondering. Can it run Crysis?

Source: Gizmodo

U.S. Department of Energy to Spend $425 Million on Supercomputers

The US Government's Department of Energy has announced it is to invest $425 million in building two supercomputers which, when complete, will be the fastest computers in the world. The ultimate aim is to support scientific research projects, including work on nuclear weapons.

The two computers, named Summit and Sierra, will be installed at Oak Ridge National Laboratory, Tennessee, and Lawrence Livermore National Laboratory in California, respectively.

NVIDIA, IBM, and Mellanox have provided the components for use in the two computers. Summit will run at 150 petaflops, with Sierra operating at 100 petaflops. For comparison, the world’s current fastest supercomputer, the Chinese Tianhe-2, runs at 55 petaflops.

An extra $100 million will go to fund research into extreme-scale computing, under the project name FastForward 2.

Source: Reuters

Disney Renders Its New CGI Film on a 55,000-Core Supercomputer

Prepare your CPUs – the new bad boy in computer animation processing has arrived at Disney's studio: a 55,000-core monster supercomputer that's pumping out over 400,000 high-level computations per day. What's it currently working on, you ask? Disney's new animated film Big Hero 6. The film tells the tale of a young boy and his soft robot, and is said to be a mishmash of varying superpowers (think Marvel or DC Comics) crossed with a technology-infused fantasy city called "San Fransokyo".

The visual workings of the film are staggering: the movie's been created from the ground up using Disney's new rendering engine, known as Hyperion. Hyperion accurately analyses and calculates lighting from multiple sources of indirect light, allowing artists and graphic designers to create scenes with more realistic and creative atmospheres than ever before. Disney has offered a few looks into the scenes and lighting being rendered, and we have to say it's absolutely stunning. To put the power of Hyperion into perspective, Disney Animation CTO Andy Hendrickson said that it "could render Tangled from scratch every 10 days." The fantasy city of San Fransokyo will feature over 260,000 individually rendered trees, 83,000 buildings, 215,000 streetlights and over 100,000 vehicles, on top of the literally thousands of crowd extras poured into the animation through a special character-generation tool called Denizen.
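Global-illumination renderers like Hyperion essentially estimate how much light arrives at each surface point after bouncing around the scene. As a purely illustrative toy (in no way Disney's actual code), the sketch below uses Monte Carlo sampling over a hemisphere to estimate the irradiance at a point under a uniform sky; for constant incoming radiance L the exact answer is pi * L, so the estimate can be checked.

```python
import math
import random

def estimate_irradiance(radiance=1.0, samples=100_000):
    """Monte Carlo estimate of irradiance E = integral of L*cos(theta) over a hemisphere.

    Uniform hemisphere sampling: pdf = 1 / (2*pi), so each sample
    contributes L * cos(theta) / pdf, and we average the contributions.
    """
    total = 0.0
    for _ in range(samples):
        cos_theta = random.random()          # uniform hemisphere sampling gives cos(theta) ~ U[0, 1)
        total += radiance * cos_theta * (2.0 * math.pi)
    return total / samples

L = 1.0
estimate = estimate_irradiance(L)
print(f"Monte Carlo estimate: {estimate:.4f}, exact value: {math.pi * L:.4f}")
```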

The funny thing about it all is that this envelope-pushing technology isn't the kind of thing that will ever excite your typical moviegoer. The incredible detail and calculation Hyperion delivers is simply what people have come to expect from the relentless pace of technological advancement; with that said, the hardware and software powering it are as staggering and mind-blowing as ever. Even with the backbone of the film being rendered on a 55,000-core supercomputer, Hendrickson expects that most of the fancy effects seen on screen will be taken for granted by audiences.

Thanks to Engadget for providing us with this information.

Wonder What It Is Like to Unbox a Supercomputer?

There isn't a much better feeling than receiving that new product and unpacking it, digging through bubble wrap and packing peanuts for every little thing, and then finally, slowly, peeling off the protective plastic covers. But have you ever wondered what it would be like to unbox a supercomputer? Apparently some reporters did, and we got a lot of photos.

Unboxing a supercomputer isn't much different than unboxing any other computer, it's just on a much bigger scale. Crack open the crates, connect the cabinets and voila, your supercomputer is running.

The Pawsey Supercomputing Centre in Australia recently received a Cray XC30, dubbed Magnus2, and the unboxing was covered by reporters from ITNews. Each of the 7 new cabinets weighs about 1.4 tonnes and can hold up to 384 CPUs, cracking a whopping 99 teraflops. The new system features over 35,000 Xeon cores and a peak performance of around 1 petaflop.

The University of Arizona recently got its El Gato supercomputer, and the unboxing was covered as well. It's composed of IBM x86 iDataPlex servers and NVIDIA Tesla K20 accelerators, and El Gato also came fully built, shrink-wrapped but otherwise ready to go. With a peak performance of 145 teraflops, El Gato is one of the fastest supercomputers located at a US university.

Just as in our consumer world, not everything in the server world is plug and play like the above. The video below shows a time-lapse of engineers at the DoE's Oak Ridge National Laboratory (ORNL) manually upgrading the Jaguar supercomputer to become the Titan supercomputer. With its 560,640 combined CPU and Tesla GPU cores, it's pumping out a massive 17 petaflops, yet it still isn't the world's fastest.

[youtube]https://www.youtube.com/watch?v=S8Y77efFW-I[/youtube]

So there you have it: it isn't much more difficult to unbox and set up a supercomputer than, for example, any random Dell PC.

Thank you Extremetech for providing us with this information.

Image and video courtesy of Extremetech.

US to Simulate Nuke Tests Using Cray Supercomputers

Supercomputer manufacturer Cray is set to help the US guard its arsenal of nukes after winning a $174 million contract to provide a new supercomputer to the National Nuclear Security Administration (NNSA).

The current supercomputer, a Cray XE6 called "Cielo", is said to have 107,152 cores and a theoretical peak performance of a little over 1028 TFlops. The new supercomputer, a Cray XC-series machine going by the name of "Trinity", will be connected to the company's Sonexion storage at Los Alamos and is expected to provide 8x the power of the current XE6. The new supercomputer is said to be a joint project between "the New Mexico Alliance for Computing at Extreme Scale (ACES) at the Los Alamos National Laboratory and Sandia National Laboratories as part of the NNSA Advanced Simulation and Computing Program (ASC)".

Trinity is said to be based on Intel's Haswell Xeon processors and the upcoming "Knights Landing" Xeon Phi processors, boasting an 82 PB storage capacity and a design throughput of 1.7 TB per second. Its main purpose is to test the nuke arsenal's safety, security, reliability and performance, in addition to conducting simulations of the US nuke stockpile in order to understand the weapons' integrity as they age, while avoiding the need for underground detonations of devices.
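To put those storage figures in perspective (a back-of-the-envelope illustration only, using the numbers quoted above), writing the full 82 PB at the 1.7 TB/s design throughput would take on the order of half a day:

```python
# Rough time to write Trinity's full quoted capacity at its design throughput.
capacity_tb = 82_000          # 82 PB expressed in terabytes
throughput_tb_per_s = 1.7     # quoted design throughput

seconds = capacity_tb / throughput_tb_per_s
print(f"{seconds:,.0f} s ≈ {seconds / 3600:.1f} hours")  # ≈ 48,235 s ≈ 13.4 hours
```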

Thank you The Register for providing us with this information

NVIDIA Tesla GPUs and Intel Xeon Phi Coprocessors seen in Cray XC30 Supercomputers

Cray Inc. today announced that the company has expanded its support for accelerators and coprocessors and is now selling the XC30 series of supercomputers with NVIDIA Tesla K20X GPU accelerators and Intel Xeon Phi coprocessors. Cray says this marks the realisation of its Adaptive Supercomputing vision, which focuses on delivering innovative systems that integrate diverse technologies like multi-core and many-core processing into a unified architecture.

“We designed the Cray XC30 supercomputer to be the realization of our Adaptive Supercomputing vision, which is about providing customers with a powerful, flexible tool for solving a multidisciplinary array of computing challenges,” said Cray’s senior vice president of high performance computing systems, Peg Williams. “Integrating diverse accelerator and coprocessor technologies into our XC30 systems gives our customers a variety of processing options for their demanding computational needs. Equally as important, Cray XC30 supercomputers feature an innovative software environment that allows our customers to optimize their use of diverse processing options for their unique applications.”

The Cray XC30 systems bear the code-name "Cascade" and are Cray's most advanced HPC supercomputers, engineered to meet the performance challenges of today's HPC users. The Cray XC30 and Cray XC30-AC supercomputers feature the unique Aries system interconnect, a Dragonfly network topology that frees applications from locality constraints, innovative cooling systems to lower customers' total cost of ownership, the next generation of the scalable, high-performance Cray Linux Environment supporting a wide range of ISV applications, Cray's HPC-optimised programming environment, and the ability to handle a wide variety of processor types.

“The Intel Xeon Phi coprocessor is designed for high-density computing and highly parallel processing while offering important efficiencies in programmability for the software developer community,” said Rajeeb Hazra, Intel Vice President & General Manager, Technical Computing Group. “The performance and programmability of Intel Xeon Phi coprocessors, along with the Intel Xeon processors, will enable a powerful and energy-efficient Cray XC30 supercomputer that will be broadly applicable for scientists, engineers, and researchers in achieving their breakthrough innovations.”

More about the Cray XC30 supercomputers can be found on the Cray website here.

Thank you TechPowerUp for providing us with this information.

Image courtesy of Cray Inc.