Building a datacenter can be a costly and time-consuming endeavor, but the latest project from Microsoft may have solved quite a few problems all at once. Typically it can take two years to build a datacenter on land, and even then it needs to be built quite far from built-up residential areas and city centers, which leads to increased latency for users. Then you’ve also got the issue of cooling the datacenter, as all that hardware generates a lot of heat and the cost of cooling it can quickly become a headache. Microsoft’s solution to all of this? Build the datacenter at the bottom of the ocean!
While it may seem like a wacky idea, it’s pretty clever. The ocean water is a very efficient way of cooling the datacenter and it’s passive too, so there are no ongoing running costs for the cooling as there would be in a building that used air conditioning systems. Microsoft said that during their testing they observed no heating of the marine environment beyond a few inches from the device, so there should be no major impact from these units.
Microsoft’s Project ‘Leona Philpot’ was deployed about 1 kilometer off the Pacific coast, where it stayed for 105 days and worked perfectly. Having it deployed in water like this means that it can be located close to populated areas, reducing latency and not taking up valuable land space around cities. What’s better, for Microsoft at least, is that these units can be deployed in just 90 days, much quicker than land-based datacenters. Microsoft’s researchers are already working on a follow-up to Leona Philpot, where they will deploy a unit three times the size and perform further tests. It will be interesting to see how well these datacenters perform and whether they’ll become more popular than the current land-based ones, although we suspect that may not be for a long time.
Nearly 4 years after AMD first revealed their ARM plans, their first ARM-based Opteron chips are finally ready. Shipping today, the octa-core Opteron A1100 server SoC and platform is available for purchase from several partners in three SKUs. Despite such a late launch, the A1100 may yet find a home in the datacenter.
First off, AMD has done a lot of work to build a comprehensive ARM server SoC. The Opteron features up to eight 64-bit Cortex-A57 cores running at 2 GHz. This puts it roughly in the same space as Intel’s Silvermont Atoms clock for clock. The key is the 4MB of L2 cache and 8MB of L3 cache that connect to up to 128GB of DDR4 (DDR3 is limited to 64GB) over a 128-bit bus. This is all backed up by a Cortex-A5 co-processor to handle system control, compression and encryption. I/O is impressive as well, offering up to eight PCIe 3.0 lanes, 14 SATA3 ports and two 10GbE ports.
While the A1100 will undoubtedly blow its way past Intel Atoms and other ARM competitors as a server SoC, the biggest competition comes from Intel’s big Xeons. At $150, AMD is pricing their chip dangerously close to Intel’s big cores, which offer much higher performance and potentially better performance per watt. Still, AMD is offering a viable chip to cater to the microservices and cluster-based computing market. If AMD’s in-house K12 arrives on time and on performance, AMD stands a good chance of securing a strong foothold in this market.
BP have just opened up their brand new datacenter in Houston, which now houses the world’s largest supercomputer for commercial research and it is capable of punching out data at more than 2.2 petaflops.
The currently unnamed system is part of a five year investment program at BP that will cost $100 million. It is interesting that BP claim it is the fastest cluster in the business too, given that the Pangea ICE-X cluster that is used by rival gas firm Total Group clocks in at 2.3 petaflops, and that performance is due to be doubled by 2015!
With 536TB of memory, 23.5 petabytes of storage space and more than 67,000 CPUs, the BP rig is one incredibly powerful piece of hardware, obviously. It is based on HP’s Scalable System SL6500 server enclosures, typically used for cloud computing systems.
The bulk of the system is made up of 2,912 HP ProLiant SL230s Gen8 server nodes, each packing two eight-core Sandy Bridge Xeon E5-2600 V1 CPUs and 128GB of memory. On top of that come an extra 59 DL580 racks with Xeon E7 Westmere-EX CPUs running at 2.4 GHz, and 2,520 further ProLiant SL230s Gen8 nodes featuring ten-core Ivy Bridge-EP Xeon E5-2600 V2s at 3 GHz complete the set, bringing the total to 96,992 cores.
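The quoted core count is easy to sanity-check from the node figures above. A quick back-of-the-envelope calculation (assuming two sockets per SL230s node, as stated) suggests the 96,992 total covers only the two batches of SL230s nodes, with the 59 DL580 racks not included in that sum:

```python
# Cores from the first batch: 2,912 nodes x 2 sockets x 8-core E5-2600 V1
sandy_bridge_cores = 2912 * 2 * 8

# Cores from the second batch: 2,520 nodes x 2 sockets x 10-core E5-2600 V2
ivy_bridge_cores = 2520 * 2 * 10

total_cores = sandy_bridge_cores + ivy_bridge_cores
print(sandy_bridge_cores, ivy_bridge_cores, total_cores)  # 46592 50400 96992
```

The sum lands exactly on the 96,992 figure, which is why we read the headline total as excluding the Westmere-EX DL580 machines.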