Thecus Expands Windows Storage Server Line-up with W2810PRO

Thecus has had a lot of success with its Windows-based NAS devices, and now the company has expanded that series with the new W2810PRO NAS. The new unit comes equipped with an Intel Celeron N3150 quad-core processor, the same chip we recently saw the N2810PRO launch with, along with 4GB of DDR3 memory and an SSD as the boot drive.

Linux-based NAS devices run more out of memory than Windows does, so the use of a real SSD rather than a flash module as the boot drive makes a huge difference on a system like this. The quad-core processor is powerful enough to drive 4K experiences, and since this is a Windows unit, it comes with a familiar interface. Integrating Microsoft services such as Office 365 and Azure Cloud Services is as easy as it gets on these devices, as they run Windows Storage Server 2012 R2 Essentials as the operating system.

“Adding to the success of the original Windows Storage Server 2012 solutions from Thecus, the W2810PRO provides increased power and speed performance to users,” said Florence Shih, CEO of Thecus Technology Corp. “This new Thecus Windows Storage Server is an ideal solution for individuals and businesses that are comfortable and proficient with the Windows platform to safely protect and effortlessly manage their valuable data.”

The 60GB boot SSD is more than enough for Windows itself and plenty of apps, while the two drive bays give you up to 16TB of raw storage capacity when using 8TB drives. That is a lot of power and storage in a small form factor. Windows Storage Server Essentials offers a host of features and functionality for organizations of all sizes, including Data Deduplication and Storage Spaces for efficiency and protection, native support for Active Directory, and remote access through the P2P application Orbweb.me. Users can further customize the W2810PRO to specific business needs, as it supports third-party Windows Server add-ins.

Connection-wise, you get both an HDMI port and DisplayPort for direct usage, an S/PDIF for an audio connection, three USB 3.0 ports, and dual Gigabit Ethernet network connectivity.

Feature Highlights:

  • Top Storage for 1-50 Employee Business
  • Office 365 & Microsoft Azure: Cloud Service Integration
  • Active Directory Domain Services: Scalable, Secure User Management
  • Data Deduplication: Performance Optimization
  • Enhanced Boot Drive: Embedded SSD
  • Windows License Included
  • Intel Celeron N3150 Quad-Core Processor
  • 4GB DDR3 RAM

The new Thecus W2810PRO NAS is available starting today, and it is a worthwhile update: it packs more than double the power of the previous model, the W2000+, which was based on a dual-core Intel Atom processor.

Welcome the 13TB SSD Server Named Olive

When it comes to storage, SSDs are gaining traction at an incredible speed. With SSD prices sinking closer to those of HDDs, there are very few reasons to pass up the large speed advantage that upgrading your drives can bring. Fixstars wants to take that advantage to a whole new market with a 13TB SSD called Olive.

Fixstars' Olive isn't just a 13TB SSD; it essentially packs the abilities of a server into a 2.5-inch drive. This comes courtesy of an FPGA (field-programmable gate array), which lets you reprogram the Olive to perform specific tasks. With the ability to share and distribute movies, or even collect information from a selection of devices, the Olive offers far more than just a lot of storage space.

The downside with FPGAs is that while they can be fast, you have to program tasks directly onto the hardware, a skill that isn't very common in the industry, and the device is limited to just the tasks embedded within the FPGA. Thanks to its size, though, the Olive could be used for many things you might normally use a server for, such as quickly backing up a system or even acting as a stand-in server should one of yours fail.

Currently, the Olive only runs 32-bit Linux, with 512MB of RAM and an ARM Cortex-A9 CPU, although it has been suggested that it could be modified to speed up the Olive and enable 64-bit computing in the future.

Still experimental, the Olive doesn't have a price yet while Fixstars evaluates feedback and decides whether there is a market for something so unique.

While The Last One Was A Hoax, 123-reg Has Actually Deleted People’s Websites

A few days ago we reported on the fact that a company had apparently deleted itself, news which later turned out to be a hoax as part of a bad marketing scheme. For people who used 123-reg, a website hosting company in the UK, the joke may be on them as the company has actually deleted people’s websites.

123-reg, which has around 800,000 customers within the UK and hosts around 1.7 million sites, said that, much like the hoax, an error was made during “maintenance”, resulting in data on one of its servers being deleted.

The firm issued a statement saying that it was working on “restoring … packages using data recovery tools”, a process that is slow and not always effective, as many noted in response to the previous hoax. 123-reg has recommended that those with backups of their sites use them to rebuild, as the company itself did not keep backups of customers' sites.

While the fault is reported to have affected only “67 out of 115,000” servers, it was caused by an automated script. An audit of 123-reg's scripts is now being conducted, and any deletion will require human approval in the future, something that I'm sure the many companies that have lost business because of this blunder find less than comforting.

Play Counter-Strike 1.6 on Android!

When it comes to technology, we are often told how quickly it is advancing, both in power and in how deeply we adapt it to our lifestyles. One popular example is the games we play, which have gone from 8-bit dungeon crawlers to giant adventures across stars and planets in virtual reality. One game many will remember, though, is the classic Counter-Strike 1.6, and you can now play it on your phone.

Counter-Strike 1.6 was originally a mod of the original Half-Life game, introducing many to competitive multiplayer gameplay for the first time while others enjoyed the mods that let you turn the battle into a paintball party or even the custom maps like the Simpsons neighbourhood. Alibek Omarov apparently didn’t just want to enjoy that same feeling on his phone but wanted the full experience, original game and all.

Featured on his GitHub account under the handle a1batross, the CS16Client project lets you install and play the original game on your phone, servers and all. Got a free minute on the way to work? Why not stop a terrorist bomb threat or see if you've still got the skill to pull off a 360 no-scope?

While controls seem a little complicated and clunky, it shouldn’t be hard to connect one of the many controller adaptors that you can now get for your phone to turn your experience into a full-on classic gaming experience.

Library Management Software May Be Open to Ransomware Attacks

When it comes to software, schools are either on top of it or a little behind, mostly because of the budgets they have to work with. One piece of software that is often ignored by schools, which tend to operate on an “if it isn't broken, we don't need to replace it” policy, is library management software. Anyone running Follett's older library management software may want to change that approach and update soon, as it has been revealed that the software may be open to ransomware attacks.

The vulnerability was discovered by Cisco's Talos group, which found that attackers could remotely install backdoors and ransomware on the JBoss web server component of the library management system, leaving users with either a large bill or no access to their library's information.

Follett has not sat idly by: it has already released a patch to fix the flaws that expose the system, and the patch even picks up any unofficial files that may have been snuck on to compromise the servers. Working with the Talos group, Follett is seeking to inform customers about the security risk and how to address the issue, potentially removing the threat before someone manages to make any money off your local school's library.

Knights Landing Supercomputing Chipset Featured in Ninja Desktops

Whenever new hardware is released, it always comes with a cool name, and Intel's latest Xeon Phi chips don't disappoint with the name Knights Landing (any Game of Thrones fans spot the possible reference?). While the chips are not designed for desktops, the next generation of Colfax's Ninja desktops will make use of these supercomputing parts.

The new Knights Landing chips from Intel feature 72 cores; remember when you were excited to get a dual-core processor? Intel is open in saying that only a limited number of workstations with the chip will become available this year. The chip was originally designed to boost servers and supercomputers around the world, but now it could be powering your full gaming experience.

Be warned: the extra power comes at a cost, with prices on Colfax's website starting at $4,983 (around £3,508) for the base configuration. Featuring a 240GB SSD, a 4TB hard drive, and a staggering 96GB of DDR4 memory, the base machine could easily handle your daily YouTube and email, while loading it up with two 1.6TB SSDs and two 6TB hard drives jumps the price to $7,577. With everything liquid-cooled and two Gigabit Ethernet ports, you don't need to worry about overheating or slow network traffic.

Workstations are typically used for graphically intense work such as film editing, graphics manipulation, or engineering applications, but with processing-heavy software arriving in the form of virtual and augmented reality, people are looking at workstation-class computing power for everyday use.

The Company Deleted by One Line of Code Was a Hoax!

Yesterday we reported that a man had mistakenly deleted his entire company using just one line of faulty code. Now it turns out that the entire thing appears to have been made up by the poster as a publicity stunt.

Marco Marsala posted on the Server Fault forums asking for help earlier this week, explaining that his careless use of the “rm -rf” command in Unix had caused him to accidentally delete the contents of all of his servers, including the backups. The story became incredibly popular online and was reported by a number of major news sources, as well as garnering a large number of responses to his original post, ranging from sympathy and pity to derision.

On Friday, the post was deleted by Stack Overflow, the parent forum of Server Fault, and a later post by Sven, a Server Fault moderator, brought to light that the story was, in fact, a hoax. Sven pointed to an Italian news report detailing that the story was part of a marketing stunt by Marsala to promote his company and gain visibility. Marsala told the paper that the whole thing was “just a joke”. A statement by Stack Overflow revealed they did not find it quite as funny, saying, “The moderators on Server Fault have been in contact with the author about this, and as you can imagine, they’re not particularly amused by it.”

In many ways, it is surprising how many people believed the story, especially on a forum populated almost entirely by those knowledgeable in technology. It is yet to be decided how Server Fault will deal with the hoax topic, with Sven currently allowing the community to decide its fate.

One Line of Code Accidentally Deletes Entire Company

As far as code mistakes go, few can claim that their careless coding practices caused the deletion of their entire company. Marco Marsala ran a small web hosting company that carried the websites of a number of clients until he unwittingly instructed the servers to delete their entire contents, effectively wiping out his business and the websites of his clients.

In response to the tragedy that befell his servers, Marsala took to the Server Fault forum to explain his plight and perhaps hope that some of the forum’s denizens would be able to help him with his predicament. Instead of help, most of the advice he received simply informed him that the chance he had forever deleted his company was high and his code had completely destroyed both his own data and that of his clients.

“I run a small hosting provider with more or less 1535 customers and I use Ansible to automate some operations to be run on all servers. Last night I accidentally ran, on all servers, a Bash script with a rm -rf {foo}/{bar} with those variables undefined due to a bug in the code above this line.”

The reason Marsala lost all of his data stems from his use of the “rm -rf” command, which breaks down as “rm”, which removes files; “-r”, which deletes recursively into every subfolder; and “-f” for force, meaning no warning is given. Because the two variables surrounding the / were empty, the command deleted from the root directory, essentially wiping out everything on the machine. To make matters worse, while he had taken backups, the backup devices had been mounted just before the erroneous script ran, so they were wiped as well.
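The failure mode is easy to reproduce safely. The sketch below is an illustration, not Marsala's actual script; the variable names foo and bar simply mirror his post, and it prints the rm command instead of executing it. It also shows how bash's `set -u` and a root-directory guard would have stopped the disaster:

```shell
#!/bin/bash
# Safe demonstration: we echo the rm command rather than running it.
# The variables foo and bar are deliberately unset, as in the bug.
unset foo bar

unguarded() {
  # Unset variables silently expand to empty strings, so
  # "rm -rf $foo/$bar" collapses to "rm -rf /".
  echo "rm -rf $foo/$bar"
}

guarded() {
  # 'set -u' makes the shell abort on any unset variable, and
  # ${var:?msg} fails with a clear error message; the final test
  # refuses to operate on the root directory no matter what.
  set -u
  target="${foo:?foo is not set}/${bar:?bar is not set}"
  [ "$target" != "/" ] || { echo "refusing to delete /" >&2; return 1; }
  echo "rm -rf $target"
}

unguarded                                    # prints: rm -rf /
( guarded ) 2>/dev/null || echo "guarded run refused"
```

Run in a subshell, the guarded version aborts before any destructive command is even assembled, which is exactly the kind of safety net the erroneous Ansible-driven script lacked.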

Responses to Marsala’s post ranged from pity to insulting, however, all agreed that the data on the servers was almost certainly gone for good with no recovery. Most focused on pointing out the mistakes he had made, instead of being able to offer him any help, “This is not bad luck: it’s astonishingly bad design reinforced by complete carelessness” wrote user Massimo.

For Marsala, there doesn’t look to be a good end to this story. There are very few options open to him that would allow the data to be recovered and even those, such as contacting professional data recovery experts, are expensive, time-consuming and have no guarantee of success. This should serve as a cautionary tale for those wishing to start their own online businesses to be very careful over what you run on your servers and the care you take of your backups.

Intel’s New Broadwell Xeon Chips Will Have 22 Cores

In a growing trend for Intel's server-targeted chips, this Thursday the company released the newest Xeon E5-2600 processors, which contain as many as 22 cores.

The move to developing chips with an ever-increasing number of cores allows Intel to cater to the needs of cloud and mobile service providers, whose servers make full use of multiple cores and processing threads to stream more video and applications from a single server simultaneously. The chips also provide benefits for workstation use: combined with a powerful graphics processor, they can assist in the development of cutting-edge, high-quality experiences such as virtual reality applications and 4K video editing.

The Xeon E5-2600 v4 lineup includes 27 different chips, all based on the new Broadwell microarchitecture. Broadwell offers a number of improvements that allow these new chips to deliver as much as a 5% increase in speed over previous-generation Haswell chips. According to tests run by Dell using SAP benchmarks on a Linux OS, the new chips were as much as 28% faster than their predecessors. The main issue with chips packing so many cores is cooling; as a result, the frequency of the top-line 22-core Xeon E5-2699 v4 has had to be set to 2.2GHz, at which it still draws 145 watts of power.

Of course, these chips aren't for the average consumer, with prices peaking at $4,115 for the 22-core model. For its largest customers, Intel is even willing to deliver customized versions of these new Xeons, which we can be sure will carry an even heftier price tag.

Synology Officially Releases DiskStation Manager (DSM) 6.0

Synology has released the final version of DiskStation Manager (DSM) 6.0 after six months of beta programs, and it was well worth the wait. DSM 6.0 is a major leap in the development of DSM, introducing major enhancements in every area, including virtualisation, cloud solutions, collaboration, security, multimedia, and accessibility. Loyal readers will also have seen our review of DSM 6.0 just a few weeks ago, where we had a go with it ahead of time.

There are so many improvements in DSM 6.0 that it is hard to mention them all in a post like this, but I'll try to bring you the highlights. Should you want more detail on the individual new functions before you upgrade, you can visit the official DSM 6.0 minisite. One of the awesome new functions that you'll barely notice except through its usability is the powerful new content indexing service. It lets you quickly reach all your data with a full content search covering more than 700 file formats, including office documents and metadata from your media files. With this, you'll quickly find the files you're looking for, no matter where on your NAS they're located.

The Cloud Station Suite also makes file syncing a lot easier, no matter what device you're using. It is now all in one place and easy to set up and configure. Whether you just want backups to your personal cloud or syncing with a host of cloud storage services too, the Cloud Station Suite makes it easy.

A lot of the really new features in DSM 6.0 focus on enterprise users, but home users were in no way forgotten: DSM 6.0 puts a lot of work into optimizing the multimedia experience. The redesigned Video Station with offline transcoding allows you to watch movies anytime, anywhere. Multimedia mobile apps support multiple devices, including the new Apple TV, Apple Watch, and Windows 10. Media storage and access are among the main reasons for home users to get a NAS, and with these improvements you can be sure of a smooth experience, anytime, anywhere.

The advanced collaboration tools are equally useful for home and enterprise users alike. Whether you are calculating prices for customers or keeping track of your household expenses, you can do it all on your own NAS without the need for any local software. Create spreadsheets or use the advanced Note Station yourself, or share, edit, and collaborate with friends, family, and coworkers.

The concept of electronic mail is as old as the internet itself. DiskStation Manager now also comes with the all-new MailPlus and MailPlus Server packages, which allow you to set up a secure and reliable private mail server as well as use a modern email client for receiving and sending messages. Again, everything runs on your own server, so you remain in full control and don't need to rely on third-party services. While this is mostly relevant to enterprise users, there are quite a few enthusiasts, such as myself, who could benefit greatly from this system at home too.

DSM 6.0 also offers much greater support for SSD caching, ensuring a significant performance boost for those who need more than average. This is mainly aimed at enterprise users, as is the newly added support for shared folders with over one petabyte of storage space. The Btrfs file system is now supported on more NAS models than before, which in itself adds a range of great features such as data compression and data scrubbing.

DSM 6.0 also introduces Snapshot Replication to Synology's NAS units, offering near-continuous data protection as well as multi-site replication for even better protection of your files. In addition, Synology's Hyper Backup package can now perform multi-version backups to all types of destinations.

Consolidating physical servers with virtualization technology can increase server utilization and reduce business operating costs, and it's also really cool. DSM 6.0 introduces two new features here, Docker DSM and Virtual DSM, that enable users to build a reliable, multi-tenant environment on their Synology NAS.

Virtual DSM allows you to deploy multiple virtual instances of DiskStation Manager on the same unit. You can easily live-migrate virtual machines to another Synology NAS and test out DSM upgrades in isolated virtual machines before you actually install them, so there is no need to worry about downtime when upgrading. It also adds another layer of security, as it protects the physical machine from being affected if a virtual machine gets attacked.

Docker DSM is a lightweight virtualization system with data protection where you don't have to give up system performance. It is containerized and runs on a Btrfs shared folder with little performance impact, yet provides a lot of benefits. Docker DSM only requires 256MB of memory per instance, where Virtual DSM requires 1GB or more each, and the only real functional difference is whether you need the ability to use iSCSI LUNs and targets.


So, it might be time to upgrade your Synology NAS. You can find a full list of supported models for each function and check out the full software specifications too if you want to know more. As a user who has already had the pleasure of playing with DSM 6.0, I can highly recommend it.

Man Pleads Guilty To Leaking US Military Aircraft Blueprints

When it comes to security and privacy, little is more protected than military details. Such information is often guarded by several layers of protection, and even if these are breached, the chances of it going unnoticed are even slimmer than the chances of gaining access in the first place. That is something Su Bin found out the hard way when he pleaded guilty to leaking US military aircraft blueprints.

Su Bin, a Chinese national, has pleaded guilty to illegally accessing sensitive military data and distributing the material to China for financial gain. Bin's role in the scheme was to obtain access to the servers of Boeing and other companies, retrieving information about their military aircraft such as the C-17 and even fighter jets. Once he obtained access, he told two associates, unnamed in his plea deal, which servers to hack and what information on the projects was useful, sending them both server details and the names and email addresses of US executives. He even provided a translation service, converting the documentation from English to Chinese before sending it on to China, all for a price.

After being caught in Canada in 2014 and then extradited to the US last month, Bin will now be charged with stealing data listed on the US Munitions List contained in the International Traffic in Arms Regulations.

With countries becoming more and more aware of the risks and dangers of the digital world, every arrest like this is a stark warning that just because you can do something doesn't mean you will get away with it.

Apple Designing Servers In-House to Prevent Snooping

With the amount of sensitive information stored on their servers, cloud providers take security very seriously. However, many cloud services actually use third-party servers like Amazon Web Services or Microsoft Azure to run their platform. Even for those with their own servers, the hardware is made by and supplied by third-parties. In light of security concerns, Apple is taking it to the next level and designing their own servers.

Right now, Apple uses Amazon, Microsoft, and Google servers to help run iCloud in addition to its own hardware. While it might seem prudent to do everything in-house to keep things secure, Apple wants its servers to be of its own design. As we know from Edward Snowden's revelations, the NSA, and probably other spy agencies, are prone to intercepting hardware mid-shipment and tampering with it. Cisco, for instance, has been one such past target, and with Apple's legal fight against the FBI, the company may have moved up the list.

By designing its own hardware, Apple will be able to make sure that everything is where it is supposed to be and that no extra hardware has been added. With the massive scale of iCloud, Apple can easily have whole manufacturing runs dedicated to it. Still, with such a massive user base, running that many servers will be a challenge. Nonetheless, Apple may soon get the total hardware control needed for true security.

QNAP Launches TDS-16489U Dual Xeon E5 Double Server

QNAP’s newest server, the TDS-16489U, is an amazing one that sets itself apart from the rest in so many ways. I want one so badly even though I have absolutely no need for this kind of power. This must be how a normal person feels when they see a Bugatti Veyron. But let us get back to the new QNAP dual server.

The TDS-16489U is a powerful dual server, both an application server and a storage server baked into one chassis for simplicity and effectiveness. It is powered by two Intel Xeon E5 processors with 4, 6, or 8 cores each, and supports up to 1TB of DDR4 2133MHz memory across its 16 DIMM slots. These are already some impressive specs, but this is just where the fun begins.

The dual server has 16 front-accessible drive bays for 3.5-inch storage drives as well as four rear 2.5-inch drive bays for SSD caching. Should this not be enough, you can expand further with NVMe-based PCI-Express SSDs too. The system has three SAS 12Gb/s controllers built in to couple it all together.

There are just as many connection options as there are storage options in the TDS-16489U. It comes with two normal Gigabit Ethernet ports as well as four SFP+ 10Gbps ports powered by an Intel XL710. Should that not be enough, then you can use the PCI-Express slots to expand with further NICs of your choice. The system supports the use of 40 Gbps cards too. It also comes with a dedicated IPMI connection besides the normal networking. The PCI-Express x16 Gen.3 slots can also be used with AMD R7 or R9 graphics cards for GPU passthrough to virtualization applications. A true one-device solution for applications, storage, and virtualization.

The TDS-16489U combines outstanding virtualization and storage technologies as an all-around dual server. With Virtualization Station and Container Station, computation and data from the guest OS and apps can be stored directly on the TDS-16489U through the internal 12Gb/s SAS interface. Coupled with Double-Take Availability for comprehensive high availability and disaster recovery, backup virtual machines can support failover for the primary systems on the TDS-16489U whenever needed, enabling data protection and continuous services. QNAP Virtualization Station is a virtualization platform based on KVM (Kernel-based Virtual Machine) infrastructure; by sharing the Linux kernel, it offers GPU passthrough, virtual switches, VM import/export, snapshots, backup and restoration, SSD cache acceleration, and tiered storage.

“Software frameworks for Big Data management and analysis like Apache Hadoop or Apache Spark can be easily operated on the TDS-16489U using virtual machines or containerized apps, and with Qtier Technology for Auto Tiering the TDS-16489U empowers Big Data computing and provides efficient storage in one box to help businesses gain further insights, opportunities and values,” said David Tsao, Product Manager of QNAP.

With all the above, we shouldn't forget that it also runs QNAP's QTS 4.2 operating system, which provides everything you know and love from it. Included are the comprehensive virtualization applications we've also seen on the consumer models, but this is where you can truly take advantage of what QNAP created and run multiple Windows, Linux, UNIX, and Android-based virtual machines on your NAS. All the backup and failover options are there, from local to other NAS units or the cloud; you can do it all. Sharing files to basically any device, anywhere, is made as easy as possible.

Should you still not have enough storage in this impressive unit, you can expand with up to eight QNAP expansion enclosures and reach a seriously impressive 1152TB of raw storage capacity controlled by this single 3U server unit. The CPU power, dual-system capabilities, virtualization options, and impressive storage options will let you deploy a formidable system with a very small footprint and total cost of ownership compared to traditional setups.

Key Specifications

  • 16-bay, 3U rackmount unit
  • 2 x Intel Xeon E5-2600 v3 family processors (4-core, 6-core, and 8-core configurations)
  • 64GB~1TB DDR4 2133MHz RDIMM/LRDIMM RAM (16 DIMM slots)
  • 4 x SFP+ 10GbE ports
  • 16 x hot-swappable 3.5″ SAS (12Gbps/6Gbps)/SATA (6Gbps/3Gbps) HDD or 2.5″ SAS/SATA SSD bays
  • 4 x 2.5″ SAS (12Gbps) SSD or SAS/SATA (6Gbps/3Gbps) SSD bays
  • 4 x PCIe slots
  • 4 x USB 3.0 ports

The new QNAP TDS-16489U dual-server is now available.

Quantum Computing Could Be A Step Closer Thanks To “Noise-Cancelling” Technology

Mention quantum computing to anyone involved with technology and their eyes will light up like it's Christmas Day. With the theoretical ability to complete thousands of complicated calculations in a fraction of the time taken by the most advanced processors on the market, quantum computing could see your phone becoming as powerful as your computer. Great as the concept is, the technology needed is far from complete, but it may be one step closer thanks to recent work incorporating noise-cancelling technology into the design.

Quantum computing relies on quantum bits; the problem is the “noise” these bits encounter. The noise normally takes the form of magnetic disturbances, and if that computer is calculating your finances or the medicine dose you need, you really don't want a nearby fridge magnet messing it up. Researchers at Florida State University's National High Magnetic Field Laboratory (MagLab) have instead decided to cancel out this noise using the quantum equivalent of noise-cancelling headphones.

Thanks to specially designed tungsten oxide molecules, MagLab was able to keep a quantum bit working without interference for 8.4 microseconds. While that may not seem long, in the quantum world that is time enough for any number of operations, and it is a step towards making quantum computing a feasible technology for corporate and public use.

With the likes of NASA and Google working on creating a usable quantum computer, I for one am hoping that I get to see a quantum computer within the next twenty years. A single quantum computer could replace all the advanced servers and systems used by Google and Microsoft, offering us the ability to miniaturize our systems, creating even smarter systems in even smaller packages.

Microsoft Estimates Around 8,000 Companies Looking To Try SQL Server On Linux

Microsoft recently announced its intention to provide its SQL Server software for Linux operating systems. The news seems to have been well received, with over 8,000 companies looking to try SQL Server on Linux.

Takeshi Numoto, Microsoft's Corporate VP of Cloud and Enterprise, stated that by his estimates around 8,000 companies had already signed up to try SQL Server 2016 on Linux, with at least 25% of them being Fortune 500 companies.

Given that companies like Amazon and Oracle offer similar services, the fact that so many are interested in what Microsoft can provide shows the reputation its software has among businesses. The move shows that Microsoft is serious not only about the open-source community commonly found using Linux but also about offering companies alternatives to the cloud.

While the cloud offers scalable solutions and choices for companies all over the world, many are hesitant to take it on board because they lose control over its security and access. Being able to run SQL Server on Linux using Microsoft’s software would help businesses keep their servers in-house, offering a degree of choice that companies are often forced to forgo in exchange for cheaper rates.
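For companies trying it out, the day-to-day experience should look much like SQL Server anywhere else. As a rough sketch, connecting from a Linux host might look like this in Python with the pyodbc library; the server name, database, and credentials below are hypothetical, and the exact ODBC driver name depends on the version Microsoft ships:

```python
# Illustrative sketch only: the server, database, and credentials are made up.
def build_conn_str(server, database, user, password):
    """Assemble an ODBC connection string for Microsoft's SQL Server driver.
    The driver name varies with the installed driver version."""
    return (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={server};DATABASE={database};UID={user};PWD={password}"
    )

conn_str = build_conn_str("sql.example.local", "Inventory", "appuser", "secret")
print(conn_str)

# With pyodbc installed (pip install pyodbc) and a reachable server,
# the connection itself would be:
# import pyodbc
# with pyodbc.connect(conn_str) as conn:
#     row = conn.cursor().execute("SELECT @@VERSION").fetchone()
#     print(row[0])
```

The point is that existing tooling such as pyodbc already speaks to SQL Server over ODBC, so moving the server process to Linux need not change application code at all.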

Hackers Lost Out On $780 Million Due To Spelling Mistake

We’ve all had that moment: you are writing an email and worrying so much about your wording that just as you hit send, something jumps out at you. Forgot to attach the file you were talking about, or added the wrong details by mistake? Now imagine making the same mistake on a bank transfer; that is a pretty big deal. Even bigger if the money doesn’t belong to you and all that stopped you from getting it was a spelling mistake.

The hackers in question managed to gain access to the servers of Bangladesh Bank, and from there they went about their business. In total, they attempted to send somewhere close to $850 million to different accounts in the Philippines and Sri Lanka across just 13 transfers. $81 million went through before the fifth transfer was flagged by a routing bank in Germany.

The reason for the flag was simple: “fandation”. Instead of typing “foundation”, the hackers had mistyped it as “fandation”. Had the hack been fully successful, it would have been one of the largest of its kind on record; while $81 million is impressive, you have to think that with a little spell-checking they could have made off with a lot more.
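As a purely illustrative aside, even Python’s standard library can catch this kind of slip. A hypothetical screening function using difflib might flag the misspelled word like so (the vocabulary list is invented for the example):

```python
# Toy illustration: flag words not in a known vocabulary and suggest
# the closest match, the way a spell check caught "fandation".
import difflib

VOCABULARY = ["foundation", "charity", "transfer", "payment"]

def flag_typos(text):
    """Return (word, closest_matches) pairs for out-of-vocabulary words."""
    flagged = []
    for word in text.lower().split():
        if word not in VOCABULARY:
            matches = difflib.get_close_matches(word, VOCABULARY, n=1)
            flagged.append((word, matches))
    return flagged

print(flag_typos("fandation payment"))  # flags "fandation", suggests "foundation"
```

Real transaction-screening systems are far more elaborate, but the incident shows how even the simplest lexical check can stop an eight-figure mistake.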

Chenbro Debuts RM23624 2U 24-Bay Server Chassis

Do you need a lot of storage? Do you need to achieve superior IOPS performance? Then you should take a look at Chenbro’s newest system, the RM23624, a 24-bay 2U rack server chassis. The RM23624 features 24 hot-swappable 2.5-inch drive bays for HDDs or SSDs and a 12Gb/s SAS expander with LSI DataBolt that provides the performance and reliability that data centers demand.

The cooling challenges that come with placing this many drives in a 2U chassis are countered by a strong thermal design from Chenbro’s experts, drawing on their years of experience in the field. The chassis is perfect for enterprise storage, online transaction processing, database servers, and other I/O-constrained applications.

Besides the 24 drive bays in the front, the case also features an additional HDD cage that supports two internal 2.5-inch drive bays for your operating system and system software deployment. The chassis doesn’t just feature a lot of hot-swappable drive bays: the four anti-vibration fans are easily swappable too, and so are the redundant power supplies.

You can optionally mount three full height or seven low-profile expansion cards through the PCIe expansion slots on the rear, which gives you even more flexibility for your build.


As mentioned earlier, the chassis is designed with optimal airflow and heat ventilation in mind, providing a reliable thermal structure with its four patented anti-vibration fans and smart speed control. You can mount two more optional 40mm rear fans to achieve maximum thermal performance.

Power supply options include a choice of a single power supply or 1+1 redundant power supply for that extra backup and safety.

Chenbro didn’t reveal pricing or availability yet, but both are sure to follow soon. Until then, you can check out all the juicy details on the official product page.

Microsoft Bringing SQL Database Software To Linux

Microsoft is well-known for three things: their hardware (such as the Microsoft Surface series), their operating systems, and their software. The problem is that a lot of these are closely tied together; their hardware uses their operating system and normally comes pre-installed with their software. You can get their operating systems or software alone, but putting their software on another operating system tends to work quite badly (if you are using the Mac version of Microsoft Office, for instance, you may be missing some of the features available on Windows). This is set to change, with Microsoft announcing that their SQL database software will be coming to Linux soon.

For clarification, at least some of Microsoft’s SQL Server core capabilities will be coming to Linux, with which parts make the cut heavily influenced by demand and feedback. With Microsoft looking to build their own Linux distro and even opening up their .NET framework to Linux and Mac OS X users, maybe we will see an increasingly open approach to their software.

With open source software being a big part of companies and governments, Microsoft may be looking not only to get community support in increasing their software’s capabilities but also to win back some of the market share that is going to open source solutions.

Thecus Announces Launch of Two New Rackmount NAS


Thecus announced the launch of two new enterprise-grade NAS units with 12 and 16 bays, packed with plenty of features and connectivity while running on Haswell Xeon processors. The new N12850 and N16850 offer massive scalability on top of their cross-platform file sharing, schedulable snapshots, and resilient data integrity for a working environment that won’t let your enterprise down.

The two new servers come equipped with Intel’s Haswell Xeon E3-1231 v3 3.4GHz processor and the C224 chipset. They come with 16GB DDR3 ECC RAM but support up to 32GB each. Four RJ45 LAN ports allow for plenty of connectivity and the units come with plenty of USB 2.0 and 3.0 ports too. Inside the NAS units, you’ll find an 8-lane slot, two 4-lane slots, and a 1-lane PCI-E slot for further expansion. All of that coupled together should let the N12850 and N16850 deliver lightning-fast, persistent throughput while efficiently completing CPU-intensive tasks and serving more concurrent tasks at the same time.

“Businesses today are seeking a NAS system that can best handle the demanding day-to-day high storage needs that occur in the workplace. Our new enterprise-class N12850 and N16850 NAS series are the solution. With advanced data protection and integrity mechanisms, these rackmounts NAS provide the ideal choice for storing a business’s crucial data.” said Florence Shih, CEO of Thecus Technology Corp.

With native support for both SAS and SATA drives, users can experience the superior storage performance of 12G SAS and 6G SATA drives for a flexible storage environment. These new models are 10GbE ready and support High Availability for system redundancy. The units also deploy Daisy Chaining via SAS technology which offers connections to four additional D16000 units, allowing users to reach storage capacities of up to 640TB. Impressive.
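The 640TB figure checks out if you assume 8TB drives throughout. A quick back-of-the-envelope sketch, with bay counts taken from the respective spec sheets:

```python
# Rough capacity check for an N16850 daisy-chained with four D16000 units,
# assuming 8TB drives in every bay (bay counts per the product specs).
NAS_BAYS = 16          # N16850 drive bays
EXPANSION_UNITS = 4    # D16000 units via SAS daisy chaining
BAYS_PER_EXPANSION = 16
DRIVE_TB = 8

total_bays = NAS_BAYS + EXPANSION_UNITS * BAYS_PER_EXPANSION  # 80 bays
total_tb = total_bays * DRIVE_TB
print(total_tb)  # 640
```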

These units also come with several new features, including Virtualization, Volume Encryption, Free Intel Security, Thecus App Center, and User Profiles. This new enterprise series delivers significant improvements in design, performance, and user experience.

Key Specifications

  • Intel Haswell XEON processor
  • 16GB DDR3 ECC RAM
  • AES-NI hardware encryption engine
  • Redundant power supply
  • 4 x USB 2.0 ports, 2 x USB 3.0 ports
  • 1 x VGA port
  • 12G SAS and 6G SATA compatibility
  • Hot-swappable hard drives
  • RAID 0, 1, 5, 6, 10, 50, 60 and JBOD

The new Thecus N16850 and N12850 servers are expected to begin shipping globally in April, so the wait won’t be long.

Leaked Slides Confirm 32 Core AMD Zen Opteron CPUs

For those who are hoping for “MOAR COARS”, it looks like AMD will be delivering later this year. First alluded to in a Linux patch last week, AMD’s upcoming Zen Opteron CPUs are set to have up to 32 physical cores. A leaked slide from CERN reveals that the patch was right on target. Combined with the introduction of Simultaneous Multi-Threading, this will allow Zen to handle at least 64 threads at once, an unprecedented amount for AMD and quadruple that of current chips.

In addition to the large core count, Zen is expected to bring PCIe 3.0 and DDR4 to AMD’s server offerings. The memory subsystem also gets a major boost with up to 8 channels, double the current 4 on Socket G34. Compared to Intel’s Haswell-EP, Zen will offer 14 more cores, 28 more threads, and double the memory channels. While Broadwell-EP may change things later this year, AMD may still hold a lead in terms of core and thread count.

Combined with the expected 40% IPC boost, Zen may finally bring AMD back into relevance in the lucrative server and data centre market. AMD has had no real update to their server lineup since 2011, leading to their market share dropping to near zero. With such a major update, AMD will once again be competing in the server market with Opterons that can go toe to toe with Intel. While 32 cores is unlikely for the consumer lineup, a 16 core chip seems pretty likely.

GIGABYTE Server Lineup First To Be Tesla M40 Certified

Deep learning is redefining what is possible for a computer to do with the information it is provided. It is, however, a very compute-intensive task and requires specialized hardware for optimal performance. It is also the technology that may one day make true AI possible. NVIDIA’s Tesla M40 is the fastest deep learning accelerator and it significantly reduces training time. GIGABYTE is the first server maker to have its lineup certified for these new NVIDIA cards. While certification isn’t strictly necessary, it is one of those guidelines you shouldn’t overlook.


Right now you are most likely wondering what deep learning is, and I could go into a lot of detail about its history and progress, but I doubt anyone would read all that here. Wikipedia’s definition probably sums it up best. In very basic terms, it allows software to draw its own conclusions based on what it already knows.

The Wikipedia definition reads: “Deep learning (deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures, or otherwise composed of multiple non-linear transformations.”
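To make that definition a little more concrete, here is a deliberately tiny sketch of those ideas in pure Python: a two-layer network with sigmoid non-linearities, trained by backpropagation on the classic XOR problem. This is purely pedagogical; real deep learning stacks far more layers and runs on accelerators like the Tesla M40.

```python
# A bare-bones example of "multiple processing layers with non-linear
# transformations": a tiny 2-4-1 sigmoid network learning XOR.
import math
import random

random.seed(42)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
HIDDEN = 4
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b2 = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w1, b1)]
    out = sigmoid(sum(w * h for w, h in zip(w2, hidden)) + b2)
    return hidden, out

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in DATA) / len(DATA)

loss_before = mse()
lr = 1.0
for _ in range(5000):
    for x, y in DATA:
        hidden, out = forward(x)
        d_out = (out - y) * out * (1 - out)  # output-layer error signal
        for j in range(HIDDEN):
            # propagate the error back through the hidden layer
            d_h = d_out * w2[j] * hidden[j] * (1 - hidden[j])
            w2[j] -= lr * d_out * hidden[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

print(round(loss_before, 3), round(mse(), 3))  # loss should drop sharply
```

Even this toy illustrates the core loop: push data forward through non-linear layers, measure the error, and nudge the weights in the direction that reduces it.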

NVIDIA’s Tesla M40 is quite an impressive card with its 3072 CUDA cores, 12GB of GDDR5 memory with a bandwidth of 288GB/s, and a single-precision peak performance of 7 TFLOPS. That is just one card, and we need to keep in mind that some of GIGABYTE’s servers can take up to 8 graphics cards each. That adds up to a lot of performance.

If you already have a GIGABYTE server or plan to purchase one, then you most likely know the model number already. I’ve added a screenshot from the official compatibility list below, which will save you the trip. We can see that only the R280-G20 isn’t certified for the M40, but that is because the system has a different field of operation than the rest.

So GIGABYTE has you well covered with regard to NVIDIA’s impressive Tesla M40 deep learning GPU.

Air Force Cyberspace System is Fully Operational

I know it sounds like something straight out of a movie, but I promise this is all really happening. Air Force Space Command (AFSPC) is a part of the United States Air Force focused mostly on supporting worldwide operations through digital means such as satellites and cyber tools. As with every part of government, and business too, any system connected to another poses a risk. One of the first ways to limit that risk is to limit the number of points through which the system can be accessed, something the Cyberspace System can now do thanks to its fully operational status.

Full operational capability (FOC) means that the new system is online and ready to control traffic between and within bases while also monitoring the communications coming into the Air Force’s operations. Previously, the Air Force had over 100 regionally managed entry points to the network; imagine tracking down all those different access points if there was a problem! The new system means there are only 16, offering a much smoother and more controlled entrance into their systems, effectively creating a solid wall to help reduce risks to their network and operations.

Clearly impressed, Brigadier General Stephen Whiting, the director, stated: “This is a great achievement for the Air Force and the first cyberspace weapon system to achieve FOC. We look forward to continued rapid progress and maturation of the Air Force Cyberspace mission. As we all know, our mission is to fly, fight and win in air, space and cyberspace.”

So next time you watch that movie where they are tapping away at the keys, pinging nodes from all over the world to find a way into your system, you can rest assured that the people using those systems know what they’re doing and are watching out for those who might misuse them!

AMD Finally Launches Datacenter Opteron ARM Chips

Nearly four years after AMD first revealed their ARM plans, their first ARM-based Opteron chips are finally ready. Shipping today, the octa-core Opteron A1100 server SoC and platform is already available for purchase from several partners and comes in 3 SKUs. Despite such a late launch, the A1100 may yet find a home in the datacenter.

First off, AMD has done a lot of work to build a comprehensive ARM server SoC. The Opteron features up to eight 64-bit A57 cores running at 2GHz. This puts it roughly in the same space as Intel’s Silvermont Atoms clock for clock. The key is the 4MB of L2 cache and 8MB of L3 cache that connect to up to 128GB of DDR4 (DDR3 is limited to 64GB) over a 128-bit bus. This is all backed up by an A5 co-processor to handle system control, compression, and encryption. I/O is impressive as well, offering up to 8 PCIe 3.0 lanes, 14 SATA3 ports, and two 10GbE ports.

While the A1100 will undoubtedly blow past Intel’s Atoms and other ARM competitors as a server SoC, the biggest competition comes from Intel’s big Xeons. At $150, AMD is pricing their chip dangerously close to Intel’s big cores, which offer much higher performance and potentially better performance per watt. Still, AMD is offering a viable chip for the microservices and cluster-based computing market. If AMD’s in-house K12 arrives on time and on performance, AMD stands a good chance of securing a strong foothold in this market.

GIGABYTE Launches Intel Xeon D-1500 Server Motherboards

Intel launched the very impressive Xeon D-1500 System on Chip (SoC) last year and now GIGABYTE is ready with four new server boards based on this tiny wonder chip. The Xeon D-1500 family is aimed at low-power and high-density server applications, and it is great at that, as we’ve seen in our review section.

While I said that there are four new motherboards in this line-up, that’s both true and false at the same time. A better way to put it would be that there are two new motherboards that each come in two versions. The difference between the two versions is the SoC used: two of the motherboards use the Intel Xeon D-1521 processor while the other two use the faster Intel Xeon D-1541. Apart from the SoC, the motherboards are identical.

The MB10-DS4 and MB10-DS3 server motherboards get four cores/eight threads and eight cores/sixteen threads respectively from their SoCs and are equally well equipped for connectivity. Despite the small mITX form factor, these motherboards still pack dual 10GbE SFP+ LAN ports as well as dual 1GbE LAN ports for optimal connectivity. There’s also a dedicated IPMI 2.0 remote management port with iKVM support. With four DIMM slots, these motherboards can take up to 128GB of ECC DDR4 memory and run it at 2133MHz.

Being server motherboards, these come with a rear I/O ID button as well as a power button and LEDs for quick diagnostics. There’s also a D-Sub VGA port and two USB 3.0 ports. You can connect two more USB 3.0 ports via the onboard header, but there is no USB 2.0 at all. There are six SATA ports for your drives and a single PCI Express x16 Gen3 slot. The integrated display output is powered by the well-known Aspeed AST2400.

We find the same setup as on the above motherboards when we look at the MB10-DS1 and MB10-DS0, with one exception: these motherboards don’t have the two 10GbE LAN ports. The faster network connection increases the price slightly per system, which quickly adds up when a lot of systems have to be deployed. If you don’t need it, then don’t pay for it and get the versions without. It’s great to see so many options for what’s basically one motherboard with a few changes each time.

Microsoft Embraces Linux With Azure Certification

Microsoft is a company best known for a range of products, both hardware and software. From their Surface tablets to Microsoft Office, a widely known thing about Microsoft is that you pay for what you get. While they offer some free tools, Office and even Windows cost money if you want to use them. This makes it all the more surprising that they have announced a certification for managing another operating system on Azure, their cloud-based platform. The operating system in question will cost you nothing: it’s Linux.

The new Microsoft Certified Solutions Associate Linux on Azure certification will show that you, as a professional, can run and manage Linux-based servers on Microsoft’s cloud platform. While this might seem surprising, given that only a few years ago Azure didn’t support Linux at all, it is less of a shock considering the recent push by companies and governments to embrace open source software.

In order to get the new certification, you need to pass not only the Linux Foundation Certified System Administrator exam but also the Implementing Microsoft Azure Infrastructure Solutions exam.

While it’s nice to see Microsoft continue supporting open source software, even competing operating systems, I suspect it will remain a rare case where people (and companies) pay out hundreds for certification in free software.

Crucial DDR4 2400MT/s 8Gb-based Server Memory Now Available

Crucial is ready with the next step in their server memory, announcing the availability of Crucial DDR4 2400MT/s 8Gb-based RDIMM, LRDIMM, and ECC UDIMM server modules which enable increased performance, bandwidth, and energy efficiency.

The higher-density 8Gb-based modules allow for both greater channel bandwidth and greater channel density, but the most important factor is probably the lowered power consumption. Memory can be quite power hungry and makes up quite a bit of the overall power consumed in a server environment due to its constant refresh cycles. The new 8Gb-based modules offer up to 20 percent higher energy efficiency than the 4Gb-based modules, and that will make a noticeable difference.

Ultimately, these benefits provide more value per gigabit than current 4Gb-based offerings, making it easy to scale up server deployments in the future, and the modules are designed to be compatible with Intel’s next-generation processor product families.

Crucial’s 8Gb-based server memory is extensively tested to mission-critical standards and is backed by a limited lifetime warranty. The new 8Gb-based modules are available for immediate purchase through global partners and directly through Crucial.

“We are excited to continue Intel’s collaboration with Crucial with the release of the new 8Gb-based DDR4 server modules,” said Geof Findle, director of memory enabling, Intel. “By working together, we are able to support next-generation server platforms while providing the technology and services needed to support our mutual channel customers.”

“Data-intensive server applications continue to require higher densities of memory as they struggle to meet ever-increasing and more demanding workloads,” said Michael Moreland, worldwide product marketing manager, Crucial. “The new Crucial 8Gb-based server memory modules will help with future scalability and deliver a lower total cost of ownership for users.”

GIGABYTE Announces Availability of Cavium ThunderX-based Servers

Gigabyte announced that their extensive Cavium ThunderX-based server portfolio is now available for orders and that shipments to numerous end customers and OEMs have already begun. These aren’t just any servers: they sport up to 384 ThunderX cores in a standard 2U rack chassis with 64 DDR4 DIMMs and 40Gb Ethernet networking. That kind of power offers enormous potential for core-intensive application workloads with uncompromised performance.

There are quite a few different models to choose from, but not all of them have had their product page go online yet. Some of the new systems are the following:

  • Single-socket 1U R120-T30 servers, in 4 x 3.5″ HDD and 12 x 2.5″ HDD configurations
  • Dual-socket 1U R150-T60 servers, in 4 x 3.5″ HDD and 10 x 2.5″ HDD configurations
  • Dual-socket 2U R270-T60 servers in 12 x 3.5″ HDD and 24 x 2.5″ HDD configurations
  • Dual-socket 2U G220-T60 GPU servers, available with two GPUs and 24 x 2.5″ HDD
  • Dual-socket 2U 4-nodes H270-T70 and H260-T70 high-density servers

The H270-T70 is one of the models whose product page is already live, and it’s mouth-watering to see what this system packs in hardware. The system is made up of four nodes with two 48-core Cavium ThunderX CN8890 processors each. Those 384 cores then get to play with up to 4TB of DDR4 memory (64x 64GB DIMMs), 1TB per node. If that wasn’t enough to get your mouth watering, I can add that the system features eight 40GbE QSFP+ LAN ports (Cortina CS4343) and four 10/100/1000 management LAN ports across the four nodes. It also features four 2.5-inch hot-swappable storage bays per node, and you can add one half-length low-profile PCIe x16 card and one mezzanine PCIe x8 card in each node. The end result is a beast with a total weight of 38kg.
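Those headline numbers are easy to sanity-check against the specifications quoted above:

```python
# Verifying the H270-T70's totals: four nodes, each with two 48-core
# ThunderX CN8890 processors, and 64 DIMMs of 64GB across the chassis.
NODES = 4
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 48
DIMMS = 64
GB_PER_DIMM = 64

total_cores = NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
total_memory_tb = DIMMS * GB_PER_DIMM / 1024
print(total_cores, total_memory_tb)  # 384 cores, 4.0 TB
```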

The ThunderX workload-optimized server processors are designed to enable best-in-class ARMv8 performance per dollar and performance per watt. With all these features working in unison in a processor supported by the server industry’s major operating systems and development environments, the Cavium ThunderX is ready for the most demanding datacenter-grade applications.

“Gigabyte first began working with Cavium and ThunderX late last year, and have developed and shipped a broad range of platforms to customers,” said Chen Lee, Sales Director, Gigabyte USA. “Our entire portfolio of production systems is now available for order and we are shipping production platforms to customers. We are seeing a strong demand for these platforms and we expect the demand to further accelerate in 2016.”