NVIDIA DRIVE PX2 Powered by Two GP106 Chips

NVIDIA showed off its DRIVE PX 2 system – the new iteration of its autonomous driving and driver assistance AI platform – at last week’s GTC 2016 conference, and eagle-eyed viewers may have noticed that the board shown to the audience by CEO Jen-Hsun Huang was sporting a pair of integrated GP106 GPUs, eschewing the two Maxwell-based NVIDIA Tegra X1 chips that powered the original DRIVE PX and confirming a rumour that we reported last week.

The GP106 is built on NVIDIA’s new Pascal architecture – set to hit the market in the latest line of GeForce graphics cards this summer – which can perform at 24 DL TOPS or 8 TFLOPS, and features up to 4GB of GDDR5 memory.

NVIDIA hopes that the new DRIVE PX 2 will power the next generation of driverless cars – the DRIVE PX has so far only been used to power the ‘infotainment’ system on-board a Tesla, for example – and has already shipped to a number of unnamed Tier 1 customers.

https://www.youtube.com/watch?v=KnVVJSIiKpY

“DRIVE PX platforms are built around deep learning and include a powerful framework (Caffe) to run DNN models designed and trained on NVIDIA DIGITS,” according to NVIDIA. “DRIVE PX also includes an advanced computer vision (CV) library and primitives. Together, these technologies deliver an impressive combination of detection and tracking.”

Semiconductor Maker Marvell Could Be up for Sale

The semiconductor industry is a dog-eat-dog world, and it now looks like Marvell Technology Group could be the next to be swallowed by a bigger dog on the market. According to an article in the New York Post, the chip maker could be up for sale, with Broadcom/Avago a possible buyer.

This isn’t the first time we’ve heard of Avago’s interest in Marvell. Last July, several news outlets reported that Avago was interested in the purchase but was holding its bid until the Broadcom-Avago merger was completed.

Things haven’t gone all that well for Marvell lately, despite the company making some great chips that we see all the time here in the office when we review products. Last month, for example, it reached a $750 million settlement with Carnegie Mellon University, and it has also just been through an audit over alleged fraud. Marvell was, however, cleared: the audit found no evidence of the wrongdoing or accounting fraud the allegations described.

Still, the stock has been on a steady decline as the semiconductor business in general has slowed, and stockholders are demanding that the company cut costs. After the stock dove about 40 percent over the past year, it doesn’t come as a big surprise that stockholders are demanding action.

Whether Broadcom (AVGO) will make an official bid is something only time will tell, and I’m sure there will be a lot of details to iron out before the two reach an agreement, if they do at all. There is, however, some doubt over whether such a merger would attract antitrust scrutiny, as the two companies overlap in areas such as Wi-Fi, Bluetooth, and Ethernet switching chips, among others.

Energy-Friendly Chip Could Boost Neural Networks

The quest to gain greater insight into artificial intelligence has been exciting and has opened up a range of possibilities, including “convolutional neural networks”: vast virtual networks of simple information-processing units, loosely modelled on the anatomy of the human brain.

These networks are typically implemented using the more familiar graphics processing units (GPUs). A mobile GPU might have as many as 200 cores, or processing units, making it well suited to simulating a network of distributed processors. Now, a further development in this area could lead to a chip designed with the sole purpose of implementing a neural network.

MIT researchers presented the aforementioned chip at the International Solid-State Circuits Conference in San Francisco. The chip is claimed to be 10 times more efficient than an average mobile GPU, which could, in theory, allow mobile devices to run powerful artificial intelligence algorithms locally rather than relying on the cloud to process data.

The new chip, dubbed “Eyeriss”, could lead to an expansion of capabilities for the Internet of Things – put simply, a world where everything from a car to a cow (yes, apparently) would carry sensors able to submit real-time data to networked servers, opening new horizons for artificial intelligence algorithms to make important decisions.

Before I sign off, I wanted to delve further into the workings of a neural network. A network is typically organised into layers, each containing processing nodes. Data is divided up among the nodes of the bottom layer, and each node manipulates the data it receives before passing it on to nodes in the next layer. The process repeats until “the output of the final layer yields the solution to a computational problem.” It is certainly fascinating and opens up a world of interesting avenues to explore; when you combine science and tech, the outcome is at the very least educational, with the potential to be life-changing.
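That layer-by-layer flow can be sketched in a few lines of Python. This is only an illustrative toy – real convolutional networks connect nodes densely and apply learned weights – but it shows data moving through successive layers of processing nodes:

```python
# Toy feedforward pass: each layer is a list of node functions, and each
# node manipulates the value it receives before passing it up a layer.
def forward(layers, inputs):
    activations = inputs
    for layer in layers:
        activations = [node(x) for node, x in zip(layer, activations)]
    return activations  # the final layer's output is the "solution"

# Example: a two-layer network whose nodes double, then increment.
layers = [
    [lambda x: x * 2, lambda x: x * 2],  # bottom layer
    [lambda x: x + 1, lambda x: x + 1],  # output layer
]
print(forward(layers, [1, 2]))  # [3, 5]
```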

Bug Causes Intel Skylake PCs to Freeze During Hyperthreading

Intel has confirmed that its Skylake processors can freeze during complex workloads. The bug, which “can freeze any system that has a Skylake processor,” was discovered by the Great Internet Mersenne Prime Search (GIMPS), a distributed computing project that uses volunteers’ computers to identify new prime numbers. GIMPS was running its Prime95 software to test the exponent 14,942,209, which, “after minutes or hours”, crashed the system.

Intel forum user Henk_NL wrote, “the problem is related to hyperthreading and the use of CPUsupportsFMA3. Overclocking, underclocking or just running at stock speed does not influence the outcome of the program.”

Henk_NL also wrote a handy guide for masochists to use in order to replicate the freezing:

“Steps to freeze your Skylake system:

– Download and install Prime95 for Windows on a Skylake system from the website at http://www.mersenne.org/download/ (If you want to familiarize yourself with the software use the readme, a background in math will be helpful, but is not needed.)

– In the menu go to ‘Advanced | Test’ and fill in the number 14942209 in the box labeled ‘Exponent to test’

– Let the program run for some time and at some point, minutes or hours, the system will freeze.”
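For context, GIMPS searches for Mersenne primes – numbers of the form 2^p − 1 – and Prime95 tests candidate exponents such as the 14,942,209 above with the Lucas–Lehmer primality test. A minimal Python sketch of that test follows; Prime95 itself uses heavily optimised FFT arithmetic, and the FMA3 code path implicated in the bug belongs to those optimisations, not to the simple loop shown here:

```python
# Lucas-Lehmer test: for odd prime p, 2**p - 1 is prime iff s == 0 after
# p - 2 iterations of s -> (s*s - 2) mod (2**p - 1), starting from s = 4.
def lucas_lehmer(p):
    if p == 2:
        return True  # 2**2 - 1 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(7))   # True:  2**7 - 1 = 127 is prime
print(lucas_lehmer(11))  # False: 2**11 - 1 = 2047 = 23 * 89
```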

“It is my fear that like the infamous FDIV bug this issue will require a new stepping and a product recall, since this has security implications as well,” Henk_NL warned.

Intel’s official statement on its forum reads:

“Intel has identified an issue that potentially affects the 6th Gen Intel® Core family of products. This issue only occurs under certain complex workload conditions, like those that may be encountered when running applications like Prime95. In those cases, the processor may hang or cause unpredictable system behavior. Intel has identified and released a fix and is working with external business partners to get the fix deployed through BIOS.”

There is no indication as to when the BIOS fix will be released.

Amazon Enters the Semiconductor Market With its First ARM Chip

Amazon’s Annapurna Labs has announced that it is entering the semiconductor market, selling its first ARM-based processors, a “foundation for next-generation digital services for the connected home,” according to a press release.

Annapurna Labs, established in 2011, was snapped up by Amazon last year for $350 million. Before the buyout, Annapurna was heavily rumoured to be working on its own line of ARM chips, VentureBeat reports. The Alpine PoC product line will be sold to OEMs to support “home gateways, Wi-Fi routers, and Network Attached Storage (NAS) devices.”

“In the fast-growing home application marketplace, new use cases and consumer needs are rapidly invented and adopted,” the press release reads. “To stay competitive, OEMs and service providers therefore need to quickly add support for the new features that give consumers the ability to enjoy the latest applications without changing hardware or waiting for months to get updated software.”

“Our Alpine platform-on-chip and subsystems product line gives service providers and OEMs a high-performance platform on which they can design hardware that will support growing consumer demands for innovative services, fast connectivity, and many connected devices.”

While ARM processors are still a niche market compared to the Intel-dominated server market, the architecture has come a long way in the last thirty years, with ARM cores powering single board computers like the Raspberry Pi and Pine A64, as well as Apple iPhones, iPods, Microsoft’s early generation Surface and Surface 2 tablets, and Nintendo’s DS series of handheld consoles.

“There is significant growth in the home Wi-Fi segment with most of the demand occurring on high-performance routers. As a leading provider in this segment, we are committed to providing our customers with high performing solutions,” Tenlong Deng, Vice President of ASUS Networking & Wireless Devices Business Unit, said. “The increased demand for new applications and use models requires additional compute and more flexibility. We are collaborating with Annapurna on these technologies and believe that they have one of the most advanced and flexible silicon solutions in the marketplace.”

Image courtesy of Wikimedia.

Intel Buys Altera for $16.7 Billion

Altera, the US-based manufacturer of programmable logic devices, has been purchased by Intel in an all-cash deal worth $16.7 billion, according to the Wall Street Journal (paywalled, via Engadget).

“Altera is now part of Intel, and together we will make the next generation of semiconductors not only better but able to do more,” Brian Krzanich, CEO of Intel, said in a press release. “We will apply Moore’s Law to grow today’s FPGA business, and we’ll invent new products that make amazing experiences of the future possible – experiences like autonomous driving and machine learning.”

The deal, the biggest in Intel’s history, will unite Intel’s Xeon processors with Altera’s field programmable gate arrays (FPGAs), which are already used together by tech giants such as Facebook, Google, and Microsoft, bringing the combination under one commercial banner. Intel will begin by selling the pair as a bundle, but aims to unify the two into a single chip in due course.

“As part of Intel, we will create market-leading programmable logic devices that deliver a wider range of capabilities than customers experience today,” Dan McNamara, Corporate Vice President and General Manager of the Programmable Solutions Group at Intel, and former Altera employee, added. “Combining Altera’s industry-leading FPGA technology and customer support with Intel’s world-class semiconductor manufacturing capabilities will enable customers to create the next generation of electronic systems with unmatched performance and power efficiency.”

Judge Rules NVIDIA Violated Three Samsung Patents – Sales Ban Threatened

NVIDIA must be regretting filing the lawsuit accusing Samsung of building GPUs without permission – effectively claiming that NVIDIA invented the GPU – which it subsequently lost back in October. Samsung filed a countersuit against NVIDIA, alleging that the latter was infringing a number of its patents. Judge David Shaw of the United States International Trade Commission (ITC) has now ruled that NVIDIA is indeed in violation of three of Samsung’s patents.

While the decision is not yet final, the judge considers NVIDIA to be in violation of Samsung’s US6147385, US6173349, and US7804734 patents, covering an SRAM module, a shared strobe buffer, and a data strobe buffer, respectively.

Samsung argued during the case that its patents allowed chip manufacturers to put “what used to fill an entire circuit board with dozens of discrete components all onto a single chip the size of your thumbnail.”

If the ruling is enforced, it could result in a sales ban on any infringing NVIDIA chip. However, patent US6173349 expires during 2016, so any ban against technology that violates that patent would only be in effect for a matter of months.

Following the decision, NVIDIA’s stock dropped by 27 cents to $32.66 during after-hours trading.

“We are disappointed,” said NVIDIA spokesperson Hector Marinez, in a statement to Bloomberg. “We look forward to seeking review by the full ITC which will decide this case several months from now.”

Samsung has yet to comment on the matter.

Image courtesy of Wikimedia.

US Researchers Develop Light-Based CPU

A group of researchers from the University of Colorado Boulder, the University of California, Berkeley, and the Massachusetts Institute of Technology have created a CPU that eschews electricity in favour of light to transfer data, operating at astronomical speeds while using a fraction of the energy required to run a standard processor. The remarkable photonic chip is revealed in a new paper published in the academic journal Nature.

“Light based integrated circuits could lead to radical changes in computing and network chip architecture in applications ranging from smartphones to supercomputers to large data centers, something computer architects have already begun work on in anticipation of the arrival of this technology,” Miloš Popović, Assistant Professor at CU-Boulder’s Department of Electrical, Computer, and Energy Engineering and a co-corresponding author of the study, told CU News Center.

Measuring just 3mm by 6mm, the photonic CPU operates at a bandwidth density of 300 gigabits per second per square millimetre, up to 50 times higher than that of the conventional electrical microprocessors on the market today. The chip uses 850 optical input/output (I/O) components to transmit data at superfast speeds.
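Some back-of-the-envelope arithmetic puts those figures in perspective. This is an illustrative sketch only: it assumes the quoted density applies uniformly across the die, which real I/O placement won’t match:

```python
density_gbps_mm2 = 300  # photonic chip: 300 Gbps per square millimetre
ratio = 50              # "up to 50 times higher" than electrical chips
die_mm2 = 3 * 6         # 3mm x 6mm die

electrical_density = density_gbps_mm2 / ratio  # ~6 Gbps/mm2 for electrical
naive_total_gbps = density_gbps_mm2 * die_mm2  # 5400 Gbps if applied uniformly

print(electrical_density, naive_total_gbps)  # 6.0 5400
```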

“One advantage of light-based communication is that multiple parallel data streams encoded on different colors of light can be sent over one and the same medium – in this case, an optical wire waveguide on a chip, or an off-chip optical fiber of the same kind as those that form the Internet backbone,” said Popović. “Another advantage is that the infrared light that we use – and that TV remotes also use – has a physical wavelength shorter than 1 micron, about one-hundredth of the thickness of a human hair. This enables very dense packing of light communication ports on a chip, enabling huge total bandwidth.”

AMD Begins Transition to Socket AM4 Zen Architecture

Documents have surfaced, via Benchlife.info, that suggest AMD is starting its transition from Excavator architecture to Zen architecture, with the company’s new Socket AM4 arriving on new motherboards by March 2016.

AMD has been using its Socket AM3 for over six years, so an upgrade is well overdue. The AM4 socket will support both Zen CPUs and Bristol Ridge APUs, plus DDR4 RAM and future FX CPU and APU lines; DDR3, however, will not be supported. The 14nm Zen processors will support Simultaneous Multi-Threading (SMT), and are claimed to deliver an increase of up to 40% in instructions per clock (IPC).

Recent unverified reports suggest that the Zen architecture has been fully tested by AMD and has “met all expectation[s]” with no “significant bottlenecks”, with hopes high that the new processor line could rejuvenate the ailing chipmaker and be more “competitive against Intel” following the relative failure of its Fury GPU series this year.

AMD’s Zen architecture, built on the company’s new 14nm process, will prioritise increasing per-core performance over core count and multi-threading, and will sport 95W TDP.

Zen’s March release date, which surfaced as an independent rumour last week, is sooner than expected: the launch had previously been projected for Q4 2016. This separate leak seems to corroborate the earlier release.

Smaller & Cooler Night Vision Thanks To Graphene

Everyone knows the green hue of night vision goggles; from TV or games, you’ve seen them help police and search-and-rescue teams spot people from a distance, or soldiers gain the upper hand at night. One thing you may have noticed, though, is that night vision units tend to be large and often very heavy. The reason is quite simple: most night vision units require cryogenic cooling due to the heat their materials and electronics generate. That could soon change, though, with the use of graphene.

Graphene is a semiconducting material, meaning it absorbs electric charges, and it’s roughly 100 times stronger than steel. MIT researchers have built a new chip out of the material, designed to keep night vision goggles cool and even to shrink their size.

While the prototype can so far only pick out simple targets such as a hand or a logo, the next step the researchers hope to achieve is to increase the resolution of the images. With its small size allowing it to be built into devices as compact as smartphones, they want to make sure the technology is of a high enough quality to be used in everyday systems. One suggested use is in car windscreens, meaning your screen could display night vision in real time, cutting through all the glare that blinds your eyes while you’re driving.

Intel’s Broadwell-E i7-6950X Rumoured to Have 10 Cores and 20 Threads with 25MB Cache

Intel’s new Broadwell-E processors are due for launch during the first quarter of 2016, and a potential leak has revealed that its top-end model, the i7-6950X, will feature 10 cores, up to 20 threads, and a 25MB cache.

According to Chinese tech website Xfastest, the i7-6950X will surpass expectations, boasting more than the 8 cores and 16 threads previously expected, and offering significantly greater specs than the previous Broadwell architecture.

Xfastest’s report reads (translation courtesy of iLeakStuff):

“Some people may think that Intel’s new processor will top out at 8 cores and 16 threads, but in fact that is not the case. This time Intel will launch four new processors: the i7-6950X, i7-6900K, i7-6850K and i7-6800K. The i7-6950X is the Extreme version, with the same 3.0GHz clock as the i7-5960X and support for Intel Turbo Boost, but the core count increases to 10 cores – with HT support, up to 20 threads – and the cache capacity is further enhanced over Broadwell, from the original 20MB to 25MB”

“Intel Broadwell-E processors use the X99 PCH; currently marketed X99 motherboards can receive a BIOS update to support the new i7-6950X, i7-6900K, i7-6850K and i7-6800K CPUs, so you do not have to buy a new motherboard”

Qualcomm Officially Unveils the Snapdragon 820 Processor

Qualcomm has officially unveiled its new Snapdragon 820 processor at an event in New York City. A number of leaks had already revealed the Snapdragon 820’s benchmarks, as well as rumours that the chip is prone to overheating.

“As one of the most cutting-edge mobile processors ever created,” Qualcomm’s site reads, “the Qualcomm® Snapdragon 820 processor with X12 LTE supports the ultimate in connectivity, graphics, photography, power and battery efficiency.”

The Snapdragon 820 is X12 LTE-enabled through its new modem, capable of Cat 12 downlink speeds of up to 600Mbps via 3x20MHz Carrier Aggregation support – 33% faster than the X10 LTE equivalent – plus Cat 13 uplink speeds of up to 150Mbps through 64-QAM (Quadrature Amplitude Modulation) support.
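The quoted figures imply the following per-carrier numbers – a rough sketch that ignores MIMO order, coding rate, and protocol overhead:

```python
import math

# Cat 12 downlink: 600 Mbps aggregated across 3 x 20MHz carriers.
dl_mbps, carriers, carrier_mhz = 600, 3, 20
per_carrier_mbps = dl_mbps / carriers             # 200 Mbps per carrier
peak_efficiency = per_carrier_mbps / carrier_mhz  # 10 bits/s/Hz at peak

# Cat 13 uplink: 64-QAM packs log2(64) = 6 bits into each symbol.
ul_bits_per_symbol = int(math.log2(64))

print(per_carrier_mbps, peak_efficiency, ul_bits_per_symbol)  # 200.0 10.0 6
```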

Qualcomm’s chip features a quad-core, custom 64-bit Kryo CPU (of up to 2.2GHz) and a Hexagon 680 DSP, bringing improvements to performance (double the previous iteration) and battery life, plus an Adreno 530 GPU, which increases graphics performance by 40% while drawing less power than the Adreno 430. The chip supports camera sensors of up to 28 megapixels; 4K video capture, playback, and display output; and LPDDR4 1866MHz dual-channel RAM. It also supports Quick Charge 3.0, USB 3.0, and Bluetooth 4.1.

Full specifications for the Qualcomm Snapdragon 820:

CPU

Up to 2.2 GHz quad-core (Quad-core custom 64-bit Qualcomm® Kryo)

GPU

Qualcomm® Adreno 530 GPU

Up to OpenGL ES 3.1+

DSP

Qualcomm® Hexagon 680 DSP

Camera

Up to 28 MP

Qualcomm® Spectra Image Sensor Processor (14-bit dual-ISP)

Video

Up to 4K Ultra HD capture and playback

H.264 (AVC)

H.265 (HEVC)

Display

4K Ultra HD on-device

4K Ultra HD output

1080p and 4K external display support

Charging

Qualcomm® Quick Charge 3.0

LTE Connectivity

Snapdragon X12 LTE with Global Mode

LTE Cat 12/13 (up to 600 Mbps DL 150 Mbps UL)

Up to 600 Mbps, 256-QAM DL

Up to 150 Mbps UL, 64-QAM UL

Carrier Aggregation

3x20MHz DL, 2X20MHz UL

Global Mode

  • LTE FDD and TDD
  • WCDMA (DB-DC-HSDPA, DC-HSUPA)
  • TD-SCDMA
  • EV-DO and CDMA 1x
  • GSM/EDGE

Additional features include:

  • LTE/Wi-Fi link aggregation
  • LTE-U
  • LTE Broadcast
  • LTE Dual SIM, Dual Active (DSDA)
  • HD Voice over 3G and VoLTE
  • Wi-Fi calling with LTE call continuity

Wi-Fi

Qualcomm® VIVE 802.11ac

2×2 MU-MIMO

Tri-band Wi-Fi

RF

Qualcomm® RF360 front end solution

Location

Qualcomm® IZat Gen8C

Security

Qualcomm® Haven Security Suite:

  • Qualcomm® SecureMSM hardware and software
  • Snapdragon StudioAccess Content Protection
  • Qualcomm® SafeSwitch theft prevention solution
  • Qualcomm® Snapdragon Sense ID 3D fingerprint technology
  • Qualcomm® Snapdragon Smart Protect

Storage

UFS 2.0

eMMC 5.1

SD 3.0 (UHS-I)

Memory

LPDDR4 1866MHz dual-channel

Process Technology

14 nm

USB

USB 3.0/2.0

Bluetooth

Bluetooth 4.1

NFC

Supported

Part Number

8996

AMD Investing Heavily to “Win the Graphics Battle”

AMD has been having a rough time as of late, reporting a net loss of $197 million for the third quarter of 2015 and initiating a restructuring that will cut 5% of its global workforce. AMD’s EMEA Component Sales Manager Neil Spicer, however, has told CRN that he is “confident” the company’s fortunes will turn, and that if it “invest[s] heavily” it can “[win] the graphics war”.

“From a personal stance, I am confident [we can be profitable],” Spicer said. “I believe we are working with exactly the right customers, and over the last few years we have become much simpler to execute and do business with.”

“Moving forwards to 2016, we have to have profitable share growth,” he said, adding that AMD must carefully invest in the right areas. “So it’s choosing the right business to go after, both with the company itself and the ecosystem of partners. There is no point in us as a vendor chasing unprofitable partners.”

AMD is still hoping to ride the wave of Windows 10, released this summer, and the upgrade cycle it has already initiated. “We want to focus [in the areas] we are good at – that’s where we are going to invest heavily. That’s things like winning the graphics battle with gaming and so forth, and we want to be part of this Windows 10 upgrade cycle,” Spicer said.

“Our hope is through our education and market knowledge, that the reseller building that PC for the local dentist or butchers will be building it through an AMD platform,” he added. “Because for £300, or whatever price is decided between the reseller and the business, we should be able to bring better or more performance for the same price point, than our competitors.”

In addition to more focused investment, Spicer says the company intends to form closer relations with its resellers, making the business of selling AMD products more profitable for every party involved. “With the channel you have to measure what’s important to channel customers, so with things like profitability in the channel,” he said. “We want people happily making money on selling AMD products. We don’t have the luxury of being a loss leader; people want profitability selling our products.”

“We are really focused on profitability in the channel, and part of that is also to clear inventory. We don’t want customers sat on weeks and weeks of inventory, because they are putting cash on something that is not selling. So we focus heavily on sell out. That’s with a number of things, such as marketing resources, education training. So we are focused heavily on that from a channel perspective.”

AMD will also no doubt be hoping that GlobalFoundries’ newly-developed 14nm FinFET process will give the company a boost.

GlobalFoundries Builds First AMD Chips on 14nm FinFET Process

GlobalFoundries has proudly revealed that it has built its first AMD chips on its advanced 14nm FinFET (LPP) process, with AMD planning to integrate the results into its products – including CPUs, GPUs, and APUs – soon. The process allows chips to deliver greater processing power in a smaller area while drawing less power to do so.

“FinFET technology is expected to play a critical foundational role across multiple AMD product lines, starting in 2016,” Mark Papermaster, AMD‘s Senior Vice President and Chief Technology Officer, said. “GlobalFoundries has worked tirelessly to reach this key milestone on its 14LPP process. We look forward to GlobalFoundries’ continued progress towards full production readiness and expect to leverage the advanced 14LPP process technology across a broad set of our CPU, APU, and GPU products.”

“Our 14nm FinFET technology is among the most advanced in the industry, offering an ideal solution for demanding high-volume, high-performance, and power-efficient designs with the best die size,” Mike Cadigan, Senior Vice President of Product Management for GlobalFoundries, added. “Through our close design-technology partnership with AMD, we can help them deliver products with a performance boost over 28nm technology, while maintaining a superior power footprint and providing a true cost advantage due to significant area scaling.”

After qualifying its 14nm process during the third quarter of this year, GlobalFoundries will be “ramping with production-ready yields” and “excellent model-to-hardware correlation” at its Fab 8 facility in New York, with full-scale production planned for 2016.

Class Action Lawsuit Launched Against AMD Over Bulldozer Core Count

AMD is set to face legal action over claims that it falsely advertised one of its previous-generation CPU architectures. The Bulldozer CPU had a mixed response when released, due in part to its unique design, which hampered the chip’s competition against Intel’s equivalent processors. Now, according to Legal Wire, AMD is set for a belated kicking over the Bulldozer architecture after a class-action lawsuit was filed in the U.S. District Court for the Northern District of California.

The lawsuit alleges that AMD falsely advertised its Bulldozer CPUs as having eight cores, despite the chip being unable to handle eight instructions simultaneously, and thus is guilty of false advertising, fraud, breach of express warranty, negligent misrepresentation, and unjust enrichment under the Consumer Legal Remedies Act and California’s Unfair Competition Law.

The trouble stems from Bulldozer’s “Clustered Integer Core” microarchitecture, in which each module combines two integer cores with one shared floating-point unit and a shared L2 cache, with multiple modules combined to form the CPU. But, according to Tony Dickey, who filed the suit, the two integer cores cannot operate independently, leaving the chips able to execute only four simultaneous commands, not eight.
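The dispute reduces to a simple counting exercise. The sketch below is a hypothetical model of one common reading of the complaint – that shared module resources cap truly independent execution at one stream per module – not a statement about how the silicon actually schedules work:

```python
# Bulldozer-style FX-8xxx layout: 4 modules, each pairing two integer
# cores with one shared floating-point unit and a shared L2 cache.
MODULES = 4
INT_CORES_PER_MODULE = 2
FPUS_PER_MODULE = 1

advertised_cores = MODULES * INT_CORES_PER_MODULE  # marketed as "8 cores"
simultaneous_fp = MODULES * FPUS_PER_MODULE        # 4 shared FP units

print(advertised_cores, simultaneous_fp)  # 8 4
```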

Dickey claims that “tens of thousands of consumers” that do not understand the complexity of CPUs have been fooled into buying chips that cannot operate in the same manner as a “true eight-core” processor would, causing “material performance degradation”.

Image courtesy of Softpedia.

Google Could be Building Its Own Processors

Reports suggest that Google will soon start building its own processors. While the Mountain View company has long relied on other vendors for its chips – the new Google Pixel C tablet, set for release later this year, runs on an NVIDIA Tegra X1 – a new job posting for the Pixel C team shows Google is searching for a Multimedia Chip Architect. The Pixel C is described by Google CEO Sundar Pichai as the “first Android tablet built end-to-end by Google.”

According to the job posting, the responsibilities for the Multimedia Chip Architect will be:

  • Propose chip architecture based on product requirements
  • Prototype design in FPGA or simulator
  • Evaluate performance of various performance algorithms
  • Lead a chip development effort
  • Work with other engineers to take chip to product shipment

“Normally, I wouldn’t read too much into a job posting because often system designers need people with chip-level expertise,” chip analyst Jim McGregor told Business Insider. “However, with the trend towards vertical integration, especially at Microsoft and Apple, it wouldn’t surprise me if Google developed their own chips, especially for Android productivity tablets to compete with the Surface Pro and iPad Pro.”

Google has declined to comment on the job posting or any potential plans to build its own processors.

Image courtesy of TechRadar.

Leak Reveals Intel’s 14nm Xeon-D and Pentium-D Line-up

Intel first unveiled its Xeon-D processors – high-powered SoCs built on the 14nm Broadwell architecture – way back in September 2014, but now, thanks to CPU World, we have an updated line-up for both the Xeon-D and Pentium-D ranges, which now includes 12- and 16-core SKUs with caches of 18 to 24MB.

When we first glimpsed the Intel Xeon-D platform – formerly known as Broadwell-DE – it consisted of the D-1518, D-1528, D-1537, and D-1548, plus the Pentium D 1503, D 1507, and D 1517. The updated line-up now includes the high-end Xeon D-1577, Xeon D-1567, and Xeon D-1557. The premier Xeon D-1577 boasts 16 cores and 32 threads plus a 24MB cache. While the price is unknown, it is expected to launch during the fourth quarter of 2015.

The full updated line-up courtesy of WCCF Tech can be seen below:

SKU Cores/Threads Cache TDP Launch Date
Xeon D-1577 16/32 24 MB (TBC) Q4 2015
Xeon D-1567 12 or 16 cores (TBC) 18 or 24 MB (TBC) (TBC) Q4 2015
Xeon D-1557 12/24 18 MB (TBC) Q4 2015
Xeon D-1548 8/16 12 MB 45W Q4 2015
Xeon D-1537 8/16 12 MB 35W Q4 2015
Xeon D-1528 6/12 9 MB 35W Q4 2015
Pentium D-1519 4 or 6 cores (TBC) 6 or 9 MB (TBC) (TBC) Q4 2015
Xeon D-1518 4/8 6 MB 35W Q4 2015
Pentium D-1517 4/4 6 MB 25W Q4 2015
Pentium D-1507 2/2 3 MB 25W Q4 2015
Pentium D-1503 2/2 3 MB 17W Q4 2015

iPhone 7 to Feature Innovative new Intel LTE Chip

A new report suggests that Apple’s next handset, the iPhone 7, is set to feature Intel’s 7360 LTE modem chip. That doesn’t sound terribly exciting, but given that Intel is also reported to have 1,000 employees working on hardware for the upcoming handset, it should be pretty special.

The new LTE chip is set to give the next-gen Apple handset a hefty speed boost, offering much faster wireless speeds that I’m sure any user of the device will welcome. The chip is said to be capable of up to 450Mb/s downlink and will support an impressive 29 LTE bands, meaning that using the phone virtually anywhere in the world shouldn’t be an issue, with all major 4G LTE bands supported.

“Sources with knowledge of the situation say that Apple eventually would like to create a system-on-a-chip (SOC) that includes both the phone’s Ax processor and the LTE modem chip. A system-on-a-chip design could deliver significant returns in improved speed and better power management,” said VentureBeat.

If this works out, a partnership between Intel and Apple could be very lucrative and could open up a lot of possibilities for both companies in the future. Of course, Apple is sticking with a separate LTE chip for now, as the iPhone 7 is expected to run on the new 10nm A10 chip.

Upcoming AMD 8-Core CPU Performance Leaked

Benchmarks of AMD’s upcoming “Hierofalcon” SoC have been leaked, indicating the chip’s capabilities and impressive performance-per-watt ratio. The SoC features eight 64-bit ARM Cortex-A57 cores operating at a frequency of 2.0GHz while drawing a maximum TDP of only 30W. In terms of its specification, the Hierofalcon SoC is based on the 28nm manufacturing process and contains 4MB of L2 cache. Additionally, the CPU has a dual-channel DDR3/DDR4 memory controller with ECC support at speeds of up to 1866MHz. As a result, the chip is incredibly versatile.

AMD plans to release a number of different versions with varying wattage demands, which should correspond to different clock frequencies. Rather surprisingly, the leaked benchmarks provide a great amount of detail and include comparisons to older AMD chips. Please note, the benchmark was conducted using an early engineering sample which might not reflect the final version.

As we can see from the data, AMD’s Hierofalcon performs exceedingly well for an ARM-based CPU given the low TDP and 2.0GHz frequency. 

In multi-core workloads, the chip once again manages to achieve great results, but falls well behind in tasks like kPipe. However, this is expected given its core architecture.

Finally, we can see the power efficiency rating across various benchmarks which shows how amazingly efficient the Hierofalcon CPU is. Throughout testing, the results were extremely consistent and signified a return to form. As always, it’s important to take any leaked benchmarks with a grain of salt, but the Hierofalcon looks very promising!
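The efficiency rating in comparisons like these is typically just the benchmark score divided by the chip’s TDP. A minimal sketch of that calculation follows – note that only the 30W TDP comes from the leaked specifications; the benchmark scores here are invented purely for illustration:

```python
# Hypothetical perf-per-watt comparison. The scores are invented for
# illustration; only the Hierofalcon's 30W TDP comes from the leak.

chips = {
    # name: (benchmark_score, tdp_watts)
    "Hierofalcon (8x Cortex-A57)": (12000, 30),  # TDP from leak; score invented
    "Older AMD chip (example)":    (15000, 95),  # entirely hypothetical
}

for name, (score, tdp) in chips.items():
    print(f"{name}: {score / tdp:.0f} points per watt")
```

Even a lower absolute score can translate into a much higher efficiency rating when the TDP is a third of the competitor’s.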

Thank you WCCFTech for providing us with this information. 

Apple Guilty Of Using University of Wisconsin’s Patents Without Permission

We all use technology every day, and each of those thousands of devices is built up from designs and ideas drawn from countless sources. To protect companies’ interests in these designs and ideas, they use patents, and the normal process is that if you wish to use a patent’s idea or the technology it protects, you request the owner’s permission. Sadly, it would seem that some big companies use technology without supporting the little ones, and in this case, Apple has been found guilty of just such a misuse.

The University of Wisconsin holds a patent covering technology designed to improve chip efficiency, and given that almost every piece of technology these days uses chips, it’s obviously something that could support the university for a long time into the future. In this case, the courts have ruled in favour of the University of Wisconsin and found Apple liable for using the technology without permission, specifically in iPhones and iPads. The case is set to return to court, with the judge stating that it could cost Apple around $862.5 million, which isn’t the worst news for them given that just last month another case was brought against them regarding the same technology, this time concerning the A9 and A9X chips in the iPhone 6s and latest iPads.

This isn’t the first time these issues have been raised; in 2008, Intel had the same charges brought against it, though that case was immediately settled out of court. Either way, I doubt Apple is too worried about the money in the long term.

Thank you Engadget for the information.

DARPA Developing On-Chip Liquid Cooling

The US Defense Advanced Research Projects Agency is helping to develop on-chip liquid cooling for field-programmable gate array (FPGA) systems that could easily be adapted for use with CPUs and GPUs. On-chip cooling of this kind would allow manufacturers to shrink the size of processors without having to consider the addition of heatsinks and fans, while increasing the lifespan of chips.

Thomas E. Sarvey, Graduate Research Assistant at the Georgia Institute of Technology, presented the paper Embedded Cooling Technologies for Densely Integrated Electronic Systems, revealing the team’s on-chip liquid cooling research to date, during the IEEE Custom Integrated Circuits Conference 2015.

“We have eliminated the heat sink atop the silicon die by moving liquid cooling just a few hundred microns away from the transistors,” Muhannad Bakir, Associate Professor and ON Semiconductor Junior Professor in the Georgia Tech School of Electrical and Computer Engineering, said. “We believe that reliably integrating microfluidic cooling directly on the silicon will be a disruptive technology for a new generation of electronics.”

The research team cut microfluidic channels into the surface of the FPGA devices and attached a bespoke Altera-supplied water cooling system. It demonstrated the set-up to DARPA, using another air-cooled FPGA for comparison. The liquid-cooled chip clocked in at 24°C, compared to 60°C in the air-cooled control test.

Thank you The Stack for providing us with this information.

Intel to Enable SGX on Latest Batch of Skylake CPUs

Intel’s Software Guard Extensions (SGX) originally arrived with the Skylake architecture and provides a set of instructions that allows programs to set aside private regions of memory (enclaves) to protect sensitive code and data. On launch, the first batch of Skylake CPUs had this feature disabled for some unknown reason. Thankfully, the latest batch and all future Skylake samples will ship with SGX enabled by default, according to a Product Change Notification.

The CPUs in question are the Xeon E3-1200 v5, Core i5, and Core i7 variants, with a different S-Spec code to identify each chip’s batch revision. The SGX-enabled CPUs should become available on October 26th and don’t require any mechanical changes or re-certification. While this isn’t going to be a major concern for consumers, it’s interesting to see SGX being disabled on launch. Please note, only the S-Spec code will change; the naming scheme will remain the same.

Skylake prices have soared since launch, at certain retailers even surpassing that of the 5820K. Hopefully, as the supply chain improves, prices will decrease and the architecture could become a good upgrade path for those on older Intel chips.

What CPU are you currently using in your setup?

Thank you The Tech Report for providing us with this information.

AMD’s Carrizo APU Reduces Carbon Footprint by 46%

Earlier this year, AMD launched the A-series APUs under the “Carrizo” codename, which strive for energy efficiency and lower wattage demands. In 2014, AMD outlined the 25×20 energy strategy to produce chips 25 times more efficient than current products by 2020. According to AMD’s research team, the extremely efficient Carrizo architecture has put the company on course to reach its 2020 target. More specifically, Carrizo chips adjust the core voltage to match power demands, ensuring the maximum frequency is only used when required.
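As a quick sanity check on how ambitious the 25×20 goal is: a 25-fold efficiency gain over the six years from 2014 to 2020 implies a compound improvement of roughly 71% per year. A short sketch of that arithmetic:

```python
# The 25x20 target: 25x efficiency gain between 2014 and 2020 (six years).
target_gain = 25
years = 2020 - 2014

# Compound yearly multiplier needed to hit the target.
annual_rate = target_gain ** (1 / years)

print(f"Required annual gain: {annual_rate:.2f}x "
      f"(~{(annual_rate - 1) * 100:.0f}% per year)")
```

That sustained ~71% yearly improvement is well beyond what process shrinks alone deliver, which is why architectural tricks like Carrizo’s voltage scaling matter.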

In the enthusiast market, AMD has struggled to compete with Intel, especially in single-threaded performance. However, this is a fairly niche sector, and it’s sensible for AMD to work hard to manufacture low-cost, high-yield APUs which provide an excellent wattage-to-performance ratio. In the future, discrete graphics cards might become obsolete and be replaced by APUs as computational demands are offloaded to servers. Whatever the case, AMD needs to make its products more energy efficient, and that also applies to the Radeon brand. Thankfully, the Fiji architecture is a step in the right direction and illustrates AMD’s policy towards modern CPUs and GPUs.

Despite this, the majority of press coverage will surround AMD’s future high-end desktop CPUs and I hope they can produce something to shake up the market and make Intel feel less comfortable.

Thank you Venturebeat for providing us with this information.

Intel Expands Its Skylake Line-Up

Intel has announced the expansion of its 6th generation Core processor line-up, revealing seven new models in its Skylake series. The chips, built in the LGA1151 package, are compatible with motherboards running Intel’s 100-series chipset. The overclockers’ favourites, the Core i7-6700K and the i5-6600K, are to be joined by the i7-6700, i5-6600, i5-6500, i5-6400, i3-6320, i3-6300, and i3-6100 processors.

The Core i7-6700 and i5-6600 chips differ from the i7-6700K and i5-6600K in that neither has an unlocked multiplier, and both have lower clock speeds out of the box.

The i7-6700 has a 3.40GHz clock with 4.00GHz max Turbo Boost, compared to the i7-6700K’s 4.00GHz with 4.20GHz Turbo, and the i5-6600 clocks at 3.30GHz, climbing to 3.90GHz with Turbo Boost, with the i5-6600K offering 3.50GHz and 3.90GHz Turbo Boost.  The i7-6700 and i5-6600 are priced at $312 and $213, respectively.

The other Skylake quad-core processors include the Core i5-6500, which clocks at 3.20GHz with 3.60GHz Turbo Boost, priced at $202, plus the Core i5-6400, with a 2.70GHz clock and 3.30GHz Turbo Boost, which will cost $187.

The last three chips, which comprise the dual-core line-up, are the Core i3-6320, Core i3-6300, and Core i3-6100. The $157 i3-6320 has a 3.90GHz clock speed, while the i3-6300, priced at $147, clocks at 3.80GHz, and the i3-6100 runs at 3.70GHz and costs $117. Being i3 processors, none of the dual-core chips feature Turbo Boost, but all support Hyper-Threading, offering four logical CPUs.

Intel’s quad-core Skylake processors boast a TDP of 65W, while the dual-cores are rated at 47W.

Thank you TechPowerUp for providing us with this information.

Image courtesy of WCCF Tech.

Intel Processors Vulnerable to Rootkit Exploit Since 1997

A researcher from the Battelle Memorial Institute has revealed that every Intel x86-based processor released since 1997 – and possibly some AMD processors – is vulnerable to a rootkit exploit that could grant hackers access to the low-level firmware of a PC. Christopher Domas revealed the concern at the Black Hat 2015 conference in Las Vegas this week.

The vulnerable component of the chip is the System Management Mode, which is the part responsible for subsystem controls, such as power distribution. The exploit does require full system privileges, but a successful attack allows a hacker to delete a computer’s Extensible Firmware Interface, or even replace it with a rootkit. Such an attack would be completely undetectable by security scanners, and a rootkit would remain in place regardless of what is done to the board’s software or drives.

Since becoming aware of the bug, Intel has been working on a patch, but since the vulnerability has existed for nearly 20 years, it seems a little late. There’s no telling just how many PCs have fallen victim to this exploit, and it remains unlikely that any patch would reach every endangered processor. Thankfully, the difficulty of launching such an attack, both with the level of system privilege and coding skill required to abuse an exposed processor, means there should be few casualties.

Thank you HotHardware for providing us with this information.

Intel Skylake Series Prices Revealed

With the Intel Skylake-S series of processors set to be officially unveiled on 5th August, prices for each model have been leaked, and a cursory glance at the list below suggests a performance boost over the Intel Haswell Refresh for less money – an average of about 7% less:

Could we really be getting an upgrade on the Haswell Refresh at a reduced price? Well, it seems unlikely. The listed prices are presumably wholesale figures, and the retail cost will add at least another 25%. Even taking that inevitable mark-up into account, the picture is decent: the Haswell Refresh equivalent of the top-listed Skylake-S chip, the Intel Core i7-6700K, is the Intel Core i7-4790K, which retails for around $350. A 25% hike on the i7-6700K’s listed price puts it at approximately $400, which still seems reasonable for an impressive CPU.
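The mark-up arithmetic above is straightforward. A quick sketch, assuming a leaked i7-6700K wholesale figure of around $320 – an assumption inferred from the ~$400 estimate, not a confirmed number:

```python
# Hypothetical: estimate retail price from a leaked wholesale figure.
# The $320 wholesale value is an assumption consistent with the article's
# ~$400 retail estimate, not a confirmed price.

def estimated_retail(wholesale: float, markup: float = 0.25) -> float:
    """Apply a flat retail mark-up to a wholesale price."""
    return wholesale * (1 + markup)

print(f"i7-6700K: ~${estimated_retail(320):.0f} retail")
```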

The Skylake-S range of processors, described by Intel as its “most important chip architecture”, is set to be officially unveiled at Gamescom on 5th August, with the retail release expected to follow shortly after, before the end of August. The box art for the i7-6700K and i5-6600K has already been leaked after the chips went on sale early in Australia.

Image courtesy of VRWorld

19-Year-Old’s Supercomputer Chip Startup Gets DARPA Funding

Ambitious 19-year-old Thomas Sohmers, who launched his own supercomputer chip startup back in March, has won a DARPA (Defense Advanced Research Projects Agency in the US) contract and additional funding for his company, worth $100,000. The startup, Rex Computing, is currently putting the finishing touches on its chip architecture, with the final verified RTL expected to be completed by the turn of the year. The new Neo chips will be sampled next year, before moving into full production on TSMC’s 28-nanometre process in mid-2017.

Rex Computing recently raised $1.45 million in capital during its first round of funding – with which it intends to hire more engineers to facilitate production of its Neo chip – but will still have to rely on a small team working to a punishing deadline. Despite the constraints, though, Sohmers seems optimistic that Rex can deliver the Neo on time.

“When Intel does this, they have 300 or more people on many teams over 18 months. We’re doing it with five on a tight schedule,” Sohmers said. “The cost for us going to TSMC and getting 100 chips back is, after you include the packaging and just getting the dies to our door, around $250,000. They sell them in blocks with shared costs of the mask among other companies, which is how we’re getting our first prototypes made.”

The secret to Rex’s early success, in terms of funding and support, Sohmers claims, is to use the term “supercomputer” as infrequently as possible. “In this age of social networks and messaging apps being the big thing in Silicon Valley, it’s almost impossible to get funded if you’re pitching something for the big iron systems,” he explained.

Thank you The Platform for providing us with this information.

Image courtesy of Wikimedia.