For those of you hoping for a massive performance jump with the launch of Pascal, prepare to be disappointed. Every new generation tends to improve performance, but some more than others. According to previous rumours, Nvidia is using their GP104 die to replace the GTX 980Ti with the GTX 1080 and 1070. Now, the latest reports suggest that Nvidia will launch 3 different Pascal SKUs, all based on GP104, at Computex.
As the xx4 die, GP104 has traditionally been viewed as the smaller sibling to the larger x10 or x00 dies that power flagships. Because of this, don’t expect GP104-based Pascal to surpass the 980Ti by any large margin. Today’s news furthers that impression: by splitting GP104 into 3 SKUs, we can expect performance between the 3 cards to be pretty close. It wouldn’t make sense to have so many similarly performing cards sitting just below the flagship, which suggests that GP104 won’t be real flagship material.
By splitting GP104 into 3 SKUs, we will likely run into the same situation as the GTX 560Ti 448/570/580 and the 660Ti/670/680. If we take our past experience with those cards as the guideline, we can expect differentiation not just on the core but on memory bandwidth as well. This makes the previous rumours more plausible: the GTX 1070 would use GDDR5 while the GTX 1080 uses faster GDDR5X. The 1060Ti, as I am calling it, may feature either a cut-down 192-bit bus or the same situation faced by the GTX 970, with a section of VRAM being slower.
Right now, all we have to differentiate the 3 SKUs are the chip codes: the GTX 1080 will be GP104-400-A1, the GTX 1070 GP104-200-A1, and the 1060Ti will use GP104-150-A1. It will be interesting to see how Nvidia differentiates the cards and how they compete against current Maxwell models. Computex can’t come soon enough!
Even though much of the excitement about VRAM is coming from HBM2, that technology isn’t quite ready for prime time. For now, HBM2 is still a ways away and remains a premium product. To hold the line, memory vendors have come up with GDDR5X, a significantly improved version of GDDR5. In what is unquestionably good news, Micron has just started sampling their GDDR5X modules to customers, way ahead of their original summer target.
GDDR5X has been moving along quickly since JEDEC finalized the specification back in January. It was only last month that Micron got their first samples back from their fabs to test and validate. This suggests that GDDR5X was easier to implement than expected and that the quality of the initial batch was good enough that there wasn’t much to change in the production process.
Micron will be offering GDDR5X in 1GB and 2GB ICs, allowing for 8GB and 16GB of VRAM on memory buses as narrow as 256-bit. The biggest advantage of GDDR5X is the doubling of the data delivered per access, from 32 bytes to 64 bytes. Combined with higher clock speeds that allow for up to 16Gbps and improved power efficiency, the new memory will be a good match for Pascal and Polaris while we wait for HBM2.
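Those capacity and bandwidth figures follow from simple arithmetic. Here is a quick sketch, assuming a 256-bit bus populated by eight 32-bit ICs (the usual configuration):

```python
# Back-of-the-envelope GDDR5X figures for a 256-bit memory bus.
# Assumes eight 32-bit ICs; data rate is per pin, in Gbps.

def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

chips = 256 // 32                      # eight 32-bit ICs fill a 256-bit bus
print(chips * 1, "GB with 1GB ICs")    # 8 GB
print(chips * 2, "GB with 2GB ICs")    # 16 GB
print(peak_bandwidth_gbs(256, 16), "GB/s at 16 Gbps")  # 512.0 GB/s
```

The same formula explains why a narrower bus can keep up: doubling the per-pin data rate lets a 256-bit GDDR5X bus match a 512-bit GDDR5 one.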
Engineers at the Micron Development Center in Munich have announced that they got their first samples of GDDR5X back from the fab ahead of schedule and have started testing. Micron is also expecting to ramp up volume production of GDDR5X on their 20nm memory process sometime in mid-2016. GDDR5X is an evolution of GDDR5 rather than a brand-new memory technology and is expected to tide the industry over until HBM2 and HMC come online.
In early testing, some of the GDDR5X samples have already hit 13Gbps, just short of the eventual 14Gbps goal for production modules. Combined with an improved prefetch and a new quad data rate mode, GDDR5X is expected to double bandwidth over GDDR5 while increasing capacity and reducing power consumption. With progress going well, samples will begin shipping to partners (like AMD and Nvidia) in the spring, which means we are unlikely to see any GDDR5X-based cards until fall 2016.
While GDDR5X will still fall short of HBM2 bandwidth, it will undoubtedly be cheaper. It will also allow GPUs to be built with narrower buses while maintaining the same overall bandwidth, allowing for reduced power consumption and cheaper, faster GPUs. We can expect the mainstream and even performance segments to utilize GDDR5X while the budget cards stick with GDDR5 and the enthusiast cards use HBM2. For more on GDDR5X, check out our write-up here.
The JEDEC Solid State Technology Association, one of the world leaders in memory standards, today published JESD232, the specification for GDDR5X graphics memory. With both sides of the graphics card battle seemingly set to use this new standard going into 2016, its publication should herald the release of new graphics cards making use of the RAM.
GDDR5X graphics memory (or SGRAM) is derived from the commonplace GDDR5 used in the majority of current graphics cards, enhancing the existing standard in both design and operability to better handle applications that benefit from very high memory bandwidth. The aim for GDDR5X is to reach data rates in the region of 10 to 14 Gb/s per pin, twice as fast as GDDR5. While this falls short of the enormous 256 GB/s per stack that HBM2 is meant to be capable of, GDDR5X should be suited to more affordable grades of graphics card where HBM is not cost-effective. GDDR5X should also ensure an easy switchover from the previous standard for developers, as it retains GDDR5’s pseudo open drain signaling.
How GDDR5X impacts Micron’s development of GDDR6 remains to be seen, with both technologies seemingly targeting the same area of the graphics card market. Regardless, with HBM2 for enthusiast-grade cards and the newly standardized GDDR5X for the rest of the field, 2016 should be an exciting time for the GPU market whether you’re a fan of AMD or Nvidia.
Ever since HBM1 was revealed and launched with the Fury X, many have been looking forward to what HBM2 would bring in 2016. While HBM1 brought large power savings and a major boost in memory bandwidth, it was limited to a relatively low 4GB capacity. HBM2, however, is set to boost both capacity and bandwidth by increasing the number of stackable dies. We’re now getting reports that AMD’s upcoming Polaris chips will utilize HBM2.
As a major revamp of the GCN architecture, a Polaris flagship GPU would be the natural product to debut HBM2. A flagship much more powerful than current-generation chips, thanks to the new architecture and process node, would require more memory bandwidth to feed it and a higher memory capacity, as it would be aimed at VR and 4K gaming. Being the largest chip in the lineup, the flagship would also benefit most from HBM2’s power savings, helping offset its core power consumption. If the reports of HBM2 pan out, it also suggests that we will be getting high-end Polaris chips this year.
At the same time, AMD is also confirming that they will continue to use GDDR5, and likely GDDR5X as well. At CES, AMD showed off a low-powered Polaris chip using GDDR5 that provided the same performance as Nvidia’s GTX 950 at significantly lower power consumption. If GDDR5 alone already shows such massive gains, the HBM2 chips will likely be light years ahead of current cards in terms of efficiency.
The graphics card market is full of interesting power struggles, and if recent reports are true, 2016 will see one of the biggest battles yet. AMD may have already put out cards with HBM memory, and we’ve heard that Nvidia will be doing the same soon, but don’t count GDDR memory out just yet! It seems that the next GDDR standard is being developed by Micron to power mid-range graphics cards, while HBM2 will likely remain for higher-end cards.
Of course, there’s some confusion here, as JEDEC is already working on the GDDR5X standard, so where Micron’s effort fits in remains to be seen. GDDR5X is said to double the bandwidth, so is GDDR6 a new standard, or just a further refinement of 5X? Either way, we can expect it to adopt a smaller node, most likely starting at 20nm and working down from there, allowing for higher clocks and lower voltages, although these kinds of improvements are the obvious targets for any memory performance increase these days.
Our guess is that the revised GDDR standards will be acting as a bridge until HBM matures enough to cover a wider range of cards and budgets. Either way, 2016 is shaping up to be an exciting time in the GPU market, with new memory, new architectures, new cards and so much more on the horizon.
We have previously reported the rumours that Nvidia was planning to use GDDR5X on their upcoming Pascal graphics cards, a rumour that not everyone bought right away. But there are quite a few reasons why this could happen, and the newest leak seems to support it.
Last time it was a German site that leaked the GDDR5X information; this time the news comes from a Russian outlet that got their hands on what look like leaked Micron slides for the upcoming GDDR5X memory. The site also suggests that this won’t be limited to Nvidia: AMD will also want to get on board and use this type of memory on some of their graphics cards next year.
HBM and HBM2 might be very exciting and may well become the next mainstream graphics memory, but they are still costly to produce and production capacity is limited. This leaves room for the next GDDR5 standard to make its entry. GDDR5X doubles the data delivered per memory access, 64 bytes versus the 32 bytes of the current GDDR5 standard. Where current GDDR5 tops out at around 7Gbps, the new standard will initially offer 10-12 Gbps, with a later goal of 16 Gbps.
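The 32-byte versus 64-byte figures fall out of the prefetch depth: GDDR5 fetches 8 words per access (8n prefetch) on a 32-bit interface, while GDDR5X fetches 16 (16n prefetch). As a quick sketch of that arithmetic:

```python
# Access granularity = prefetch depth * interface width.
# GDDR5 uses an 8n prefetch, GDDR5X a 16n prefetch, both on 32-bit ICs.

def access_bytes(prefetch, interface_bits=32):
    """Bytes delivered per memory access."""
    return prefetch * interface_bits // 8

print(access_bytes(8))   # 32 bytes/access (GDDR5)
print(access_bytes(16))  # 64 bytes/access (GDDR5X)
```

Doubling the prefetch is what lets GDDR5X double the per-pin data rate without the memory core itself running any faster.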
Implementing the new memory should be relatively easy and cheap for manufacturers, as the new chips are said to retain the same pin layout. With all this information, we can assume that HBM will be reserved for top-tier graphics cards for the foreseeable future, while GDDR5X will breathe more power into mid-level and entry-level cards and allow them to perform better at ever-increasing resolutions.
The tech world is abuzz with rumours regarding NVIDIA’s next-generation architecture, Pascal, and the latest word (in German) is that the GPU producer will introduce a successor to GDDR5 with it. Speculation says that NVIDIA will debut GDDR5X with its next-gen GeForce cards. While these will likely retain the familiar 256-bit memory interface, GDDR5X’s bandwidth will stretch to 448GB/sec, blowing all its AMD rivals out of the water (bar the Fiji-based HBM cards, which are thought to be under threat due to reported supply problems). HBM2 should also feature in NVIDIA’s new graphics cards, with a 2048-bit memory bus at 1GHz and 512GB/sec of memory bandwidth.
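Both rumoured figures check out arithmetically: a 256-bit bus at GDDR5X’s 14 Gbps per-pin rate gives 448GB/sec, and a 2048-bit HBM2 bus at 1GHz (2 Gbps per pin with double data rate) gives 512GB/sec. A sketch of the arithmetic:

```python
# Peak bandwidth (GB/s) = bus width in bits * per-pin data rate (Gbps) / 8.

def bandwidth_gbs(bus_bits, pin_rate_gbps):
    """Peak memory bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_bits * pin_rate_gbps / 8

print(bandwidth_gbs(256, 14))   # 448.0 GB/s (GDDR5X: 256-bit at 14 Gbps)
print(bandwidth_gbs(2048, 2))   # 512.0 GB/s (HBM2: 2048-bit at 1 GHz DDR)
```

Note how HBM2 gets its bandwidth from an extremely wide bus at modest clocks, while GDDR5X relies on a narrow bus driven very fast.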
NVIDIA is thought to have been testing its Pascal architecture since last month, and the latest rumours suggest that the tests involve the GP100 and GP104 chips. To clarify, for those unfamiliar with NVIDIA’s GPU naming convention, its GM204 chip powers the GTX 980 and the GTX 970, while the larger GM200 is found in the GTX Titan X and GTX 980 Ti. Therefore, the GP100 and GP104 should mark the high-end Titan cards and the consumer GeForce cards, respectively.
NVIDIA’s Pascal architecture with GDDR5X is expected to hit the market towards the end of 2016.