The GPU-Z screenshot gives us a good idea of the card’s performance. The 2048 shader cores, 128 TMUs and 32 ROPs all clock in at a healthy 1070MHz. Pixel fill rate comes in at 34.2 GPixel/s, which is about what you would expect from the Tonga configuration. Texture fill rate is 137 GTexel/s, well ahead of what the R9 380 and R9 280X boasted. 4GB of 6125MHz GDDR5 VRAM rounds things out, delivering 172GB/s over the 256-bit bus. Overall, these specs place the card solidly between the R9 380 and the R9 290/390.
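As a quick sanity check, the quoted fill rates follow directly from the unit counts and core clock (pixel fill = ROPs × clock, texture fill = TMUs × clock). A minimal sketch:

```python
# Theoretical fill rates from the GPU-Z figures:
# pixel fill = ROPs x core clock, texture fill = TMUs x core clock.

def fill_rates(tmus: int, rops: int, core_clock_mhz: float) -> tuple[float, float]:
    """Return (pixel fill in GPixel/s, texture fill in GTexel/s)."""
    clock_ghz = core_clock_mhz / 1000.0
    return rops * clock_ghz, tmus * clock_ghz

pixel, texture = fill_rates(tmus=128, rops=32, core_clock_mhz=1070)
print(f"{pixel:.1f} GPixel/s, {texture:.1f} GTexel/s")  # 34.2 GPixel/s, 137.0 GTexel/s
```

These are peak theoretical figures, so real-world throughput will always come in a little lower.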
In 3DMark 11 Extreme, the card managed an overall score of 4024, with a relatively weak Intel Core i5 holding it back, and 3768 in the graphics test. The R9 290 scores around the 4200 mark and the R9 280X about 3300. Based on our estimates, extrapolating the Tonga/GCN 1.2 improvements over the R9 290X/GCN 1.0, we would expect the 380X to fall a little short of the R9 290 but still surpass the GTX 780 in most cases, even though the 780 scores about 3600 in 3DMark 11, a test that tends to favour Nvidia cards.
Overall, AMD looks to have a winner in the midrange with this card. Depending on the price, the 380X could steal some market share back from Nvidia, which currently has a sizeable performance gap between the 970 and 960. Given some of the limitations of the 960, Nvidia may want to consider a cut-down 970 that is not memory bottlenecked in order to do battle. As one of the last 28nm GCN cards, AMD is making sure to go out with a bang.
Just after we received word on Nvidia’s new GTX 950, it looks like Nvidia is preparing the ground for the card’s arrival. According to Hardware.fr, Nvidia is set to cut prices on the 750 Ti by about €10 to €15, suggesting it wants to clear out as much 750 Ti stock as possible before the replacement 950 arrives.
Given how close the specifications of the 750 Ti and 950 are, it’s not surprising Nvidia wants to move the 750 Ti as fast as possible. The 750 Ti comes with a core configuration of 640:40:16, only a tad behind the 768 shaders the 950 is armed with. While I don’t know how many TMUs or ROPs the 950 has, it likely falls somewhere between 40:16 and 64:32. With the 960 above it at 1024:64:32, there is little room for a 950 Ti, though Nvidia has not been shy about filling every possible niche, even if it means cannibalizing its own cards.
With 768 Maxwell 2.0 shaders, the 950 should do battle at the old 750 Ti price point of about $120 to $150 USD. That puts it right up against the R7 370 for low/medium 1080p gaming, a card that performs a little ahead of the 750 Ti. Given the large price gap between the 950 and the 960, Nvidia may end up launching a 950 Ti with 896 shaders, or with 768 shaders and higher TMU and ROP counts. With Pascal unlikely to arrive until late 2016, especially at the low end, the 950 may be here to stay for quite a while.
AMD is putting the final touches on everything and preparing to launch its new Radeon R9 300 series graphics cards very soon, but before the series even hits the market we already have information about the next generation of AMD graphics cards. The R9 300 series is set to launch in June during Computex in Taipei and it will continue to use the 28nm process, as the 20nm process just isn’t viable yet for these kinds of products; the costs are simply too high.
But the next generation of AMD cards, the Arctic Islands series codenamed Greenland, will be built on 14nm FinFET technology, meaning AMD could skip the 20nm process entirely. Another detail revealed is that the Greenland cards will use the second generation of HBM memory, with increased bandwidth and capacity. It is expected that AMD’s 14nm FinFET chips will be produced by GlobalFoundries.
This could mean some promising times ahead, with more powerful GPUs that use even less power than today’s, as well as heavily improved memory in both capacity and speed. I can hardly wait to see what AMD has to offer here, although for now we should be looking forward to the next-generation R9 300 cards instead. Computex isn’t far away, so it will be an exciting summer.
The R9 290X has led AMD’s single GPU offerings for what seems like quite a long time now. Released in October 2013, the R9 290X is only a year old, but its ageing process has been accelerated by a succession of faster Nvidia graphics cards: the GTX 780 Ti and, more recently, the GTX 980. To ensure competitiveness in the marketplace AMD has kept the R9 290X at an attractive price point, although Nvidia’s GTX 980, being a generation ahead in architectural terms, has thrown a spanner in the works. Many AMD partners have taken it upon themselves to issue price cuts on the R9 290X, independent of AMD’s official pricing guidance. On the wave of those price cuts, today we are assessing Sapphire’s newest launch: the R9 290X Vapor-X 8GB graphics card. It uses an identical cooling solution to the 4GB Sapphire R9 290X Vapor-X and is also visually similar, colour scheme aside, to the Tri-X cooler equipped on slightly cheaper Sapphire cards. The obvious flagship feature of this new card is the doubling of VRAM, aimed at gamers tackling the newest video-memory-intensive titles like Middle-earth: Shadow of Mordor.
Packaging and Accessories
The packaging and accessory bundle is much better equipped than that of most R9 290Xs on the market, as it includes a free mouse mat, an HDMI cable and dual power adapters.
A Closer Look
The card itself is a sheer monster: triple fan, triple slot and backplate equipped.
At the bottom we find a rather dense vapour-chamber style heatsink.
At the end of the card we get a glimpse of the five heat-pipes being used: a trio of 8mm heat-pipes at the centre of the contact area and a pair of 6mm pipes at the edges of the GPU.
The card takes up nearly three slots in thickness, which isn’t surprising given how hot the R9 290X can run: you need a lot of heatsink to tame Hawaii. Along the top we find a pair of 8-pin connectors for power delivery and a BIOS switch for toggling between UEFI and legacy BIOS operation modes.
On the rear of the card we find a nice looking backplate and individual heatsinks for the VRM phases: pretty cool!
The I/O offers a pair of DVI ports, an HDMI port and a DisplayPort. That’s enough connectivity to power six displays with the help of a DisplayPort MST hub.
In May of this year ASUS introduced a new product series into its graphics card line-up: the STRIX line. STRIX first showed up on two ASUS graphics cards sporting the AMD R9 280 and Nvidia GTX 780 GPUs, with a 0dB fan mode below 65 degrees Celsius as the headline feature. Shortly after those releases ASUS expanded the STRIX moniker to a range of gaming peripherals to create a comprehensive brand for gaming needs. In the graphics card space the STRIX series has phased out the primacy of the ASUS DirectCU series, although “phased out” should be used loosely since ASUS effectively market the STRIX GTX 900 series cards as STRIX and DirectCU products at the same time. While confusing, the dual naming makes clear what these STRIX GTX 900 series cards offer: DirectCU II cooling solutions plus the STRIX 0dB fan mode.
Today we are reviewing the ASUS STRIX GTX 970 graphics card, which is inspired by the DirectCU philosophy as we find a direct copper contact cooling solution with a pair of fans. ASUS have upgraded the power delivery to a DIGI+ 6-phase super alloy power design and, to give end users some further benefit, have pushed the clock speed up by about 65MHz too. The much-paraded 0dB fan mode is also present, and it’s worth pointing out that other manufacturers have since jumped on the passive-fan bandwagon; MSI and EVGA are the most recent vendors to offer some variant of it.
Packaging and Accessories
The ASUS STRIX GTX 970 comes in a dual-branded box as we’ve already mentioned: both DirectCU II and STRIX branding sit side by side. The accessory package is basic: a driver DVD and a generic setup guide. No power or display adapters here.
Nvidia’s Maxwell GTX 980 arrived in the high-end graphics card market with a thump just a few weeks ago, causing significant whiplash to AMD’s highest-end offerings. The GTX 980 holds a notable advantage in performance, price and power efficiency over AMD’s flagship R9 290X. Amid the GTX 980 launch it has been easy to forget that Nvidia launched the GTX 970 too, which comes in $230 cheaper with only a marginal decline in raw GPU horsepower compared to the flagship GTX 980. On paper, at least, the GTX 970 looks to be the price-to-performance champion of the high-end graphics card market, offering the best synthesis of price, performance and power consumption.
Today we get our first glimpse at what the Nvidia GTX 970 has to offer as we examine a custom GTX 970 from Gainward’s Phantom line. As the name suggests, Gainward equip their famous Phantom cooling solution as well as a hefty out-of-the-box overclock; from what we can infer, this treatment commands a roughly 10% price premium over Nvidia’s reference MSRP of $329.
Packaging and Accessories
The accessory package that Gainward provide with the Phantom GTX 970 is basic: a power adapter and a DVI-to-VGA adapter with a quick install guide.
At eTeknix we were fortunate enough to receive a pair of GTX 980s for Nvidia’s launch day, so we thought we’d see what they have to offer in SLI. When running a pair of the same GPUs in SLI it is ideal to use identical graphics cards for maximum consistency of cooling, clock speeds, VRAM sizes and so on. In our case that wasn’t possible: we have one GTX 980 from Nvidia, the reference model (check our review of that here), and one GTX 980 from Gigabyte, their flagship G1 Gaming model (check our review of that here).
For our SLI testing both GTX 980s were driving an ASUS ROG SWIFT PG278Q display. From our reviews of both GTX 980s we know a single GTX 980 graphics card is more than enough for a smooth 60 frames per second (FPS) at 1440p in the vast majority of titles. However, if you want to make effective use of the 144Hz refresh rate of the ASUS ROG SWIFT, you’re going to need to churn out as close to 144 FPS as possible. In this scenario SLI GTX 980s actually make sense, because on its own one GTX 980 is not enough to drive such a high frame rate. Of course, it goes without saying that you need deep pockets to afford a 144Hz 1440p monitor and a pair of GTX 980s, but even if you don’t have that kind of money, the numbers are still interesting to see.
We put both GTX 980 graphics cards onto our Core i7 4960X and X79-based test system ensuring adequate spacing and that both have access to sufficient PCIe bandwidth for SLI operation.
To keep both GPUs running effectively, we made sure the better-cooled Gigabyte GTX 980 sits in the traditional “hot position” of an SLI configuration, while the Nvidia GTX 980 reference card sits at the end to give it more room to breathe.
Gigabyte’s model comes heavily overclocked, so we had to underclock it for this testing to make it resemble a reference GTX 980 as closely as possible. By taking 101MHz off the core we were able to match the GPU clock speeds on both cards, although the Gigabyte card (the GPU-Z screenshot on the right) retains a slightly faster boost speed.
We also didn’t want to run into any cooling issues. We know from our reviews that Gigabyte’s card doesn’t thermal throttle but Nvidia’s reference design does. As a result we set a custom fan profile within MSI Afterburner to keep both GPUs under 80 degrees and avoid clock speed variations: it’s easier to ensure neither GPU thermal throttles than to make Gigabyte’s GTX 980 throttle in the same way as the reference card. The main thing is that we didn’t want significant clock speed mismatches, so this was the smartest option.
Running the SLI configuration in Unigine Heaven for a while gave us an idea of what stable clocks both graphics cards were using. The core and memory clocks were pretty much identical which is great news for getting some accurate test results. We will of course overclock both cards towards the end and see how that helps overall performance.
Nvidia’s GTX 980 is now official and with it comes the wonders of the Maxwell architecture for high-end desktop users. In this review we are taking a look at Gigabyte’s custom cooled G1 Gaming GTX 980 graphics card, pictured above, but we encourage you to check out our Nvidia GTX 980 review first if you haven’t already done so. That review will give you more background on the basis of the Maxwell GTX 980 graphics card whereas this review is more focused on what Gigabyte have done with their GTX 980 to make it different. On paper the most obvious change is a hefty overclock of 100+ MHz in tandem with an unlocking of the GTX 980’s power limit to unleash all of its power.
The standout feature from Gigabyte’s perspective is the latest iteration of their WindForce cooling solution, which they claim is now capable of handling 600W compared to the 450W it previously dealt with. The new WindForce cooling solution features six heat pipes, three fans, a VRAM heatsink and a totally redesigned aesthetic.
Part of that aesthetic includes a WindForce LED logo and a styled metal backplate with the G1 Gaming branding on it.
Gigabyte have also tweaked the standard display connectivity by adding an extra DVI port, which gives the end user a little more flexibility for running multi-monitor setups.
Other things worth noting are that this card features a speed-binned GPU and an 8-phase GPU power design.
Packaging and Accessories
The packaging theme is fairly similar to what we’ve seen on previous Gigabyte cards except now Gigabyte is unifying all its gaming products under the G1 Gaming line instead of having a separate WindForce line for VGA products.
Around the back of the box, the features we’ve already mentioned are discussed in more detail.
The accessory pack is basic, including just power adapters and a quick-start guide. Then again, with a graphics card this expensive, your power supply really shouldn’t need the adapters anyway.
Nvidia’s newest Maxwell architecture first made its appearance in February this year when Nvidia revealed its entry-level GTX 750 Ti graphics card. One of the obvious hallmarks of Maxwell that the GTX 750 Ti revealed was an incredible level of power efficiency, as well as a drastic increase in compute performance. Since Nvidia gave us that taste of Maxwell we’ve been eagerly awaiting the high-end variant, and that’s exactly what we get today with the GTX 980’s launch. The GTX 980 is a direct upgrade of Nvidia’s “middle” desktop GPU; it takes the GK104 and bumps it up to GM204. This tells us two things: firstly, the GTX 980 is the true successor to the GTX 680/GTX 770, and secondly, we should see an even higher-end GM210 Maxwell part arrive later to replace GK110. However, let’s not get ahead of ourselves, as today is all about GM204. GM204 is 28nm Maxwell, just like the GTX 750 Ti, which means the rumours that Nvidia will release two iterations of Maxwell, one at 28nm and one next year at 20 or 16nm, remain credible.
On paper the GTX 980 doesn’t look that impressive: it uses the same manufacturing process as its predecessors with fewer CUDA cores and ROPs. However, don’t let the numbers fool you: Maxwell is not like-for-like comparable with Kepler, as it is a totally redesigned architecture. As anticipated, Maxwell’s power efficiency gives the GTX 980 a TDP of just 165W; that’s 85W lower than the GTX 780 it succeeds in naming terms. It also offers a welcome video memory boost to 4GB, matching AMD’s offerings and silencing critics who claim Nvidia doesn’t offer enough video memory to be 4K ready, the insanely priced GTX Titan series aside. Pricing isn’t bad either: it carries a $50 premium over the GTX 780 it replaces and is $100 cheaper than the GTX 780 was at launch.
A brief specifications analysis shows Nvidia has no competition where the GTX 980 sits. AMD’s best offering is the R9 290X, which fits somewhere between the GTX 780 and GTX 780 Ti, while the GTX 980 is geared to outperform both. Not to mention that AMD was already quite some way behind Kepler on power efficiency; Maxwell looks set to turn that small gap into a valley (maybe even a Unigine one!).
Packaging and Accessories
Nvidia’s press sample of the GTX 980 comes simply packaged, no accessories, just the card in a sleek-looking box. Simple, yet effective. The claim to be “The World’s Most Advanced GPU” also shows a fair amount of bravado on Nvidia’s behalf; let’s hope it lives up to those expectations.
After much anticipation and speculation we can finally present our review of AMD’s new “Tonga” based graphics card. Today we are reviewing Tonga Pro, that’s the first iteration of Tonga, which forms the R9 285 graphics card. It is expected that a Tonga XT variant will arrive at a later date to form the R9 285X. We’ve already known the specifications of the R9 285 for a while since AMD officially revealed them a few weeks back, but let’s go over them again:
On paper the R9 285 should be very similar in performance to AMD’s R9 280, and as a result it should be faster than Nvidia’s GTX 760, given that the R9 280 was. At $250 MSRP the R9 285 is a direct competitor to the GTX 760; indeed, AMD’s marketing campaign for the R9 285 is spearheaded by the fact that it beats the GTX 760. At $250 it is also a direct competitor to the R9 280, especially as it offers similar performance. I wouldn’t be surprised if AMD allows the two to co-exist at the same price point, because they do offer slightly different things: the R9 280 has more memory while the R9 285 has better power efficiency, and the two trade blows on performance depending on the game or application. Things get a bit confusing, though, when we look at actual retail pricing instead of MSRPs: AMD’s R9 280 can be had for as low as $210, and at that price the R9 285 starts to look a bit expensive at $250.
AMD sent us Sapphire’s R9 285 Dual-X 2GB graphics card for review. This card comes factory overclocked from the 918MHz stock speed to 965MHz on the core, and from 5500MHz to 5600MHz on the memory. It also comes equipped with Sapphire’s custom Dual-X cooling solution, so it should perform better than a “reference” R9 285 graphics card.
AMD’s R9 285 is positioned below the R9 280X but above the R9 270X. It appeared AMD wanted to replace the R9 280 with the R9 285, and we can now confirm that the R9 285 makes the R9 280 End-Of-Life (EOL).
AMD claims that the R9 285 offers the best of both the R9 280 series and the R9 290 series. By this they mean it has all the performance of an R9 280 series card along with the updated features of the new R9 290 series, such as built-in H.264 decoding, FreeSync support, the AMD TrueAudio DSP and bridge-less XDMA CrossFire.
Just over a month ago we published our AMD driver analysis article, looking at the progress two generations of AMD flagship GPUs, the HD 7970 and R9 290X, had made through driver updates. We compared each flagship’s launch drivers to the latest drivers and calculated the improvements across a variety of games and benchmarks. Now that we’ve covered AMD, it is of course time to do the same for Nvidia. We will be taking Nvidia’s last two GTX series flagships, the GTX 680 for the GTX 600 series and the GTX 780 Ti for the GTX 700 series, and comparing their launch drivers to the current latest drivers. For both graphics cards we will test two scenarios on identical test systems: the first with the drivers each card shipped with at launch, and the second with the most recent driver release available at the time of testing.
Nvidia’s GTX 680 launched on March 22nd 2012, exactly four months after AMD’s HD 7970, shipping with Nvidia Forceware 301.10 drivers. The GTX 780 Ti launched on the 7th of November 2013 with the special press driver package 331.70, which was identical to the 331.65 package except for “official” GTX 780 Ti support; we used the 331.65 package since it supports the GTX 780 Ti and is identical to the unavailable 331.70 release. The latest driver package for both cards is 340.52. The story with Nvidia is similar to AMD’s: the flagships of the last two generations are based on the same microarchitecture, in Nvidia’s case 28nm Kepler. The GTX 680 has had 2.5 years of driver updates while the GTX 780 Ti hasn’t even had a year, and because the GTX 780 Ti uses the same 28nm Kepler architecture as the GTX 680, many of the largest performance optimisations had already been made by the time it was released. We should therefore expect the GTX 680 to show greater improvements than the GTX 780 Ti. If you read our equivalent AMD driver analysis you will see the story with the HD 7970 and R9 290X is nearly identical.
One thing I would like to note before we dive into the results is that you shouldn’t use them to compare the GTX 680 to the HD 7970, or the R9 290X to the GTX 780 Ti. Why? Because for our Nvidia tests we used Gigabyte’s GHz Edition GTX 780 Ti and MSI’s Lightning Edition GTX 680, both heavily overclocked out of the box, while for the AMD tests we used two cards that were close to, if not at, stock speeds. This was not done intentionally to favour Nvidia; these were simply the only non-reference cards we had to hand, and we only use non-reference cards because we want to avoid the inconsistencies associated with the thermal throttling that both AMD and Nvidia cards experience. In short, Nvidia’s results carry a roughly 10% or greater advantage over the AMD ones due to the factory overclocks. For those wondering why we didn’t just downclock the Nvidia cards to stock speeds: we could have, but the Turbo boosting baked into Kepler means no two cards perform the same even when clocked at the same speed.
AMD’s Tonga Pro R9 285 is just around the corner, September 2nd to be exact, and so far we’ve seen a compact mini-ITX version from Sapphire as well as a STRIX version from ASUS. Now it is Powercolor’s turn in the spotlight with their TurboDuo R9 285. Powercolor’s card features their dual-slot cooling solution with two “double-bladed” fans: smaller blades at the centre and larger ones at the edge. A modest factory overclock takes the core from the stock 918MHz to 945MHz, while the 2GB of GDDR5 memory retains the stock 5.5GHz speed.
Two 6-pin power connectors feed the R9 285 GPU, which packs 1792 GCN cores, 112 TMUs, 32 ROPs and a 256-bit memory interface.
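For reference, the bandwidth that memory spec implies works out as the effective clock multiplied by the bus width in bytes; a quick sketch:

```python
# Peak theoretical memory bandwidth = effective clock x (bus width / 8) bytes.

def memory_bandwidth_gbs(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s (real-world throughput will be lower)."""
    return effective_clock_mhz * (bus_width_bits / 8) / 1000.0

# The R9 285's 5.5GHz effective GDDR5 on a 256-bit bus:
print(memory_bandwidth_gbs(5500, 256))  # 176.0
```

That 176GB/s peak figure is why the narrower 256-bit bus needs Tonga’s colour compression to keep pace with Tahiti’s 384-bit interface.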
The Powercolor R9 285 TurboDuo will probably stick to reference pricing of $250 and will be available in early September.
The idea behind the ASUS STRIX range is that the card operates silently up to 65 degrees, after which the fans spin up. For that reason ASUS has only given the STRIX treatment to power-efficient GPUs: unsurprisingly the GTX 780 saw a STRIX variant, while the highest-end AMD card to get one was the R9 280; anything above that runs too hot for the STRIX cooler to be effective. With the R9 285 released, ASUS now has another STRIX variant up its sleeve thanks to the improved power efficiency of Tonga.
The star of the show is the ASUS STRIX cooling solution, which features a hybrid fan mode like you’ll see on many power supplies. The basic logic is that below 65 degrees Celsius the fans do not spin; after that they kick in with a fairly standard fan profile. The core of the cooling design is a DirectCU II implementation: a dense aluminium heatsink array supplemented by direct-contact copper heat pipes and a pair of what appear to be 80mm fans.
As the box denotes, ASUS have factory overclocked the card, although by how much is anyone’s guess. This ASUS STRIX R9 285 will offer 2GB of GDDR5, although we may see a 4GB variant later down the line. Expect pricing to command a 10% or greater premium over the R9 285 MSRP of $249.99. Availability will be from September 2nd: that’s a week tomorrow.
During its “30 Years of Graphics and Gaming” commemoration event, AMD officially announced the R9 285 graphics card. AMD claims the R9 285 will be its most power-efficient GPU yet, although the full specifications remain under wraps until the official release, rumoured for September 2nd. The R9 285 is rumoured to have 2GB or 4GB of GDDR5 memory on a 256-bit memory interface. It is based on the Tonga Pro silicon, and there will also be an R9 285X based on the Tonga XT variant; the R9 285 Tonga Pro features 1792 GCN cores while the R9 285X gets 2048.
AMD claims all its partners will be unveiling R9 285 products on launch day. Pricing, clock speeds and other details are still all to be confirmed but you can stay on top of all our R9 285 coverage right here. During their live stream event AMD revealed the ASUS STRIX R9 285, pictured at the top, which they auctioned off on eBay with the proceeds going to the charity “Child’s Play”.
Maxwell, Maxwell, Maxwell… where art thou? Since Nvidia announced the GTX 750 Ti and GTX 750 we’ve seen very little of Maxwell in the desktop graphics card space. Now a few entries for Maxwell-based Nvidia Quadro products have shown up in a Quadro driver release, which suggests Maxwell is back. Why is this important? Because new GPU architectures from Nvidia often arrive in Quadro series products near the launch of their desktop counterparts. The entries that showed up were as follows:
NVIDIA_DEV.0FF3 = NVIDIA Quadro K420 (GK107)
NVIDIA_DEV.13BB = NVIDIA Quadro K620 (GM107)
NVIDIA_DEV.13BA = NVIDIA Quadro K2200 (GM107)
NVIDIA_DEV.11B4 = NVIDIA Quadro K4200 (GK104)
NVIDIA_DEV.103C = NVIDIA Quadro K5200 (GK110)
NVIDIA_DEV.13B3 = NVIDIA Quadro K2200M (GM107)
The sad part, for the eagle-eyed among you, is that these are only GM107-based products. GM107 is of course the Maxwell GPU at the heart of the GTX 750 Ti and GTX 750, so it isn’t anything “new”. The move is perhaps a hint that Nvidia is looking to bring more of Maxwell to market in the near future, but realistically we still have no accurate date for when “new” Maxwell GPUs, such as the much-discussed GM204, will be released.
Will we see AMD’s next-generation graphics cards arrive this year? If so, will they be based on the next-gen 20nm process shrink? Those are questions we’ve been pondering for a while, and if AMD’s most recent conference call on its Q2 financial performance is anything to go by, we now have a much better idea. During the call AMD’s Lisa Su, Senior Vice President and Chief Operating Officer, told listeners that “We (AMD) will be shipping products in 20 nanometre next year and as we move forward obviously a FinFET is also important”. We can therefore expect AMD’s 2015 releases to arrive on 20nm technology, while anything released this year will still be a 28nm design. That’s not to say 28nm will be abandoned as soon as 2014 is over; it will likely continue in a lot of new 2015 products simply because the 28nm process is mature, profitable and well-refined.
AMD’s CEO Rory Read has already commented on AMD’s potential transition to 20nm stating that AMD is waiting for the optimal crossover point between profitability, cost of the technology and cost of the product. With TSMC only properly gearing up 20nm production a few months ago it seems likely that the crossover point will not arrive until 2015.
AMD and Nvidia both talk fairly big when it comes to driver updates. With every driver release we hear the usual technical (or should that be marketing?) talk about improved performance in this, that and the other. After a lot of thinking I decided to investigate further: wouldn’t it be interesting to see how much progress AMD and Nvidia actually make with their drivers over the duration of a product’s life cycle? We’ll be starting this two-part series with AMD, and in particular I want to look at the last two flagship single GPUs of each generation. I’ll be putting the XFX AMD HD 7970 Double Dissipation 3GB graphics card on the test bench along with the XFX AMD R9 290X Double Dissipation 4GB: the flagship single GPUs of the HD 7000 and R9 2xx series. I will be benchmarking both graphics cards on an identical test system at stock clocks under two scenarios. Scenario 1 uses the AMD driver package each card launched with and scenario 2 uses the most recent AMD driver package available. In this way we can see the driver progress AMD’s HD 7970 and R9 290X have made since launch.
AMD’s HD 7970
AMD’s HD 7970 launched on December 22nd 2011 and used AMD Catalyst driver package version 11.12 RC11; this was a special beta release for the HD 7970, as official support wasn’t added until Catalyst 12.2 WHQL. AMD’s R9 290X launched on October 24th 2013 and used AMD Catalyst driver package version 13.11 Beta 6. The most recent driver release from AMD (at the time of writing) is Catalyst 14.7 RC1. Of course, the HD 7970 has had significantly more time on the market, nearly 3 years, whereas the R9 290X has had less than a year. It is also worth noting that both cards are built on the virtually identical 28nm GCN architecture, so many of the largest optimisations had already been made for GCN before the R9 290X was even released. That’s a long-winded way of saying we should see dramatically more progress from the HD 7970 than from the R9 290X. Either way it will be really interesting to see what the results show, so let’s get on with some testing!
An Nvidia product described as a “GM200 A1 Graphics Processor” has been spotted in shipment from Taiwan to Bangalore, where it will arrive at an Nvidia facility for further testing. The A1 stepping signifies the pre-production status of the chip; it will move to an A2 stepping before being pushed into mass production for the consumer market. According to the source, the GPU, codenamed GM200/GK210, will be built on the 28nm fabrication process using the Maxwell architecture. It is rumoured to feature over 4000 CUDA cores and a widened 512-bit memory bus. Given the size of the 28nm process, this new GPU will be very large, over 600mm², and Nvidia will have to rely on the efficiency gains of the Maxwell architecture to keep such a large chip running cool. The GM200/GK210 GPU will form the next generation of Titan flagship, for now called the GTX Titan II. Expect the launch in the first half of 2015.
The launch of Nvidia’s GM204 Maxwell-based video cards looks to be fairly close. We should see the GTX 800 series based on the new architecture by the end of the year for sure; current rumours are touting the third quarter, which means by the end of September. There will be no process node upgrade with the GTX 800 series despite TSMC being ready with its 20nm process; the current 28nm process will prevail for the first wave of products, the “A stepping”. The GTX 880 Ti, GTX 880, GTX 870 and GTX 860 will all launch with a 28nm GM204 GPU. Expect the GTX 880 Ti to have the fully enabled GPU die, with more parts of the die disabled as you descend the stack.
The surprising news is that next year the Maxwell “B stepping” will involve a die shrink. However, this is not going to be a 28nm-to-20nm shrink but a 28nm-to-16nm shrink; Nvidia will apparently be the first to use TSMC’s 16nm FinFET technology. This will take place mid-Q1 (so sometime in February), meaning a 4-6 month gap between Maxwell A and B. Interestingly, the 16nm variants are rumoured to get the same names as their 28nm predecessors, which may confuse the retail product stack even more as we’d end up with two GTX 880 Tis, one 28nm and one 16nm. This wouldn’t be the first time Nvidia released a new stepping into the same product series: we saw two GK110 steppings within the GTX 700 series, which is what allowed graphics cards like the GTX 780 GHz Edition to be released when other GTX 780s based on the first GK110 stepping struggled to get near those frequencies. However, no die shrink was involved in the GK110 case, so we could see alternative outcomes this time. Nvidia might release a new series, the GTX 900, re-releasing the GTX 800 product stack at 16nm. Or we might see the 16nm parts released within the existing GTX 800 series under new names, perhaps by adding a “+”, changing the 0 to a 5, or adding some other signifier.
AMD are preparing to launch a new 28nm GPU next month in a bid to counter Nvidia’s popular GeForce GTX 760 series. Codenamed Tonga, the new GPU will replace the much loved but ageing Tahiti Pro hardware. AMD already have a competitor for the GTX 760 in the R9 280, but Nvidia have them beat in terms of power consumption and heat. Cards such as the R9 270X offer similar power and thermal levels, but they are slower than the Nvidia offerings, and this is something we expect AMD will be looking to address.
If AMD can reduce power consumption and cost whilst offering similar levels of performance, then they could be onto a winner with Tonga. With the 28nm Tonga hardware expected to feature 2048 GCN2 stream processors, 128 TMUs, 32 ROPs and a 256-bit wide GDDR5 interface it’s certainly no slouch.
The card does feature a narrower memory bus, but with an increased stream processor count and likely higher clock speeds it should be able to balance that out nicely. Stay tuned for more information.
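As a rough sanity check on that balancing act, here is a back-of-the-envelope comparison of theoretical shader throughput and memory bandwidth. Note the clock and memory speeds used for Tonga below are illustrative assumptions on our part, not confirmed specifications; the Tahiti Pro figures reflect the R9 280-class configuration.

```python
# Back-of-the-envelope GPU throughput estimates.
# Tonga clock speeds here are assumptions for illustration, not confirmed specs.

def gflops(shaders, core_mhz):
    """Single-precision GFLOPS: shaders x 2 ops per clock (FMA) x clock speed."""
    return shaders * 2 * core_mhz / 1000

def bandwidth_gbs(bus_bits, effective_gbps):
    """Memory bandwidth in GB/s: bus width in bytes x effective data rate."""
    return bus_bits / 8 * effective_gbps

# Rumoured Tonga: 2048 shaders, 256-bit bus (assumed 1000 MHz core, 5.5 Gbps memory)
print(gflops(2048, 1000))       # -> 4096.0 GFLOPS
print(bandwidth_gbs(256, 5.5))  # -> 176.0 GB/s

# Tahiti Pro (R9 280 class): 1792 shaders at 933 MHz, 384-bit bus at 5 Gbps
print(gflops(1792, 933))        # -> 3343.872 GFLOPS
print(bandwidth_gbs(384, 5.0))  # -> 240.0 GB/s
```

Even with the narrower bus leaving Tonga with less raw bandwidth, the higher shader count and clocks give it more compute throughput, which is the trade-off the paragraph above describes.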
Thank you TechPowerUp for providing us with this information.
A new Nvidia GTX 800 series mobile GPU has shown up in the retail channel in Thailand, hinting at a potential high-end Maxwell part. The news is exciting because it would be the first high-end Maxwell part we have seen; the GTX 750 Ti is all we’ve seen from Maxwell so far. The listing shows an ASUS G750JS gaming notebook sporting an Nvidia GTX 870MX; currently we only know of the GTX 870M, which is a Kepler GK104-based part. The GTX 870MX nomenclature, along with the GTX 880MX, is apparently reserved for Maxwell-based parts, likely on 28nm given 20nm production issues. The possibility that this notebook is sporting a high-end Maxwell GPU is reinforced by the different part number: the GTX 870M variant goes by the part number G750JS-T4104H, while this model uses G750JS-T4108H.
Of course, after all of that discussion, there is also the possibility that the GTX 870MX is simply a typo. The change in part number could just reflect a stock refresh or a change in other components used, and could be totally unrelated to the GPU.
Three days ago we heard news that a new AMD flagship single-GPU graphics card could be incoming. The rumour had it that this new card would be based on a fully enabled variant of the Hawaii GPU, known as Hawaii XTX; the current R9 290X uses Hawaii XT. Now VideoCardz reports that this rumour has been confirmed by their industry sources. As expected, the Hawaii XTX card will have a fully enabled Hawaii GPU, which means 48 compute units, a whopping 3072 GCN cores, 192 texture mapping units and 64 render output units. Beyond that we aren’t 100% sure what the card will even be called. The initial expectation is the R9 295X, but it could also be the R9 290XT or something else along those lines.
The card will likely compete with Nvidia’s flagship GTX Titan Black or GTX 780 Ti, but at a much lower price point. An extra 256 GCN cores over the R9 290X’s 2816 may not seem like much, but if AMD can get the cooling right (which they clearly didn’t with the R9 290X) then it could make a real difference. Pricing will most likely start at $600+ since the R9 290X costs $550, and will probably depend on the type of cooling solution the card receives; a water-cooled model might start at $700. It is also quite possible AMD will not release a reference version at all and will leave it to board partners to release custom designs.
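To put the gap in perspective, here is a quick sketch of the raw shader-throughput difference between a fully enabled Hawaii die and the R9 290X. Assuming equal clock speeds (a simplification on our part), the gain comes entirely from the four extra compute units:

```python
# Relative shader-throughput gain of a fully enabled Hawaii die over the
# R9 290X, assuming equal clock speeds (a simplifying assumption).
# Each GCN compute unit contains 64 stream processors.
xtx_cores = 48 * 64   # Hawaii XTX: 48 CUs -> 3072 GCN cores
xt_cores  = 44 * 64   # Hawaii XT (R9 290X): 44 CUs -> 2816 GCN cores
gain = (xtx_cores - xt_cores) / xt_cores
print(f"{gain:.1%}")  # -> 9.1%
```

A single-digit percentage uplift in raw shader resources is why improved cooling, and the sustained clock speeds it enables, would matter as much as the extra cores themselves.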
We’ve already heard some early rumours about Nvidia’s GM204, and now we’re hearing some details about the GM210. As the name suggests, the GM210 is expected to replace the GK110 GPU we currently have. According to the details, the GM210 has a 512-bit memory interface and 384 texture mapping units (TMUs), while most of the other details remain unspecified or unknown. The GM210 will likely make use of 20nm if TSMC can get volume production into gear in time, but the use of 28nm technology is also possible. Rumours that emerged early last month suggested we might see a 28nm Maxwell-based GTX 800 series followed shortly after by a 20nm Maxwell-based GTX 900 series once the 20nm process matures enough for mass production. The same logic could apply here: we might see a 28nm GM210 first, then a new 20nm stepping of the same GPU at a later date.
The second interesting note about the GM210 is that it may be called the GTX 880 Enterprise Edition. This is not that unexpected, as we’ve seen Nvidia aim the GTX Titan, Titan Black and Titan Z at the enterprise market. The GTX 880 Enterprise Edition is rumoured to contain 8 full ARMv8 cores, allowing for greater functionality in the HPC (high performance computing) market. If the GTX 880 Enterprise Edition does end up being the flagship GM210 part, then we may of course see more cut-down versions used to fill out the rest of the GTX 800 product stack. As always, rumours are rumours, so treat them as such!
More details have emerged on Nvidia’s upcoming 20nm Maxwell GPUs, which will form the new GTX 800 series. As always, these are very premature and somewhat vague details, so we encourage readers to remember they could be untrue, inaccurate or subject to change. The GM204 is the part in question, and it is believed to be the direct successor to the GK104 Kepler GPU that currently makes up the GTX 770, GTX 760, GTX 680, GTX 670 and GTX 660 Ti. Like GK104, the GM204 makes use of a 256-bit wide memory bus, which implies the card will come with 2GB of GDDR5 VRAM as standard, with 4GB models an option if Nvidia’s vendors choose to double up. We’ve also heard about the existence of the flagship GM210 GPU, which will succeed the GK110. As a result we expect the GM210 to lead the top of the GTX 800 series (for example the GTX 890, GTX 880 Ti and GTX 880), while the GM204 will likely make up the mid-range (GTX 870, GTX 860 Ti and GTX 860). Speculation suggests the GM204 may pair the Maxwell architecture with the 28nm process to keep costs down, which further reinforces the idea that GM204 will be a mid-range Maxwell GPU.
AMD’s R7 260X graphics card has become significantly more popular since AMD reduced the initial launch price from $140 to $120. Of course, the competitiveness of the R7 260X has been helped by the fact that its biggest rival, Nvidia’s GTX 650 Ti Boost, has been discontinued and is now hard to find. This leaves Nvidia’s newly released Maxwell-based GTX 750 to fight the R7 260X instead of the GTX 650 Ti Boost, which is problematic for Nvidia as the GTX 750 is a much slower card that sells for a similar price. Today we have an R7 260X from HIS Digital: the HIS R7 260X iPower IceQ X² 2GB GDDR5 graphics card. The R7 260X is known for running quite hot, because AMD took the HD 7790 design, overclocked it further and rebranded it. Therefore HIS’ implementation needs to deal effectively with the heat of the R7 260X while keeping noise under control. If you haven’t read our launch day review of the R7 260X you can do so here.
Out of the box the HIS R7 260X runs stock R7 260X clock speeds, so quite honestly we should expect within-margin-of-error performance relative to the reference card, as there are no thermal limitations for a non-reference design to overcome. I am disappointed that HIS haven’t overclocked the card yet are charging more than reference pricing; in my opinion this card is priced too close to the R7 265 and GTX 750 Ti to be truly competitive. We hope HIS are just slow in reducing their prices in response to AMD’s February price cuts… although February was an awfully long time ago.
Packaging and Bundle
The box points out what some of the HIS features mean such as iPower and iTurbo.
The rear details some of the components used and some of the generic AMD features.
Included with our sample was just a DVI to VGA adapter and a warning document. The retail version will also get a driver CD, quick install guide and HIS sticker.
While Nvidia’s GTX Titan Z and AMD’s R9 295X2 graphics cards may be stealing all the headlines, the real battle between Nvidia and AMD is occurring at the lower-end price points where the bulk of graphics cards are sold. A quick look at the Steam Hardware Survey reveals just how popular the sub-$200 price point is, and for those looking for an even more affordable entry into gaming, the $100 price point is vital. What’s currently on offer at $100 from AMD? Their latest addition is the R7 250X, a rebranded HD 7770 GHz Edition looking to steal the title of “best $100 gaming graphics card”. Nvidia has yet to refresh their entry-level range, so at the $100 price point they still offer the GT 640 for $90 or the GTX 650 for $110. Yet, as we will see throughout this review, the AMD R7 250X finds itself in an incredibly competitive position because of Nvidia’s unwillingness to reduce prices on their entry-level product stack. Today we are taking a look at a Powercolor R7 250X, and it is as close to a reference R7 250X as you will find. This card packs a basic cooling solution, stock R7 250X speeds and is about as “cheap and cheerful” as you’ll find. How do Nvidia’s offerings stack up against AMD’s newest budget-friendly offering? Let’s proceed through this review and find out!
As we’ve mentioned, this particular Powercolor R7 250X is identical to the reference R7 250X. The only difference is that Powercolor are not offering this card with 2GB of GDDR5 memory, whereas some other vendors offer 2GB variants of the R7 250X. The closest Nvidia competitors are the GT 640 GDDR5 and GTX 650, which cost $90 and $110 respectively.
Packaging and Bundle
Our sample came direct from AMD and isn’t a retail package so there’s nothing fancy to see in terms of the packaging.
The accessory pack is representative of retail though. This card simply comes with a quick install guide and driver CD. No power adapters are provided, so you’re expected to have a spare 6-pin from your power supply. The card also has a native VGA output, so a DVI to VGA adapter would be redundant.
The AMD HD 7950 was probably the best GPU of the entire HD 7000 series, in my honest opinion. It offered stellar value for money for high-performance gaming, often overclocking to HD 7970 levels for a much lower price. The HD 7950’s popularity then got the better of it as the card soared to mining fame, becoming the GPU of choice for miners all over the world. The price of HD 7950s went through the roof before stock eventually ran out and people had to turn to newer alternatives like the R9 280X. However, the HD 7950 is back with a bang. Given the logical name of the R9 280, it carries the great features we grew to love on the HD 7950 but with a higher clock speed and a slightly lower price than the R9 280X. Today we have with us a rather sexy looking R9 280 courtesy of XFX: the XFX R9 280 Double Dissipation Black Edition OC graphics card (R9-280A-TDBD), from XFX’s cherry-picked line. It features their dual 90mm fan cooler, an overclocked frequency and fully unlocked voltages on the core and memory for overclocking.
Below you can see how XFX’s R9 280 stacks up against rival graphics cards. It holds a $20 premium over the reference card but comes with a memory and core overclock to give it some extra grunt.
Packaging and Bundle
The packaging comes with the usual style but has a new addition letting us know it is UEFI BIOS ready.
The back lists out the usual key features.
Inside we find a rather modest looking brown card box.
There’s the usual array of documentation including a warranty statement and driver CD.
The accessory bundle includes two power supply adapters (dual molex to 6 pin and dual 6 pin to 8 pin) and a CrossFireX bridge.