Ashes of the Singularity is a futuristic real-time strategy game offering frenetic contests on a large scale. The huge number of units scattered across varied environments creates an enthralling experience built around complex strategic decisions. Throughout the game, you will explore unique planets and engage in captivating aerial battles. This bitter war between the human race and a masterful artificial intelligence revolves around an invaluable resource known as Turinium. If you’re into the RTS genre, Ashes of the Singularity should provide hours of entertainment. While the game itself is worthy of widespread media attention, the engine’s support for DirectX 12 and asynchronous compute has become a hot topic among hardware enthusiasts.
DirectX 12 is a low-level API with reduced CPU overhead, and it has the potential to revolutionise the way games are optimised for numerous hardware configurations. By contrast, DirectX 11 is far less efficient, and many mainstream titles suffered from poor scaling which failed to properly utilise the potential of current graphics technology. On another note, DirectX 12 allows users to pair GPUs from competing vendors and run multi-GPU setups without relying on driver profiles. It’s theoretically possible to achieve widespread optimisation and leverage extra performance using the latest version of the API.
Of course, Vulkan is another alternative which works across operating systems and adopts an open-source ideology. However, the focus will likely remain on DirectX 12 for the foreseeable future unless there’s a sudden reluctance from users to upgrade to Windows 10. Even though the adoption rate is impressive, a large number of PC gamers are still using Windows 7, 8 and 8.1. Therefore, it seems prudent for developers to continue with DirectX 11 and offer a DirectX 12 renderer as an optional extra. Arguably, the real gains from DirectX 12 will occur when its predecessor is disregarded completely. This will probably take a considerable amount of time, which suggests the first DirectX 12 games might show smaller performance benefits than later titles.
Asynchronous compute allows graphics cards to calculate multiple workloads simultaneously and extract extra performance. AMD’s GCN architecture has extensive support for this technology. By contrast, there’s a heated debate questioning whether NVIDIA products can even utilise asynchronous compute in an effective manner. Technically, AMD GCN graphics cards contain 2-8 asynchronous compute engines, each with 8 queues, depending on the model, providing single-cycle latencies. Maxwell revolves around two pipelines, one designed for high-priority workloads and another with 31 queues. Most importantly, NVIDIA cards can only “switch contexts at draw call boundaries”. This means the switching process is slower and gives AMD a major advantage. NVIDIA dismissed the early performance numbers from Ashes of the Singularity due to its development phase at the time. Now that the game has exited beta, we can determine the performance numbers after optimisations were completed.
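As a rough intuition for why extra queues help, here is a toy model in Python. It is a conceptual sketch only, not real GPU code: the task names and millisecond durations are invented, and real GPUs schedule work at a much finer granularity with contention between queues.

```python
# Conceptual model of asynchronous compute: instead of running graphics
# and compute work back-to-back on one queue, independent compute tasks
# are issued on a second queue so they overlap with graphics work.
# All durations (in ms) are made-up illustrative figures.

graphics_work = [6.0, 4.0, 5.0]  # e.g. shadow pass, geometry, lighting
compute_work = [3.0, 2.0]        # e.g. particle sim, post-processing

def serial_frame_time(gfx, comp):
    """Single queue: every task waits for the previous one to finish."""
    return sum(gfx) + sum(comp)

def async_frame_time(gfx, comp):
    """Two queues running concurrently: the frame is ready when the
    slower queue finishes (ideal overlap, no resource contention)."""
    return max(sum(gfx), sum(comp))

print(serial_frame_time(graphics_work, compute_work))  # 20.0
print(async_frame_time(graphics_work, compute_work))   # 15.0
```

In this idealised case, overlapping the compute queue hides its entire cost behind the graphics queue; in practice the gain depends on how well the hardware fills idle execution units, which is exactly where the GCN and Maxwell designs differ.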
AMD has a serious image problem with their drivers, which stems from buggy, unrefined updates and a slow release schedule. Even though this perception began many years ago, it’s still impacting the company’s sales and explains why their market share is so small. The Q4 2015 results from Jon Peddie Research suggest AMD reached a market share of 21.1% while NVIDIA reigned supreme with 78.8%. That said, the Q4 data is more promising because AMD accounted for a mere 18.8% during the previous quarter. On the other hand, respected industry journal DigiTimes reports that AMD is likely to reach its lowest ever market position in Q1 2016. Thankfully, the financial results will emerge on April 21st, so we should know the full picture relatively soon. Of course, the situation should improve once Polaris and Zen reach retail channels. Most importantly, AMD’s share price has declined by more than 67% in five years, from $9 to under $3 as of March 28, 2016. The question is why?
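For what it’s worth, the decline figure checks out. A quick sanity check in Python, assuming a fall from roughly $9.00 to about $2.96 (the exact closing prices are an assumption; the article only states “from $9 to under $3”):

```python
# Percentage decline from an assumed ~$9.00 start to an assumed ~$2.96
# close. Both prices are illustrative figures, not verified quotes.
old_price = 9.00
new_price = 2.96

decline_pct = (1 - new_price / old_price) * 100
print(f"{decline_pct:.1f}%")  # 67.1%
```

Any closing price under $2.97 yields a decline of more than 67%, consistent with the claim above.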
Is the Hardware Competitive?
The current situation is rather baffling considering AMD’s extremely competitive product line-up in the graphics segment. For example, the R9 390 is a superb alternative to NVIDIA’s GTX 970 and features 8GB of VRAM, which provides extra headroom when using virtual reality equipment. The company’s strategy appears to revolve around minor differences in performance between the R9 390 and 390X. This also applied to the R9 290 and 290X, due to both products utilizing the Hawaii core. NVIDIA employs a similar tactic with the GTX 970 and GTX 980, but there’s a marked price increase compared to their rivals.
NVIDIA’s ability to cater to the lower-tier demographic has been quite poor because competing GPUs, including the 7850 and R9 380X, provided a much better price-to-performance ratio. Not only that, NVIDIA’s decision to deploy ridiculously low video memory amounts on cards like the GTX 960 has the potential to cause headaches in the future. It’s important to remember that the GTX 960 can be acquired with either 2GB or 4GB of video memory. Honestly, they should have simplified the line-up and produced only the higher-memory model, as AMD did with the R9 380X. Once again, AMD continues to offer a very generous amount of VRAM across various product tiers.
Part of the problem revolves around AMD’s sluggish release cycle and reliance on the Graphics Core Next (GCN) 1.1 architecture, first introduced way back in 2013 with the Radeon HD 7790. Despite its age, AMD deployed the GCN 1.1 architecture on their revised 390 series and didn’t do themselves any favours by denying accusations that the new line-up was a basic re-branding exercise. Of course, this proved to be the case, and some users managed to flash their 290/290X into a 390/390X with a BIOS update. There’s nothing inherently wrong with product rebrands if they remain competitive in the current market. It’s not exclusive to AMD, and NVIDIA has used similar business strategies on numerous occasions. However, I feel it’s up to AMD to push graphics technology forward and encourage their nearest rival to launch more powerful options.
Another criticism of AMD hardware, which seems to plague everything they release, is the perception that every GPU runs extremely hot. You only have to look at certain websites, social media and various forums to see this is the main source of people’s frustration. Some individuals have even produced images showing AMD graphics cards ablaze. So is there any truth to these suggestions? Unfortunately, the answer is yes, and a pertinent example comes from the R9 290 range. The 290/290X reference models utilized one of the most inefficient cooler designs I’ve ever seen and struggled to keep the GPU core below 95°C under load.
Unbelievably, the core was designed to run at these high temperatures, and AMD created a more progressive RPM curve to reduce noise. As a result, the GPU could take 10-15 minutes to return to idle temperature levels. The Hawaii temperatures really damaged the company’s reputation and forged a viewpoint among consumers which I highly doubt will ever disappear. It’s a shame because the upcoming Polaris architecture, built on the 14nm FinFET process, should exhibit significant efficiency gains and end the notion of high thermals on AMD products. There’s also the idea that AMD GPUs have a noticeably higher TDP than their NVIDIA counterparts. For instance, the R9 390 has a TDP of 275 watts while the GTX 970 is rated at only 145 watts. Even at the flagship level, the Fury X is rated at 275 watts compared to the GTX 980 Ti’s 250 watts.
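To put the 390 vs 970 TDP gap in perspective, here is a back-of-the-envelope running-cost estimate in Python. The usage pattern (3 hours per day at full board power) and the $0.12/kWh electricity rate are assumptions purely for illustration; real gaming loads rarely sit at the full TDP.

```python
# Rough yearly running-cost difference implied by the TDP gap between
# the R9 390 (275 W) and GTX 970 (145 W). Hours-per-day and the
# electricity rate are illustrative assumptions, not measured figures.
tdp_r9_390 = 275    # watts
tdp_gtx_970 = 145   # watts
hours_per_day = 3
rate_per_kwh = 0.12  # dollars per kilowatt-hour

extra_watts = tdp_r9_390 - tdp_gtx_970                  # 130 W gap
extra_kwh_year = extra_watts / 1000 * hours_per_day * 365
extra_cost_year = extra_kwh_year * rate_per_kwh

print(round(extra_kwh_year, 2))   # 142.35 kWh/year
print(round(extra_cost_year, 2))  # 17.08 dollars/year
```

Under these assumptions the difference is real but modest, which is why the thermal and acoustic consequences of the higher TDP arguably matter more to buyers than the electricity bill itself.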
Eventually, AMD released a brand new range of graphics cards utilizing the first iteration of high bandwidth memory. Prior to its release, expectations were high and many people expected the Fury X to dethrone NVIDIA’s flagship graphics card. Unfortunately, this didn’t come to fruition and the Fury X fell behind in various benchmarks, although it fared better at high resolutions. The GPU also encountered supply problems, and early samples emitted a loud whine from the pump. Asetek even threatened to sue Cooler Master, who created the AIO design, which could have forced all Fury X products to be removed from sale.
The rankings alter rather dramatically when the DirectX 12 renderer is used, which suggests AMD products have a clear advantage. Asynchronous compute is the hot topic right now and, in theory, allows for greater GPU utilization in supported games. Ashes of the Singularity has implemented this for some time and makes for some very interesting findings. Currently, we’re working on a performance analysis for the game, but I can reveal that there is a huge boost for AMD cards when moving from DirectX 11 to DirectX 12. Furthermore, there are reports indicating that Pascal might not be able to use asynchronous shaders, which makes Polaris and Fiji products more appealing.
Do AMD GPUs Lack Essential Hardware Features?
When selecting graphics hardware, it’s not always about pure performance; some consumers take into account exclusive technologies, such as TressFX hair, before purchasing. At this time, AMD’s latest products incorporate LiquidVR, FreeSync, Vulkan support, HD3D, Frame Rate Target Control, TrueAudio, Virtual Super Resolution and more! This is a great selection of hardware features to create a thoroughly enjoyable user experience. NVIDIA adopts a more secretive attitude towards their own creations and often uses proprietary solutions. The Maxwell architecture supports Voxel Global Illumination (VXGI), Multi-Frame Sampled Anti-Aliasing (MFAA), Dynamic Super Resolution (DSR), VR Direct and G-Sync. There’s a huge debate about the benefits of G-Sync compared to FreeSync, especially when you take into account the pricing difference when opting for a new monitor. Overall, I’d argue that the NVIDIA package is better, but there’s nothing really lacking from AMD in this department.
Have The Drivers Improved?
Historically, AMD drivers haven’t been anywhere close to NVIDIA’s in terms of stability and providing a pleasant user interface. Back in the old days, AMD, or ATI if we’re going way back, had the potential to cause system lock-ups, software errors and more. A few years ago, I had the misfortune of updating a 7850 to the latest driver, and after rebooting, the system’s boot order was corrupt. To be fair, this could be coincidental and have nothing to do with that particular update. On another note, the 290 series was plagued with hardware bugs causing black screens and blue screens of death whilst watching Flash videos. To resolve this, you had to disable hardware acceleration and hope that the issues subsided.
The Catalyst Control Center always felt a bit primitive for my tastes although it did implement some neat features such as graphics card overclocking. While it’s easy enough to download a third-party program like MSI Afterburner, some users might prefer to install fewer programs and use the official driver instead.
Not so long ago, AMD appeared to have stalled in releasing drivers which properly optimized graphics hardware for the latest games. On 9th December 2014, AMD unveiled the Catalyst 14.12 Omega WHQL driver and made it available for download. In a move which still astounds me, the company then decided not to release another WHQL driver for six months! Granted, they were working on a huge driver redesign and still produced the odd beta update. I honestly believe this was very damaging and prevented high-end users from considering the 295X2 or a CrossFire configuration. It’s so important to have a consistent, solid software framework behind the hardware to allow for constant improvements. This is especially the case when using multiple cards, which require profiles to achieve proficient GPU scaling.
Crimson’s release was a major turning point for AMD due to the modernized interface and enhanced stability. According to AMD, the software package involves 25 percent more manual test cases and 100 percent more automated test cases compared to AMD Catalyst Omega. Also, the most-reported bugs were resolved, and they’re using community feedback to quickly apply new fixes. The company hired a dedicated team to reproduce errors, which is the first step to providing a more stable experience. Crimson apparently loads ten times faster than its predecessor and includes a new game manager to optimize settings to suit your hardware. It’s possible to set custom resolutions, including the refresh rate, which is handy when overclocking your monitor. The clean uninstall utility proactively removes any remaining elements of a previous installation, such as registry entries, audio files and much more. Honestly, this is such a revolutionary move forward, and AMD deserves credit for tackling their weakest elements head on.
However, it’s far from perfect, and some users initially experienced worse performance with this update. Of course, there are going to be teething problems whenever a new release occurs, but it’s essential for AMD to do everything they can to forge a new reputation for their drivers. Some of you might remember the furore surrounding the Crimson fan bug, which limited the GPU’s fans to 20 percent. Some users even reported that this caused their GPU to overheat and fail. Thankfully, AMD released a fix for this issue, but it shouldn’t have occurred in the first place. Once again, it’s hurting their reputation and ability to move on from old preconceptions.
Is GeForce Experience Significantly Better?
In recent times, NVIDIA drivers have been the source of some negative publicity. More specifically, users were advised to ignore the 364.47 WHQL driver and instructed to download the 364.51 beta instead. One user said:
“Driver crashed my windows and going into safe mode I was not able to uninstall and rolling back windows would not work either. I ended up wiping my system to a fresh install of windows. Not very happy here.”
NVIDIA’s Sean Pelletier released a statement at the time which reads:
“An installation issue was found within the 364.47 WHQL driver we posted Monday. That issue was resolved with a new driver (364.51) launched Tuesday. Since we were not able to get WHQL-certification right away, we posted the driver as a Beta.
GeForce Experience has an option to either show WHQL-only drivers or to show all drivers (including Beta). Since 364.51 is currently a Beta, gamers who have GeForce Experience configured to only show WHQL Game Ready drivers will not currently see 364.51
We are expecting the WHQL-certified package for the 364.51 Game Ready driver within the next 24hrs and will replace the Beta version with the WHQL version accordingly. As expected, the WHQL-certified version of 364.51 will show up for all gamers with GeForce Experience.”
As you can see, NVIDIA isn’t immune to driver delivery issues, and this was a fairly embarrassing situation. Despite this, it didn’t appear to have a serious effect on people’s confidence in the company or make them reconsider their views of AMD. While there are some disgruntled NVIDIA customers, they’re fairly loyal and distrustful of AMD’s ability to offer better drivers. The GeForce Experience software contains a wide range of fantastic features such as ShadowPlay, GameStream, Game Optimization and more. After a driver update, however, the software can feel a bit unresponsive and takes some time to close. Furthermore, some people dislike the notion of Game Ready drivers being locked into the GeForce Experience software. If a report from PC World is correct, consumers might have to supply an e-mail address just to update their drivers through the application.
Before coming to a conclusion, I want to reiterate that my allegiances don’t lie with either company, and the intention was to create a balanced viewpoint. I believe AMD’s previous failures are impacting the company’s current product range, and it’s extremely difficult to shift people’s perceptions of the company’s drivers. While Crimson is much better than CCC, it was marred by a horrendous fan bug, resulting in a PR disaster for AMD.
On balance, it’s clear AMD’s decision to separate the Radeon group and CPU line was the right thing to do. Also, with Polaris around the corner and more games utilizing DirectX 12, AMD could improve their market share substantially. Although, from my experience, many users are prepared to deal with slightly worse performance just to invest in an NVIDIA product. Therefore, AMD has to encourage long-term NVIDIA fans to switch with reliable driver updates on a consistent basis. AMD products are not lacking in features or power; it’s all about drivers! NVIDIA will always counteract AMD releases with products exhibiting similar performance numbers. In my personal opinion, AMD drivers are now on par with NVIDIA’s, and it’s a shame that they appear to be receiving unwarranted criticism. Don’t get me wrong, the fan bug is simply inexcusable and going to haunt AMD for some time. I predict that despite the company’s best efforts, the stereotypical view of AMD drivers will not subside. This is a crying shame because they are trying to improve things and release updates on a significantly lower budget than their rivals.
The Far Cry franchise is renowned for its impeccable graphical fidelity and enthralling open-world environments. As a result, each release is incredibly useful for gauging the current state of graphics hardware and performance across various resolutions. However, Ubisoft’s reputation has suffered in recent years due to poor optimization on major titles such as Assassin’s Creed: Unity and Watch Dogs. This means it’s essential to analyze the PC version in a technical manner and see if it’s really worth supporting with your hard-earned cash!
Far Cry Primal utilizes the Dunia Engine 2, which was deployed on Far Cry 3 and Far Cry 4. Therefore, I’m not expecting anything revolutionary compared to the previous games. This isn’t necessarily a negative though, because the amount of detail is staggering and worthy of recognition. That said, Far Cry 4 was plagued by intermittent hitching and I really hope this has been resolved. Unlike Far Cry 3: Blood Dragon, the latest entry has a retail price of $60. According to Ubisoft, this is warranted due to the lengthy campaign and the amount of content on offer. Given Ubisoft’s turbulent history with recent releases, it will be fascinating to see how each current-generation GPU fares and which brand the game favours at various resolutions.
“Far Cry Primal is an action-adventure video game developed and published by Ubisoft. It was released for the PlayStation 4 and Xbox One on February 23, 2016, and it was also released for Microsoft Windows on March 1, 2016. The game is set in the Stone Age, and revolves around the story of Takkar, who starts off as an unarmed hunter and rises to become the leader of a tribe.” From Wikipedia.
The initial unveiling of AMD’s Fury X was eagerly anticipated due to the advent of high bandwidth memory and its potential to revolutionize the size-to-performance ratio of modern graphics cards. This new form of stackable video RAM provided a glimpse into the future and a departure from the current GDDR5 standard. However, this isn’t going to happen overnight, as production costs and sourcing HBM on a mass scale have to be taken into consideration. On another note, JEDEC recently announced GDDR5X with memory speeds up to 14 Gbps, which helps to enhance non-HBM GPUs while catering to the lower-mid range market. The Fury X and Fury utilize the first iteration of high bandwidth memory, which features a maximum capacity of 4GB.
There’s some discussion regarding the effect of this limitation at high resolutions, but I personally haven’t seen it cause a noticeable bottleneck. If anything, the Fury range is capable of outperforming the 980 Ti in 4K benchmarks, while it tends to linger behind at lower resolutions. AMD’s flagship opts for a closed-loop liquid cooler to reduce temperatures and minimize operating noise. In theory, you could argue this level of cooling prowess was required to tame the GPU’s core. However, there are some air-cooled variants which allow us to directly compare each form of heat dissipation.
Clearly, the Fury X’s water cooling apparatus adds a premium and isn’t suitable for certain chassis configurations. To be fair, most modern case layouts can accommodate a CLC graphics card without any problems, but there are also concerns regarding reliability and the possibility of leaks. That’s why air-cooled alternatives which drop the X branding offer great performance at a more enticing price point. For example, the Sapphire Nitro OC R9 Fury is around £60 cheaper than the XFX R9 Fury X. This particular card has a factory overclocked core of 1050MHz and an astounding cooling solution. The question is, how does it compare to the Fury X and GTX 980 Ti? Let’s find out!
Packing and Accessories
The Sapphire Nitro OC R9 Fury comes in a visually appealing box which outlines the Tri-X cooling system, factory overclocked core, and extremely fast memory. I’m really fond of the striking robot front cover and small cut out which provides a sneak peek at the GPU’s colour scheme.
On the opposite side, there’s a detailed description of the R9 Fury range and award-winning Tri-X cooling. Furthermore, the packaging outlines information regarding LiquidVR, FreeSync, and other essential AMD features. This is displayed in an easy-to-read manner and helps inform the buyer about the graphics card’s functionality.
In terms of accessories, Sapphire includes a user’s guide, driver disk, Select Club registration code, and relatively thick HDMI cable.
Freebies are something we all like, and AMD has now bundled the new Hitman game with some of their graphics cards and processors, as well as systems prebuilt with these components. AMD has partnered with IO Interactive again to bring this deal, and the studio has also joined the AMD Gaming Evolved program in order to get the best out of the hardware with top-flight effects and performance optimizations for PC gamers.
The bundle deal runs from February 16th and, as always, is valid with the purchase of selected products from participating retailers. In this round, AMD bundles Hitman with their Radeon R9 390 and 390X graphics cards as well as their FX 6- and 8-core processors (PIB). The bundle will last until April 30th, 2016, or while supplies last. Vouchers can be redeemed until June 30th, 2016.
The new Hitman game is offered in a seasonal fashion, with a base game and periodic add-ons that will continue the story, but it is handled in the best possible way. The full experience, with the full season of new missions, won’t cost more than other games do by themselves without DLC, and this AMD bundle also includes the full game rather than just the initial release. You will also get access to the Hitman beta, which will run from February 19th to 22nd.
Those who have upgraded to Windows 10 will have the best experience with this new game, as it has been built to take advantage of DX12, a feature that will make a very noticeable difference for AMD CPU users.
“Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs, called asynchronous compute engines, to handle heavier workloads and better image quality without compromising performance. PC gamers may have heard of asynchronous compute already, and Hitman demonstrates the best implementation of this exciting technology yet.”
You can find all the fine print and redeem your game code on the official Hitman mini-site. The beta phase is almost here, so it might be time to make that upgrade you’ve been holding back on. The full hardware specifications and recommendations were also published a few days ago, in case you missed them.
Rise of the Tomb Raider originally launched on November 10th and received widespread critical acclaim from various press outlets. Unfortunately, the game flew under the radar because Fallout 4 released on the same day. This was a strategic error which hindered the game’s sales and prevented consumers from giving it their undivided attention. It’s such a shame, because Rise of the Tomb Raider is a technical marvel when you consider the Xbox One’s limited horsepower. Even though it’s not technically an exclusive, PC players had to wait until after the Christmas period to enjoy the latest exploits of everyone’s favourite heroine.
The PC version was created by Nixxes Software, who worked on the previous Tomb Raider reboot as well as a number of other graphically diverse PC games. The studio is renowned for creating highly polished and well-optimized PC versions featuring an astonishing level of graphical fidelity. Prior to release, NVIDIA recommended a GTX 970 for the optimal 1080p experience and a 980 Ti for 1440p. Since then, there have been performance patches from the developer and driver updates to help with scaling across various hardware configurations. This means it will be fascinating to see the performance numbers now that the game has matured and gone through a large number of post-release hotfixes.
“Rise of the Tomb Raider is an action-adventure video game developed by Crystal Dynamics and published by Square Enix. It is the sequel to the 2013 video game Tomb Raider, which was itself, the second reboot to its series. It was released for Xbox One and Xbox 360 in November 2015 and for Microsoft Windows in January 2016. It is set to release for PlayStation 4 in late 2016.
The game’s storyline follows Lara Croft as she ventures into Siberia in search of the legendary city of Kitezh, whilst battling a paramilitary organization that intends on beating her to the city’s promise of immortality. Presented from a third-person perspective, the game primarily focuses on survival and combat, while the player may also explore its landscape and various optional tombs. Camilla Luddington returns to voice and perform her role as Lara.” From Wikipedia.
So, let’s get to it and see how some of the latest graphics cards on the market hold up with the latest from Crystal Dynamics!
EK WB has expanded their full-cover water block range a lot lately, and that goes for both motherboards and graphics cards. The newest cooler falls into the latter category: a full-cover water block for MSI’s Radeon R9 390X GAMING 8G graphics card, called the EK-FC R9-390X TF5.
The new full-cover water block replaces the original TwinFrozr V cooler that comes with the graphics card out of the box; while that is an amazing GPU cooler, you can’t integrate it into a full custom loop. The EK-FC R9-390X TF5 actively cools the GPU, RAM and VRM (voltage regulation module), as water flows directly over all these critical areas. In return, it will allow you to run the card at much lower temperatures and higher overclocks.
The base is made of nickel-plated electrolytic copper while the top is made of either quality POM Acetal or acrylic depending on the variant. As usual, both versions are available which allows you to match the design you prefer. The cooler also features EK WB’s pre-installed screw-in brass standoffs that allow for a safe installation procedure.
The EK-FC R9-390X TF5 water block also features EK’s unique central inlet split-flow cooling engine design for the best possible cooling performance. The design also works flawlessly with reversed water flow without adversely affecting cooling performance, and it performs well in liquid cooling systems using weaker water pumps.
To complete the setup, EK also offers a retention backplate made of black anodized aluminum. The backplate offers additional cooling to the backside of the circuit board, especially around the VRM area, besides giving the card a sleeker look.
The new EK WB EK-FC R9-390X TF5 is available now for an MSRP of €122.95, and the backplate will set you back an additional €29.95.
We only have two major players left in the consumer graphics card market, AMD and Nvidia, and Nvidia has had the lead for quite some time now. The new AMD Fury, Fury X, and Nano cards are impressive on their own, but they still couldn’t quite beat Nvidia’s cards across the board.
The newest Windows 10 drivers seem to have given AMD an edge again as they have shown performance increases on all current generation AMD cards. However, the most impressive result to come out of this is that the Fury X managed to leap ahead of the Nvidia GeForce 980 Ti according to the latest comparisons by TechPowerUp via WCCFtech.
With the older test setup, Nvidia was ahead of AMD most of the way. The GTX 980 Ti was 8% ahead of the R9 Fury X, and the GTX 980 was 2% ahead of the R9 390X at 1440p. Moving up to 4K resolution and the GTX 980 Ti and R9 Fury X come in at the same result and so do the GTX 980 and R9 390X.
Older drivers and test setup
After the move to the newest Windows 10 drivers, which aren’t the recently announced Crimson update, Nvidia’s lead shrinks. At 1440p, the GTX 980 Ti that was previously 8% ahead of the R9 Fury X now comes in at the same result, while the R9 390X makes up 5% and ends 3% ahead of the GTX 980. Even the R9 290X gets a huge boost of 9% over the GTX 970.
Again, moving up to 4K resolution, we see that AMD takes the full lead. The R9 Fury X jumps ahead of the GTX 980 Ti by 5%, while the R9 390X and 290(X) also stay ahead of the GTX 980 and GTX 970/GTX 780 Ti respectively. This is pretty impressive and really shows what driver optimisation can do.
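To make the 1440p shift concrete, here is the arithmetic behind it in Python. The normalised scores are assumptions derived only from the relative percentages quoted above, and the sketch assumes the 980 Ti’s score stayed put between driver versions:

```python
# If the GTX 980 Ti scored 8% higher than the R9 Fury X on the old
# drivers, a tie afterwards implies the Fury X gained roughly 8%
# relative to the 980 Ti (assuming the 980 Ti's score is unchanged).
old_ti, old_fury = 108.0, 100.0   # 980 Ti 8% ahead (normalised scores)
new_fury = old_ti                  # now tied at 1440p

uplift_pct = (new_fury / old_fury - 1) * 100
print(f"{uplift_pct:.0f}%")  # 8%
```

In other words, closing an 8% deficit through drivers alone is roughly a full product-tier jump in this segment, which is why these results attracted so much attention.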
When AMD first launched their R9 290 and 290X GPUs back in 2013, many had mixed feelings about the blower-style cooler. While the cooler was one of AMD’s best efforts yet, it was not enough for the hot Hawaii chips, leading to high temperatures, throttling and noisy operation. In the end, many opted for custom coolers which were not blowers and did a better job of cooling. Two years later, it looks like XFX is planning to release 390/X series cards equipped with what appears to be the original 290X cooler.
Using the Grenada core, the R9 390X is fundamentally the same as the 290X, with perhaps better binning and process improvements to differentiate them. XFX is also using the older cooler and not the revamped one AMD introduced with the R9 390X a while ago. The newer 390X blower cooler takes its design cues from the Fury X and Nano. Given XFX’s choice of the 2013 cooler over the 2015 model, either XFX has a lot of stock left or there is little difference between the two. You can check out the 2015 model below.
There is undoubtedly a market for blower-style GPUs, as they tend to exhaust more of the GPU’s heat out of the case. This is especially important for SFF builds and systems with poor case cooling. If the cooler is still lacking, though, there won’t be many users who will pick it up. The biggest advantage is that with a reference board, water-cooling blocks will be easier to source. It will be interesting to see how well the blower card does, both performance- and sales-wise.
Here at eTeknix, we strive to give the consumer the best possible advice in every aspect of technology. Today is no different and we are excited to bring you the CrossfireX review of the Sapphire Tri-X R9 390X graphics cards.
Based on the slightly aging Hawaii architecture, performance was expected to be fairly modest; however, as we found in our standalone review, that really wasn’t the case. Alone, this card has the power to directly take on the GTX 980 and is poised to sit just below the brand new AMD R9 Fury range. At a price of £350, it is perfectly positioned to fill the gap between the R9 390 and R9 Fury.
When we test in CrossfireX, we aim to use two identical graphics cards to ensure that everything is as similar as possible. When using the same cards, you can almost guarantee the same cooling capabilities, power draw, core clock, boost clock and so on. This then gives us the best possible outcome for maximum performance as the computer does not need to compensate for any differences.
The launch of the R9 300 series has been a very bumpy road. Not only was it released as a rebranded R9 200 series, it was then undercut by AMD itself with the release of the R9 Fury range. In themselves, the cards are a great progression from the original R9 200 releases, adding anywhere from a 10-30% performance increase at a more favourable cost than their NVIDIA counterparts. The market for the R9 300 series is small, but that hasn't stopped manufacturers designing some great additions to the range.
On the test bench today is the PowerColor DEVIL R9 390X, although it isn't a DEVIL as we know it. From previous releases, the DEVIL graphics cards have been extremely big and completely bonkers. Just look back to the DEVIL 13 (290X x2); that was not only one of the biggest graphics cards ever made, but it came with its own tool kit, case support bar and even a RAZER mouse. If we all remember back to Computex 2015, PowerColor had sneakily left a prototype on display; initial reports flooded in that it was the first ever R9 Fury X pictured, which was believable thanks to the AIO cooling solution. That rumour bubble was quickly popped though and it was confirmed that the card was, in fact, an R9 390X model. We already know how the R9 390X performs, so let's see if PowerColor are able to unleash hell with this new graphics card.
The outer box skin is very plain, with no specs or details as to what's inside apart from the logos. The actual "Devil" logo could be interpreted as the flames of hell being quenched by water, or an AMD GPU being cooled by water if you want to be more literal.
The back of the box is where a great deal of the visual information is. I feel it’s slightly too cluttered; maybe the card specifications should have been printed on the side panel along with power and system requirements to leave more room for the diagrams.
If you were expecting a tool-kit, GPU holder and mouse with this, you are going to be disappointed. The accessories include a premium hard-covered gaming surface, PCIe power adapter, installation instructions and driver disk.
1080p, 1440p, 1600p, 2160p; just a random bunch of numbers with a ‘p’ after them can mean nothing to some people; however, to gamers it means a whole world of display quality goodness. For the last few years, 1080p (1920 x 1080 pixels) monitors have been the standard for a ‘decent’ gaming setup and what most graphics cards are tested at. Then we started moving up to higher resolutions such as 2560 x 1440 and 2560 x 1600.
For some, this wasn't enough; despite pixel densities growing larger, as humans we wanted even more pixels. This resulted in users buying multiple monitors, connecting them side by side and activating AMD ‘EyeFinity’ or NVIDIA ‘Surround’ to get an almost 180° viewing range. Even though the vertical pixel count didn't change, this meant that monitor set-ups were hitting 5760 pixels wide by using three 1920 x 1080 monitors.
Then we move onto today: 1080p and 1440p have been surpassed by what has now become the new ‘standard’ of gaming, 2160p, or 4K. At this resolution, even the most powerful of graphics cards can struggle to churn out the 60FPS we have come to accept as the standard. So what about when you put three 4K monitors next to each other and ask for 11520 x 2160 pixels of goodness (or 6480 x 3840 if you prefer your monitors in portrait mode)?
Before we go rushing into things, there are some issues regarding our particular test system. The provided AOC monitors (U2868PQU) have known issues with AMD graphics cards at a 60Hz refresh rate. Symptoms can range from minor screen flickering to a complete system freeze. This was made worse when trying to display at 11520 x 2160; however, after multiple tests, we found the issue was subdued by putting the monitors into portrait mode. This isn't the ideal gaming set-up, but in the interest of bringing you the information, I endured the pain of a 2″ thick bezel between each monitor.
To get to the technical nitty-gritty, a typical 4K monitor at a 60Hz refresh rate can present 497 million pixels per second, so this set-up can present almost 1.5 billion pixels per second; yes, 1.5 BILLION. To put that into perspective, the Samsung Galaxy S6 Edge has a screen size of 1440 x 2560 and a refresh rate of 60Hz; that works out to a mere 221 million pixels per second in comparison.
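The arithmetic behind those figures is simple enough to check yourself; a quick Python sketch using the resolutions and refresh rate quoted above:

```python
# Pixel throughput = width * height * refresh rate (Hz)
def pixels_per_second(width, height, hz=60):
    return width * height * hz

uhd = pixels_per_second(3840, 2160)       # one 4K monitor at 60Hz
triple_uhd = 3 * uhd                      # three 4K monitors side by side
s6_edge = pixels_per_second(1440, 2560)   # Samsung Galaxy S6 Edge panel

print(f"4K @ 60Hz:        {uhd:,} px/s")         # ~497 million
print(f"Triple 4K @ 60Hz: {triple_uhd:,} px/s")  # ~1.49 billion
print(f"S6 Edge @ 60Hz:   {s6_edge:,} px/s")     # ~221 million
```

The exact numbers come out at 497,664,000 for a single 4K panel, 1,492,992,000 for the triple set-up and 221,184,000 for the S6 Edge, matching the rounded figures above.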
Today is the day we get a taste of the new AMD R9 300 series graphics cards. The R9 300 range is the platform on which AMD will support the Fury range, due sometime in the near future. There have been rumours that the R9 300 range, up to the 390X, will be a rebadged range; this is true. However, this has allowed AMD to fine-tune the GPU and Sapphire to hand-pick the best quality components to reap as much performance as possible.
Today we have the Sapphire Tri-X R9 390X 8GB. It comes from the factory 100% DirectX 12 compliant, which is great for those looking at building a new computer in the next few weeks ready for Windows 10. Along with DX12, the R9 390X comes with Virtual Super Resolution, which renders at up to 4K and downscales to your resolution, giving you the best visual quality without the need for an expensive monitor. We then see all of the usual features that Sapphire bundles in with its graphics cards, such as FreeSync, Eyefinity and the amazing Tri-X cooler.
The R9 390X card is very similar when comparing the PCB to its older counterpart, however, small changes have been made and a new cooling shroud design has been used. Are these tweaks enough to set this graphics card apart from the R9 290X? Let’s find out.
The box is a similar design to the previous R9 290X Tri-X box, just slightly rearranged and with more orange detailing. Contents include an HDMI cable, sticker and the relevant manuals.
Overall the card is much sleeker than the previous design, something that I like personally. I think more people would prefer to have this in their computer than the previous design.
The PCB rear looks almost identical to that of the R9 290X. The cooling shroud just hangs over the end of the card, showing the bare heatsink from below. Sadly, Sapphire didn't include a backplate with this card.
A close up on the power connectors shows the card draws power from twin 8-pin PCI power cables.
At the end of the card, we see a slightly revamped offering. Down to a single DVI port, a single HDMI and triple DisplayPorts. This allows Eyefinity through a DisplayPort MST hub.
The new AMD graphics cards are almost here and as with most hardware launches, cards are already finding their way into the wild. Now it seems someone has taken their brand new XFX Radeon R9 390X to pieces and what they found was rather interesting.
The teardown of the new GPU revealed that the interior of the card is virtually identical to that of the current AMD Radeon R9 290X, confirming rumours and speculation that the card is a rebrand of the current/last generation. However, it's not all doom and gloom, as the new card features 8GB of VRAM and higher clock speeds compared with its 290X counterpart, so it is for all intents and purposes a better card; how much better remains to be seen.
What's interesting is that the BIOS for these new cards is already finding its way online, prompting some users to flash their 290X to a 390X, although at this early stage, I'm not sure that's entirely a good idea.
Will you be buying one of these "new and improved" cards, or are you holding out for the new Fury series of cards, which we do know are a fresh design?
Almost every time someone manages to get their hands on a piece of tech early, they try and sell it on eBay. User MoNkEyHuGgEr369 from YouTube seems to be no exception and has posted his XFX AMD R9 390X for sale on eBay already. From his unboxing video on YouTube, the card appears to be legit, but don't take our word on it.
With bidding starting at $550 USD, that's about a $100 markup over the purchase price of $450, though sales tax might eat up half of that. There are no bids yet, but there's still almost 3 days left. Those who can't wait can shell out $715 to grab the first on-market R9 390X. With heavy import fees as well, the price balloons for anyone outside the US.
Even with International Priority shipping at $47, the card won’t be reaching anywhere outside the United States till about June 20th. That means you’ll be getting the card about 4 days after the official launch meaning you might actually get your card first if you wait till June 16th. Of course, the bragging rights to claiming the first ever R9 390X available to the public may be well worth the money and time to some.
With a name hearkening back to earlier times, AMD is set to release Fury to combat Nvidia's Titan X. While we've long known that Fury carries HBM, the exact lineup details have been sketchier. A new report coming from hwbattle [Korean] suggests that the Fury lineup will consist of three chips: Fury Pro, Fury XT and Fury Nano. Going by past naming conventions, the XT will be the high-end model followed by the Pro and Nano.
Three models do make some sense, as there is quite a performance gap in AMD's lineup if they want to beat the Titan X. While it's good to have a product that can match or beat the Titan X, it's also important to have cards that can slot in against the 980 Ti and the 980. It also jibes with the fact that some rumors had suggested Fury would beat the Titan X while others have said it loses to the 980 Ti. With 3 different Fury cards, it's entirely possible for the XT model to beat the Titan X while the Pro or Nano fall short of the Titan X or even the 980 Ti.
According to the source, reference models will launch water-cooled, as previously leaked. In addition, air-cooled reference models are also expected to debut at launch, potentially in a three-fan configuration like the HD 7990 and other non-reference models of the past. Given the expected specifications and how hot comparable products like the 295X2 and Titan X run, heat is expected to be high, explaining the water cooling, but the source suggests that noise won't be an issue with the reference coolers this time around.
For the first month or so of availability, only reference models will be available. The supply of these models is expected to be limited, but non-reference cards from partners are expected to arrive between mid-August and early September. It’s critical to note that all this information so far is an unconfirmed leak. AMD will reveal more information next week on June 16th so stay tuned till then.
Computex 2015 – As we are live at Computex, we wanted to find out if the leaked photos actually were of the upcoming AMD Radeon R9 390X or not, so we caught up with PowerColor and asked them some questions.
And we now know it for sure and sadly have to debunk the previous news. The pictured card is NOT the new R9 390X, nor is it an official cooling design.
The representative at the PowerColor booth confirmed this. Underneath the beefy cooler is an AMD Radeon R9 290X card and it is equipped with a prototype DEVIL 13 Hybrid cooling solution. So sadly, this is neither a finished new card nor the highly expected Radeon R9 390X card.
PowerColor could, however, confirm that they are working on the new AMD R9 300 series and will have their cards ready shortly after the official reveal on June 16th at E3, so stay tuned and we'll make sure to keep you updated as soon as we have more information.
Computex is going on and hardware manufacturers are busy showing off their newest products from all categories surrounding our PCs, and sometimes something gets released a little too early. This time we get pictures of what very well could be the new AMD R9 390X from Powercolor, equipped with the hybrid cooling solution dubbed the DEVIL 13.
While we can't fully confirm that this actually is the R9 390X, it most likely is. TweakTown was the first to post the pictures and other outlets quickly picked up on the card too. The design might not be the final one, as the launch of this specific card is still some time away, but it still gives us a great view of it.
The card has an 8-pin and 6-pin power connector as well as a black backplate to stabilize the card and give it a great look.
The PowerColor R9 390X DEVIL 13 is far from a slim or small card and actually exceeds two slots in height with its beefy cooler. It also comes with an external radiator that has to be mounted to your chassis.
We know by now that the new AMD Radeon R9 390X card will come with 8GB memory and that it will be presented on the 16th during E3. We could also reveal the possible MSRP pricing earlier today.
I am ready for AMD’s new graphics cards and tired of waiting. Bring it on.
AMD's latest Fiji XT graphics card has been pictured again. The picture of the reference card confirms a large number of rumours that have been circulating over the past few weeks. As somewhat expected, the PCB is extremely short compared to AMD cards like the 290/290X and the previous 7970/280X. The key factor in the short size is likely the HBM, or High Bandwidth Memory, which significantly reduces the area required by moving the memory onto the GPU package. That just leaves the VRM and various interconnects on the rest of the PCB. Mini-ITX fans will be sure to love this card, though they'll have to find a different cooler if they can't accommodate the liquid-cooling radiator.
Another key point is the confirmation of liquid cooling coming as standard. On the left edge of the picture, you can see the coolant tubes connecting the 120mm x 120mm radiator to the coldplate. While the Radeon logo from the teaser video makes an appearance, the sides of the GPU appear to be more black than grey, though this could be a contrast issue. The GPU will also come with a backplate, LED lighting for the Radeon logo and 2 x 8-pin PCIe power connectors. Along with the R9 295X2, this card may be one of the best looking reference designs AMD has put out in recent years.
AMD has released a short teaser for what appears to be a Radeon-branded product, most likely a graphics card. Titled "It's Coming", the teaser video is extremely short, coming in at about 7 seconds total, and 3 of those seconds are a black screen. From the timing of the teaser, it's most likely related to AMD's upcoming 390X flagship GPU or an HBM GPU, depending on whether those two are one and the same. Earlier teaser images of the 390X had shown a similar design aesthetic with the use of a black and silver body and red Radeon logo. Those images showed what appeared to be a liquid cooling radiator, something this teaser and other leaked images did not show.
AMD could do some more work on their marketing and teasers. While great for building anticipation and hype, the teaser is extremely short on details. Interestingly enough, this time around we've had more leaks about the 980 Ti, which is based on known technology, than about the new 390X and its groundbreaking HBM technology. AMD has to build up excitement around its products if they're to regain their market share. Whether or not these types of teasers will work remains to be seen.
Yesterday we were teased with a blurry shot of the new and upcoming AMD Radeon R9 390X card in its water-cooled version, and we could see that it was very short. But just how short was revealed by a new photo showing the liquid-cooled card pictured next to its 120mm radiator.
A card of this size will fit into pretty much any chassis, no matter how small, as long as it allows for full-height cards and a 120mm radiator with a fan. If the rumoured performance holds as well and we get a card that can hold up to the current dual-GPU R9 295X2 and Titan X graphics cards, then AMD has a true winner that will blow users away.
The expected specifications are 4096 stream processors, 256 texture units, 120 ROPs, and 8GB of 4096-bit HBM memory at 1.25GHz (bandwidth of 640GB/s), while we already know that it will have three DisplayPort 1.2a ports and an HDMI 2.0a port.
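The quoted 640GB/s follows directly from the rumoured bus width and effective memory clock: divide the bus width by 8 to get bytes per transfer, then multiply by the transfer rate. A quick sanity check in Python, with the R9 290X's 512-bit GDDR5 at 5Gbps included for comparison:

```python
# Peak memory bandwidth (GB/s) = (bus width in bits / 8) * data rate in GT/s
def bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

hbm = bandwidth_gbs(4096, 1.25)   # rumoured Fiji XT: 4096-bit HBM at 1.25GHz effective
gddr5 = bandwidth_gbs(512, 5.0)   # R9 290X: 512-bit GDDR5 at 5Gbps for comparison

print(f"Fiji XT HBM: {hbm:.0f} GB/s")   # 640 GB/s, matching the quoted figure
print(f"290X GDDR5:  {gddr5:.0f} GB/s") # 320 GB/s
```

In other words, the stacked memory would deliver double the bandwidth of AMD's current flagship despite sitting on a far smaller PCB.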
There will also be an air-cooled version that will be slightly longer and is expected to feature only 4GB of HBM memory. While we say longer, it will still be significantly shorter than the current Radeon R9 290X cards.
The new GPU chips will be slightly larger than the current generation due to the integrated memory, but in return it saves a lot more PCB space where the memory modules used to be located. All that’s left is basically the GPU and VRM.
AMD just revealed the first cards in the new R9 300 series, but that wasn't as exciting as it could have been, as they were all mobile versions and rebrands. This, however, is far more interesting, as AMD has come out with the first official teaser for the upcoming Fiji XT graphics card, most likely dubbed the R9 390X.
The teased card shows a different image than we’re used to with AMD reference cards. It doesn’t feature the normal single fan cooler nor the hybrid liquid and air cooling solution seen on the R9 295×2 card. It does however reveal what appears to be a fully liquid cooled solution with a 120mm or 140mm radiator in the background.
We also notice a very short card, but with the new HBM memory being stacked on the GPU, AMD needs a lot less PCB space. It’s really only the GPU and VRM that needs to be cooled as the RAM is located on the GPU. On the rear of the card we find three DisplayPort 1.2 and one HDMI 2.0 port as connection options, seeing a move away from the old DVI connector.
The image below was the second tease, released shortly after the card above. It is a close-up of the actual Fiji XT chip that will power this new beast of a graphics card with its 4096 stream processors, 8GB HBM memory, and 4096-bit wide-IO memory interface.
Thank you WCCFtech for providing us with this information.
We’ve had quite a few leaks and rumours for some time when it comes to AMD’s new Radeon R9 300 series graphics cards, ranging all the way back to the first possible cooler shroud that could be a hybrid cooling system. But now sources tell TweakTown that AMD’s newest generation of graphics cards won’t arrive as they are portrayed in the current rumours and leaks.
The source didn't want to go into too much detail when talking to our friends at TT, but did say that "the new Radeon R9 390X will arrive with specifications and possibly features that are different to what the rumors currently suggest." The most interesting part of the source's comment is that the new HBM1 memory will actually provide the same performance in real life as it does on paper. If that is true, then Nvidia could be in some serious trouble down the road – at least until they can adapt their own processes and parts to match.
To summarize HBM, the first version to be released will have around 640GB/s bandwidth and the second generation will double that up to 1.2TB/s. Current cards provide an average of 300GB/s bandwidth, so even the first generation of HBM will double that.
There hasn’t really been much change on the memory side of graphics since the introduction of GDDR5 memory, so I can see how this could become a game changer and it’s hopefully something that will get AMD back on track so we see some more competition on the market. Competition is the best thing for us as consumers as it results in more effort in the R&D department as well as lower prices.
Thank you TweakTown for providing us with this information.
There is still a little doubt whether AMD will release their new R9 300 series graphics cards at E3 or Computex, but the timeframe sounds about right with all the previously leaked and released information. One piece of the information was almost overlooked and it's quite a vital one: the Fiji VR dual-GPU card is apparently still on the drawing board.
This means that we might not see the new dual-GPU card at the launch event, but there's still the single-GPU card to look forward to. And let's face it, most will opt for that card anyway. That doesn't mean that there isn't anything to look forward to on AMD's Radeon R9 395X2, as it is a card that appears to be optimized for virtual reality experiences, which is another thing expected to hit the market with a big impact soon.
The R9 395×2 is codenamed Fiji VR GPU and that name already gives away the purpose. A VR optimized GPU might be something that could give AMD another leading edge in a market that otherwise has been quite dominated by Nvidia the past years.
“Affinity Multi-GPU for scalable rendering, a technology that allows multiple GPUs to work together to improve frame rates in VR applications by allowing them to assign work to run on specific GPUs. Each GPU renders the viewpoint from one eye, and then composites the outputs into a single stereo 3D image. With this technology, multi-GPU configurations become ideal for high-performance VR rendering, delivering high frame rates for a smoother experience. “
The dual-GPU card is focused on minimizing latency for VR by letting the card use one GPU for each eye. The pure specifications aren't expected to be much different from the R9 390, except that everything, including memory, gets doubled up. The price is expected to come in at slightly more than double.
I for one am looking forward to both the single and dual GPU cards with HBM memory and can’t wait to see how they’ll perform in all sorts of situations and gaming scenarios.
Thank you WCCFtech for providing us with this information.
Here at eTeknix, we strive to bring you the latest information as soon as we get it. Our sources have revealed to us a possible reveal date for AMD’s newest line-up of graphics cards; the 300 series.
The release date is rumoured to be around the E3 Expo in June. This isn't the first time AMD has revealed new hardware at E3; back in 2013, AMD revealed the FX-9590 CPU, the first consumer CPU available with a stock core clock of 5GHz. Along with this, some of AMD's APUs were also revealed. It's been a while since we've seen anything new from AMD; we have seen refreshes and interim releases like the R9 285 and R9 290X 8GB graphics cards, but these are based on older technologies. It's the same story on the CPU and APU side of things, the most recent release being the Kaveri-based A8-7650K. Anyone remember this $4k (USD) monster back at the 2013 Expo?
With all consumer and press sights focused on the 300 series graphics cards, it would be amazing if AMD casually pulled out a new range of FX CPUs and APUs. Especially so if they're all revealed at the world's biggest gaming Expo.
Prices on the current range of AMD products do seem to have been slashed lately, take the R9 295×2 for example. Six months ago, that was still up around the £900 mark, where now it can be bought for close to £500. Does this all point to AMD pushing out all the excess stock to retailers to clear it before the launch of the 300 series?
What are you expecting to see from AMD? Are you waiting for the reveal of the 300 series specifications before you commit to a new graphics card? Are you attending E3? If not, don’t fret, we’ll keep you all updated with the news.
AMD is getting ready to launch their new 300 series very soon and Nvidia isn’t just standing by on the sidelines to watch, they want to be prepared. According to the latest leaks coming through Sweclockers, who have an impressive track record of being right on the spot with Nvidia rumours, Nvidia already has their new GeForce GTX 980 Ti ready, they just don’t want to release it yet.
The timeframe for the GTX 980 Ti is still set for the end of Q2 or Q3 2015, but with the option to switch it up and release it earlier in case AMD's new flagship GPUs kick their butts. I know that many people are waiting for the card, based on comments on our previous articles, but this is also good news. It will give Nvidia time to optimise and tweak the card, give board partners more time to create better custom PCB and cooler solutions, and also to improve on it in case AMD's 300 series cards surpass the leaked performance figures.
The sad side of the news, however, is that it looks pretty much like the Titan X with half the memory. The GTX 980 Ti will feature the full version of the GM200 core with 3072 CUDA cores, 192 texture units and a 384-bit memory interface for the 6GB VRAM. Where the Titan X comes in at a $999 price tag, the GTX 980 Ti will most likely cost around $699.
Thank you Sweclockers for providing us with this information.