Ashes of the Singularity is a futuristic real-time strategy game offering frenetic contests on a large scale. The huge number of units scattered across varied environments creates an enthralling experience built around complex strategic decisions. Throughout the game, you will explore unique planets and engage in gripping aerial battles. This bitter war between the human race and a masterful artificial intelligence revolves around an invaluable resource known as Turinium. If you’re into the RTS genre, Ashes of the Singularity should provide hours of entertainment. While the game itself is worthy of widespread media attention, the engine’s support for DirectX 12 and asynchronous compute has become a hot topic among hardware enthusiasts.
DirectX 12 is a low-level API with reduced CPU overhead, and it has the potential to revolutionise the way games are optimised for numerous hardware configurations. In contrast, DirectX 11 is far less efficient, and many mainstream titles suffered from poor scaling which didn’t properly utilise the potential of current graphics technology. On another note, DirectX 12 allows users to pair GPUs from competing vendors and utilise multi-GPU solutions without relying on driver profiles. It’s theoretically possible to achieve widespread optimisation and leverage extra performance using the latest version of DirectX.
Of course, Vulkan is another alternative which works across operating systems and adopts an open-source ideology. However, the focus will likely remain on DirectX 12 for the foreseeable future unless there’s a sudden reluctance from users to upgrade to Windows 10. Even though the adoption rate is impressive, a large number of PC gamers are still using Windows 7, 8 and 8.1. Therefore, it seems prudent for developers to continue with DirectX 11 and offer a DirectX 12 renderer as an optional extra. Arguably, the real gains from DirectX 12 will occur when its predecessor is disregarded completely. This will probably take a considerable amount of time, which suggests the first DirectX 12 games might show smaller performance benefits than later titles.
Asynchronous compute allows graphics cards to execute multiple workloads simultaneously and extract extra performance. AMD’s GCN architecture has extensive support for this technology. In contrast, there’s a heated debate questioning whether NVIDIA products can utilise asynchronous compute in an effective manner at all. Technically, AMD GCN graphics cards contain 2-8 asynchronous compute engines, each with 8 queues, varying by model, to provide single-cycle latencies. Maxwell revolves around two pipelines: one designed for high-priority workloads and another with 31 queues. Most importantly, NVIDIA cards can only “switch contexts at draw call boundaries”. This means the switching process is slower and gives AMD a major advantage. NVIDIA dismissed the early performance numbers from Ashes of the Singularity due to its development phase at the time. Now that the game has exited beta, we can examine the performance numbers after optimisations were completed.
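To make the context-switching argument concrete, here is a deliberately simplified latency model. All the numbers are invented and the scheduler is a toy, not a description of either vendor’s real hardware; it only illustrates why fine-grained overlap beats switching contexts at draw call boundaries.

```python
# Toy model of async compute scheduling. Treats a frame as a series of
# draw calls, each with some GPU-idle time inside it, and compares a
# scheduler that fills that idle time with compute work against one
# that can only run compute work serially around the draws.

def frame_time_ms(draws, compute_ms, fine_grained):
    """draws: list of (busy_ms, idle_ms) pairs, one per draw call.
    compute_ms: total asynchronous compute work queued for the frame.
    fine_grained: True if compute can overlap the idle gaps inside
    draws; False if compute only runs once the draws have finished."""
    graphics_total = sum(busy + idle for busy, idle in draws)
    idle_total = sum(idle for _, idle in draws)
    hidden = min(compute_ms, idle_total) if fine_grained else 0.0
    return graphics_total + compute_ms - hidden

# Eight draw calls, each 2 ms of work plus 1 ms of idle time,
# with 6 ms of compute work to schedule alongside them.
draws = [(2.0, 1.0)] * 8
print(frame_time_ms(draws, 6.0, fine_grained=True))   # 24.0 ms
print(frame_time_ms(draws, 6.0, fine_grained=False))  # 30.0 ms
```

In this sketch, the fine-grained scheduler hides all 6 ms of compute inside the frame’s idle gaps, while the coarse one pays for it in full, which is the essence of the advantage attributed to GCN’s compute engines.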
AMD has a serious image problem with their drivers which stems from buggy, unrefined updates and a slow release schedule. Even though this perception began many years ago, it’s still impacting the company’s sales and explains why their market share is so small. The Q4 2015 results from Jon Peddie Research suggest AMD reached a market share of 21.1% while NVIDIA reigned supreme with 78.8%. The Q4 data is still promising, though, because AMD accounted for a mere 18.8% in the previous quarter. On the other hand, respected industry journal DigiTimes reports that AMD is likely to reach its lowest ever market position in Q1 2016. Thankfully, the financial results will emerge on April 21st, so we should know the full picture relatively soon. Of course, the situation should improve once Polaris and Zen reach retail channels. Most importantly, AMD’s share price has declined by more than 67% in five years, from $9 to under $3 as of March 28, 2016. The question is why?
Is the Hardware Competitive?
The current situation is rather baffling considering AMD’s extremely competitive product line-up in the graphics segment. For example, the R9 390 is a superb alternative to NVIDIA’s GTX 970 and features 8GB of VRAM, which provides extra headroom when using virtual reality equipment. The company’s strategy appears to revolve around minor performance differences between the R9 390 and 390X. This also applied to the R9 290 and 290X, as both products utilise the Hawaii core. NVIDIA employs a similar tactic with the GTX 970 and GTX 980, but there’s a marked price increase compared to their AMD rivals.
NVIDIA’s ability to cater to the lower-tier demographic has been quite poor, because competing GPUs including the 7850 and R9 380X provided a much better price-to-performance ratio. Not only that, NVIDIA’s decision to deploy ridiculously low amounts of video memory on cards like the GTX 960 has the potential to cause headaches in the future. It’s important to remember that the GTX 960 can be acquired with either 2GB or 4GB of video memory. Honestly, they should have simplified the line-up and produced only the higher-memory model, in a similar fashion to the R9 380X. Once again, AMD continues to offer a very generous amount of VRAM across various product tiers.
Part of the problem revolves around AMD’s sluggish release cycle and reliance on the Graphics Core Next (GCN) 1.1 architecture, first introduced way back in 2013 with the Radeon HD 7790. Despite its age, AMD deployed the GCN 1.1 architecture on their revised 390 series and didn’t do themselves any favours by denying accusations that the new line-up was a basic re-branding exercise. Of course, this proved to be the case, and some users managed to flash their 290/290X into a 390/390X with a BIOS update. There’s nothing inherently wrong with product rebrands if they remain competitive in the current market. It’s not exclusive to AMD, and NVIDIA have used similar business strategies on numerous occasions. However, I feel it’s up to AMD to push graphics technology forward and encourage their nearest rival to launch more powerful options.
Another criticism of AMD hardware, which seems to plague everything they release, is the perception that every GPU runs extremely hot. You only have to look at certain websites, social media and various forums to see this is a main source of people’s frustration. Some individuals even produce images showing AMD graphics cards ablaze. So is there any truth to these suggestions? Unfortunately, the answer is yes, and a pertinent example comes from the R9 290 range. The 290/290X reference models utilised one of the most inefficient cooler designs I’ve ever seen and struggled to keep the GPU core below 95°C under load.
Unbelievably, the core was designed to run at these high temperatures, and AMD created a more progressive RPM curve to reduce noise. As a result, the GPU could take 10-15 minutes to reach idle temperature levels. The Hawaii temperatures really impacted the company’s reputation and forged a viewpoint among consumers which I highly doubt will ever disappear. It’s a shame, because the upcoming Polaris architecture built on the 14nm FinFET process should exhibit significant efficiency gains and dispel the notion of high thermals on AMD products. There’s also the idea that AMD GPUs have a noticeably higher TDP than their NVIDIA counterparts. For instance, the R9 390 has a TDP of 275 watts while the GTX 970 only consumes 145 watts. At the high end, however, the gap narrows: the Fury X is rated at 275 watts against the GTX 980 Ti’s 250 watts.
Eventually, AMD released a brand new range of graphics cards utilising the first iteration of High Bandwidth Memory. Prior to its release, expectations were high, and many people expected the Fury X to dethrone NVIDIA’s flagship graphics card. Unfortunately, this didn’t come to fruition; the Fury X fell behind in various benchmarks, although it fared better at high resolutions. The GPU also encountered supply problems and emitted a loud whine from the pump on early samples. Asetek even threatened to sue Cooler Master, who created the AIO design, which could have forced all Fury X products to be removed from sale.
The rankings alter rather dramatically when the DirectX 12 renderer is used, which suggests AMD products have a clear advantage. Asynchronous compute is the hot topic right now, which in theory allows for greater GPU utilisation in supported games. Ashes of the Singularity has implemented this for some time and makes for some very interesting findings. Currently, we’re working on a performance analysis for the game, but I can reveal that there is a huge boost for AMD cards when moving from DirectX 11 to DirectX 12. Furthermore, there are reports indicating that Pascal might not be able to use asynchronous shaders, which makes Polaris and Fiji products more appealing.
Do AMD GPUs Lack Essential Hardware Features?
When selecting graphics hardware, it’s not always about pure performance; some consumers take exclusive technologies such as TressFX hair into account before purchasing. At this time, AMD’s latest products incorporate LiquidVR, FreeSync, Vulkan support, HD3D, Frame Rate Target Control, TrueAudio, Virtual Super Resolution and more! This is a great selection of hardware features to create a thoroughly enjoyable user experience. NVIDIA adopts a more secretive attitude towards their own creations and often uses proprietary solutions. The Maxwell architecture has support for Voxel Global Illumination (VXGI), Multi-Frame Sampled Anti-Aliasing (MFAA), Dynamic Super Resolution (DSR), VR Direct and G-Sync. There’s a huge debate about the benefits of G-Sync compared to FreeSync, especially when you take into account the pricing difference when opting for a new monitor. Overall, I’d argue that the NVIDIA package is better, but there’s nothing really lacking from AMD in this department.
Have The Drivers Improved?
Historically, AMD drivers haven’t been anywhere close to NVIDIA’s in terms of stability and providing a pleasant user interface. Back in the old days, AMD (or ATI, if we’re going way back) had the potential to cause system lock-ups, software errors and more. A few years ago, I had the misfortune of updating a 7850 to the latest driver, and after rebooting, the system’s boot order was corrupt. To be fair, this could be coincidental and have nothing to do with that particular update. On another note, the 290 series was plagued with hardware bugs causing black screens and blue screens of death whilst watching Flash videos. To resolve this, you had to disable hardware acceleration and hope the issues subsided.
The Catalyst Control Center always felt a bit primitive for my tastes although it did implement some neat features such as graphics card overclocking. While it’s easy enough to download a third-party program like MSI Afterburner, some users might prefer to install fewer programs and use the official driver instead.
Not so long ago, AMD appeared to have stalled in releasing drivers which properly optimise graphics hardware for the latest games. On 9th December 2014, AMD unveiled the Catalyst 14.12 Omega WHQL driver and made it available for download. In a move which still astounds me, the company then decided not to release another WHQL driver for six months! Granted, they were working on a huge driver redesign and still produced the odd beta update. I honestly believe this was very damaging and prevented high-end users from considering the 295X2 or a CrossFire configuration. It’s so important to have a consistent, solid software framework behind the hardware to allow for constant improvements. This is especially the case when using multiple cards, which require profiles to achieve proficient GPU scaling.
Crimson’s release was a major turning point for AMD due to the modernised interface and enhanced stability. According to AMD, the software package involves 25 percent more manual test cases and 100 percent more automated test cases compared to AMD Catalyst Omega. Also, the most reported bugs were resolved, and they’re using community feedback to quickly apply new fixes. The company hired a dedicated team to reproduce errors, which is the first step to providing a more stable experience. Crimson apparently loads ten times faster than its predecessor and includes a new game manager to optimise settings to suit your hardware. It’s possible to set custom resolutions, including the refresh rate, which is handy when overclocking your monitor. The clean uninstall utility proactively works to remove any remaining elements of a previous installation, such as registry entries, audio files and much more. Honestly, this is such a revolutionary move forward, and AMD deserves credit for tackling their weakest elements head on. If you’d like to learn more about Crimson’s functionality, please visit this page.
However, it’s far from perfect, and some users initially experienced worse performance with this update. Of course, there are going to be teething problems whenever a new release occurs, but it’s essential for AMD to do everything they can to forge a new reputation for their drivers. Some of you might remember the furore surrounding the Crimson fan bug which limited the GPU’s fans to 20 percent. Some users even reported that this caused their GPU to overheat and fail. Thankfully, AMD released a fix for this issue, but it shouldn’t have occurred in the first place. Once again, it’s hurting their reputation and their ability to move on from old preconceptions.
Is GeForce Experience Significantly Better?
In recent times, NVIDIA drivers have been the source of some negative publicity. More specifically, users were advised to ignore the 364.47 WHQL driver and instructed to download the 364.51 beta instead. One user said:
“Driver crashed my windows and going into safe mode I was not able to uninstall and rolling back windows would not work either. I ended up wiping my system to a fresh install of windows. Not very happy here.”
NVIDIA’s Sean Pelletier released a statement at the time which reads:
“An installation issue was found within the 364.47 WHQL driver we posted Monday. That issue was resolved with a new driver (364.51) launched Tuesday. Since we were not able to get WHQL-certification right away, we posted the driver as a Beta.
GeForce Experience has an option to either show WHQL-only drivers or to show all drivers (including Beta). Since 364.51 is currently a Beta, gamers who have GeForce Experience configured to only show WHQL Game Ready drivers will not currently see 364.51
We are expecting the WHQL-certified package for the 364.51 Game Ready driver within the next 24hrs and will replace the Beta version with the WHQL version accordingly. As expected, the WHQL-certified version of 364.51 will show up for all gamers with GeForce Experience.”
As you can see, NVIDIA isn’t immune to driver delivery issues, and this was a fairly embarrassing situation. Despite this, it didn’t appear to have a serious effect on people’s confidence in the company or make them reconsider their views of AMD. While there are some disgruntled NVIDIA customers, they’re fairly loyal and distrustful of AMD’s ability to offer better drivers. The GeForce Experience software contains a wide range of fantastic inclusions such as ShadowPlay, GameStream, Game Optimization and more. After a driver update, however, the software can feel a bit unresponsive and take some time to close. Furthermore, some people dislike the notion of Game Ready drivers being locked into the GeForce Experience software. If a report from PC World is correct, consumers might have to supply an e-mail address just to update their drivers through the application.
Before coming to a conclusion, I want to reiterate that my allegiances don’t lie with either company, and my intention was to create a balanced viewpoint. I believe AMD’s previous failures are impacting the company’s current product range, and it’s extremely difficult to shift people’s perceptions of the company’s drivers. While Crimson is much better than CCC, it was also the source of a horrendous fan bug, resulting in a PR disaster for AMD.
On balance, it’s clear AMD’s decision to separate the Radeon group from the CPU line was the right thing to do. Also, with Polaris around the corner and more games utilising DirectX 12, AMD could improve their market share substantially. However, in my experience, many users are prepared to deal with slightly worse performance just to invest in an NVIDIA product. Therefore, AMD has to encourage long-term NVIDIA fans to switch with reliable driver updates on a consistent basis. AMD products are not lacking in features or power; it’s all about drivers! NVIDIA will always counteract AMD releases with products exhibiting similar performance numbers. In my personal opinion, AMD drivers are now on par with NVIDIA’s, and it’s a shame that they appear to be receiving unwarranted criticism. Don’t get me wrong, the fan bug is simply inexcusable and going to haunt AMD for some time. I predict that despite the company’s best efforts, the stereotypical view of AMD drivers will not subside. This is a crying shame, because they are trying to improve things and release updates on a significantly lower budget than their rivals.
The Far Cry franchise is renowned for its impeccable graphical fidelity and enthralling open-world environments. As a result, each release is incredibly useful for gauging the current state of graphics hardware and performance across various resolutions. However, Ubisoft’s reputation has suffered in recent years due to poor optimisation on major titles such as Assassin’s Creed: Unity and Watch Dogs. This means it’s essential to analyse the PC version in a technical manner and see if it’s really worth supporting with your hard-earned cash!
Far Cry Primal utilises the Dunia Engine 2, which was deployed on Far Cry 3 and Far Cry 4. Therefore, I’m not expecting anything revolutionary compared to the previous games. This isn’t necessarily a negative though, because the amount of detail is staggering and worthy of recognition. That said, Far Cry 4 was plagued by intermittent hitching, and I really hope this has been resolved. Unlike Far Cry 3: Blood Dragon, the latest entry has a retail price of $60. According to Ubisoft, this is warranted due to the lengthy campaign and the amount of content on offer. Given Ubisoft’s turbulent history with recent releases, it will be fascinating to see how each GPU from this generation fares and which brand the game favours at various resolutions.
“Far Cry Primal is an action-adventure video game developed and published by Ubisoft. It was released for the PlayStation 4 and Xbox One on February 23, 2016, and it was also released for Microsoft Windows on March 1, 2016. The game is set in the Stone Age, and revolves around the story of Takkar, who starts off as an unarmed hunter and rises to become the leader of a tribe.” From Wikipedia.
Originally launched in both 2GB and 4GB variants, Nvidia is reportedly planning to discontinue the lower-capacity model. By offering only a 4GB tier, Nvidia hopes to make the card more attractive to buyers. At this point in time, there is no word on whether the 4GB 960 will keep its current price or drop to fill the void left by the departing 2GB model.
The GTX 960 features the full GM206, Nvidia’s budget Maxwell die. While the card does decently against AMD’s R9 380, it falls behind a bit in terms of overall performance. With the launch of the GTX 950, the 960 has become even more of a niche product. The 950 features only 256 fewer shaders and 16 fewer TMUs, not a large margin by any means, placing its performance near 960 levels. With such competition, it is understandable why Nvidia would try to differentiate the card by offering only a 4GB model.
The biggest question is whether or not the GTX 960 actually needs 4GB of VRAM. While 4GB might be needed for 1440p, the 960 is solidly a 1080p card. That resolution has historically been the domain of 2GB cards, and by the time 4GB is required for 1080p, the GPU core of the 960 may well be lacking. One must also consider that the 950 has a 4GB model too and would age about the same as the 960. Both cards are also limited by their 128-bit memory interface, which may hinder the use of such a large frame buffer.
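Some quick back-of-the-envelope arithmetic shows why the 128-bit interface is the limiting factor. The 7000MHz effective memory clock is the GTX 960’s stock figure; the comparison card is purely illustrative.

```python
# Peak GDDR5 bandwidth: (bus width in bytes) x (effective data rate).
def gddr5_bandwidth_gbs(bus_width_bits, effective_mhz):
    return bus_width_bits / 8 * effective_mhz / 1000  # GB/s

# GTX 960/950: 128-bit bus at 7000 MHz effective
print(gddr5_bandwidth_gbs(128, 7000))  # 112.0 GB/s
# A hypothetical 256-bit card at the same memory clock doubles that
print(gddr5_bandwidth_gbs(256, 7000))  # 224.0 GB/s
```

At 112 GB/s, a full sweep of a 4GB frame buffer takes roughly 36 ms, which is why a large buffer behind a narrow bus delivers less benefit than the capacity alone suggests.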
Undoubtedly though, the extra frame buffer would make the 960 more future-proof, if only just. It will be interesting to see if Nvidia does follow through with this move. We will follow this story as it develops and bring you more information as it arrives, so stay tuned!
Thank you HWBattle for providing us with this information
With just over a month until the expected launch, more information on AMD’s Radeon R9 380X has surfaced. Last time around, we got a glimpse of the XFX Double Dissipation model, and today we’re treated to the full specifications.
Unlike the R9 285/380, the 380X will feature the full Tonga die. Tonga was originally launched last year cut down to 1792 shader units, 112 TMUs and 32 ROPs over a 256-bit GDDR5 bus. Keeping the same 256-bit bus, the 380X will feature 2048 shader units, 128 TMUs and 32 ROPs, hardware parity with the aged 280X. Given the architectural improvements GCN 1.2 introduced with Tonga, the 380X should place at least 10% faster than its predecessor. Clock speeds also get a boost, up to between 1000~1100MHz, while GDDR5 speeds will stay about the same at around 5500MHz~6000MHz.
With better performance and DX12 support, among other advantages compared to the GTX 770, AMD has a good chance to dominate the $150 gap between Nvidia’s GTX 960 and 970. The 380X may also come standard with 4GB of VRAM, as 2GB is probably a bit too low for this tier of performance. If the 380X does well in the market, it will be interesting to see whether Nvidia responds with an even more cut-down GM204 in the form of a GTX 960 Ti, cuts prices on the 970, or simply waits it out until Pascal.
Thank you HWBattle for providing us with this information
We all like free stuff, and MSI seems to agree: they have bundled a free copy of Assassin’s Creed Chronicles: China and Assassin’s Creed Chronicles: India with the MSI Radeon R9 380 GAMING and MSI Radeon R7 370 GAMING graphics cards.
Assassin’s Creed Chronicles: China brings you to China in 1526, where you play Shao Jun. Shao Jun is the last remaining Assassin of the Chinese Brotherhood and was trained by the legendary Ezio Auditore, my personal favourite Assassin in the series. Your job is to restore the fallen Brotherhood, but not without taking your revenge.
Assassin’s Creed Chronicles: India takes you to the middle of a conflict between the Sikh Empire and the East India Company in 1841. You take on the role of Arbaaz Mir, and the mission is to recover a mysterious item, but that’s likely not going to be as easy as it sounds.
The MSI Radeon R7 370 GAMING and MSI Radeon R9 380 GAMING graphics cards are equipped with MSI’s highly awarded TWIN FROZR V cooling solution, which allows them to stay cool and quiet throughout your battles.
The promotion, and thereby the free games, started on September 21st and will run until October 31st, 2015, or while supplies last. Game codes that haven’t been redeemed by July 31st, 2016 will expire, so better activate yours before the code gets lost in your mail folder somewhere.
Ever since AMD debuted Tonga Pro in the R9 285, everyone has been waiting for the full Tonga XT die. Earlier this week, we got our first hint with the glimpse of the XFX Double Dissipation R9 380X. Today, we’re getting word that the R9 380X will finally arrive in late October, a little over a month from now. This will fill the relatively large gap between the R9 380 and 390.
With GCN 1.2, the R9 380X will bring the efficiency gains first demonstrated in the R9 285. The card will feature 2048 Stream Processors, 128 TMUs and 32 ROPs connected to 4GB of GDDR5 across a 256-bit bus. With GCN 1.2’s improved architecture, the 380X should perform about 10% faster than the 280X at the same clocks. Despite a drop in raw bandwidth compared to the 280X, the introduction of delta colour compression should alleviate any issues. The card should also feature good DX12 support with asynchronous compute, as well as FreeSync.
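To put that bandwidth drop into numbers, here is the raw comparison, assuming a 5700MHz effective memory clock for the 380X (within the rumoured range) and the 280X’s stock 6000MHz over its 384-bit bus:

```python
# Raw memory bandwidth comparison; delta colour compression is meant
# to claw back the deficit shown here.
def bandwidth_gbs(bus_width_bits, effective_mhz):
    return bus_width_bits / 8 * effective_mhz / 1000  # GB/s

r9_280x = bandwidth_gbs(384, 6000)  # 288.0 GB/s
r9_380x = bandwidth_gbs(256, 5700)  # 182.4 GB/s
print(f"raw bandwidth deficit: {1 - r9_380x / r9_280x:.0%}")  # 37%
```

A raw deficit of roughly a third is substantial, so the 380X’s performance case rests heavily on how much of that gap delta colour compression can recover in practice.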
Unlike the earlier R9 370X, which was limited to China, the 380X will be available worldwide. With full Tonga on tap, AMD should be able to strike at the hole Nvidia has left between the 960 and 970. This should hopefully help AMD make some more revenue, gain some market share and be more competitive overall. The only spoiler would be if Nvidia somehow introduced a GTX 960 Ti.
Thank you Fudzilla for providing us with this information
Moving from one school to another, or even just moving up a grade, can require you to upgrade your hardware, and each year manufacturers and resellers launch their back-to-school promotions for just this situation. Usually, you get either an extra good price or an extra good bundle if you can verify that you’re currently in some form of education.
HP did so too, but that isn’t really the big news here. The interesting part is that they listed the AMD Radeon R9 380 graphics card as an option for their new premium tower PCs, the HP Envy Tower series.
“Powerful and stylish, the HP ENVY Tower is designed for content creators who need high-performance processors and strong graphics capabilities for editing videos and photos. For performance, customers have the choice of up to NVIDIA GTX 980 or AMD Radeon R9 380 discrete graphics “
Sadly, the actual product page doesn’t load, so we don’t know any more details about this GPU, such as whether it is from the actual new 300 series or just a rebrand. There are theories of rebrands of the R9 285, 280 and 280X, but if any of them are true, then the 285 is the most likely candidate. Anything else would be suicide for AMD at this point, considering the advances and features added between the generations.
Looking at the released information, with the R9 380 mentioned as an equal option to the Nvidia GTX 980, we can guess that if it is a rebrand, it’ll be an optimised version in order to keep up with the GTX 980.
Thank you TechPowerUp for providing us with this information
We’ve seen rumours about the AMD Radeon R9 300 series for quite some time now, and with the release dates getting closer each day, more and more of these rumours are being compiled into more reliable information.
The 390 and 380 series are confirmed for a Q2 2015 release; the other release windows are more or less speculation based on history and leaks, but they seem very likely.
One of the almost sad things about this is the use of GCN (Graphics Core Next) 1.1 in the 380 and 380X; it shows us that we’ll only really get one new chip in this generation, the Fiji used in the 390 and 390X cards.
While the R9 380 series will be a rebranded R9 290, the 370 will be a rebrand of the current R9 285. When we say rebrand, it just means that they will use the same chip architecture; clock speeds and other aspects might have been tuned and optimised.
So, the wait is almost over for those who want to get their hands on AMD’s next-gen cards with HBM memory.
Thanks to 3Dcenter for providing us with this information
With Nvidia’s GTX 970 and 980 pretty much dominating the market, we all can’t wait for AMD’s next move. We will still have to be a little patient, but it does look like it will be worth the wait.
Two separate benchmarks have surfaced during the last week, claiming to be from AMD R9 390X cards. The first is a 3DMark 11 score of X8121, where the current 290X only scores around X4700. Very impressive, and if true, without a doubt due to the new memory, as the GPU will still be built on the 28nm process.
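For reference, the uplift implied by those leaked scores works out as follows (rumoured numbers, so treat the result as indicative at best):

```python
# Percentage uplift implied by the leaked 3DMark 11 scores.
r9_390x_score, r9_290x_score = 8121, 4700
uplift = (r9_390x_score - r9_290x_score) / r9_290x_score
print(f"{uplift:.0%}")  # roughly 73%
```

A generational jump of that size on the same 28nm process would be remarkable, which is exactly why the memory subsystem gets the credit if the figures hold up.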
The second benchmark is from an R9 390X quad-CrossFire setup scoring an impressive 38,875 in the Fire Strike Extreme test, about 33% more than a heavily overclocked GTX 980 quad-SLI setup. The user posting the quad benchmark also posted what is supposed to be the PCB of the card.
Please keep in mind that these are rumours. We do, however, know that the new AMD Rx 300 series will launch in Q2 2015, most likely during Computex. The R9 390X will double up to 4096 GCN 1.2 cores and use 4GB of stacked HBM memory, as was confirmed in an investor conference call following AMD’s Q4 2014 and fiscal-year reports.
Thanks to MyDrivers for providing us with this information
The new Nvidia GeForce GTX 980 and 970 have blown us away with their mixture of high performance and low power consumption; so where are the AMD cards to compete with them? We have the R9 285, but with AMD shooting down rumours about the R9 285X, we may now be waiting until 2015 before we see a new generation of cards from AMD.
The Radeon R300 series, currently known as “Pirate Islands”, is still nowhere to be seen, but rumour suggests that AMD will launch their new high-end cards early next year; the best estimate would be just inside six months. There are said to be three ranges in the Pirate Islands series: the high-end “Bermuda”, the mid-range “Fijian” or “Fiji” and the low-end “Treasure Island”. It is expected that the Radeon R9 380 “Fiji” will be the first to market.
It has been suggested that the cards will use a 20nm process, but with TSMC currently using their 20nm fabs for Apple products, it seems unlikely that AMD will get to market with 20nm hardware. Nvidia pushed forward with 28nm for good reason, and both AMD and Nvidia are likely to stay with it and skip to 16nm when the time for the next-next generation comes. However, if AMD can pull off a range of cards at 20nm, it could prove a big win for the company in this never-ending battle with Nvidia.
Pirate Islands will incorporate other new AMD technologies such as GCN 2.0, and we should also see improved connectivity such as HDMI 2.0 and an updated DisplayPort to stay competitive with Nvidia.
If you’re waiting for the new AMD cards, don’t hold your breath for too long; with any luck we may see something new on display at CES 2015 in January.
Thank you MyDrivers for providing us with this information.