While we’ve pretty much confirmed that GP104 will replace the current Maxwell chips with the new GTX 1080 and 1070, things are less clear from AMD. We got some clarification yesterday from the release of a new roadmap that appeared to show Polaris 10 replacing current Fiji cards. With a new statement as part of their Q1 earnings release, AMD is shedding a bit more light on where they see Polaris 11 fitting in.
“AMD demonstrated its “Polaris” 10 and 11 next-generation GPUs, with Polaris 11 targeting the notebook market and “Polaris” 10 aimed at the mainstream desktop and high-end gaming notebook segment. “Polaris” architecture-based GPUs are expected to deliver a 2x performance per watt improvement over current generation products and are designed for intensive workloads including 4K video playback and virtual reality (VR).”
From the statement, we can see that Polaris 10 is meant for the mainstream desktop and high-end gaming notebook segments. To me, this suggests that Polaris 10 will be branded as the 480 and 480X, which have historically been the mainstream parts. With 2304 stream processors, this would make for a good 390X replacement, and once you consider the significant improvements GCN 4.0 brings, it would be competitive with Fury. Polaris 11 seems to be targeting the low-power notebook segment, likely as the x70/x70X parts which have historically been the top-end notebook cards.
If our speculation is correct, this means AMD is transitioning to a release schedule similar to Nvidia's. The mainstream chip, Polaris 10, would come in first with a slight improvement over the current Fiji flagships. A few months later, in early 2017, we will see Vega with HBM2 arrive as a true upgrade over the Fury X. Starting off, it looks like GP104 and Polaris 10 will be fairly evenly matched, so it will be interesting to see how it all plays out.
Ashes of the Singularity is a futuristic real-time strategy game offering frenetic large-scale contests. The huge number of units scattered across varied environments creates an enthralling experience built around complex strategic decisions. Throughout the game, you will explore unique planets and engage in gripping air battles. This bitter war between the human race and a masterful artificial intelligence revolves around an invaluable resource known as Turinium. If you're into the RTS genre, Ashes of the Singularity should provide hours of entertainment. While the game itself is worthy of widespread media attention, the engine's support for DirectX 12 and asynchronous compute has become a hot topic among hardware enthusiasts.
DirectX 12 is a low-level API with reduced CPU overhead that has the potential to revolutionise the way games are optimised for numerous hardware configurations. In contrast, DirectX 11 is far less efficient, and many mainstream titles suffered from poor scaling which didn't properly utilise the potential of current graphics technology. On another note, DirectX 12 allows users to pair GPUs from competing vendors and run multi-GPU configurations without relying on driver profiles. It's theoretically possible to achieve widespread optimisation and leverage extra performance using DirectX 12.
Of course, Vulkan is another alternative which works across operating systems and adopts an open-source ideology. However, the focus will likely remain on DirectX 12 for the foreseeable future unless there's a sudden reluctance from users to upgrade to Windows 10. Even though the adoption rate is impressive, there's a large number of PC gamers still using Windows 7, 8 and 8.1. Therefore, it seems prudent for developers to continue with DirectX 11 and offer a DirectX 12 renderer as an optional extra. Arguably, the real gains from DirectX 12 will occur when its predecessor is disregarded completely. This will probably take a considerable amount of time, which suggests the first DirectX 12 games might show smaller performance benefits than later titles.
Asynchronous compute allows graphics cards to execute multiple workloads simultaneously to extract extra performance. AMD's GCN architecture has extensive support for this technology. In contrast, there's a heated debate questioning whether NVIDIA products can even utilise asynchronous compute in an effective manner. Technically, AMD GCN graphics cards contain 2-8 asynchronous compute engines, each with 8 queues (the count varies by model), providing single-cycle latencies. Maxwell revolves around two pipelines, one designed for high-priority workloads and another with 31 queues. Most importantly, NVIDIA cards can only "switch contexts at draw call boundaries". This means the switching process is slower and gives AMD a major advantage. NVIDIA dismissed the early performance numbers from Ashes of the Singularity due to the game's development status at the time. Now that the game has exited beta, we can finally judge the performance numbers after optimisations were completed.
After many fruitful years of partnerships with Apple, AMD is reportedly continuing the relationship with their latest Polaris based GPUs. Apple has alternated MacBook Pro suppliers between Nvidia and AMD in the past but tended towards AMD more with the Mac Pro. According to the source, the performance per watt of 14nm Polaris combined with the performance per dollar of the chips is what sold Apple.
AMD has long pursued a strategy of using smaller, more efficient chips to combat their biggest rival, Nvidia. Prior to GCN, AMD tended to have smaller flagships that sipped less power but had lesser compute abilities. This all changed with GCN, where AMD focused more on compute while Nvidia did the opposite. This led to Nvidia topping the efficiency charts, and combined with their marketing, their sales soared. If the rumours are true, Polaris 10 will be smaller than GP104, its main competitor.
With Polaris, AMD should be able to regain the efficiency advantage thanks to both the move to 14nm and the new architecture. We may see Polaris-based Macs as soon as WWDC in June, just after the cards launch at Computex. In addition to a 'superior' product, AMD is also willing to cut their margins a bit more in order to get a sale, as we saw with the current-gen consoles. Perhaps, if AMD plays their cards well, we may see Zen Macs as well.
Back at E3 2015, nearly a year ago, AMD showed off their Project Quantum PC featuring two Fiji GPUs in a tiny form factor. Ironically, the featured AMD device used an Intel CPU instead of an AMD one, and ended up using a single Fury chip instead of the dual-Fiji board we have come to know as the Radeon Pro Duo. Along with supply issues, this means we likely won't see Project Quantum for a while. According to Diit though, when it does arrive, it will use AMD's own Zen CPU and new Vega GPUs.
The main reason AMD chose an Intel CPU was simple: AMD's CPUs were not up to snuff, and with Project Quantum aimed at being the best, it required a top-end CPU, which meant one from Intel. With Zen set to debut later this year though, AMD has a chance to showcase the potential of their chip, showing that it is capable of driving the fastest graphics cards out there without holding anything back.
On the graphics side, the delay on the CPU side means Vega, the full-on Fiji replacement with HBM2, will have its chance in Project Quantum. Vega should have no trouble beating the Fury X and potentially even the Radeon Pro Duo. By delaying, AMD also reaps the benefits of moving the entire system to 14nm FinFETs, finally making a true VR PC for those who want the best.
AMD has a serious image problem with their drivers which stems from buggy, unrefined updates and a slow release schedule. Even though this perception began many years ago, it's still impacting the company's sales and explains why their market share is so small. The Q4 2015 results from Jon Peddie Research suggest AMD reached a discrete GPU market share of 21.1% while NVIDIA reigned supreme with 78.8%. That said, the Q4 data is more promising than it looks, because AMD accounted for a mere 18.8% the previous quarter. On the other hand, respected industry journal DigiTimes reports that AMD is likely to reach its lowest ever market position in Q1 2016. Thankfully, the financial results will emerge on April 21st, so we should know the full picture relatively soon. Of course, the situation should improve once Polaris and Zen reach retail channels. Most importantly, AMD's share price has declined by more than 67% in five years, from $9 to under $3 as of March 28, 2016. The question is why?
Is the Hardware Competitive?
The current situation is rather baffling considering AMD's extremely competitive product line-up in the graphics segment. For example, the R9 390 is a superb alternative to NVIDIA's GTX 970 and features 8GB of VRAM, which provides extra headroom when using virtual reality equipment. The company's strategy appears to revolve around minor performance differences between the R9 390 and 390X. This also applied to the R9 290 and 290X, as both products utilized the Hawaii core. NVIDIA employs a similar tactic with the GTX 970 and GTX 980, but there's a marked price increase between those two compared to their rivals.
NVIDIA's ability to cater to the lower-tier demographic has been quite poor, because competing GPUs including the 7850 and R9 380X provided a much better price-to-performance ratio. Not only that, NVIDIA's decision to deploy ridiculously low amounts of video memory on cards like the GTX 960 has the potential to cause headaches in the future. It's important to remember that the GTX 960 can be acquired with either 2GB or 4GB of video memory. Honestly, they should have simplified the line-up and produced only the higher-memory model, in a similar fashion to the R9 380X. Once again, AMD continues to offer a very generous amount of VRAM across various product tiers.
Part of the problem revolves around AMD's sluggish release cycle and reliance on the Graphics Core Next (GCN) 1.1 architecture, first introduced way back in 2013 with the Radeon HD 7790. Despite its age, AMD deployed the GCN 1.1 architecture on their revised 390 series and didn't do themselves any favours when denying accusations that the new line-up was a basic re-branding exercise. Of course, this proved to be the case, and some users managed to flash their 290/290X into a 390/390X with a BIOS update. There's nothing inherently wrong with product rebrands if they remain competitive in the current market. It's not exclusive to AMD; NVIDIA has used similar business strategies on numerous occasions. However, I feel it's up to AMD to push graphics technology forward and force their nearest rival to launch more powerful options.
Another criticism of AMD hardware, which seems to plague everything they release, is the perception that every GPU runs extremely hot. You only have to look at certain websites, social media and various forums to see this is a main source of people's frustration. Some individuals have even produced images showing AMD graphics cards set ablaze. So is there any truth to these suggestions? Unfortunately, the answer is yes, and a pertinent example comes from the R9 290 range. The 290/290X reference models utilized one of the most inefficient cooler designs I've ever seen and struggled to keep the GPU core below 95C under load.
Unbelievably, the core was designed to run at these high temperatures, and AMD created a more progressive RPM curve to reduce noise. As a result, the GPU could take 10-15 minutes to return to idle temperatures. The Hawaii temperatures really hurt the company's reputation and forged a viewpoint among consumers which I highly doubt will ever disappear. It's a shame, because the upcoming Polaris architecture built on the 14nm FinFET process should exhibit significant efficiency gains and end the notion of high thermals on AMD products. There's also the idea that AMD GPUs have a noticeably higher TDP than their NVIDIA counterparts. For instance, the R9 390 has a TDP of 275 watts while the GTX 970 is rated at only 145 watts. On the other hand, the gap isn't always large: the Fury X is rated at 275 watts, not far off the GTX 980 Ti's 250 watts.
Eventually, AMD released a brand-new range of graphics cards utilizing the first iteration of High Bandwidth Memory. Prior to its release, expectations were high and many people expected the Fury X to dethrone NVIDIA's flagship graphics card. Unfortunately, this didn't come to fruition, and the Fury X fell behind in various benchmarks, although it fared better at high resolutions. The GPU also encountered supply problems, and early samples emitted a loud whine from the pump. Asetek even threatened to sue Cooler Master, who created the AIO cooler, which could have forced all Fury X products to be removed from sale.
The rankings alter rather dramatically when the DirectX 12 renderer is used, which suggests AMD products have a clear advantage. Asynchronous compute is the hot topic right now, and in theory it allows for greater GPU utilization in supported games. Ashes of the Singularity has implemented it for some time and makes for some very interesting findings. Currently, we're working on a performance analysis for the game, but I can reveal that there is a huge boost for AMD cards when moving from DirectX 11 to DirectX 12. Furthermore, there are reports indicating that Pascal might not be able to use asynchronous shaders, which makes Polaris and Fiji products more appealing.
Do AMD GPUs Lack Essential Hardware Features?
When selecting graphics hardware, it's not always about pure performance; some consumers take into account exclusive technologies such as TressFX hair before purchasing. At this time, AMD's latest products incorporate LiquidVR, FreeSync, Vulkan support, HD3D, Frame Rate Target Control, TrueAudio, Virtual Super Resolution and more! This is a great selection of hardware features to create a thoroughly enjoyable user experience. NVIDIA adopts a more secretive attitude towards their own creations and often uses proprietary solutions. The Maxwell architecture has support for Voxel Global Illumination (VXGI), Multi-Frame Sampled Anti-Aliasing (MFAA), Dynamic Super Resolution (DSR), VR Direct and G-Sync. There's a huge debate about the benefits of G-Sync compared to FreeSync, especially when you take into account the price difference when opting for a new monitor. Overall, I'd argue that the NVIDIA package is better, but there's nothing really lacking from AMD in this department.
Have The Drivers Improved?
Historically, AMD drivers haven't been anywhere close to NVIDIA's in terms of stability and providing a pleasant user interface. Back in the old days, AMD, or ATI if we're going way back, had the potential to cause system lock-ups, software errors and more. A few years ago, I had the misfortune of updating a 7850 to the latest driver, and after rebooting, the system's boot order was corrupt. To be fair, this could be coincidental and have nothing to do with that particular update. On another note, the 290 series was plagued with hardware bugs causing black screens and blue screens of death whilst watching Flash videos. To resolve this, you had to disable hardware acceleration and hope the issues subsided.
The Catalyst Control Center always felt a bit primitive for my tastes although it did implement some neat features such as graphics card overclocking. While it’s easy enough to download a third-party program like MSI Afterburner, some users might prefer to install fewer programs and use the official driver instead.
Not so long ago, AMD appeared to have stalled in releasing drivers to properly optimize graphics hardware for the latest games. On 9th December 2014, AMD unveiled the Catalyst 14.12 Omega WHQL driver and made it available for download. In a move which still astounds me, the company then decided not to release another WHQL driver for six months! Granted, they were working on a huge driver redesign and still produced the odd beta update. I honestly believe this was very damaging and prevented high-end users from considering the 295X2 or a CrossFire configuration. It's so important to have a consistent, solid software framework behind the hardware to allow for constant improvements. This is especially the case when using multiple cards, which require profiles to achieve proficient GPU scaling.
Crimson's release was a major turning point for AMD due to the modernized interface and enhanced stability. According to AMD, the software package involves 25 percent more manual test cases and 100 percent more automated test cases compared to AMD Catalyst Omega. Also, the most-reported bugs were resolved, and they're using community feedback to quickly apply new fixes. The company hired a dedicated team to reproduce errors, which is the first step to providing a more stable experience. Crimson apparently loads ten times faster than its predecessor and includes a new game manager to optimize settings to suit your hardware. It's possible to set custom resolutions, including the refresh rate, which is handy when overclocking your monitor. The clean uninstall utility proactively works to remove any remaining elements of a previous installation, such as registry entries, audio files and much more. Honestly, this is such a significant step forward, and AMD deserves credit for tackling their weakest elements head-on. If you'd like to learn more about Crimson's functionality, please visit this page.
However, it's far from perfect, and some users initially experienced worse performance with the update. Of course, there are going to be teething problems whenever a new release occurs, but it's essential for AMD to do everything they can to forge a new reputation for their drivers. Some of you might remember the furore surrounding the Crimson fan bug which limited the GPU's fans to 20 percent. Some users even reported that this caused their GPU to overheat and fail. Thankfully, AMD released a fix for the issue, but it shouldn't have occurred in the first place. Once again, it hurt their reputation and their ability to move on from old preconceptions.
Is GeForce Experience Significantly Better?
In recent times, NVIDIA drivers have been the source of some negative publicity. More specifically, users were advised to ignore the 364.47 WHQL driver and instructed to download the 364.51 beta instead. One user said:
“Driver crashed my windows and going into safe mode I was not able to uninstall and rolling back windows would not work either. I ended up wiping my system to a fresh install of windows. Not very happy here.”
NVIDIA’s Sean Pelletier released a statement at the time which reads:
“An installation issue was found within the 364.47 WHQL driver we posted Monday. That issue was resolved with a new driver (364.51) launched Tuesday. Since we were not able to get WHQL-certification right away, we posted the driver as a Beta.
GeForce Experience has an option to either show WHQL-only drivers or to show all drivers (including Beta). Since 364.51 is currently a Beta, gamers who have GeForce Experience configured to only show WHQL Game Ready drivers will not currently see 364.51
We are expecting the WHQL-certified package for the 364.51 Game Ready driver within the next 24hrs and will replace the Beta version with the WHQL version accordingly. As expected, the WHQL-certified version of 364.51 will show up for all gamers with GeForce Experience.”
As you can see, NVIDIA isn't immune to driver delivery issues, and this was a fairly embarrassing situation. Despite this, it didn't appear to have a serious effect on people's confidence in the company or make them reconsider their views of AMD. While there are some disgruntled NVIDIA customers, they're fairly loyal and distrustful of AMD's ability to offer better drivers. The GeForce Experience software contains a wide range of useful features such as ShadowPlay, GameStream, game optimization and more. After a driver update, though, the software can feel a bit unresponsive and take some time to close. Furthermore, some people dislike the notion of Game Ready drivers being locked into the GeForce Experience software. If a report from PC World is correct, consumers might have to supply an e-mail address just to update their drivers through the application.
Before coming to a conclusion, I want to reiterate that my allegiances don't lie with either company, and the intention was to present a balanced viewpoint. I believe AMD's previous failures are weighing on the company's current product range, and it's extremely difficult to shift people's perceptions of the company's drivers. While Crimson is much better than the Catalyst Control Center, it was also the source of a horrendous fan bug resulting in a PR disaster for AMD.
On balance, it's clear AMD's decision to separate the Radeon group from the CPU line was the right thing to do. Also, with Polaris around the corner and more games utilizing DirectX 12, AMD could improve their market share substantially. Although, from my experience, many users are prepared to deal with slightly worse performance just to invest in an NVIDIA product. Therefore, AMD has to encourage long-term NVIDIA fans to switch with reliable driver updates on a consistent basis. AMD products are not lacking in features or power; it's all about drivers! NVIDIA will always counteract AMD releases with products exhibiting similar performance numbers. In my personal opinion, AMD drivers are now on par with NVIDIA's, and it's a shame that they appear to be receiving unwarranted criticism. Don't get me wrong, the fan bug is simply inexcusable and going to haunt AMD for some time. I predict that despite the company's best efforts, the stereotypical view of AMD drivers will not subside. This is a crying shame, because they are trying to improve things and release updates on a significantly lower budget than their rivals.
As always, most of the focus on Polaris has been on the top-end chip. This has meant that much of the talk has been focused on Polaris 10, the R9 390X/Fury replacement. Today though, we've been treated to a leak of the mainstream Polaris chip, Polaris 11. Based on a CompuBench leak, we're now getting a clearer picture of what Polaris 11 will look like as the Pitcairn replacement.
The specific Polaris 11 chip spotted features a total of 16 CUs, for 1024 GCN 4.0 stream processors. This puts it right where the 7850/R7 370 is right now. Given the efficiency gains from the move to GCN 4.0 though, performance should fall near the 7870 XT or R9 280. The move to 14nm FinFET also means the chip will be much smaller than Pitcairn currently is. Of course, this information is only for the 67FF SKU, so there may be a smaller or, more likely, a larger Polaris 11 in the works.
Other specifications have also been leaked, including a 1000MHz core clock. Memory speed came in at 7000MHz effective, with 4GB of VRAM over a 128-bit bus. This gives 112GB/s of bandwidth, a tad higher than the R7 370 even before you consider the addition of delta colour compression. GCN 4.0 will also bring a number of other improvements to the rest of the GPU, most importantly FreeSync support, something Pitcairn lacks.
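The quoted bandwidth figure follows directly from the leaked memory specs; a quick sanity check, treating the 7000MHz effective clock and 128-bit bus as leaked (unconfirmed) values:

```python
# Memory bandwidth = effective transfer rate per pin * bus width in bytes
effective_rate = 7_000_000_000   # 7000MHz effective, i.e. 7 GT/s per pin (leaked figure)
bus_width_bits = 128             # leaked bus width
bandwidth_gb_s = effective_rate * (bus_width_bits / 8) / 1e9
print(bandwidth_gb_s)            # 112.0 GB/s, matching the figure above

# The shader count likewise follows from the CU count: 16 CUs x 64 SPs per CU
print(16 * 64)                   # 1024 stream processors
```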
While we can't guarantee the same SKU was used, Polaris 11 was the GPU AMD pitted against the GTX 950 back at CES. During a benchmark run of Star Wars Battlefront, the AMD system drew only 84W compared to the 140W pulled by the Nvidia system. For the many gamers who buy budget and mainstream cards, Polaris 11 is shaping up very well.
AMD's answer to the Titan lineup, the Radeon Pro Duo, was first revealed last month at AMD's Capsaicin event. Walking a fine line between the Radeon and FirePro lineups, the new graphics card combines two of AMD's top-end Fiji GPUs. According to VideoCardz, we may see the first Radeon Pro Duos out in the wild sooner than expected: the card will launch in just a couple of weeks, on April 29th.
The Radeon Pro Duo features a pair of 28nm Fiji GPUs, each with 4,096 stream processors, 256 TMUs, 64 ROPs, and 4GB of 4096-bit HBM memory. That means a total of 8192 stream processors, 512 TMUs, 128 ROPs and 8GB of HBM1. While the price is a hefty $1499, you do get a very nice custom Cooler Master water cooler with it. Peak performance is a high 16 TFLOPS, which is still several TFLOPS more than Nvidia's Tesla P100 manages in single precision.
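That 16 TFLOPS figure is consistent with the usual peak-FP32 formula for GCN (two FLOPs per stream processor per clock); a sketch assuming a core clock of roughly 1GHz, which is not an official number:

```python
# Peak FP32 throughput = stream processors * 2 FLOPs/clock * core clock (GHz) -> TFLOPS
stream_processors = 2 * 4096   # two Fiji GPUs on one board
clock_ghz = 1.0                # assumed ~1GHz; AMD quotes "up to" 16 TFLOPS
tflops = stream_processors * 2 * clock_ghz / 1000
print(tflops)                  # 16.384 TFLOPS at the assumed 1GHz
```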
In AMD's internal 3DMark benchmarks, the Radeon Pro Duo smashes any other card on the market by a significant margin. Games, however, tend to be more fickle, and the Radeon Pro Duo relies on CrossFire for much of its performance. Given the many issues plaguing SLI and CrossFire this year, it will be interesting to see real-world performance once the card becomes available.
Just five days into the new month, AMD has already released a new set of Crimson drivers for their Radeon GPUs. The latest version is 16.4.1, a beta hotfix for 16.3.2, which was released just a week ago. Coming so quickly after 16.3.2, and still a beta, the number of changes isn't large, but they are welcome nonetheless. Interestingly, it looks like there will be no 16.4 driver, with AMD choosing to jump straight to 16.4.1.
First off, support for the Oculus Rift and HTC Vive has reportedly been improved compared to 16.3.2. Furthermore, Quantum Break has received a number of optimisations, boosting performance by up to 35% in some cases. Hitman has also received fixes for its DX11 high-quality shadows, and frame-cap issues experienced in several DX12 games have been resolved.
Even with these fixes in place, there is an ever-growing list of known issues that remain unresolved. Half of these relate to CrossFire, and nearly all of the rest relate to bugs within AMD's own Radeon Settings or Gaming Evolved software. While quick and prompt driver releases are welcome, AMD needs to get to work fixing more issues rather than shipping just another point release. Given the current track record, we may yet see a 16.4.2 and 16.4.3 later this month.
As part of the ongoing process of technological advancement, 32-bit support has begun to decline throughout the ecosystem. The latest firm to silently reduce support for 32-bit systems is AMD, with their GPUs. Starting with the latest Crimson Software 16.3.2 release, 32-bit drivers for their latest GPUs have gone missing from the usual links. This follows the Radeon Pro Duo, which launched with 64-bit drivers only.
Moving away from 32-bit makes a lot of sense, as even mainstream GPUs are starting to carry more than 4GB of VRAM, the maximum a 32-bit system can address. Once you add in system memory, there really isn't a point to using a 32-bit system with the latest GPUs except for compatibility reasons. Furthermore, the market for 32-bit drivers has been shrinking, with only about 13% of Steam users running a 32-bit system. Given the intense RAM requirements of games these days, 64-bit is nearly a must. Dropping 32-bit support also frees up resources to put towards making the 64-bit drivers better.
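The 4GB ceiling mentioned above falls straight out of pointer width; a minimal illustration:

```python
# A 32-bit pointer can distinguish 2**32 byte addresses in total, so VRAM,
# system RAM and memory-mapped devices all compete for the same 4 GiB window.
addressable_bytes = 2 ** 32
print(addressable_bytes // 2 ** 30)   # 4 (GiB)
```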
The biggest complaint I have, though, is the silence from AMD. Rather than admit they are reducing 32-bit support, they silently started hiding their 32-bit drivers. Users who click on a 32-bit driver link get sent to a page telling them to move to 64-bit. At the same time, 32-bit drivers continue to be made and are available with a bit of URL guessing (just change the "64" at the end of the 64-bit link to "32"). Instead of trying to hide it, AMD should have announced that 32-bit support would end at a set date in the future and, for now, kept the 32-bit drivers easy to access. This whole thing smacks of bad PR and miscommunication. There is no shame in moving away from 32-bit, and hopefully AMD will realise this.
With the Oculus Rift launching now, both AMD and Nvidia have released new drivers to ensure compatibility with the VR headset. Nvidia has released GeForce driver 364.72, and AMD has responded with their first WHQL driver since the end of 2015, Radeon Crimson 16.3.2. This is the third driver released in March, coming just 10 days after the last one. In addition to Oculus Rift support and WHQL status, there are a number of other fixes.
First off, the HTC Vive, the Rift's fellow VR headset, is also supported by the new driver. To power these VR headsets, the Radeon Pro Duo is supported as well. Everybody's Gone to the Rapture and Hitman are also getting updated CrossFire profiles for DX11. Some notable fixes are for FFXIV and XCOM 2, and the Fury series will no longer suffer corruption after long idle times.
In terms of known issues, the list remains as long as before, with most relating to CrossFire bugs. The AMD Gaming Evolved in-game overlay will still crash some games when enabled. Hopefully, these issues will be resolved in a future update. This continues AMD's streak of quick driver releases, with several a month. You can find the full release notes here.
One of the biggest changes DX12 brings to the table is the increased reliance on developers to properly optimize their code for GPUs. Unlike DX11, there are fewer levers to tweak in the GPU driver, with more of the work needing to happen in the game engine and the game itself. To address this, AMD has announced a partnership with multiple game engine and game developers to implement DX12.
To kick-start the effort, AMD is headlining five games and engines they are partnering with to ensure DX12 works smoothly with Radeon GPUs. These are Ashes of the Singularity by Stardock and Oxide Games, Total War: WARHAMMER by Creative Assembly, Battlezone VR by Rebellion, Deus Ex: Mankind Divided by Eidos-Montréal, and the Nitrous Engine by Oxide Games. These titles span a wide range from RTS to RPG and FPS, which gives a sense that AMD is trying to cast as wide a net as possible.
In addition, AMD will be working with EA and DICE to enable DX12 in the Frostbite 3 engine. This engine is of particular importance due to the many AAA titles from EA and others using it. AMD is also hoping to push asynchronous compute and to make sure games squeeze the most out of GCN using DX12.
This year, both AMD and Nvidia will be launching their new Polaris and Pascal based GPUs. Unfortunately, it looks like the flagship chips won't arrive until next year. Set to arrive in early 2017, Vega, also known as Greenland, is to be the flagship replacement for Fiji. According to information 3DCenter dug up, Vega will feature 4096 GCN shaders, the same number Fiji currently has.
With Polaris and Vega, there are suggestions that AMD has managed to improve GCN 4.0's performance by 30% compared to current GCN offerings. This alone should allow a significant performance increase over the Fury X. Fiji was also limited by GCN's design being unoptimized for massive chips with that many shaders; if AMD has managed to fix this, Vega will perform better than expected.
Furthermore, Vega will utilize HBM2, which will finally remove the 4GB cap faced by first-generation HBM GPUs as well as reduce latency. The use of 14nm and the other Polaris improvements will also allow for a cooler and less power-hungry die. We can also expect Vega to come in at a die size closer to Hawaii than Fiji, with a true Fiji-sized successor to come later in the process cycle.
QNAP’s newest server, the TDS-16489U, is an amazing one that sets itself apart from the rest in so many ways. I want one so badly even though I have absolutely no need for this kind of power. This must be how a normal person feels when they see a Bugatti Veyron. But let us get back to the new QNAP dual server.
The TDS-16489U is a powerful dual server: an application server and a storage server baked into one chassis for simplicity and effectiveness. It is powered by two Intel Xeon E5 processors with 4, 6, or 8 cores each, and supports up to 1TB of DDR4 2133MHz memory across its 16 DIMM slots. These are already some impressive specs, but this is just where the fun begins.
The dual server has 16 front-accessible drive bays for 3.5-inch storage drives as well as four rear-facing 2.5-inch drive bays for SSD cache. Should this not be enough, you can expand further with NVMe-based PCI-Express SSDs too. The system has three SAS 12Gb/s controllers built in to tie it all together.
There are just as many connection options as there are storage options in the TDS-16489U. It comes with two normal Gigabit Ethernet ports as well as four SFP+ 10Gbps ports powered by an Intel XL710. Should that not be enough, then you can use the PCI-Express slots to expand with further NICs of your choice. The system supports the use of 40 Gbps cards too. It also comes with a dedicated IPMI connection besides the normal networking. The PCI-Express x16 Gen.3 slots can also be used with AMD R7 or R9 graphics cards for GPU passthrough to virtualization applications. A true one-device solution for applications, storage, and virtualization.
The TDS-16489U combines outstanding virtualization and storage technologies as an all-around dual server. With Virtualization Station and Container Station, computation and data from the guest OS and apps can be directly stored on the TDS-16489U through the internal 12Gb/s SAS interface. Coupled with Double-Take Availability to provide comprehensive high availability and disaster recovery, backup virtual machines can support failover for the primary systems on the TDS-16489U whenever needed to enable data protection and continuous services. QNAP Virtualization Station is a virtualization platform based on KVM (Kernel-based Virtual Machine) infrastructure. By sharing the Linux kernel, it supports GPU passthrough, virtual switches, VM import/export, snapshots, backup & restoration, SSD cache acceleration, and tiered storage.
“Software frameworks for Big Data management and analysis like Apache Hadoop or Apache Spark can be easily operated on the TDS-16489U using virtual machines or containerized apps, and with Qtier Technology for Auto Tiering the TDS-16489U empowers Big Data computing and provides efficient storage in one box to help businesses gain further insights, opportunities and values,” said David Tsao, Product Manager of QNAP.
With all the above, we shouldn't forget that it still runs QNAP's QTS 4.2 operating system, which provides everything you know and love from it. Included are the comprehensive virtualization applications we've also seen on the consumer models, but this is where you can truly take advantage of what QNAP created and run multiple Windows, Linux, Unix, and Android-based virtual machines on your NAS. All the backup and failover solutions are there, from local to another NAS or the cloud. You can do it all. Sharing files to basically any device anywhere is made as easy as possible.
Should you still not have enough storage in this impressive unit, you can expand with up to eight QNAP expansion enclosures and reach a seriously impressive 1152 TB of raw storage capacity, all controlled by this single 3U server unit. The CPU power, dual-system capabilities, virtualization options, and impressive storage options will let you deploy a powerful system with a very small footprint and total cost of ownership compared to traditional setups.
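As a rough sanity check on that headline number, here is a hypothetical back-of-the-envelope sketch; the 16-bay expansion enclosures and 8 TB drives are our own assumptions, not QNAP-confirmed specifics.

```python
# Hypothetical sketch of how the 1152 TB raw figure can be reached.
# Assumed (not confirmed by QNAP here): 16-bay expansion enclosures
# and 8 TB drives filling every 3.5" bay.
internal_bays = 16        # bays in the TDS-16489U itself
enclosures = 8            # maximum number of expansion enclosures
bays_per_enclosure = 16   # assumed per-enclosure bay count
drive_tb = 8              # assumed drive capacity in TB

total_bays = internal_bays + enclosures * bays_per_enclosure
raw_capacity_tb = total_bays * drive_tb
print(total_bays, raw_capacity_tb)  # 144 1152
```

Under those assumptions the numbers line up exactly: 144 bays of 8 TB each gives 1152 TB raw.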
16-bay, 3U rackmount unit
2 x Intel Xeon E5-2600 v3 family processors (4-core, 6-core, or 8-core configurations)
64GB~1TB DDR4 2133MHz RDIMM/LRDIMM RAM (16 DIMM)
4 x SFP+ 10GbE ports
16 x hot-swappable 3.5″ SAS (12Gbps/6Gbps)/SATA (6Gbps/3Gbps) HDD or 2.5″ SAS/SATA SSD bays, and 4 x 2.5″ SAS (12Gbps) or SATA (6Gbps/3Gbps) SSD bays
In the few days since AMD first demoed Polaris 10 to us at Capsaicin, more details about the upcoming graphics card have been revealed. Set to be the big brother to the smaller Polaris 11, the better-performing chip will drop sometime after June this year.
First off, we’re now able to bring you more information about the settings Hitman was running at during the demo. At Ultra Settings and 1440p, Polaris 10 was able to keep to a constant 60FPS, with VSync being possible. This means the minimum FPS did not drop below 60 at any point. This puts the card at least above the R9 390X and on par if no better than the Fury and Fury X. Of course, the demo was done with DX12 but the boost is only about 10% in Hitman.
Another detail we have uncovered is the maximum length of the engineering sample. Based on the Cooler Master Elite 110 case used, the maximum card length is 210mm or 8.3 inches. In comparison, the Nano is 6 inches and the Fury X 7.64 inches. Given the small size, one can expect Polaris 10 to be as power efficient as Polaris 11 and potentially to be using HBM. Given that Vega will be the cards to debut HBM2, Polaris 10 may be limited to 4GB of VRAM. Finally, display connectivity is provided by 3x DP 1.3, 1x HDMI 2.0 and 1x DVI-D Dual Link, though OEMs may change this come launch unless AMD locks it down.
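For reference, those lengths convert between units like so; the metric equivalents for the Nano and Fury X are our own conversions from the quoted inch figures.

```python
# Converting the quoted card lengths; 1 inch = 25.4 mm.
def mm_to_inches(mm):
    return mm / 25.4

cards_mm = {
    "Polaris 10 sample (max)": 210.0,  # from the Elite 110 clearance
    "R9 Nano": 152.4,                  # quoted as 6 inches
    "Fury X": 194.0,                   # quoted as ~7.64 inches
}

for name, mm in cards_mm.items():
    print(f"{name}: {mm:.0f} mm = {mm_to_inches(mm):.2f} in")
```

210 mm works out to about 8.27 inches, which matches the 8.3-inch figure after rounding.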
After Samsung and Nvidia's recent legal spat, more light has been shed on the world of GPU patents and licensing. While Intel holds their own wealth of patents, no doubt some concerning GPUs, Nvidia and AMD, being GPU firms, hold even more important GPU patents. With Intel's cross-licensing deal with Nvidia set to expire in Q1 2017, the chip giant is reportedly in negotiations with AMD to strike up a patent deal.
Being one of the big two GPU designers, AMD probably has many important and critical GPU patents. Add in their experience with APUs and iGPUs, and there is probably quite a lot there that Intel needs. With the Nvidia deal expiring, Intel probably sees a chance to get a better deal while gaining some new patents as well. Approaching AMD also makes sense as, being the smaller of the two GPU makers, AMD may be willing to share their patents for less. It's also a way to inject some cash into AMD and keep it afloat, which helps Intel stave off anti-trust scrutiny.
AMD also has a lot to offer with the upcoming generation. The GPU designer's GCN architecture is ahead of Nvidia's when it comes to DX12 and Asynchronous Compute, and that could be one area Intel is looking towards. Intel may also be forced into cross-licensing by the fact that, with so many patents out there, there have to be some they are violating. The biggest question will be whether AMD will consider allowing their more important and revolutionary patents to be licensed.
With the Nvidia deal being worth $66 million a quarter or $264 million a year, AMD has the chance to squeeze out a good amount of cash from Intel. Even though $264 million wouldn’t have been enough to put AMD in the black for 2015, it wouldn’t have hurt to have the extra cash.
Even though a lot of information was shared from the Capsaicin live stream, some details weren’t made known till the after party. In an interview, Radeon Technologies Group head Raja Koduri spoke in more detail about the plans AMD has for the future and the direction they see gaming and hardware heading towards.
First up of course, was the topic of the Radeon Pro Duo, AMD’s latest flagship device. Despite the hefty $1499 price tag, AMD considers the card a good value, something like a FirePro Lite, with enough power to both game and develop on it, a card for creators who game and gamers who create. If AMD does tune the drivers more to enhance the professional software support, the Pro Duo will be well worth the cash considering how much real FirePro cards cost.
Koduri also sees the future of gaming in dual-GPU cards. With Crossfire and SLI, dual-GPU cards were abstracted away as a single GPU at the driver level. Because of this, performance varies widely from game to game and support requires more work on the driver side. With DX12 and Vulkan, developers can now choose to implement multi-GPU support themselves and build it into the game for much greater performance. While the transition won't fully take place until 2017-2019, AMD wants developers to start getting used to the idea and getting ready.
This holds true for VR as well, as each GPU can render for one eye independently, achieving a near-2x performance benefit. The benefits, though, are highly dependent on the game engine and how well it works with LiquidVR. Koduri notes that some engines need as little as a few hours of work while others may take months. Roy Taylor, VP at AMD, was also excited about the prospect of the upcoming APIs and AMD's forward-looking hardware finally getting more use and boosting performance. In some ways, the use of multi-GPU is similar to multi-core processors and simultaneous multi-threading (SMT) being used to maximize performance.
Finally, we come to Polaris 10 and 11. AMD's naming scheme is expected to change, with the numbers being chronological, so the next Polaris will carry a number higher than 11 but won't necessarily be a higher-performance chip. AMD is planning to use Polaris 10 and 11 to hit as many price/performance and performance/watt levels as possible, so we can expect multiple cards, probably three, to be based on each chip. This should help AMD harvest imperfect dies and help their bottom line. Last of all, Polaris may not feature HBM2, as AMD is planning to hold back until the economics make sense. That about wraps it up for Capsaicin!
Being the fastest single-card graphics solution to date, we all know that AMD's new Radeon Pro Duo is fast. Just how fast the dual-Fiji giant is, we don't yet know, though the 16 TFLOPS figure and performance similar to two Fury Xs give a rough estimate. To shed some light on the card, we do have some internal 3DMark benchmarks AMD has run with their latest and greatest graphics card.
Testing conducted by AMD Performance Labs as of March 7, 2016 on the AMD Radeon Pro Duo, AMD Radeon R9 295X2 and Nvidia’s Titan Z, all dual GPU cards, on a test system comprising Intel i7 5960X CPU, 16GB memory, Nvidia driver 361.91, AMD driver 15.301 and Windows 10 using 3DMark Fire Strike benchmark test to simulate GPU performance. PC Manufacturers may vary configurations, yielding different results. At 1080p, 1440p, and 2160P, AMD Radeon R9 295X2 scored 16717, 9250, and 5121, respectively; Titan Z scored 14945, 7740, and 4099, respectively; and AMD Radeon Pro Duo scored 20150, 11466, and 6211, respectively, outperforming both AMD Radeon R9 295X2 and Titan Z.
According to AMD, the Radeon Pro Duo is undoubtedly the fastest card, at least in 3DMark Fire Strike. At Standard (1080p), the Pro Duo manages 134% of the Titan Z's performance, a card Nvidia priced at $2999 at launch. The lead only grows at Extreme and Ultra, with 148% and 152% respectively.
Against the R9 295X2, the Pro Duo still manages a decent lead of about 120% across all settings. While lower than the 140% you might expect from a pure hardware standpoint, the 4GB of HBM1 and the limits of GCN do play a role. It also means there won't be any surprises for users running two Fury or Fury X cards in CFX. The biggest question is whether the card is worth the premium over running your own CFX solution, a question many dual-GPU cards have faced.
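Those percentages follow directly from AMD's quoted Fire Strike scores; a quick sketch of the arithmetic (small rounding differences from the quoted figures aside):

```python
# Relative performance computed from AMD's published Fire Strike scores.
scores = {
    "Radeon Pro Duo": {"1080p": 20150, "1440p": 11466, "2160p": 6211},
    "Titan Z":        {"1080p": 14945, "1440p": 7740,  "2160p": 4099},
    "R9 295X2":       {"1080p": 16717, "1440p": 9250,  "2160p": 5121},
}

for rival in ("Titan Z", "R9 295X2"):
    for res in ("1080p", "1440p", "2160p"):
        ratio = scores["Radeon Pro Duo"][res] / scores[rival][res]
        print(f"Pro Duo vs {rival} at {res}: {ratio:.0%}")
```

The Titan Z ratios come out at roughly 135%, 148%, and 152%, and the 295X2 ratios sit between 121% and 124%, matching the leads discussed above.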
Even as Polaris quickly approaches, now within three months, the planning for its successor has long been in the works. At their Capsaicin event, AMD took the wraps off their upcoming GPU plans with a roadmap detailing the planned releases up until 2019. Keeping with the star nomenclature that started with Polaris and ditching the islands, we will have Vega and then Navi following Polaris.
Starting off with Polaris later this year, AMD's main selling point, it seems, is the 2.5x performance per watt the new GCN architecture will bring. This is no doubt due to the combination of improved hardware, the new 14nm LPP process, and DX12 finally making use of previously wasted hardware resources like the asynchronous compute units and shaders.
Moving along, we have Vega releasing in what looks to be early 2017. The biggest change, it seems, is the use of HBM2, no doubt replacing GDDR5(X) and HBM1. This means we can expect all Vega releases to utilize HBM2. While this may suggest Polaris won't be using HBM2, it could also mean that only certain Polaris chips, likely only the high-end ones, will use it.
Finally, we come to Navi, which should debut in early 2018. This release will bring scalability and the use of next-gen memory, like the Hybrid Memory Cube for instance. The scalability mention suggests either the use of smaller GCN units to build up chips better suited to each market, or a new process node. For now, we are probably better off trying to figure out what Polaris will be.
Right before the Capsaicin event at GDC was about to begin, AMD teased everyone that they would reveal Polaris 10 running a demo of the Valve SteamVR benchmark. While that did not come to pass on the live stream, those of us at home still got a demo of Polaris 10 gameplay in the end.
“Showcasing next-generation VR-optimized GPU hardware – AMD today demonstrated for the first time ever the company’s forthcoming Polaris 10 GPU running Valve’s Aperture Science Robot Repair demo powered by the HTC Vive Pre. The sample GPU features the recently announced Polaris GPU architecture designed for 14nm FinFET, optimized for DirectX® 12 and VR, and boasts significant architectural improvements over previous AMD architectures including HDR monitor support, industry-leading performance-per-watt2, and AMD’s 4th generation Graphics Core Next (GCN) architecture.”
Running the latest Hitman title, Polaris 10 seemed to handle itself well enough. Performance, however, is hard to ascertain given the poor quality of the stream, the unknown FPS count, and unknown settings. For now, we can only speculate whether Polaris 10 is big Polaris and how it will perform in the end. Luckily, we only have to wait till June before the first Polaris chips arrive in our waiting hands.
Just yesterday, AMD hosted their Capsaicin live stream event from GDC. While there were some product announcements, like the Radeon Pro Duo and the teasing of the upcoming Polaris 10 GPU, most of the time was spent reiterating past statements. The key to this was VR, and AMD spent a lot of the event focusing on it, dragging in a large number of industry insiders to shore up that point. Of course, we also got the usual cringeworthy humour from their engineers.
First off, AMD spoke about their investment in the pixel and HDR. Once again the focus was on improving the information each pixel portrays to better present the whole image. Of course, AMD also talked about increasing pixel counts more and more to get better image quality. The key to this is developments in new APIs such as DX12 and Vulkan, as well as AMD's own solutions in the form of GPUOpen, which has been expanded with GeometryFX and other additions. One number mentioned was 16, the 16ms allowed for each frame to be computed in order to hit 60FPS.
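That 16ms figure is simply the frame-time budget implied by the target refresh rate:

```python
# Frame-time budget: each frame must finish within 1000 ms / target FPS.
def frame_budget_ms(fps):
    return 1000.0 / fps

print(round(frame_budget_ms(60), 1))  # 16.7 -> the ~16 ms figure AMD cites
print(round(frame_budget_ms(90), 1))  # 11.1 -> a common VR headset target
```

The budget shrinks quickly at higher refresh rates, which is why VR headsets running at 90Hz are so much more demanding than 60FPS gaming.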
In order to power these effects, though, AMD is hoping that GPU scaling will continue to improve. This will come from both better multi-GPU scaling (as with the Radeon Pro Duo) thanks to better support in DX12, and from improved process nodes and architectures. AMD noted that while big GPUs haven't been keeping pace with Moore's law in terms of performance per dollar, smaller GPUs have, so it is important to get two smaller GPUs working together better since that solution would offer better bang for the buck.
In terms of VR, AMD is looking forward to working with developers to get the best performance out of their hardware and deliver the best experience. In line with this, AMD is pointing out how their hardware is more than ready, with ACEs allowing the best performance under DX12. Combined with LiquidVR and their other software libraries, AMD is presenting a comprehensive solution for developers to tackle VR. AMD is also offering a certification program for VR-ready systems with their hardware to ensure consumers know the hardware they are getting can handle VR. With this, maybe VR will go mainstream soon enough.
After many months of waiting, AMD has finally unveiled their dual Fiji graphics card. Though not called FuryX2 as we originally expected, the name Radeon Pro Duo is just as fitting. For now, AMD still has not revealed the full specifications but the most important one, price, is a lofty $1,499 USD. For 2 Fiji GPUs and 16TFLOPs of performance, it may well be enough to entice the VR developers AMD is targeting.
As expected of the most powerful single-card GPU yet, the power requirements are massive. The two GPUs draw power over three PCIe 8-pin power connectors for up to 525W. Memory bandwidth is doubled, but the memory remains split as with CFX or other dual-GPU cards, with 4GB of HBM1 per GPU over a 4096-bit bus each. In total, the card has 8192 shader cores, 512 TMUs, and 128 ROPs, with four DisplayPort connectors. Cooling is provided by a Cooler Master CLC unit with a 120mm extra-thick radiator.
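That 525W ceiling follows directly from the PCI Express power ratings: 150W per 8-pin connector plus 75W delivered through the x16 slot itself.

```python
# PCIe power budget: 150 W per 8-pin connector, 75 W from the x16 slot.
EIGHT_PIN_W = 150
SLOT_W = 75
connectors = 3  # the Radeon Pro Duo's three 8-pin inputs

max_board_power_w = connectors * EIGHT_PIN_W + SLOT_W
print(max_board_power_w)  # 525
```

For comparison, a typical single-GPU card with two 8-pin connectors tops out at 375W under the same rules, which shows just how far the Pro Duo pushes the spec.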
Targeted towards VR and game developers, the card makes sense as it offers the performance necessary to run the most demanding of titles, especially in their unoptimized form. Furthermore, the use of AMD's affinity multi-GPU, LiquidVR, and DX12 will all serve to limit the impact of having two separate GPUs. This should allow for better scaling than we usually see with CFX and other solutions.
By targeting VR developers, AMD is able to get away with the hefty price tag, just as Nvidia did with their Titan series. This may limit the market, though some AMD fans and prosumers won't mind too much. The price is only $200 more than two Fury Xs, but it will use up fewer slots and be less of a hassle to arrange cooling for. The late launch, however, is more problematic, as Polaris and Pascal are fast approaching. It remains to be seen if AMD's gamble will pay off.
Prepare your wallets for summer 2016, because both AMD and Nvidia are going to release their new GPUs then. Yesterday, we got the first hint about Nvidia's GTX 1080, which is reportedly launching May 27th. For AMD, the details for Polaris have always been a bit vague, with mid-2016 being the only hint. Today, a new rumour has popped up suggesting that AMD will launch Polaris in June 2016. Furthermore, AMD will be providing a sneak peek of Polaris at Capsaicin next week.
A June launch puts Polaris right into the timeframe of Computex and E3, perfect events to showcase the new GPUs. Launching at the same time as Nvidia also avoids certain issues, as AMD has gotten into trouble launching both before and after Nvidia, so maybe launching simultaneously will be the key. Set to be on the 14nm LPP process, AMD has a good chance to snag some market share away from Nvidia.
Next week, we may get a few more details from AMD about what Polaris will look like in the sneak peek. One can only hope it will be more than just a picture or another demo, and instead something more substantive. On March 14th, AMD's Capsaicin webcast from GDC will likely reveal the FuryX2 as well as showcase some of their VR developments. With AMD having recently hit their worst market share yet, they have started their comeback and can only go up. Hopefully, Polaris will deliver what is needed.
AMD has been much quicker of late with their driver updates. Just over a week ago, AMD released Radeon Software Crimson Edition 16.2.1, and they are now following it up with Edition 16.3, bringing more features and fixes. Befitting a full .x release, 16.3 updates not only the driver itself but also adds new features to Radeon Settings, AMD's Catalyst Control Center replacement.
First off are a number of resolved support and performance issues. The upcoming AAA release Hitman gets specific driver support and a Crossfire profile. This is expected, as the game is bundled with certain Radeon GPUs and AMD is involved in its development. The Park gains Crossfire support as well.
For performance improvements, AMD is focusing heavily on the Fury lineup, with performance jumping 16% and 60% in Rise of the Tomb Raider and Gears of War Ultimate Edition respectively. The only other notable performance boost is for the R9 380(X) series with a 40% improvement in Gears of War Ultimate Edition. The improvements in Gears of War Ultimate Edition will be welcome as the game has been performing erratically and we hope other cards see some love as well.
In terms of new features, Vulkan support has been added along with Per-Game Display Scaling, a Language Menu, and Two Display Eyefinity. Notably, an AMD Crossfire Status Indicator has been added along with a Power Efficiency Toggle to turn off some power-saving features. Furthermore, AMD XConnect technology, AMD's external-GPU-over-Thunderbolt 3 solution, has received preliminary support.
In terms of fixes, the most notable one is for choppy core clocks, which is resolved at the cost of turning off power efficiency. Rise of the Tomb Raider also won't crash anymore if tessellation is turned on. A number of known issues still remain, and hopefully we will see a driver to remedy those problems shortly. You can find the full release notes here and the driver here.
With GDC just a week away, everyone is getting ready for major announcements from AMD and Nvidia. AMD, however, will also be hosting their own separate live-streamed event at Ruby Skye in San Francisco. Named after capsaicin, the chemical behind a spicy pepper's kick, the event will showcase AMD's latest innovations in virtual reality and gaming. This means we may get a product reveal or two from the event, and perhaps more.
Given the recent focus on VR, it is very likely that the FuryX2 "Gemini" will finally be launched now that the VR HMDs are ready. This falls right in line with what has been revealed about the FuryX2 already being ready, and with the VR focus of the live stream.
In addition to that, we may finally get some more information about Polaris, though a launch may still be a long way off. March is still too early for Polaris to launch given the mid-2016 remarks, but more demo units, especially of higher-end Polaris, wouldn't be out of the question.
Finally, we can expect AMD to showcase their LiquidVR solution in partnership with the Oculus Rift or HTC Vive. That will likely come along with their other gaming and VR oriented solutions and DX12. The event will be streamed on AMD’s investor relations page so be sure to check it out when the time comes.
After a precipitous decline in dGPU market share over the past few quarters, AMD is starting to reverse the trend. In the past quarter, Q4 2015, AMD managed to increase their market share by 2.3 percentage points, a roughly 10% relative increase in their total share. While the number seems tiny, especially next to Nvidia's, any positive movement is good news for the beleaguered company. This comes even as the dGPU market itself shrank by 4.9%.
The whole of 2015 was pretty terrible for AMD, bringing some of their worst financials yet, with both the CPU and GPU divisions flagging. However, it looks like AMD has finally hit bottom and the changes they are implementing are starting to take hold. If AMD manages to keep it up, the dire predictions some analysts had of AMD doing even worse in Q1 2016 will likely not come to pass.
AMD still has a lot of work to do though, as overall market share is still depressed compared to 2014. With the worst behind them though, AMD can look forward to Polaris and Zen to drive new growth. After all, once you've hit rock bottom, there is only one way to go and that is up. A final interesting note is that Q4 actually saw lower shipments than Q3, a surprising twist given the holiday season falls in Q4. Maybe many are holding out for Polaris and Pascal?
AMD have been pushing hard to improve their software experience, as well as improving the frequency of graphic driver updates, helping them better compete with the relentless release of Nvidia’s Game Ready drivers. So far, their new system has been a big improvement and today is no exception, as AMD push the release of the latest Radeon Software Crimson Edition.
Version 16.2.1 is marked as a “non-WHQL” release, so basically a beta release. However, the driver comes with the CrossFireX profile for the latest AAA release, Far Cry Primal. On top of that big profile release, you can also expect some game specific bug fixes for Fallout 4 as well as Rise of the Tomb Raider.
FreeSync users have cause to celebrate too, we hope, as AMD is also including a bug fix for choppy display on systems that use both FreeSync and CrossFire at the same time, which I’m sure you can imagine is a frustrating issue for a system that’s meant to provide a smoother visual experience.
Radeon Software Crimson Edition 16.2.1 Highlights
Crossfire Profile available for
Far Cry Primal
Resolved Issues
A black screen/TDR error may be encountered when booting a system with Intel + AMD graphics and an HDMI monitor connected
Choppy gameplay may be experienced when both AMD Freesync and AMD Crossfire are both enabled
Display corruption may be observed after keeping system idle for some time
Fallout 4 – Flickering may be experienced at various game locations with the v1.3 game update and with AMD Crossfire enabled
Fallout 4 – Foliage/water may ripple/stutter when game is launched in High/Ultra settings mode
Fallout 4 – Screen tearing in systems with both AMD Freesync and AMD Crossfire enabled if game is left idle for a short period of time
Fallout 4 – Thumbnails may flicker or disappear while scrolling the Perk levels page
Far Cry 4 – Stuttering may be observed when launching the game with AMD Freesync and AMD Crossfire enabled
FRTC options are displayed on some unsupported laptop configurations with Intel CPU’s and AMD GPU’s
Radeon Settings may sometimes fail to launch with a "Context Creation Error" message
Rise of the Tomb Raider – Corruption can be observed at some locations during gameplay
Rise of the Tomb Raider – Flickering may be experienced at various game locations when the game is left idle in AMD Crossfire mode under Windows 7
Rise of the Tomb Raider – Game may intermittently crash or hang when launched with very high settings and AA is set to SMAA at 4K resolution
Rise of the Tomb Raider – Lara Croft’s hair may flicker in some locations if the Esc key is pressed
Rise of the Tomb Raider – A TDR error may be observed with some AMD Radeon 300 Series products after launching the "Geothermal Valley" mission
The AMD Overdrive memory clock slider does not show original clock values if memory speeds are overclocked
World of Warcraft runs extremely slowly in quad crossfire at high resolutions
Known Issues
A few game titles may fail to launch or crash if the Gaming Evolved overlay is enabled. A temporary workaround is to disable the AMD Gaming Evolved "In Game Overlay"
Star Wars: Battlefront – Corrupted ground textures may be observed in the Survival of Hoth mission
Cannot enable AMD Crossfire with some dual GPU AMD Radeon HD 59xx and HD 79xx series products
Fallout 4 – In game stutter may be experienced if the game is launched with AMD Crossfire enabled
XCOM 2 – Flickering textures may be experienced at various game locations
Rise of the Tomb Raider – The game may randomly crash on launch if Tessellation is enabled
Core clocks may not maintain sustained clock speeds resulting in choppy performance and or screen corruption
Another month in and AMD is releasing a new hotfix for their Crimson series of drivers. True to their word, the drivers are coming out quicker than ever and with more fixes for the latest games. This time around, the fixes are mostly for Rise of the Tomb Raider and Fallout 4, with those fixes making up nearly three-quarters of the changes.
Performance and quality improvements for
Ashes of the Singularity – Benchmark 2
Rise of the Tomb Raider
Crossfire Profiles available for
The Division
XCOM 2
In addition, Crossfire profiles have been added for The Division and XCOM 2, two of the most recent major titles. A number of major issues remain, however, and these will hopefully be fixed shortly. The most egregious are the two below: the first seems to impact pretty much every game and card, while the second renders the point of dual-GPU cards moot.
Core clocks may not maintain sustained clock speeds resulting in choppy performance and or screen corruption
Cannot enable AMD Crossfire with some dual GPU AMD Radeon HD 59xx and HD 79xx series products
You can grab the new hotfixes from the links below. For the complete details, be sure to check out AMD’s page here, and as always be sure to create a restore point just in case.