AMD’s upcoming CPU architecture, codenamed Zen, is set to offer a 40 percent IPC improvement over Excavator and could usher in a new era of competitiveness. If AMD can produce something which rivals Intel, it could instigate a pricing war and make enthusiast processors more affordable. Clearly, many consumers are eagerly anticipating Zen’s release, which makes the current range seem rather outdated. Despite this, there are some users who own AM3+ motherboards and don’t want to upgrade to an entirely new platform. On another note, consumers on a tight budget might feel these chips are a great option for an affordable HTPC.
Traditionally, bundled CPU coolers are fairly poor at thermal dissipation due to their small fin arrays and compact heatsink size. They can also be alarmingly loud under full load, which makes for a shoddy desktop experience. As a result, I always encourage people to invest in a third-party cooler, such as the Cooler Master Hyper 212 EVO. Interestingly, Intel decided to ditch the stock cooler altogether on Skylake K-series chips because they knew people would be opting for a better option, although this didn’t reduce the retail price at all! During CES this year, AMD displayed their new Wraith stock cooler, capable of lower thermals and significantly reduced noise compared to the previous model. The design evokes a premium feel and it looks rather nice.
Today, AMD announced that they will now bundle this new cooler with the FX 8350 and FX 6350. Previously, this was limited to the FX 8370 and A10-7890K. The FX 6350 will retail for $129.99 while the FX 8350’s price is set at $179.99. This is fantastic news for consumers wanting to order a new AMD processor right now. Of course, if you can wait, it would be advisable to do so given the upcoming Zen release.
Nearing the end of the cycle for their current-generation products, it’s not surprising to see poor financial results from AMD. Last year was a terrible one and it looks like 2016 won’t be much better, at least for Q1. For the first quarter of 2016, AMD posted a net loss of $109 million on operating revenue of $832 million. Still, this is better than 2015, which was arguably the company’s worst year ever.
AMD blames the revenue drop of 13% sequentially and 19% year-over-year on lower semi-custom sales. This is somewhat expected as the PS4 and Xbox One move through the middle of their lifecycles. The bright side is that Sony is set to release the PlayStation 4 Neo, and even the Xbox One will see new revisions if not a full upgrade. Combined with the Nintendo NX, those should help the semi-custom segment bounce back as consumers buy consoles again.
Even though margins improved slightly to 32% (Intel posts around 60%), an increase in expenses led to the loss. This is reportedly due to increased R&D for upcoming products, which to my mind means Vega/Navi and Zen+, since Zen and Polaris are all but set in stone by now. With Polaris 10 and Zen coming this year and even an Apple deal in the works, AMD has a good chance to turn things around, as long as they can execute and return to the black.
While we’ve pretty much confirmed that GP104 will replace the current Maxwell chips with the new GTX 1080 and 1070, things are less clear from AMD. We got some clarification yesterday from the release of a new roadmap that appeared to show Polaris 10 replacing current Fiji cards. With a new statement as part of their Q1 earnings release, AMD is shedding a bit more light on where they see Polaris 11 fitting in.
“AMD demonstrated its “Polaris” 10 and 11 next-generation GPUs, with Polaris 11 targeting the notebook market and “Polaris” 10 aimed at the mainstream desktop and high-end gaming notebook segment. “Polaris” architecture-based GPUs are expected to deliver a 2x performance per watt improvement over current generation products and are designed for intensive workloads including 4K video playback and virtual reality (VR).”
From the statement, we can see that Polaris 10 is meant for the mainstream desktop and high-end gaming notebook segments. To me, this suggests that Polaris 10 will be branded as the 480 and 480X, which has historically been the mainstream tier. With 2304 stream processors, this would make for a good 390X replacement, and once you consider the significant improvements GCN 4.0 brings, it would be competitive with Fury. Polaris 11 seems to be targeting the low-power notebook segment, likely as the x70/x70X parts which have historically been the top-end notebook cards.
If our speculation is correct, this means AMD is transitioning to a release schedule similar to Nvidia’s. The mainstream Polaris 10 chip would come first, offering a slight improvement over the current Fiji flagships. A few months later, in early 2017, Vega with HBM2 would arrive as a true upgrade over the Fury X. Starting off, it looks like GP104 and Polaris 10 will battle it out on fairly equal footing, so it will be interesting to see how it all plays out.
Ashes of the Singularity is a futuristic real-time strategy game offering frenetic contests on a large scale. The huge number of units scattered across varied environments creates an enthralling experience built around complex strategic decisions. Throughout the game, you will explore unique planets and engage in spectacular air battles. This bitter war between the human race and a masterful artificial intelligence revolves around an invaluable resource known as Turinium. If you’re into the RTS genre, Ashes of the Singularity should provide hours of entertainment. While the game itself is worthy of widespread media attention, the engine’s support for DirectX 12 and asynchronous compute has become a hot topic among hardware enthusiasts.
DirectX 12 is a low-level API with reduced CPU overhead and has the potential to revolutionise the way games are optimised for numerous hardware configurations. In contrast, DirectX 11 isn’t that efficient, and many mainstream titles suffered from poor scaling which didn’t properly utilise the potential of current graphics technology. On another note, DirectX 12 allows users to pair GPUs from competing vendors and run multi-GPU setups without relying on driver profiles. In theory, this makes widespread optimization and extra performance achievable with the latest version of DirectX.
Of course, Vulkan is an alternative which works across operating systems and adopts an open-source ideology. However, the focus will likely remain on DirectX 12 for the foreseeable future unless there’s a sudden reluctance from users to upgrade to Windows 10. Even though the adoption rate is impressive, there’s still a large number of PC gamers on Windows 7, 8 and 8.1. Therefore, it seems prudent for developers to continue with DirectX 11 and offer a DirectX 12 renderer as an optional extra. Arguably, the real gains from DirectX 12 will occur when its predecessor is disregarded completely. This will probably take a considerable amount of time, which suggests the first DirectX 12 games might show smaller performance benefits than later titles.
Asynchronous compute allows graphics cards to execute multiple workloads simultaneously and extract extra performance. AMD’s GCN architecture has extensive support for this technology. In contrast, there’s a heated debate questioning whether NVIDIA products can even utilise asynchronous compute in an effective manner. Technically, AMD GCN graphics cards contain 2-8 Asynchronous Compute Engines (ACEs), depending on the model, with 8 queues per engine and single-cycle latencies. Maxwell revolves around two pipelines, one designed for high-priority workloads and another with 31 queues. Most importantly, NVIDIA cards can only “switch contexts at draw call boundaries”. This means the switching process is slower and gives AMD a major advantage. NVIDIA dismissed the early performance numbers from Ashes of the Singularity due to its development phase at the time. Now that the game has exited the beta stage, we can determine the performance numbers after optimizations were completed.
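As a rough illustration of why overlapping workloads helps, here is a toy Python model of frame time with and without asynchronous compute. The millisecond figures are hypothetical placeholders, not measurements of any real GPU:

```python
# Toy model: why overlapping compute with graphics work improves utilization.
# Durations are hypothetical milliseconds, not real hardware measurements.

def serial_frame_time(graphics_ms, compute_ms):
    # Without async compute, the compute work waits for graphics to finish.
    return graphics_ms + compute_ms

def overlapped_frame_time(graphics_ms, compute_ms):
    # With async compute, both run concurrently; the frame takes as long as
    # the slower of the two (assuming perfect overlap, which real hardware
    # only approximates).
    return max(graphics_ms, compute_ms)

graphics, compute = 12.0, 4.0  # hypothetical per-frame workloads
print(serial_frame_time(graphics, compute))     # 16.0 ms
print(overlapped_frame_time(graphics, compute)) # 12.0 ms
```

The real-world gain depends on how well the two workloads actually overlap, which is exactly where AMD’s multiple hardware queues and NVIDIA’s draw-call-boundary context switching differ.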
One of the biggest concerns about Polaris 10 has been whether or not it will be a true replacement for Fury X. With the latest leaks out, most of the information points to about a 100W TDP with 2304 shaders and clock speeds around 1050MHz. Compared to Nvidia’s Pascal GP104, this doesn’t sound very competitive, leading to concerns that Nvidia would dominate the high end. With the release today of AMD’s more detailed roadmap, those concerns have been laid to rest.
The new official roadmap offers a bit more detail than the one AMD showed back at Capsaicin, particularly around Polaris 10 and 11, with the two chips set to replace the entire Fury and 300 series lineup. This means the top Polaris 10 chip will offer enough performance to at least match, if not exceed, Fury X, which should be competitive enough against GP104. If the 2304-shader report is true, AMD has truly revamped GCN 4.0 into something significantly superior to GCN 1.0 while cutting power consumption at the same time.
The layout for Polaris compared to the current lineup also suggests there will be no rebrands in the 400 series. It suggests that Polaris 10 will span roughly the 490X down to the 480, while Polaris 11 will fill in the 470X down to at least the 460. With how well the small, low-power Polaris 11 die has reportedly performed, rebrands don’t really make any sense. Finally, Vega will arrive in 2017 with HBM2, not in late 2016 as some had hoped.
With the improvements AMD has made, I am really looking forward to what Polaris and GCN 4.0 will bring to the graphics landscape.
After many fruitful years of partnership with Apple, AMD is reportedly continuing the relationship with their latest Polaris-based GPUs. Apple has alternated MacBook Pro GPU suppliers between Nvidia and AMD in the past but tended towards AMD for the Mac Pro. According to the source, the performance per watt of 14nm Polaris combined with the performance per dollar of the chips is what sold Apple.
AMD has long pursued a strategy of using smaller and more efficient chips to combat their biggest rival, Nvidia. Prior to GCN, AMD tended to have smaller flagships that sipped less power but had lesser compute abilities. This changed with GCN, where AMD focused more on compute while Nvidia did the opposite. This led to Nvidia topping the efficiency charts and, combined with their marketing, soaring in sales. If the rumours are true, Polaris 10 will be smaller than GP104, its main competitor.
With Polaris, AMD should be able to regain the efficiency advantage through both the move to 14nm and the new architecture. We may see Polaris-based Macs as soon as WWDC in June, just after the cards launch at Computex. In addition to a ‘superior’ product, AMD is also willing to cut their margins a bit more in order to secure a sale, as we saw with the current-gen consoles. If AMD plays their cards well, we may see Zen Macs too.
When it comes to updating your BIOS, most users would probably be thinking about their motherboards. However, graphics cards also have their own video BIOS (VBIOS), which interfaces between the system BIOS and the graphics card hardware. A new VBIOS can add support for UEFI, new speed and power profiles, as well as improved stability. Today, AMD released an updated BIOS for their R9 Nano and R9 Fury X graphics cards.
According to AMD, the new BIOS is meant to improve UEFI BIOS support. Normally, you would see AMD’s AIB partners release new updates for their specific card models. However, in the case of the Nano and Fury X, these are reference-design Fiji-based cards. We may see the Fury cards, which are all custom, get their own BIOS updates soon.
In addition to the UEFI support, some users are reporting that overclocking stability has improved. The Fury X was not particularly well-liked due to its lacklustre overclocking abilities, something this BIOS may fix. It also suggests that the Radeon Pro Duo may overclock better than the original Fury X.
To update your graphics card, you can download the new BIOS from AMD’s website here. AMD has chosen to release the updates as .rom files, which makes for a more involved flashing process. The usual cautions about flashing your BIOS apply, of course.
Battered on both the CPU and GPU fronts, consoles have been one of the few areas where AMD has managed to outplay the competition. With competitive CPU and GPU architectures in one platform, AMD was able to secure Nintendo, Sony, and Microsoft’s current-gen consoles. Nintendo is also set to continue using AMD chips for the Nintendo NX, and that device will reportedly use a 14nm Polaris-like GPU.
From previous rumours, we’ve already learned that the Nintendo NX will use an x86 architecture chip paired with at least 6-8GB of DDR4. What’s more, the new console will also feature 4K support via upscaling, plus streaming media and likely playback as well. To wrap it all up, AMD is reportedly supplying Nintendo with a 14nm Polaris-like GPU for the upcoming console. This is similar to how the PS4 and Xbox One used GPUs that were a merger of GCN 1.1 and 1.2; the Nintendo NX may use something beyond the GCN 4 that is Polaris. The OS will also use Vulkan as its graphics API.
With a strong Polaris chip on 14nm, Nintendo will have a chance at seizing the performance crown for once. Nintendo consoles have generally proven weaker and have suffered from lesser third-party support as a result. With 4K support, the NX may well match the PS4K and the rumoured replacement for the Xbox One. Hopefully, we will finally get 1080p 60FPS with decent graphics on consoles soon enough.
In just under 10 days, users will finally be able to purchase their very own dual-Fiji GPU. At launch, the Radeon Pro Duo will come in at a lofty $1,499 USD, but given exchange rates, those in Spain will have to shell out 1,696 EUR. In addition to some local pricing information, we’re also being treated to some very nice pictures and more detailed physical specifications for the top-end Radeon.
First off, the clock speed of the dual Fiji GPUs has been confirmed as 1000MHz. This is slightly lower than the Fury X, which runs at 1050MHz, but the removal of PCIe latency between the two GPUs should offset this. Memory stays at the standard 500MHz, though overclocking that shouldn’t be hard. Exact dimensions are 28.1 x 11.6 x 4.2 cm (length, width, height), with a 120mm radiator as well. No word yet on the length of the tubing.
The biggest surprise is the display output, which AMD told us was 4 DisplayPorts. We’re finding out now that it’s actually 3 DisplayPort 1.2 connectors and 1 HDMI 1.4a. Perhaps AMD meant display outputs rather than DisplayPorts. Either way, it remains to be seen how well the card will sell given the hefty price tag and how close the Pascal and Polaris launches are. Even against a strong showing from the next-gen cards, though, the Radeon Pro Duo may remain the fastest single-card solution.
Back at E3 2015, nearly a year ago, AMD showed off their Project Quantum PC featuring 2 Fiji GPUs in a tiny form factor. Ironically, the featured AMD device used an Intel CPU instead of an AMD one, and ended up using a single Fury chip instead of the dual Fiji we have come to know as the Radeon Pro Duo. Between that and supply issues, we likely won’t see Project Quantum for a while. According to Diit though, when it does arrive, it will use AMD’s own Zen CPU and new Vega GPUs.
The main reason AMD chose an Intel CPU was simple: AMD CPUs were not up to snuff, and with Project Quantum aimed at being the best, it required a top-end CPU, one from Intel. With Zen set to debut later this year though, AMD has a chance to showcase the potential of their chip, showing that it is capable of driving the fastest graphics cards out there without holding anything back.
On the graphics side, the delay on the CPU side means Vega, the full-on Fiji replacement with HBM2, will have a chance to appear in Project Quantum. Vega should have no trouble beating the Fury X and potentially even the Radeon Pro Duo. By delaying, AMD also reaps the benefits of moving the entire system to 14nm FinFETs, finally making a true VR PC for those that want the best.
AMD has a serious image problem with their drivers, one which stems from buggy, unrefined updates and a slow release schedule. Even though this perception began many years ago, it’s still impacting the company’s sales and explains why their market share is so small. The Q4 2015 results from Jon Peddie Research suggest AMD reached a market share of 21.1% while NVIDIA reigned supreme with 78.8%. Still, the Q4 data is more promising, because AMD accounted for a mere 18.8% during the previous quarter. On the other hand, respected industry journal DigiTimes reports that AMD is likely to reach its lowest-ever market position for Q1 2016. Thankfully, the financial results will emerge on April 21st, so we should know the full picture relatively soon. Of course, the situation should improve once Polaris and Zen reach retail channels. Meanwhile, AMD’s share price has declined by more than 67% in five years, from $9 to under $3 as of March 28, 2016. The question is why?
Is the Hardware Competitive?
The current situation is rather baffling considering AMD’s extremely competitive product line-up in the graphics segment. For example, the R9 390 is a superb alternative to NVIDIA’s GTX 970 and features 8GB of VRAM, which provides extra headroom when using virtual reality equipment. The company’s strategy appears to revolve around minor differences in performance between the R9 390 and 390X. This also applied to the R9 290 and 290X, due to both products utilizing the Hawaii core. NVIDIA employs a similar tactic with the GTX 970 and GTX 980, but there’s a marked price increase compared to their rivals.
NVIDIA’s ability to cater to the lower-tier demographic has been quite poor, because competing GPUs such as the 7850 and R9 380X provided a much better price-to-performance ratio. Not only that, NVIDIA’s decision to deploy ridiculously low video memory amounts on cards like the GTX 960 has the potential to cause headaches in the future. It’s important to remember that the GTX 960 can be acquired with either 2GB or 4GB of video memory. Honestly, they should have simplified the line-up and produced only the higher-memory model, in a similar fashion to the R9 380X. Once again, AMD continues to offer a very generous amount of VRAM across various product tiers.
Part of the problem revolves around AMD’s sluggish release cycle and reliance on the Graphics Core Next (GCN) 1.1 architecture, first introduced way back in 2013 with the Radeon HD 7790. Despite its age, AMD deployed the GCN 1.1 architecture on their revised 390 series and didn’t do themselves any favours when denying accusations that the new line-up was a basic re-branding exercise. Of course, this proved to be the case, and some users managed to flash their 290/290X into a 390/390X with a BIOS update. There’s nothing inherently wrong with product rebrands if they remain competitive in the current market; it’s not exclusive to AMD, and NVIDIA has used similar business strategies on numerous occasions. However, I feel it’s up to AMD to push graphics technology forward and force their nearest rival to launch more powerful options.
Another criticism of AMD hardware, one which seems to plague everything they release, is the perception that every GPU runs extremely hot. You only have to look at certain websites, social media and various forums to see this is a main source of people’s frustration. Some individuals are even known to produce images showing AMD graphics cards set ablaze. So is there any truth to these suggestions? Unfortunately, the answer is yes, and a pertinent example comes from the R9 290 range. The 290/290X reference models utilized one of the most inefficient cooler designs I’ve ever seen and struggled to keep the GPU core running below 95C under load.
Unbelievably, the core was designed to run at these high thermals, and AMD created a more gradual RPM curve to reduce noise. As a result, the GPU could take 10-15 minutes to return to idle temperature levels. The Hawaii temperatures really impacted the company’s reputation and forged a viewpoint among consumers which I highly doubt will ever disappear. It’s a shame, because the upcoming Polaris architecture built on the 14nm FinFET process should exhibit significant efficiency gains and end the association of AMD products with high thermals. There’s also the idea that AMD GPUs have a noticeably higher TDP than their NVIDIA counterparts. For instance, the R9 390 has a TDP of 275 watts while the GTX 970 is rated at just 145 watts. Even at the high end, the Fury X is rated at 275 watts compared to the GTX 980 Ti’s 250 watts.
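To put the TDP gap in perspective, here is a quick back-of-the-envelope sketch in Python. The 60 FPS figure is a made-up placeholder; only the TDP ratings come from the cards’ specifications cited above:

```python
# Performance per watt = frame rate / board power.
# The 60 FPS figure is hypothetical; the TDPs are the ratings cited in
# the text (R9 390: 275W, GTX 970: 145W).

def perf_per_watt(fps, tdp_watts):
    return fps / tdp_watts

amd = perf_per_watt(60.0, 275.0)     # R9 390
nvidia = perf_per_watt(60.0, 145.0)  # GTX 970
# At identical performance, the TDP gap alone gives the GTX 970 roughly
# 1.9x the performance per watt.
print(round(nvidia / amd, 2))  # 1.9
```

In practice the two cards do not deliver identical frame rates in every title, but the ratio shows why efficiency became such a talking point.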
Eventually, AMD released a brand-new range of graphics cards utilizing the first iteration of High Bandwidth Memory. Prior to its release, expectations were high and many people expected the Fury X to dethrone NVIDIA’s flagship graphics card. Unfortunately, this didn’t come to fruition and the Fury X fell behind in various benchmarks, although it fared better at high resolutions. The GPU also encountered supply problems, and early samples emitted a loud whine from the pump. Asetek even threatened to sue Cooler Master, which created the AIO design, and the dispute could force all Fury X products to be removed from sale.
The rankings alter rather dramatically when the DirectX 12 renderer is used, which suggests AMD products have a clear advantage. Asynchronous compute is the hot topic right now, and in theory it allows for greater GPU utilization in supported games. Ashes of the Singularity has implemented it for some time and makes for some very interesting findings. Currently, we’re working on a performance analysis for the game, but I can reveal that there is a huge boost for AMD cards when moving from DirectX 11 to DirectX 12. Furthermore, there are reports indicating that Pascal might not be able to use asynchronous shaders, which makes Polaris and Fiji products more appealing.
Do AMD GPUs Lack Essential Hardware Features?
When selecting graphics hardware, it’s not always about pure performance; some consumers take exclusive technologies such as TressFX hair into account before purchasing. At this time, AMD’s latest products incorporate LiquidVR, FreeSync, Vulkan support, HD3D, Frame Rate Target Control, TrueAudio, Virtual Super Resolution and more! This is a great selection of hardware features to create a thoroughly enjoyable user experience. NVIDIA adopts a more secretive attitude towards their own creations and often uses proprietary solutions. The Maxwell architecture has support for Voxel Global Illumination (VXGI), Multi-Frame Sampled Anti-Aliasing (MFAA), Dynamic Super Resolution (DSR), VR Direct and G-Sync. There’s a huge debate about the benefits of G-Sync compared to FreeSync, especially when you take into account the pricing difference when opting for a new monitor. Overall, I’d argue that the NVIDIA package is better, but there’s nothing really lacking from AMD in this department.
Have The Drivers Improved?
Historically, AMD drivers haven’t been anywhere close to NVIDIA’s in terms of stability and providing a pleasant user interface. Back in the old days, AMD, or ATI if we’re going way back, had the potential to cause system lock-ups, software errors and more. A few years ago, I had the misfortune of updating a 7850 to the latest driver and, after rebooting, the system’s boot order was corrupt. To be fair, this could be coincidental and have nothing to do with that particular update. On another note, the 290 series was plagued with hardware bugs causing black screens and blue screens of death whilst watching Flash videos. To resolve this, you had to disable hardware acceleration and hope that the issues subsided.
The Catalyst Control Center always felt a bit primitive for my tastes although it did implement some neat features such as graphics card overclocking. While it’s easy enough to download a third-party program like MSI Afterburner, some users might prefer to install fewer programs and use the official driver instead.
Not so long ago, AMD appeared to have stalled in releasing drivers optimized for the latest games. On 9th December 2014, AMD unveiled the Catalyst 14.12 Omega WHQL driver and made it available for download. In a move which still astounds me, the company then decided not to release another WHQL driver for six months! Granted, they were working on a huge driver redesign and still produced the odd beta update. I honestly believe this was very damaging and prevented high-end users from considering the 295X2 or a CrossFire configuration. It’s so important to have a consistent, solid software framework behind the hardware to allow for constant improvements. This is especially the case when using multiple cards, which require profiles to achieve proficient GPU scaling.
Crimson’s release was a major turning point for AMD due to the modernized interface and enhanced stability. According to AMD, the software package involves 25 percent more manual test cases and 100 percent more automated test cases compared to AMD Catalyst Omega. Also, the most-reported bugs were resolved, and they’re using community feedback to quickly apply new fixes. The company hired a dedicated team to reproduce errors, which is the first step to providing a more stable experience. Crimson apparently loads ten times faster than its predecessor and includes a new game manager to optimize settings to suit your hardware. It’s possible to set custom resolutions, including the refresh rate, which is handy when overclocking your monitor. The clean uninstall utility proactively works to remove any remaining elements of a previous installation, such as registry entries, audio files and much more. Honestly, this is such a revolutionary move forward, and AMD deserves credit for tackling their weakest elements head-on. If you’d like to learn more about Crimson’s functionality, please visit this page.
However, it’s far from perfect, and some users initially experienced worse performance with this update. Of course, there are going to be teething problems whenever a new release occurs, but it’s essential for AMD to do everything they can to forge a new reputation for their drivers. Some of you might remember the furore surrounding the Crimson fan bug, which limited the GPU’s fans to 20 percent. Some users even reported that this caused their GPU to overheat and fail. Thankfully, AMD released a fix for this issue, but it shouldn’t have occurred in the first place. Once again, it hurt their reputation and ability to move on from old preconceptions.
Is GeForce Experience Significantly Better?
In recent times, NVIDIA drivers have been the source of some negative publicity. More specifically, users were advised to ignore the 364.47 WHQL driver and instructed to download the 364.51 beta instead. One user said:
“Driver crashed my windows and going into safe mode I was not able to uninstall and rolling back windows would not work either. I ended up wiping my system to a fresh install of windows. Not very happy here.”
NVIDIA’s Sean Pelletier released a statement at the time which reads:
“An installation issue was found within the 364.47 WHQL driver we posted Monday. That issue was resolved with a new driver (364.51) launched Tuesday. Since we were not able to get WHQL-certification right away, we posted the driver as a Beta.
GeForce Experience has an option to either show WHQL-only drivers or to show all drivers (including Beta). Since 364.51 is currently a Beta, gamers who have GeForce Experience configured to only show WHQL Game Ready drivers will not currently see 364.51
We are expecting the WHQL-certified package for the 364.51 Game Ready driver within the next 24hrs and will replace the Beta version with the WHQL version accordingly. As expected, the WHQL-certified version of 364.51 will show up for all gamers with GeForce Experience.”
As you can see, NVIDIA isn’t immune to driver delivery issues, and this was a fairly embarrassing situation. Despite this, it didn’t appear to have a serious effect on people’s confidence in the company or make them reconsider their views of AMD. While there are some disgruntled NVIDIA customers, they’re fairly loyal and distrustful of AMD’s ability to offer better drivers. The GeForce Experience software contains a wide range of fantastic inclusions such as ShadowPlay, GameStream, Game Optimization and more. After a driver update, however, the software can feel a bit unresponsive and takes some time to close. Furthermore, some people dislike the notion of Game Ready drivers being locked into the GeForce Experience software. If a report from PC World is correct, consumers might have to supply an e-mail address just to update their drivers through the application.
Before coming to a conclusion, I want to reiterate that my allegiances don’t lie with either company, and the intention was to create a balanced viewpoint. I believe AMD’s previous failures are impacting the company’s current product range, and it’s extremely difficult to shift people’s perceptions of the company’s drivers. While Crimson is much better than CCC, it was also the cause of a horrendous fan bug, resulting in a PR disaster for AMD.
On balance, it’s clear AMD’s decision to separate the Radeon group from the CPU line was the right thing to do. Also, with Polaris around the corner and more games utilizing DirectX 12, AMD could improve their market share substantially. Although, from my experience, many users are prepared to deal with slightly worse performance just to invest in an NVIDIA product. Therefore, AMD has to encourage long-term NVIDIA fans to switch with reliable driver updates on a consistent basis. AMD products are not lacking in features or power; it’s all about the drivers! NVIDIA will always counteract AMD releases with products exhibiting similar performance numbers. In my personal opinion, AMD drivers are now on par with NVIDIA’s, and it’s a shame that they appear to be receiving unwarranted criticism. Don’t get me wrong, the fan bug was simply inexcusable and is going to haunt AMD for some time. I predict that despite the company’s best efforts, the stereotypical view of AMD drivers will not subside. This is a crying shame, because they are trying to improve things and release updates on a significantly lower budget than their rivals.
Polaris 10 and 11 have long been tagged for release at Computex later this year. As we know from AMD directly, Polaris 10 will be the flagship chip while Polaris 11 will fill in the gap below. Previously, the expectation was that Polaris 10 would do battle against GP104/GTX 1080 when that card launched. Now it seems the card won’t be as high-performing as we’d come to expect.
According to the source, Polaris 10 won’t be the R9 490 and 490X we’ve come to expect as the GP104 challenger. Instead, the approximately 2304-core GPU (up to 2560) will be branded as the R9 480 or 480X. This is largely based on the reported clock speeds of 800-1050MHz and a TDP of 110-135W. It’s hard to see how a roughly 125W GPU will match the approximately 250W GP104 that Nvidia will launch. Polaris 11 has also had its TDP leaked at 50W, which is actually a bit higher than expected.
There is still some hope, though, as this information reportedly dates from last month and has only now leaked out. This means AMD could have tweaked the TDP and clock speeds higher since then, perhaps to around 1200MHz and a 150W+ TDP. AMD has also introduced massive tweaks to GCN to achieve greater efficiency, along with the move to 14nm. Nvidia may also have chosen to reintroduce FP64 compute units to Pascal GeForce, which could take as much as 30% of the TDP, putting the GP104 at a real 200W worth of gaming performance. Either way, the battle between AMD and Nvidia will be heating up at Computex.
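The plausibility of such a tweak can be sanity-checked with a first-order power model. This is a crude sketch under the textbook assumption that dynamic power scales with V²f (and that voltage rises roughly linearly when clocks are pushed), not a claim about Polaris specifically:

```python
# Crude dynamic-power model: P ~ C * V^2 * f.
# If voltage rises linearly with clock, power grows with the cube of the
# clock ratio; at fixed voltage it grows roughly linearly. Illustrative only.

def scaled_power(base_watts, base_mhz, target_mhz, voltage_rises=True):
    ratio = target_mhz / base_mhz
    return base_watts * (ratio ** 3 if voltage_rises else ratio)

# Leaked figures: ~125W at ~1050MHz. A bump to 1200MHz would land around
# 187W if voltage must rise, or ~143W at unchanged voltage, which makes
# the rumoured 150W+ target look plausible.
print(round(scaled_power(125.0, 1050.0, 1200.0), 1))
print(round(scaled_power(125.0, 1050.0, 1200.0, voltage_rises=False), 1))
```

Real silicon sits somewhere between the two curves, depending on how much voltage headroom the process leaves at the leaked clocks.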
With just under two weeks to go, pictures of AMD’s Radeon Pro Duo have started popping up. Dubbed the fastest graphics card ever, the Pro Duo is reportedly launching on April 26th later this month. While AMD did show the card off at their Capsaicin event last month, we never really got a glimpse of the card itself, only renders. With pictures out, we can see the card in its true glory, along with the nice souvenir AMD bundled in.

First, the design as expected follows the Fury X design paradigm. The card looks really nice and has the thick Cooler Master radiator we’ve come to expect. The tubing is also nicely braided. The water blocks underneath have been redesigned, likely to get around Asetek’s patents. The box takes on the new branding for AMD’s graphics division, Radeon Technologies Group, as well. Finally, we see the Fiji die that has been bundled along as a souvenir. This is a nice way for AMD to add value through a chip that likely failed to pass certification; it would make a very nice keychain or paperweight.

With cards already shipped out, it looks like AMD will meet their April 26th deadline. Even then, the card launches awfully close to Pascal and Polaris, due just a month later. It will be interesting to see how many users end up picking up a card. The Radeon Pro Duo will likely remain the fastest single-card solution until Vega or GP100 launch in 2017.
Some of the first cards utilizing Nvidia’s all-new “Pascal” architecture may debut at Computex 2016. The show takes place in late May/early June in Taipei and is one of the biggest ICT shows in the world, and you can be sure the eTeknix team will be there to catch the latest news from the event!
Mass shipments should start sometime in July, according to Digitimes, the Taiwan-based industry observer. Nvidia is expected to unveil the new cards via its add-in card (AIC) partners, with large manufacturers such as ASUS, MSI, and GIGABYTE attending the event.
The new GPU will be based on the GP104 chip and utilize GDDR5X VRAM; a whopping 8GB is rumoured to be the amount. The leaked specs show it having a single eight-pin power connector, meaning that (due to electrical capacity) the maximum power draw would be 225W, though it could use a lot less. The GTX 980 is only 165W, so this card can’t draw a huge amount more. The leaked specs also suggest it could feature up to 6144 CUDA cores and a whopping 12.6 teraflops. We’re not sure how accurate these specs are as they have been sourced from various places; only time will tell. Either way, Computex 2016 is going to be huge this year.
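That 225W ceiling comes straight from the PCI Express power budget: the x16 slot supplies up to 75W, a six-pin connector is rated for 75W, and an eight-pin for 150W. A quick sanity check of the arithmetic:

```python
# PCIe power budget: slot limit plus auxiliary connector ratings
PCIE_SLOT_W = 75                          # PCIe x16 slot limit
CONNECTOR_W = {"6pin": 75, "8pin": 150}   # auxiliary connector ratings

def max_board_power(connectors):
    """Maximum in-spec board power for a given connector loadout."""
    return PCIE_SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(max_board_power(["8pin"]))          # 225 - the rumoured GP104 card
print(max_board_power(["6pin", "8pin"]))  # 300 - typical high-end cards
```

Actual draw is usually well below this ceiling, so a single eight-pin tells us the upper bound rather than the real-world TDP.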
So far, we can accurately say:
2x performance per watt estimated improvement over Maxwell
DirectX 12_1 or higher
Successor to the GM200 GPU in the 980 Ti
Built on 16nm manufacturing process
It will be interesting to see the Polaris release too, as there is going to be some very tough competition on the GPU market shortly and that’s obviously great news for consumers.
Which cards are you most excited about this year, AMD’s or Nvidia’s latest? Let us know in the comments section below.
As always, most of the focus on Polaris has been on the top-end chip. This has meant that much of the talk has been focused on Polaris 10, the R9 390X/Fury replacement. Today, though, we’ve been treated to a leak of the mainstream Polaris chip, Polaris 11. Based on a CompuBench leak, we’re now getting a clearer picture of what Polaris 11 will look like as the Pitcairn replacement.
The specific Polaris 11 chip spotted features a total of 16 CUs, for 1024 GCN 4.0 Stream Processors. This puts it right where the 7850/R7 370 is right now. Given the efficiency gains from the move to GCN 4.0, though, performance should fall near the 7870 XT or R9 280. The move to 14nm FinFET also means the chip will be much smaller than Pitcairn currently is. Of course, this information is only for the 67FF SKU, so there may be a smaller or, more likely, a larger Polaris 11 in the works.
Other specifications have also been leaked, with a 1000MHz core clock speed. Memory speed came in at 7000MHz effective, with 4GB of VRAM over a 128-bit bus. This gives 112GB/s of bandwidth, which is a tad higher than the R7 370 before you consider the addition of delta colour compression technology. GCN 4.0 will also bring a number of other improvements to the rest of the GPU, most importantly FreeSync support, something Pitcairn lacks.
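The 112GB/s figure is easy to verify from the leaked numbers: bandwidth is simply the bus width in bytes multiplied by the effective transfer rate.

```python
# Memory bandwidth = bus width (bytes) x effective transfer rate
def bandwidth_gbs(bus_bits, effective_mhz):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

# Leaked Polaris 11 configuration: 128-bit bus, 7000MHz effective
print(bandwidth_gbs(128, 7000))  # 112.0
```

Delta colour compression then stretches that raw figure further, since compressed framebuffer traffic consumes less of the physical bandwidth.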
While we can’t guarantee the same SKU was used, Polaris 11 was the GPU AMD pitted against the GTX 950 back at CES. During the Star Wars Battlefront benchmark, the AMD system drew only 84W compared to the 140W pulled by the Nvidia system. For the many gamers who buy budget and mainstream cards, Polaris 11 is shaping up very well.
Nintendo’s upcoming console, codenamed ‘NX’, has been the subject of numerous rumours from multiple sources who apparently know confidential details about the system’s key features. Reports have suggested Nintendo will adopt the X86 architecture and be a handheld/home console hybrid. Other leaks indicate it might even use the latest version of Android. The latest rumour was originally posted on Reddit but the author has since decided to remove the post. Thankfully, MyNintendoNews managed to save the details which are listed below:
The retail name for the NX is unknown to developers (or they are holding back). I’ve asked multiple sources.
I know of at least 1 third-party Wii U game that has/have been successfully ported to NX.
Amiibo are still supported (if you hadn’t already guessed).
Friend codes are still a thing (unfortunately).
I don’t know when the NX will be announced. Speculation is this month.
There are multiple “gimmicks” with the NX, one is optional.
There are physical dev-kits out in the wild, I don’t have access to these.
I don’t know the model of the GPU, however there is little doubt (from what I’ve been told) that it is an AMD.
The NX is capable of outputting 4k. Consensus is upscaling and streaming.
DDR4 Memory (between 6GB – 8GB). EDIT: Available to software.
Apparently, the NX will support 4K, but games will not natively run at this resolution. That’s not overly surprising given the graphical horsepower required to push so many pixels. Honestly, I’m really concerned by the claim of multiple “gimmicks” because history shows similar measures proved unpopular. While the Wii U GamePad is a novel idea, I found it uncomfortable for prolonged periods. On another note, the Wii Remote didn’t really add anything to core games and felt extremely unnecessary. I think it’s important for Nintendo to innovate, but to create something which doesn’t rely on gimmicks.
Of course, this is from an unverified source so it’s vital to take these claims with a pinch of salt.
AMD’s answer to the Titan lineup, the Radeon Pro Duo, was first revealed last month at AMD’s Capsaicin event. Walking a fine line between the Radeon and FirePro lineups, the new graphics card combines two of AMD’s top-end Fiji GPUs. According to VideoCardz, we may see the first Radeon Pro Duos out in the wild sooner than expected: the card will launch in just a couple of weeks, on April 29th.
The Radeon Pro Duo features a pair of 28nm Fiji GPUs, each with 4,096 stream processors, 256 TMUs, 64 ROPs, and 4GB of 4096-bit HBM memory. This means a total of 8,192 stream processors, 512 TMUs, 128 ROPs and 8GB of HBM1. While the price is a hefty $1,499, you do get a very nice custom Cooler Master water cooler with it. Peak performance is a high 16 TFLOPS, which is still 4.4 TFLOPS more than Nvidia’s Tesla P100.
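The 16 TFLOPS headline number follows from the standard peak FP32 formula: each stream processor retires two FLOPs per clock (a fused multiply-add). AMD didn’t confirm the Pro Duo’s clocks at the time, so assuming roughly 1000MHz per Fiji GPU:

```python
# Peak FP32 throughput: stream processors x 2 FLOPs/clock x clock speed
def peak_tflops(stream_processors, clock_mhz):
    """Theoretical peak single-precision TFLOPS."""
    return stream_processors * 2 * clock_mhz * 1e6 / 1e12

# Dual Fiji: 8192 total stream processors at an assumed ~1000MHz
print(peak_tflops(8192, 1000))  # 16.384
```

That is a theoretical ceiling; real games rarely sustain peak FMA throughput, and on a dual-GPU card it also assumes CrossFire scales perfectly.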
From AMD’s internal benchmarks of 3DMark, the Radeon Pro Duo should smash any other card on the market by a significant margin. Games, however, tend to be more fickle and the Radeon Pro Duo does rely on CrossFire for much of its performance. Given the many issues plaguing SLI and CrossFire this year, it will be interesting to see real world performance once the card becomes available.
AMD’s upcoming graphics architecture, codenamed ‘Polaris’, will be the company’s first product utilizing the 14nm FinFET manufacturing process. During CES 2016, AMD showcased its greatly improved performance per watt compared to the current 28nm NVIDIA GTX 950. As a result, the upcoming R9 490X and R9 490 will not be another rebranding exercise and should offer significant performance gains. It’s still unclear how AMD will position these products in comparison to the Fury X, Fury and Nano line-up. In theory, the 490X and 490 could be faster than the Fury X, which would then be repositioned to compete with the 480. Personally, I’m really not sure, and it’s clear that AMD is really pushing the performance-per-watt benefits of Polaris 10. To me, that showcases their focus and suggests the main advantages will revolve around TDP.
According to Hardware Battle, and discovered by VideoCardz, the 490X and 490 will apparently launch in June. It looks increasingly likely that AMD will unveil their latest range at Computex, with products reaching retailers shortly afterwards. Hardware Battle is a reliable source known to have good connections with AMD. While this doesn’t prove the information is correct, it corresponds with earlier suggestions that AMD was planning a launch during Q2 2016.
On another note, the performance numbers Polaris is capable of should provide an indication of the improvements we can expect on NVIDIA’s GTX 1000 series. Whatever the case, this is an exciting time for the graphics card world, even though the huge strides forward will occur during the next architecture. It’s still unclear when NVIDIA will launch their consumer HBM2 graphics cards. Rumors suggest consumer Pascal might not happen until next year.
Personally, I’m just excited to see the industry move away from 28nm graphics cards to instigate a brand new era of hardware advancements.
Asynchronous Compute has been one of the headline features of DX12. Pioneered by AMD in their GCN architecture, Async Compute allows a GPU to handle both graphics and compute tasks at the same time, making the best use of its resources. Some titles, such as Ashes of the Singularity, have posted massive gains from this, and even titles with a DX11 heritage stand to see decent gains. In an update to Async Compute, AMD has added Quick Response Queue support to GCN 1.1 and later.
One of the problems with Async Compute is that it is relatively simple: it only allows graphics and compute tasks to run at the same time on the shaders. Unfortunately, Async Compute as it stands will prioritize graphics tasks, meaning compute tasks only get the leftover resources. There is therefore no guarantee of when a compute task will finish, as it depends on the graphics workload. Quick Response Queue solves this by merging preemption (where the graphics workload is stopped entirely) with Async Compute.
With Quick Response Queue, tasks can be given special priority to ensure they complete on time, while the graphics task continues to run, albeit with reduced resources. By providing more precise and dependable control, this allows developers to make better use of Async Compute, especially for latency- or frame-sensitive compute tasks. Going forward, we may see greater gains from Async in games as AMD allows more types of compute workloads to be optimized. Hopefully, this feature will reach GCN 1.0 cards, but that depends on whether the hardware is capable of it.
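The difference between plain Async Compute and the Quick Response Queue can be sketched as a toy resource allocator. This is not AMD’s implementation, just an illustration of the scheduling idea: with no reservation, graphics takes everything it asks for and compute gets the leftovers; with a reserved slice, the compute task is guaranteed progress and graphics shrinks instead.

```python
# Toy model of shader allocation per scheduling window (illustrative only)
def schedule(total_units, graphics_demand, compute_demand, compute_reserved=0):
    """compute_reserved=0 models plain Async Compute (graphics first);
    compute_reserved>0 models a Quick Response Queue guarantee."""
    compute = min(compute_demand, compute_reserved)           # guaranteed slice
    graphics = min(graphics_demand, total_units - compute)    # graphics fills the rest
    # compute also soaks up any units graphics leaves idle
    compute += min(compute_demand - compute, total_units - graphics - compute)
    return graphics, compute

# Plain Async Compute: a heavy graphics load starves the compute task
print(schedule(64, graphics_demand=64, compute_demand=16))                       # (64, 0)
# Quick Response Queue: compute gets its slice, graphics runs with reduced resources
print(schedule(64, graphics_demand=64, compute_demand=16, compute_reserved=16))  # (48, 16)
```

The second call shows why the feature matters for latency-sensitive work such as VR timewarp: the compute task’s completion no longer depends on how busy the graphics queue happens to be.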
After launching Excavator with Carrizo last year, we’re getting the next iteration based on the same architecture. Dubbed Bristol Ridge, the new lineup features an improved DDR4 memory controller among other things. Today, we’re only getting the notebook side of the launch, with the desktop chips and platform to launch later in the year. AMD is touting some major gains over their Kaveri Steamroller APUs launched in 2014. The reason for the pre-announcement is HP outing Bristol Ridge with their new Envy x360 notebook at GTC.
According to AMD, Bristol Ridge improves x86 performance by nearly 50% over Kaveri/Steamroller. This is pretty good given that Excavator actually features less L2 cache. Even compared to Carrizo, Bristol Ridge manages to post a 10% improvement thanks to the DDR4 memory controller. AMD’s IMC performance has generally been good but not amazing, and hopefully there will be even more improvements in Zen’s DDR4 controller.
On the graphics side, there are significant gains, up to 18% in some cases, even with the iGPU portion staying constant. This is probably all down to the use of DDR4, which is a good improvement over DDR3. As we all know, AMD’s APUs are highly reliant on memory bandwidth to feed the iGPU and CPU portions at the same time. With good memory, the APUs can see massive gains in gaming performance. Hopefully, we’ll get more benchmarks that aren’t dubious leaks.
Even though the PlayStation 4 and Xbox One are less than three years old, there are numerous reports which indicate both console manufacturers could release refreshed models with a higher specification. This is unprecedented for the console industry, as the usual business strategy revolves around long hardware lifecycles and encouraging developers to eke out every last drop of performance through optimization. Saying that, the situation in 2016 is quite different due to the rise of mobile and impressive profitability on the PC platform. Furthermore, the current consoles are technically outdated and struggle to maintain 30 frames-per-second at 1920×1080. There have even been instances where the resolution has been reduced to 720p on the Xbox One, which really does showcase the performance problems.
Reportedly, Sony is creating a much more powerful version of the PlayStation 4 to coincide with the launch of PlayStation VR. Virtual reality requires a great deal of processing power, and the current model will struggle to offer a pleasant user experience. According to information acquired by Go Nintendo, AMD’s earnings call predicts future hardware launches this year and reads:
“In our EESC segment, we had record shipments of our semi-custom SoCs powering the Playstation 4 and Xbox One game consoles,”
“Demand for game consoles looks strong for 2016 and we remain on track to generate additional revenue from new semi-custom business in the second half of 2016.”
“Game consoles – we see units going up 2016 to 2015. We’ve also said in the enterprise embedded and semi-custom segment that we will be ramping some new design revenue in the second half of 2016. We have new products being introduced in both the businesses with the new design wins in the second half on the semi-custom side.”
It’s important to remember that this might not be a new Xbox or PlayStation 4, and could refer to Nintendo’s upcoming console, the NX. Whatever the case, I cannot wait to see the announcements during this year and look forward to testing new hardware. Also, if these reports are true, I’m eager to know how console players will react.
Just five days into the new month, AMD has already released a new set of Crimson drivers for their Radeon GPUs. The latest version is 16.4.1, a beta hotfix for 16.3.2, which was released just a week ago. Coming so quickly after 16.3.2, and still a beta, the changes aren’t numerous but are welcome nonetheless. Interestingly, it looks like there will be no 16.4 driver, with AMD choosing to jump straight to 16.4.1.
First off, support for the Oculus Rift and HTC Vive is likely improved compared to 16.3.2. Furthermore, Quantum Break has received a number of optimisations, boosting performance by up to 35% in some cases. Hitman has also received some fixes to its DX11 High-Quality Shadows and frame cap issues experienced in several DX12 games have been resolved.
Even with these fixes in place, there is an ever-growing list of known issues that remain unresolved. Half of these have to do with CrossFire, and nearly all of the others relate to bugs within AMD’s own Radeon Settings or Gaming Evolved software. While quick and prompt driver releases are welcome, AMD needs to get to work fixing more issues rather than just shipping another point release. Given the current track record, we may yet see 16.4.2 and 16.4.3 later this month.
In a final hurrah for AMD’s Bulldozer and its derivatives, Bristol Ridge APUs will launch later this year. Coming in just before Zen arrives in Q4, the update will bring Excavator to the desktop as well as introduce the new Socket AM4. Today, we’ve been treated to the Geekbench scores for the FX 9800P. Given the results, it looks like Excavator will be a nice IPC increase over the current Steamroller APUs.
Deviating from the rumoured 2.7GHz base clock, the FX 9800P in the Lenovo 59AC sample runs at 1.85GHz. Of course, this could be off, given that Geekbench might not be reading the clock speed properly. The chip managed to score 2216 in the single-threaded test and 5596 in the multi-threaded portion. This is pretty competitive compared to the FX 8800P, especially given the clock speeds. Since we don’t know the cTDP setting, we can’t draw too many conclusions.
The biggest change compared to Carrizo is the use of AM4, FP4 and DDR4. Bristol Ridge will showcase the motherboards and memory controller that Zen will be using, and that will be the most interesting part about it. By finally bringing Carrizo’s architecture to the desktop in numbers, AMD will have its first new desktop architecture since 2015.
UPDATE: We have now been informed from a kind reader that this was an April Fools joke, so please disregard the benchmark images below
AMD has been lingering behind in the enthusiast CPU market and has really struggled to compete with Intel’s flagship products. This isn’t a shocking revelation when you consider AMD is still using the ageing FM2+ socket for its current APU line-up. Thankfully, Zen is upon us, bringing the first major socket change in a considerable amount of time. We’re all hoping that AMD can become competitive again and that Zen helps bring innovation back to the stagnant CPU market. AMD’s President and CEO, Lisa Su, provided a small insight into Zen’s performance and suggested it will bring a 40 percent IPC boost over the current line-up. Up to this point, all performance benchmarks have been kept under wraps and any numbers revolved around pure speculation.
However, images provided by Bits&Chips clearly illustrate the performance differences between an octa-core AMD Zen CPU and competing products. The CPU’s FP32 ray-trace score outperforms the i7-4930K, but it’s not as impressive as the CPU hash results. This suggests the architecture might implement a weaker FMA unit.
On a more positive note, DDR4 bandwidth performance is impressive and competes with the i7-6700K. The CPU Hash is significantly better than the i7-5820K and even surpasses a 20-core Xeon. Only time will tell if AMD’s latest processors can offer similar performance to Intel products and instigate a pricing war. Currently, the i7-6700K is extremely expensive for a 4-core CPU and there needs to be some competition to drive innovation. I cannot wait to get my hands on AMD’s AM4 motherboards and finally see if they’ve come up with the goods. The basic data we have so far and information from AMD is promising but it’s always unclear until the testing has been completed from independent sources.
Do you think AMD will be able to have a much stronger foothold in the CPU market once Zen releases?
AMD’s upcoming Zen architecture is arguably the most anticipated hardware release this year. After years in the wilderness, AMD will finally come back with a new CPU design that will challenge Intel again on IPC, process node and power efficiency. According to the latest leak, it appears that Zen is progressing well enough that engineering samples have already been distributed to various partners for testing. This also means AM4 motherboards are already sampling as well.
These stepping A0 samples are of the previously rumoured 95W, 8-core Zen CPU. That AMD has managed to fit an 8-core CPU in a 95W thermal envelope is stunning and, combined with the early engineering sample release, points to a strong 14nm LPP process. What’s more, the frequency is no slouch at a 3GHz base, though boost isn’t enabled yet. This is pretty much the same as the base clock of Intel’s own prosumer i7-5960X, which also sports 8 cores at a 3GHz base and 3.5GHz boost. We can expect the ES to set the baseline, so release Zen will almost certainly clock higher.
At 3GHz, the engineering sample is already clocked faster than the first Bulldozer samples were, suggesting that 14nm LPP won’t hold back frequency too much. After all, Intel’s own 14nm process has performed better than their 22nm. Samsung and GlobalFoundries have also had plenty of time to refine their 14nm process to ensure it will offer the best performance at launch. Hopefully, AMD will be able to be competitive in both IPC and overclocking.
SilverStone is back on eTeknix again today, with their new AR07 and AR08 CPU coolers. Both coolers are designed to be affordable, quiet, stylish and pack great value for money performance. We’re going to be putting both of them on our test bench today, and while we’re expecting the bigger AR07 to offer up better performance, we’re still eager to see what the smaller and more affordable AR08 is capable of.
“The Argon series coolers are designed to provide the best cooling solution for your CPU. To improve performance even further, unique and exclusive heatsink fin designs such as interweaving diamond edge and arrow guides are included. For users looking for a no-nonsense top performing cooler without the premium price, the Argon AR07 is the perfect choice.” – SilverStone
Both coolers come equipped with a triple heat pipe design and a high-quality fan tuned for silence. The AR07 has three 8mm thick pipes, a 140mm PWM fan and the AR08 uses 6mm pipes with a 92mm PWM fan.
Argon Series AR07
Great balance of silence and performance
Unique interweaving diamond edged fins for improved performance
Exclusive arrow guides distribute airflow evenly among heat pipes
Three Ø8mm heat-pipes and aluminum fins for excellent heat conducting efficiency
Heat-pipe direct contact (HDC) technology
Includes compact 140mm PWM fan for excellent cooling and low noise
Anti-vibration rubber pads included for additional noise dampening
Argon Series AR08
Great balance of silence and performance
Unique interweaving diamond edged fins for improved performance
Exclusive arrow guides distribute airflow evenly among heat pipes
Three Ø6mm heat-pipes and aluminum fins for excellent heat conducting efficiency
Heat-pipe direct contact (HDC) technology
Includes compact 92mm PWM fan for excellent cooling and low noise
Anti-vibration rubber pads included for additional noise dampening
Intel Socket LGA775/115X/1366/2011 and AMD Socket AM2/AM3/FM1/FM2 compatible
Both coolers come nicely packaged with all the main specifications detailed around the box, as well as lots of images showing the fan, fin stack, block, heat pipe design and more.
In the box, you’ll find fan clips, a universal backplate for Intel and an AMD bracket, mounting arms, 3M pads, as well as all the required screws and some thermal paste. Both coolers come with a very similar mounting kit, the only exception being that the AR07 comes with larger fan retention clips.
As part of the ongoing march of technological advancement, 32bit support has begun to decline throughout the ecosystem. The latest firm to quietly reduce support for 32bit systems is AMD, with their GPUs. Starting with the latest Crimson Software 16.3.2 release, 32bit drivers for their latest GPUs have gone missing from their usual links. This follows the Radeon Pro Duo, which launched with 64bit drivers only.
Moving away from 32bit makes a lot of sense, as even mainstream GPUs are starting to ship with more than 4GB of VRAM, the maximum a 32bit system can address. Once you add in system memory, there really isn’t a point to using a 32bit system with the latest GPUs, except for compatibility reasons. Furthermore, the market for 32bit drivers has been shrinking, with only about 13% of Steam users running a 32bit system. Given the intense RAM requirements of games these days, 64bit is nearly a must. Dropping 32bit support also frees up resources to put towards making 64bit drivers better.
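The 4GB ceiling is simply the size of a 32bit address space. With cards now carrying that much VRAM on their own, there is nothing left over for system memory to map into:

```python
# A 32bit pointer can address at most 2^32 bytes
addressable = 2 ** 32
print(addressable // 2 ** 30)  # 4 GiB, equal to the VRAM on many mainstream cards
```

In practice the usable limit is even lower, since the OS, device MMIO ranges and the GPU aperture all have to fit inside that same 4GB window.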
The biggest complaint I have, though, is the silence from AMD. Rather than admit that they are reducing 32bit support, they silently started hiding their 32bit drivers. Users who click on 32bit drivers get sent to a page telling them to move to 64bit. At the same time, 32bit drivers continue to be made and are available with a bit of URL guessing (just change the “64” at the end of the 64bit link to “32”). Instead of trying to hide it, AMD should have announced that 32bit support would end at a set date in the future and, until then, kept 32bit drivers easy to access. This whole thing just smacks of bad PR and miscommunication. There is no shame in moving away from 32bit, and hopefully, AMD will come to see this.