Zotac Announces PCI-E x1 GeForce GT 710 Graphics Card

Some time ago, NVIDIA unveiled the GT 710, a graphics card designed for HTPCs and relatively basic usage scenarios. The company claimed performance up to 10 times better than integrated graphics solutions. Of course, it’s not suited to demanding applications, which is reflected in the price point and form factor. The GT 710 doesn’t require any dedicated power connectors and draws its power directly from the PCI Express slot. Up to this point, custom GT 710 cards from manufacturers including Inno3D, EVGA and others have employed the PCI-E x16 interface. Zotac’s latest model bucks the trend and opts for the x1 interface.

The GPU is passively cooled and supports D-Sub, HDMI and DVI-D outputs. Furthermore, it’s capable of driving displays up to 2560×1600 and ships with a WDDM 2.0 compliant driver. The Zotac card is clocked at 954MHz and includes 1GB of DDR3L memory at 1600MHz. The x1 interface means you can use the card in expansion slots which traditionally remain free. This allows you to keep the x16 slots full with fibre-channel cards, enterprise HBAs and more. Clearly, the GT 710’s gaming credentials are fairly basic, but it is a better option than many iGPUs. That said, I wouldn’t recommend it for gaming, as there’s greater value in purchasing a higher-performing product. The Zotac GT 710 might be useful if you’re watching videos and want to install a dedicated card.

Are you a fan of low-power cards like the GT 710, or do you feel they are pointless given the performance of modern APUs?

Raijintek Announces Morpheus II Core Edition VGA Cooler

Raijintek is a hardware company which currently specializes in affordable solutions aimed towards budget-conscious users. Their products offer astounding value and compete against higher-tier solutions with a much greater price point. For example, the Triton all-in-one liquid cooler gained a cult following due to its changeable dyes, clear tubing and transparent block. The unique design looks phenomenal and is more stylish than the huge quantity of re-branded Asetek units on the market. Today, the company unveiled the Morpheus II Core Edition VGA cooler. This is the successor to the Morpheus Core Edition launched in 2014 and features enhanced GPU support including the Radeon R9 Fury series, R9 390, NVIDIA GeForce GTX 980 Ti and more!

In a similar vein to the Triton, the Core branding relates to the lack of fans bundled with the unit. This is a fantastic idea if you already own high-performance fans such as Noctua NF-F12s and want to save a bit of money on the VGA cooler itself. In terms of dimensions, the heatsink measures 254mm x 98mm x 44mm and weighs 512g. The dense, monolithic, aluminium fin stack dissipates heat from a nickel-plated copper GPU block. There are also six 6mm thick heat-pipes making two passes through the fin stack. The kit supports cooling for up to 18 memory chips, with an in-line MOSFET heatsink and smaller MOSFET sinks also included. The package comes with thermal pads, thermal paste and mounting equipment to quickly attach this aftermarket heatsink.

Unfortunately, Raijintek didn’t disclose any information regarding the price point or when the product will hit retail channels. Although, given the company’s history, I would expect it to remain very affordable. In the modern era, graphics card vendors ship impressive stock cooling solutions, so I’m not entirely sure how widespread the product’s appeal will be.

Do you think custom VGA coolers like the Morpheus II Core still have a place in the current market?

Image courtesy of modcrash.

AMD Scores Apple Mac Design Wins with Polaris

After many fruitful years of partnership with Apple, AMD is reportedly continuing the relationship with their latest Polaris-based GPUs. Apple has alternated MacBook Pro GPU suppliers between Nvidia and AMD in the past but has tended towards AMD for the Mac Pro. According to the source, the performance per watt of 14nm Polaris combined with the performance per dollar of the chips is what sold Apple.

AMD has long pursued a strategy of using smaller and more efficient chips to combat their biggest rival Nvidia. Prior to GCN, AMD tended to have smaller flagships that sipped less power but had lesser compute abilities. This changed with GCN, where AMD focused more on compute while Nvidia did the opposite. This led to Nvidia topping the efficiency charts and, combined with their marketing, soaring in sales. If the rumours are true, Polaris 10 will be smaller than GP104, its main competitor.

With Polaris, AMD should be able to regain the efficiency advantage thanks to both the move to 14nm and the new architecture. We may see Polaris-based Macs as soon as WWDC in June, just after the cards launch at Computex. In addition to a ‘superior’ product, AMD is also willing to cut their margins a bit more in order to get a sale, as we saw with the current-gen consoles. Perhaps, if AMD plays their cards well, we may see Zen Macs as well.

Is This Photo of The GTX 1080 Genuine?

NVIDIA’s upcoming architecture, codenamed Pascal, is rumoured to launch in June, and apparently invites have already been sent out to members of the press for an early briefing on its unique features. The new range is built on the 16nm manufacturing process and could utilize the latest iteration of high bandwidth memory. Although, the mid-tier options might opt for GDDR5X to cut costs and maintain a lower price point. Of course, there are always leaks whenever a new architecture is being readied by either graphics company. However, NVIDIA has kept the information extremely secret and there’s not much to go on when trying to work out the specifications across various products. Some time ago, a leaked image supposedly showcased the new cooler shroud and GTX 1000 branding. This looked fairly credible given the environmental setting and high quality of workmanship.

Today, a brand new image has arrived courtesy of Chinese website Baidu which apparently shows the GTX 1080 in full for the first time:

The cooler opts for a dynamic appearance with sharp edges compared to the older reference design. It also corresponds with the previous leak, which suggests both images are credible. On the other hand, it’s important to adopt a cynical approach just in case someone made the effort of modding the stock cooler to fool people around the globe. Honestly, this is very unlikely and it’s probably the final design for the GTX 1080. Sadly, it’s impossible to tell whether the GPU includes a stylish backplate. Although, this should be revealed soon if the release date reports are correct. Whatever the case, it looks like NVIDIA has tweaked the cooler design and opted for the GTX 1000 series name. I’m not so sure this is a clever move, as the 1080 name might cause some confusion.

Do you think the image discussed in the article is genuine?

Do AMD Drivers Really Deserve Such a Hostile Reception?

Introduction


AMD has a serious image problem with their drivers which stems from buggy, unrefined updates and a slow release schedule. Even though this perception began many years ago, it’s still impacting the company’s sales and explains why their market share is so small. The Q4 2015 results from Jon Peddie Research suggest AMD reached a discrete GPU market share of 21.1% while NVIDIA reigned supreme with 78.8%. Although, the Q4 data is more promising than it first appears, because AMD accounted for a mere 18.8% during the previous quarter. On the other hand, respected industry journal DigiTimes reports that AMD is likely to reach its lowest ever market position for Q1 2016. Thankfully, the financial results will emerge on April 21st so we should know the full picture relatively soon. Of course, the situation should improve once Polaris and Zen reach retail channels. Most importantly, AMD’s share price has declined by more than 67% in five years, from $9 to under $3 as of March 28, 2016. The question is why?

Is the Hardware Competitive?


The current situation is rather baffling considering AMD’s extremely competitive product line-up in the graphics segment. For example, the R9 390 is a superb alternative to NVIDIA’s GTX 970 and features 8GB of VRAM, which provides extra headroom when using virtual reality equipment. The company’s strategy appears to revolve around minor differences in performance between the R9 390 and 390X. This also applied to the R9 290 and 290X due to both products utilizing the Hawaii core. NVIDIA employs a similar tactic with the GTX 970 and GTX 980, but there’s a marked price increase compared to their rivals.

NVIDIA’s ability to cater to the lower-tier demographic has been quite poor because competing AMD GPUs, including the 7850 and R9 380X, provided a much better price-to-performance ratio. Not only that, NVIDIA’s decision to deploy ridiculously low video memory amounts on cards like the GTX 960 has the potential to cause headaches in the future. It’s important to remember that the GTX 960 can be acquired with either 2GB or 4GB of video memory. Honestly, they should have simplified matters and produced only the higher-memory model, in a similar fashion to the R9 380X. Once again, AMD continues to offer a very generous amount of VRAM across various product tiers.

Part of the problem revolves around AMD’s sluggish release cycle and reliance on the Graphics Core Next (GCN) 1.1 architecture, which was first introduced way back in 2013 with the Radeon HD 7790. Despite its age, AMD deployed the GCN 1.1 architecture on their revised 390 series and didn’t do themselves any favours by denying accusations that the new line-up was a basic re-branding exercise. Of course, this proved to be the case, and some users managed to flash their 290/290X to a 390/390X with a BIOS update. There’s nothing inherently wrong with product rebrands if they can remain competitive in the current market. It’s not exclusive to AMD, and NVIDIA has used similar business strategies on numerous occasions. However, I feel it’s up to AMD to push graphics technology forward and encourage their nearest rival to launch more powerful options.

Another criticism of AMD hardware which seems to plague everything they release is the perception that every GPU runs extremely hot. You only have to look at certain websites, social media and various forums to see this is the main source of people’s frustration. Some individuals have even produced images showing AMD graphics cards set ablaze. So is there any truth to these suggestions? Unfortunately, the answer is yes, and a pertinent example comes from the R9 290 range. The 290/290X reference models utilized one of the most inefficient cooler designs I’ve ever seen and struggled to keep the GPU core running below 95C under load.

Unbelievably, the core was designed to run at these high thermals, and AMD created a more progressive RPM curve to reduce noise. As a result, the GPU could take 10-15 minutes to reach idle temperature levels. The Hawaii temperatures really impacted the company’s reputation and forged a viewpoint among consumers which I highly doubt will ever disappear. It’s a shame, because the upcoming Polaris architecture built on the 14nm FinFET process should exhibit significant efficiency gains and end the notion of high thermals on AMD products. There’s also the idea that AMD GPUs have a noticeably higher TDP than their NVIDIA counterparts. For instance, the R9 390 has a TDP of 275 watts while the GTX 970 is rated at only 145 watts. On the other hand, the gap narrows at the top of the stack, where the Fury X is rated at 275 watts against the GTX 980 Ti’s 250 watts.

Eventually, AMD released a brand new range of graphics cards utilizing the first iteration of high bandwidth memory. Prior to its release, expectations were high and many people expected the Fury X to dethrone NVIDIA’s flagship graphics card. Unfortunately, this didn’t come to fruition and the Fury X fell behind in various benchmarks, although it fared better at high resolutions. The GPU also encountered supply problems and emitted a loud whine from the pump on early samples. Asetek even threatened to sue Cooler Master, who created the AIO design, which could have forced all Fury X products to be removed from sale.

The rankings alter rather dramatically when the DirectX 12 renderer is used, which suggests AMD products have a clear advantage. Asynchronous Compute is the hot topic right now, which in theory allows for greater GPU utilization in supported games. Ashes of the Singularity has implemented this for some time and makes for some very interesting findings. Currently, we’re working on a performance analysis for the game, but I can reveal that there is a huge boost for AMD cards when moving from DirectX 11 to DirectX 12. Furthermore, there are reports indicating that Pascal might not be able to use asynchronous shaders, which makes Polaris and Fiji products more appealing.

Do AMD GPUs Lack Essential Hardware Features?


When selecting graphics hardware, it’s not always about pure performance; some consumers take into account exclusive technologies such as TressFX hair before purchasing. At this time, AMD’s latest products incorporate LiquidVR, FreeSync, Vulkan support, HD3D, Frame Rate Target Control, TrueAudio, Virtual Super Resolution and more! This is a great selection of hardware features to create a thoroughly enjoyable user experience. NVIDIA adopts a more secretive attitude towards their own creations and often uses proprietary solutions. The Maxwell architecture has support for Voxel Global Illumination (VXGI), Multi-Frame Sampled Anti-Aliasing (MFAA), Dynamic Super Resolution (DSR), VR Direct and G-Sync. There’s a huge debate about the benefits of G-Sync compared to FreeSync, especially when you take into account the pricing difference when opting for a new monitor. Overall, I’d argue that the NVIDIA package is better, but there’s nothing really lacking from AMD in this department.

Have The Drivers Improved?


Historically, AMD drivers haven’t been anywhere close to NVIDIA’s in terms of stability and providing a pleasant user interface. Back in the old days, AMD, or ATI if we’re going way back, had the potential to cause system lock-ups, software errors and more. A few years ago, I had the misfortune of updating a 7850 to the latest driver and, after rebooting, the system’s boot order was corrupted. To be fair, this could be coincidental and have nothing to do with that particular update. On another note, the 290 series was plagued with bugs causing black screens and blue screens of death whilst watching Flash videos. To resolve this, you had to disable hardware acceleration and hope that the issues subsided.

The Catalyst Control Center always felt a bit primitive for my tastes although it did implement some neat features such as graphics card overclocking. While it’s easy enough to download a third-party program like MSI Afterburner, some users might prefer to install fewer programs and use the official driver instead.

Not so long ago, AMD appeared to have stalled in releasing drivers to properly optimize graphics hardware for the latest games. On 9th December 2014, AMD unveiled the Catalyst 14.12 Omega WHQL driver and made it available for download. In a move which still astounds me, the company then decided not to release another WHQL driver for six months! Granted, they were working on a huge driver redesign and still produced the odd beta update. I honestly believe this was very damaging and prevented high-end users from considering the 295X2 or a CrossFire configuration. It’s so important to have a consistent, solid software framework behind the hardware to allow for constant improvements. This is especially the case when using multiple cards, which require driver profiles to achieve proper GPU scaling.

Crimson’s release was a major turning point for AMD due to the modernized interface and enhanced stability. According to AMD, the software package involves 25 percent more manual test cases and 100 percent more automated test cases compared to AMD Catalyst Omega. Also, the most reported bugs were resolved and they’re using community feedback to quickly apply new fixes. The company hired a dedicated team to reproduce errors, which is the first step to providing a more stable experience. Crimson apparently loads ten times faster than its predecessor and includes a new game manager to optimize settings to suit your hardware. It’s possible to set custom resolutions, including the refresh rate, which is handy when overclocking your monitor. The clean uninstall utility proactively works to remove any remaining elements of a previous installation, such as registry entries, audio files and much more. Honestly, this is such a revolutionary move forward and AMD deserves credit for tackling their weakest elements head on. If you’d like to learn more about Crimson’s functionality, please visit this page.

However, it’s far from perfect and some users initially experienced worse performance with this update. Of course, there are going to be teething problems whenever a new release occurs, but it’s essential for AMD to do everything they can to forge a new reputation for their drivers. Some of you might remember the furore surrounding the Crimson fan bug which limited the GPU’s fans to 20 percent. Some users even reported that this caused their GPU to overheat and fail. Thankfully, AMD released a fix for this issue, but it shouldn’t have occurred in the first place. Once again, it hurt their reputation and ability to move on from old preconceptions.

Is GeForce Experience Significantly Better?


In recent times, NVIDIA drivers have been the source of some negative publicity. More specifically, users were advised to ignore the 364.47 WHQL driver and instructed to download the 364.51 beta instead. One user said:

“Driver crashed my windows and going into safe mode I was not able to uninstall and rolling back windows would not work either. I ended up wiping my system to a fresh install of windows. Not very happy here.”

NVIDIA’s Sean Pelletier released a statement at the time which reads:

“An installation issue was found within the 364.47 WHQL driver we posted Monday. That issue was resolved with a new driver (364.51) launched Tuesday. Since we were not able to get WHQL-certification right away, we posted the driver as a Beta.

GeForce Experience has an option to either show WHQL-only drivers or to show all drivers (including Beta). Since 364.51 is currently a Beta, gamers who have GeForce Experience configured to only show WHQL Game Ready drivers will not currently see 364.51

We are expecting the WHQL-certified package for the 364.51 Game Ready driver within the next 24hrs and will replace the Beta version with the WHQL version accordingly. As expected, the WHQL-certified version of 364.51 will show up for all gamers with GeForce Experience.”

As you can see, NVIDIA isn’t immune to driver delivery issues and this was a fairly embarrassing situation. Despite this, it didn’t appear to have a serious effect on people’s confidence in the company or make them re-consider their views of AMD. While there are some disgruntled NVIDIA customers, they’re fairly loyal and distrustful of AMD’s ability to offer better drivers. The GeForce Experience software contains a wide range of fantastic inclusions such as ShadowPlay, GameStream, game optimization and more. After a driver update, the software can feel a bit unresponsive and takes some time to close. Furthermore, some people dislike the notion of Game Ready drivers being locked to the GeForce Experience software. If a report from PC World is correct, consumers might have to supply an e-mail address just to update their drivers through the application.

Before coming to a conclusion, I want to reiterate that my allegiances don’t lie with either company and the intention was to create a balanced viewpoint. I believe AMD’s previous failures are impacting the company’s current product range and it’s extremely difficult to shift people’s perceptions about the company’s drivers. While Crimson is much better than CCC, it was also the source of a horrendous fan bug resulting in a PR disaster for AMD.

On balance, it’s clear AMD’s decision to separate the Radeon group and CPU line was the right thing to do. Also, with Polaris around the corner and more games utilizing DirectX 12, AMD could improve their market share considerably. Although, from my experience, many users are prepared to deal with slightly worse performance just to invest in an NVIDIA product. Therefore, AMD has to encourage long-term NVIDIA fans to switch with reliable driver updates on a consistent basis. AMD products are not lacking in features or power; it’s all about drivers! NVIDIA will always counteract AMD releases with products exhibiting similar performance numbers. In my personal opinion, AMD drivers are now on par with NVIDIA’s and it’s a shame that they appear to be receiving unwarranted criticism. Don’t get me wrong, the fan bug is simply inexcusable and going to haunt AMD for some time. I predict that despite the company’s best efforts, the stereotypical view of AMD drivers will not subside. This is a crying shame because they are trying to improve things and release updates on a significantly lower budget than their rivals.

NVIDIA DRIVE PX2 Powered by Two GP106 Chips

NVIDIA showed off its DRIVE PX 2 system – the new iteration of its autonomous driving and driver assistance AI platform – at last week’s GTC 2016 conference, and eagle-eyed viewers may have noticed that the board shown to the audience by CEO Jen-Hsun Huang was sporting a pair of integrated GP106 GPUs, eschewing the two Maxwell-based NVIDIA Tegra X1 chips that powered the original DRIVE PX, and confirming a rumour that we reported last week.

The GP106 runs on NVIDIA’s new Pascal architecture – set to hit the market in the latest line of GeForce graphics cards this summer – with the platform rated at 24 DL TOPS or 8 TFLOPS, and each GPU featuring up to 4GB of GDDR5.

NVIDIA hopes that the new DRIVE PX 2 will power the next generation of driverless cars – the original DRIVE PX has so far only been used to power the ‘infotainment’ system on board a Tesla, for example – and has already shipped to a number of unnamed Tier 1 customers.

https://www.youtube.com/watch?v=KnVVJSIiKpY

“DRIVE PX platforms are built around deep learning and include a powerful framework (Caffe) to run DNN models designed and trained on NVIDIA DIGITS,” according to NVIDIA. “DRIVE PX also includes an advanced computer vision (CV) library and primitives. Together, these technologies deliver an impressive combination of detection and tracking.”

AMD Mainstream Polaris 11 Specifications Leaked

As always, most of the focus on Polaris has been on the top-end chip. This has meant that much of the talk has focused on Polaris 10, the R9 390X/Fury replacement. Today though, we’ve been treated to a leak of the mainstream Polaris chip, Polaris 11. Based on a CompuBench leak, we’re now getting a clearer picture of what Polaris 11 will look like as the Pitcairn replacement.

The specific Polaris 11 chip spotted features a total of 16 CUs, for 1024 GCN 4.0 stream processors. This puts it right where the 7850/R7 370 is right now. Given the efficiency gains from the move to GCN 4.0 though, performance should fall near the 7870 XT or R9 280. The move to 14nm FinFET also means the chip will be much smaller than Pitcairn currently is. Of course, this information is only for the 67FF SKU, so there may be a smaller or, more likely, a larger Polaris 11 in the works.

Other specifications have also been leaked, with a 1000MHz core clock speed. Memory speed came in at 7000MHz effective, with 4GB of VRAM over a 128-bit bus. This gives 112GB/s of bandwidth, which is a tad higher than the R7 370 even before you consider the addition of delta colour compression technology. GCN 4.0 will also bring a number of other improvements to the rest of the GPU, most importantly FreeSync support, something Pitcairn lacks.
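
For anyone who wants to sanity-check the leak, the headline numbers follow directly from the raw figures: GCN packs 64 stream processors into each CU, and bandwidth is simply the effective data rate multiplied by the bus width. The snippet below is just that arithmetic, not anything taken from the CompuBench entry itself.

```cpp
#include <cstdio>

int main() {
    // GCN groups 64 stream processors into each Compute Unit
    const int compute_units = 16;
    const int stream_processors = compute_units * 64;                    // 1024

    // Leaked memory figures: 7000MHz effective (7Gbps per pin) over a 128-bit bus
    const double data_rate_gbps = 7.0;
    const double bus_width_bits = 128.0;
    const double bandwidth_gbs  = data_rate_gbps * bus_width_bits / 8.0; // GB/s

    printf("Stream processors: %d\n", stream_processors);                // 1024
    printf("Theoretical bandwidth: %.0f GB/s\n", bandwidth_gbs);         // 112
    return 0;
}
```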

While we can’t guarantee the same SKU was used, Polaris 11 was the GPU AMD pitted against the GTX 950 back at CES. During the benchmark of Star Wars Battlefront, the AMD system only drew 84W compared to the Nvidia system pulling 140W. For the many gamers who buy budget and mainstream cards, Polaris 11 is shaping up very well.

Nvidia Pascal GTX 1000 Will Arrive at Computex

From the many leaks and rumours that have come out, the expected release of Pascal will come later this year at Computex. During the Taiwanese event, Nvidia will finally unveil the GTX 1000 lineup to the public. Today, we’re getting yet another report confirming this. In addition, Nvidia AiB partners like ASUS, Gigabyte and MSI will showcase their reference cards then as well. As revealed yesterday, mass shipments won’t begin until July.

As we’ve covered earlier, the sources appear to suggest that Nvidia will have a head start over AMD by launching Pascal ahead of Polaris. However, the lead might not amount to much. The report suggests that Nvidia will only ship in large volumes from July. AMD, on the other hand, is also launching Polaris in June, the same month as Pascal. Given AMD’s previous history, we will probably see Polaris cards out by July as well. If Nvidia does have a lead, it won’t be for very long.

Right now, there is no word yet on whether GDDR5X will be utilized for top-end Pascal chips. While there are some reports that suggest GDDR5X, the timeline is very tight as GDDR5X probably won’t reach sufficient production capacity until May/June at the earliest. Perhaps this is why we won’t be seeing Pascal or Polaris in numbers until July.

NVIDIA Pascal to Enter Mass Shipments in July

Yesterday, we reported on AMD’s plans to supposedly launch their next graphics architecture, codenamed ‘Polaris’, in June. The details surrounding NVIDIA’s consumer-focused response with Pascal were unknown, but it seemed likely the range would arrive around a similar date. According to new information sourced by DigiTimes, Pascal will be unveiled during Computex and enter mass shipments in July:

“The graphics card players will begin mass shipping their Pascal graphics cards in July and they expect the new-generation graphics card to increase their shipments and profits in the third quarter.”

Interestingly, the report claims that AMD might only unveil the Polaris range in June, and shipments could occur at a later date. Apparently, NVIDIA will have the advantage and be the first company to release their new line-up to the public:

“Meanwhile, AMD has prepared Polaris-based GPUs to compete against Nvidia’s Pascal; however, the GPUs will be released later than Nvidia’s Pascal and therefore the graphics card players’ third-quarter performance will mainly be driven by demand for their Nvidia products.”

Thankfully, both graphics card manufacturers look set to release brand new products and I cannot wait to see the performance improvements and pricing across various tiers. AMD appears to be focusing on performance per watt with Polaris and the demonstrations thus far have been impressive. Sadly, we haven’t really seen anything from NVIDIA’s new consumer line-up, so it will be fascinating when the samples finally arrive. It’s still unknown which products will opt for HBM2, if any. It’s clear that the majority of models from both companies are set to utilize GDDR5X. While this isn’t a patch on HBM2, it’s a good improvement over the older GDDR5 standard.

Recently, there were some murmurings about NVIDIA delaying mainstream Pascal until 2017. This doesn’t look like the case at all and, if anything, reports suggest they will be the first to market.

AMD Radeon R9 490X And 490 Reportedly Set For June Release

AMD’s upcoming graphics architecture, codenamed ‘Polaris’, will be the company’s first product utilizing the 14nm FinFET manufacturing process. During CES 2016, AMD showcased its greatly improved performance per watt compared to the current 28nm NVIDIA GTX 950. As a result, the upcoming R9 490X and R9 490 will not be another rebranding exercise and should offer significant performance gains. It’s still unclear how AMD will position these products in comparison to the Fury X, Fury and Nano line-up. In theory, the 490X and 490 could end up faster than the Fury X, which would then be left to compete with the 480. Personally, I’m really not sure, and it’s clear that AMD is really pushing the benefits of performance per watt with Polaris 10. To me, that showcases their focus and suggests the main advantages will revolve around TDP.

According to Hardware Battle, and discovered by VideoCardz, the 490X and 490 will apparently launch in June. It looks increasingly likely that AMD will unveil their latest range at Computex, with the products reaching retailers shortly afterwards. Hardware Battle is a reliable source and known to have good connections with AMD. While this doesn’t prove that the information is correct, it corresponds with earlier suggestions that AMD was planning the launch during Q2 2016.

On another note, the performance numbers Polaris is capable of should provide an indication of the improvements we can expect from NVIDIA’s GTX 1000 series. Whatever the case, this is an exciting time for the graphics card world, even though the biggest strides forward will occur with the following architectures. It’s still unclear when NVIDIA will launch their consumer HBM2 graphics cards; rumours suggest they might not arrive until next year.

Personally, I’m just excited to see the industry move away from 28nm graphics cards and usher in a brand new era of hardware advancements.

AMD Introduces Quick Response Queue in Latest Drivers

Asynchronous Compute has been one of the headline features of DX12. Pioneered by AMD in their GCN architecture, Async Compute allows a GPU to handle both graphics and compute tasks at the same time, making the best use of resources. Some titles such as Ashes of the Singularity have posted massive gains from this, and even titles with a DX11 heritage stand to see decent gains. In an update to Async Compute, AMD has added Quick Response Queue support to GCN 1.1 and later.

One of the problems with Async Compute is that it is relatively simple: it only allows graphics and compute tasks to be run at the same time on the shaders. Unfortunately, Async Compute as it stands right now will prioritize graphics tasks, meaning compute tasks only get the leftover resources. This means there is no guarantee when a compute task will finish, as it depends on the graphics workload. Quick Response Queue solves this by merging preemption, where the graphics workload is stopped entirely, with Async Compute.

With Quick Response Queue, tasks can be given special priority to ensure they complete on time, while the graphics task continues to run at the same time, albeit with reduced resources. By providing more precise and dependable control, this allows developers to make better use of Async Compute, especially for latency- or frame-sensitive compute tasks; a rough idea of how a developer would request such a queue is sketched below. Moving on, we may see greater gains from Async Compute in games as AMD allows more types of compute workloads to be optimized. Hopefully, this feature will reach GCN 1.0 cards, but that depends on whether the hardware is capable of it.
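
Quick Response Queue itself lives in AMD’s driver and hardware scheduler, so there is no AMD-specific API call to show here. As a loose illustration only, the closest thing a developer touches in an explicit API is the priority field on a compute queue; the generic D3D12 sketch below requests a high-priority compute queue, and whether the driver services it with something like Quick Response Queue is an implementation detail (an assumption on our part, not AMD-documented behaviour).

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Request a compute-only queue with elevated priority. On hardware and drivers
// that expose something like Quick Response Queue, work submitted here could be
// scheduled ahead of the normal graphics queue instead of waiting for leftover
// shader time. (Assumption: the mapping from this priority hint to QRQ is
// entirely driver-internal.)
ComPtr<ID3D12CommandQueue> CreateHighPriorityComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute-only queue
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_HIGH; // ask to run ahead of normal work
    desc.Flags    = D3D12_COMMAND_QUEUE_FLAG_NONE;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```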

NVIDIA Unveil Tesla P100 GPU

GTC 2016: As part of NVIDIA’s GPU Tech Conference, Jen-Hsun unveiled the latest product in the Tesla family, the P100. Branded as the most advanced hyperscale datacentre GPU, it features 15.3 billion transistors (around 150 billion including the HBM2 stacks on the package) and is based on the latest Pascal architecture.

Built on a 16nm FinFET process and featuring HBM2, this product is the latest in a whole host of new technologies from NVIDIA and should be the start of what we’re going to see across other NVIDIA products.

With AI and deep learning now at the forefront of NVIDIA’s thinking, the Tesla P100 GPU has been created to make AI, deep learning and other such tasks as fast as physically possible.

Nvidia May Have Stopped GeForce GTX 980Ti Production

One of the inevitable signs of an imminent release of new products is when the old model starts becoming hard to find. A seamless transition to the new version is a mark of good logistics and something Nvidia is known for. In line with the expectations for Pascal, Nvidia has reportedly stopped shipping GTX 980 Ti chips to their AiB partners, which indicates that Nvidia is winding down the supply chain for the high-end card.

A stop in GTX 980 Ti production points to a replacement Pascal chip coming soon. Usually, shipments to stores run about a month ahead of retail availability, with production a month or so ahead of that. If Nvidia stops supplies now, there will still be about 2-3 months before stock runs low. This puts the timeframe smack dab during Computex, where Pascal is expected to be launched. It seems like perfect timing for GTX 980 Ti supply to dry up just as Pascal launches and becomes available.

Given the move to 16nm FF+, we can expect the GTX 1080 to at the very least match the GTX 980 Ti. With a replacement product incoming, it makes sense for the GTX 980 Ti to cease production now. For now, it seems that Nvidia hasn’t started supplying their partners with Pascal just yet, but that should happen shortly if Pascal is to arrive at Computex. The leaked shrouds suggest that the AiB partners have already tooled up in expectation of Pascal. Of course, this is still an unconfirmed report, released on April 1st to boot, so take it with a fistful of salt.

NVIDIA Pascal Rumoured to Struggle with Asynchronous Compute

NVIDIA’s new Pascal GPU micro-architecture – billed as 10x faster than the previous Maxwell iteration, and set for release in retail graphics cards later this year – is rumoured to be having problems when dealing with Asynchronous Compute code in video games.

“Broadly speaking, Pascal will be an improved version of Maxwell, especially about FP64 performances, but not about Asynchronous Compute performances,” according to Bits and Chips. “NVIDIA will bet on raw power, instead of Asynchronous Compute abilities.”

“This means that Pascal cards will be highly dependent on driver optimizations and games developers kindness,” Bits and Chips adds. “So, GamesWorks optimizations will play a fundamental role in company strategy. Is it for this reason that NVIDIA has made publicly available some GamesWorks codes?”

This report has not been independently verified, but if it is true, it could spell bad news for NVIDIA, especially since, despite fears to the contrary, its Maxwell architecture was at least capable of processing Async Compute, and AMD’s Radeon graphics cards are currently leading all DirectX 12 benchmarks.

We shouldn’t have to wait too long before we find out, though, with WCCFTech reporting that NVIDIA’s flagship Pascal graphics card, the “GTX 1080”, will be unveiled at GTC 2016, which takes place in Silicon Valley, California, from 4th-7th April.

AMD Release Radeon Crimson 16.3.2 WHQL Driver With Oculus Rift Support

With the Oculus Rift launching now, both AMD and Nvidia have released new drivers to ensure compatibility with the VR headset. Nvidia has released GeForce driver 364.72 and AMD has responded with their first WHQL driver since the end of 2015, Radeon Crimson 16.3.2. This is the third driver to be released in March and comes just 10 days after the last one. In addition to Oculus Rift support and WHQL status, there are also a number of other fixes.

First off, the HTC Vive headset is also supported by the new driver. To power these VR headsets, the Radeon Pro Duo is also supported following its launch earlier in the month. Everybody’s Gone to the Rapture and Hitman are also getting updated Crossfire profiles for DX11. Some notable fixes apply to FFXIV and XCOM 2, and the Fury series will no longer suffer corruption after long idle times.

In terms of known issues, the list remains as long as before, with most of the entries relating to Crossfire bugs. The AMD Gaming Evolved in-game overlay will still crash some games when enabled. Hopefully, these issues will be resolved in a future update. This continues AMD’s streak of quick driver releases, with several a month. You can find the full release notes here.

NVIDIA Ramping Up GeForce Now to be “Netflix of Gaming”

NVIDIA is expanding its GeForce Now game on-demand service, which streams PC and console games to the NVIDIA Shield set-top box. The service currently has over 100 games available, for a monthly subscription fee, and has plans to not only expand its library, but also improve the quality of streamed games when it moves its cloud data centres to Maxwell-based GPUs, replacing its old Kepler-based units, later this year.

“We are still on the path of being the Netflix of gaming,” Phil Eisler, General Manager of GeForce Now cloud gaming at NVIDIA told VentureBeat. “The cloud gives us good analysis and data. About half of our customers are millennial gamers, and half are parents who enjoy playing games with their children.”

“Gamer dads who are 35 and older struggle to find time to play games with their kids. They like the convenience of the system and the retro content. The millennial gamers, meanwhile, are very impatient and like to get their games quickly,” he added.

Following the data centre upgrade, “[GeForce Now] will be the highest-performing system that you can get access to in your living room by the end of the year,” Eisler said. “Our focus is getting games to work in 30 seconds and we are working on ways to cut that in half. Other services may take minutes. So we focus on the most convenient way to play.”

Apple in talks to Acquire Imagination Technologies for PowerVR GPUs

In what is likely good news for Qualcomm, Apple has confirmed they are in talks with mobile graphics designer Imagination Technologies. While Apple has denied making or considering an offer at this point in time, they may yet reconsider. Imagination Technologies is known for their PowerVR series of GPUs, integrated into Apple SoCs since the A4 and used in some Intel Atom SoCs as well; Apple already owns a decent chunk of Imagination stock.

Apple has a tendency to bring more and more development in-house. Back in 2008, Apple acquired PA Semi, which eventually led to in-house CPU designs starting with Swift in the A6 SoC. Bringing GPU development in-house also makes sense for Apple, as it will give them better control over the direction and vision for the future. By doing their own CPU designs, Apple was able to increase their IPC lead over their competitors significantly, as those firms had to rely on ARM and Qualcomm, both of whom were slow to the IPC and 64-bit game.

If Apple does snatch up Imagination, that leaves Qualcomm and ARM as the only two major mobile GPU designers left. This may allow Nvidia to make some gains with their Tegra lineup and might even entice AMD to re-enter the market if the conditions are right.

 

QNAP Launches TDS-16489U Dual Xeon E5 Double Server

QNAP’s newest server, the TDS-16489U, is an amazing one that sets itself apart from the rest in so many ways. I want one so badly even though I have absolutely no need for this kind of power. This must be how a normal person feels when they see a Bugatti Veyron. But let us get back to the new QNAP dual server.

The TDS-16489U is a powerful dual server that combines an application server and a storage server in one chassis for simplicity and effectiveness. It is powered by two Intel Xeon E5 processors with 4, 6, or 8 cores each while supporting up to 1TB of DDR4 2133MHz memory across its 16 DIMM slots. These are already some impressive specs, but this is just where the fun begins.

The dual server has 16 front-accessible drive bays for 3.5-inch storage drives as well as four rear-facing 2.5-inch drive bays for SSD cache. Should this not be enough, you can expand that further with NVMe-based PCI-Express SSDs too. The system has three SAS 12Gb/s controllers built in to couple it all together.

There are just as many connection options as there are storage options in the TDS-16489U. It comes with two normal Gigabit Ethernet ports as well as four SFP+ 10Gbps ports powered by an Intel XL710. Should that not be enough, you can use the PCI-Express slots to expand with further NICs of your choice; the system supports 40Gbps cards too. It also comes with a dedicated IPMI connection besides the normal networking. The PCI-Express x16 Gen 3 slots can also be used with AMD R7 or R9 graphics cards for GPU passthrough to virtualization applications. A true one-device solution for applications, storage, and virtualization.

The TDS-16489U combines outstanding virtualization and storage technologies as an all-around dual server. With Virtualization Station and Container Station, computation and data from the guest OS and apps can be stored directly on the TDS-16489U through the internal 12Gb/s SAS interface. Coupled with Double-Take Availability to provide comprehensive high availability and disaster recovery, backup virtual machines can support failover for the primary systems on the TDS-16489U whenever needed, enabling data protection and continuous services. QNAP Virtualization Station is a virtualization platform based on KVM (Kernel-based Virtual Machine) infrastructure. By sharing the Linux kernel, it provides GPU passthrough, virtual switches, VM import/export, snapshots, backup & restoration, SSD cache acceleration and tiered storage.

“Software frameworks for Big Data management and analysis like Apache Hadoop or Apache Spark can be easily operated on the TDS-16489U using virtual machines or containerized apps, and with Qtier Technology for Auto Tiering the TDS-16489U empowers Big Data computing and provides efficient storage in one box to help businesses gain further insights, opportunities and values,” said David Tsao, Product Manager of QNAP.

With all the above, we shouldn’t forget that it still runs QNAP’s QTS 4.2 operating system, which provides everything you know and love from the consumer models. Included are the comprehensive virtualization applications we’ve also seen on those units, but this is where you truly can take advantage of what QNAP created and run multiple Windows, Linux, Unix, and Android-based virtual machines on your NAS. All the backup and failover solutions are there too, from local to other NAS or the cloud. You can do it all, and sharing files to basically any device anywhere is made as easy as possible.

Should you still not have enough storage in this impressive unit, you can expand with up to eight QNAP expansion enclosures and reach a seriously impressive 1152TB raw storage capacity controlled by this single 3U server unit. The CPU power, dual-system capabilities, virtualization options and impressive storage options will let you deploy a powerful system with a very small footprint and total cost of ownership compared to traditional setups.

Key Specifications

  • 16-bay, 3U rackmount unit
  • 2 x Intel Xeon E5-2600 v3 Family processor (with 4-core, 6-core and 8-core configurations)
  • 64GB~1TB DDR4 2133MHz RDIMM/LRDIMM RAM (16 DIMM)
  • 4 x SFP+ 10GbE ports
  • 16 x hot-swappable 3.5″ SAS (12Gbps/6Gbps)/SATA (6Gbps/3Gbps) HDD or 2.5″ SAS/SATA SSD bays, plus 4 x 2.5″ SAS (12Gbps) or SAS/SATA (6Gbps/3Gbps) SSD bays
  • 4 x PCIe slots
  • 4 x USB 3.0 ports

The new QNAP TDS-16489U dual-server is now available.

AMD Open to Making Mobile GPUs

Looking back, AMD missed a big opportunity to get into the mobile phone and tablet market. According to Raja Koduri, SVP of the Radeon Technologies Group, AMD may be contemplating getting back into the mobile graphics market, provided the circumstances are right.

The mobile graphics division, Imageon, was originally part of ATI and came to AMD along with the rest of the company. After running into severe financial hardship, AMD decided to sell the mobile division to Qualcomm, which renamed it Adreno, an anagram of Radeon. Combined with their custom ARM CPUs, Qualcomm has managed to become the largest mobile SoC vendor, putting Adreno into millions of devices. The only other major competitors are Imagination and ARM’s own Mali.

By considering the mobile GPU market if the right customer comes along, AMD is opening up yet another market to enter. Right now, Adreno is still largely based on the VLIW architecture that ATI and AMD left behind in 2011. GCN, on the other hand, is a more complex and advanced architecture with arguably better performance per watt. With the rise of GPU-based compute being used in gaming, GCN may be a potent force in tablets.

Seeking more custom chip customers makes sense for AMD given that their console deals are helping keep the firm afloat while other sources of revenue are dropping. There is a large measure of risk, however, as Nvidia has demonstrated with their flagging Tegra lineup. By securing a customer first, AMD can pass on the risk and run a much safer course. Perhaps the next PSP or DS will be running GCN.

OCUK Offers Free SSD with ASUS GTX 970 Graphics Cards

Who doesn’t like to get free stuff? Especially when it is something as sweet as a solid state drive. That is what you can currently get, at least if you purchase one of the participating ASUS GTX 970 graphics cards at Overclockers UK.

For a limited time, and as long as stock lasts – as always – you can get a Kingston HyperX Fury 240GB solid state drive for free on top of the Nvidia-based GTX 970 graphics card that you’re purchasing. It isn’t just some cheap SSD either, as it comes from Kingston’s HyperX division. The 7mm slim 2.5-inch SSD delivers solid performance, and for free, who can complain? You still get the bundled game Tom Clancy’s The Division on top too, making this a very good deal for a well-performing graphics card.

Speaking of the graphics cards, the first of the two participating ASUS models is the GTX 970 DirectCU II OC Strix with 4GB GDDR5 memory, a core clock of 1141MHz, and a boost clock up to 1253MHz. The second card is the GeForce GTX 970 Turbo OC which also comes with 4GB GDDR5 memory, but slower clock speeds. The Turbo OC has a core clock of 1088MHz and a boost clock of 1228MHz.

Which of the two cards you pick for your setup comes down to visual preference, how much you want to spend, and probably more things too – but it is safe to say that both are great graphics cards where you get a lot of bang for your buck, especially considering the extras you get in this deal. The DirectCU II OC Strix will set you back £299.99 while the Turbo OC will cost a little less at £275.99.

Will you be picking up one of these deals? And if so, which of the two cards will you go for? Let us know in the comments.

Far Cry Primal Graphics Card Performance Analysis

Introduction


The Far Cry franchise has gained a reputation for its impeccable graphical fidelity and enthralling open-world environments. As a result, each release is incredibly useful for gauging the current state of graphics hardware and performance across various resolutions. Although, Ubisoft’s reputation has suffered in recent years due to poor optimization on major titles such as Assassin’s Creed: Unity and Watch Dogs. This means it’s essential to analyze the PC version in a technical manner and see if it’s really worth supporting with your hard-earned cash!

Far Cry Primal utilizes the Dunia Engine 2 which was deployed in Far Cry 3 and Far Cry 4. Therefore, I’m not expecting anything revolutionary compared to the previous games. This isn’t necessarily a negative though, because the amount of detail is staggering and worthy of recognition. Saying that, Far Cry 4 was plagued by intermittent hitching and I really hope this has been resolved. Unlike Far Cry 3: Blood Dragon, the latest entry has a retail price of $60. According to Ubisoft, this is warranted due to the lengthy campaign and amount of content on offer. Given Ubisoft’s turbulent history with recent releases, it will be fascinating to see how each GPU of this generation fares and which brand the game favours at various resolutions.

“Far Cry Primal is an action-adventure video game developed and published by Ubisoft. It was released for the PlayStation 4 and Xbox One on February 23, 2016, and it was also released for Microsoft Windows on March 1, 2016. The game is set in the Stone Age, and revolves around the story of Takkar, who starts off as an unarmed hunter and rises to become the leader of a tribe.” From Wikipedia.

More AMD Polaris 10 Details Revealed

In the few days since AMD first demoed Polaris 10 to us at Capsaicin, more details about the upcoming graphics cards have been revealed. Set to be the big brother to the smaller Polaris 11, the better-performing chip will drop sometime after June this year.

First off, we’re now able to bring you more information about the settings Hitman was running at during the demo. At Ultra settings and 1440p, Polaris 10 was able to hold a constant 60FPS, with VSync enabled. This means the minimum FPS did not drop below 60 at any point. This puts the card at least above the R9 390X and on par with, if not better than, the Fury and Fury X. Of course, the demo was done with DX12, but the boost from it is only about 10% in Hitman.

Another detail we have uncovered is the maximum length of the engineering sample. Based on the Cooler Master Elite 110 case used, the maximum card length is 210mm or 8.3 inches. In comparison, the Nano is 6 inches and the Fury X 7.64 inches. Given the small size, one can expect Polaris 10 to be as power efficient as Polaris 11 and potentially be using HBM. Given that Vega will be the first cards to debut HBM2, Polaris 10 may be limited to 4GB of VRAM. Finally, display connectivity is provided by 3x DP 1.3, 1x HDMI 2.0 and 1x DVI-D Dual Link, though OEMs may change this come launch unless AMD locks it down.

MSI Releases 75W GTX 950 GPUs OCV2 and OCV3

For graphics cards, 75W is a golden number as it dispenses with the need for a separate power connector. This allows users to avoid a PSU upgrade and broadens the market for the card. Originally launched at 90W, it looks like Nvidia has managed to trim an additional 15W to produce 75W GTX 950s. First started by ASUS, MSI is now getting into the 75W GTX 950 game as well with two new additions to their lineup.

Dubbed the GTX 950 2GD5 OCV2 and GTX 950 2GD5T OCV3, they will replace or supplement the 90W GTX 950 2GD5 OCV1 and GTX 950 2GD5T OCV2 respectively. Both cards are based on NVIDIA’s GM206-251 GPU, with the 251 indicating either a special bin or a new process that Nvidia is using. Of course, the chips are still the same GTX 950 with 768 CUDA cores, 48 TMUs and 32 ROPs, plus a 128-bit memory interface fed by 2GB of 6.6Gbps GDDR5. Both cards are factory overclocked to 1076MHz core and 1253MHz boost.

Between the OCV2 and OCV3, the only differences are the cooler and form factor. The OCV2 uses a single fan and is geared towards mITX or other small form factors. The OCV3 sports a larger dual-fan cooler and is longer to boot. Both cards feature hardware-accelerated decoding and encoding, making them good choices for an HTPC or for upgrading an existing desktop system for some moderate gaming. No word on pricing has been revealed yet, but expect it to fall near MSI’s current offerings.

Intel Seeks AMD GPU Patent Licensing

After Samsung and Nvidia had their recent legal spat, more light has been shed on the world of GPU patents and licensing. While Intel holds its own wealth of patents, no doubt some concerning GPUs, Nvidia and AMD, being GPU firms, hold even more important ones. With Intel’s cross-licensing deal with Nvidia set to expire in Q1 2017, the chip giant is reportedly in negotiations with AMD to strike up a patent deal.

Being one of the big two GPU designers, AMD probably holds many important and critical GPU patents. Add in their experience with APUs and iGPUs, and there is probably quite a lot there that Intel needs. With the Nvidia deal expiring, Intel probably sees a chance to get a better deal while picking up some new patents as well. Approaching AMD also makes sense as, being the smaller of the two GPU makers, AMD may be willing to share their patents for less. It’s also a way to inject some cash into AMD and keep it afloat to stave off anti-trust lawsuits.

AMD also has a lot to offer with the upcoming generation. The GPU designer’s GCN architecture is ahead of Nvidia’s when it comes to DX12 and Asynchronous Compute, and that could be one area Intel is looking towards. Intel may also be forced into cross-licensing due to the fact that, with so many patents out there, there are bound to be some they are violating. The biggest question will be whether AMD will consider allowing their more important and revolutionary patents to be licensed.

With the Nvidia deal being worth $66 million a quarter or $264 million a year, AMD has the chance to squeeze out a good amount of cash from Intel. Even though $264 million wouldn’t have been enough to put AMD in the black for 2015, it wouldn’t have hurt to have the extra cash.

AMD’s Raja Koduri Talks Future Developments – Capsaicin

Even though a lot of information was shared from the Capsaicin live stream, some details weren’t made known till the after party. In an interview, Radeon Technologies Group head Raja Koduri spoke in more detail about the plans AMD has for the future and the direction they see gaming and hardware heading towards.

First up, of course, was the topic of the Radeon Pro Duo, AMD’s latest flagship device. Despite the hefty $1,499 price tag, AMD considers the card good value, something like a ‘FirePro Lite’, with enough power to both game and develop on: a card for creators who game and gamers who create. If AMD does tune the drivers to enhance professional software support, the Pro Duo will be well worth the cash considering how much real FirePro cards cost.

Koduri also sees the future of gaming in dual-GPU cards. With Crossfire and SLI, dual-GPU cards were abstracted away as a single GPU at the driver level. Because of this, performance varies widely from game to game and support requires more work on the driver side. With DX12 and Vulkan, developers can now choose to implement multi-GPU support themselves and build it into the game for much greater performance; a rough sketch of what that explicit approach looks like is shown below. While the transition won’t fully take place until 2017-2019, AMD wants developers to start getting used to the idea and getting ready.
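
To give a feel for what ‘building it into the game’ means in practice, the hedged D3D12 sketch below enumerates every hardware adapter and creates a device for each one; an engine doing explicit multi-GPU (or per-eye VR rendering) would then drive each device with its own queues and resources. It is a generic illustration of the explicit approach, not code from AMD or any particular engine.

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Explicit multi-GPU under D3D12: the application enumerates adapters itself and
// creates one device per GPU, rather than letting the driver hide them behind a
// single Crossfire/SLI abstraction. Each device then gets its own queues, heaps
// and share of the frame (for example, one eye per GPU in a VR renderer).
std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```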

This holds true for VR as well, since each GPU can render one eye independently, achieving a near 2x performance benefit. The benefits, though, are highly dependent on the game engine and how well it works with LiquidVR. Koduri notes that some engines need as little as a few hours of work while others may take months. Roy Taylor, VP at AMD, was also excited about the prospect of the upcoming APIs and AMD’s forward-looking hardware finally getting more use and boosting performance. In some ways, the use of multi-GPU is similar to multi-core processors and the use of simultaneous multi-threading (SMT) to maximize performance.

Finally, we come to Polaris 10 and 11. AMD’s naming scheme is expected to change, with the numbers being chronological, so the next Polaris will carry a bigger number than 11 but won’t necessarily be a higher-performance chip. AMD is planning to use Polaris 10 and 11 to hit as many price/performance and performance-per-watt levels as possible, so we can expect multiple cards to be based on each chip, probably three in total. This should help AMD harvest imperfect dies and help their bottom line. Last of all, Polaris may not feature HBM2 as AMD is planning to hold back until the economics make sense. That about wraps it up for Capsaicin!

AMD Reportedly Has 83% of VR Hardware Marketshare – Capsaicin


2016 may well go down as the year VR finally takes off for real. Sony and Microsoft have both been making progress towards VR and augmented reality while Oculus and HTC are set to launch the Rift and Vive respectively. Given the efforts and lengths AMD has gone to in pushing VR, it should come as no surprise that a report has revealed the company holds a massive 83% share of the hardware in VR-capable systems.

Hardware-wise, it is not surprising to see the lead over Nvidia. While PC hardware is a large segment of the VR market, only higher-end systems are capable of producing the frames necessary for VR at 90fps and enough resolution for both eyes. Because of this, the PS4 is a viable candidate for VR adoption, and with the APU inside it being AMD’s, Nvidia stands no chance in terms of sheer hardware market share for VR.

As noted many times during the Capsaicin event, AMD has been working with many developers in both gaming and other forms of media through LiquidVR and GPUOpen. AMD has also been at the forefront of developments like VR cafes and has partnered with Oculus and HTC to ensure that the Rift and Vive work seamlessly with Radeon hardware. There is even a Radeon VR Ready Premium program to ensure consumers are informed.

With the VR market still in its growing stages, AMD has seen an opportunity to get in before its competitors have a chance and secure a bastion of developer support and integration. Considering the price of VR-capable hardware, AMD stands a good chance to reap a windfall when VR takes off. This can only bode well for AMD as, for once, they are ahead and hopefully will be able to leverage their position to help the rest of their business grow.

AMD Unveils Radeon Pro Duo 3DMark Performance – Capsaicin

Being the fastest single-card graphics solution to date, we all know that AMD’s new Radeon Pro Duo is fast. Just how fast the dual-Fiji giant is, we don’t yet know, though the 16 TFLOPS figure and expected performance similar to two Fury Xs give a rough estimate. To shed some light on the card, we do have some internal 3DMark benchmarks AMD has run with their latest and greatest graphics card.

Testing conducted by AMD Performance Labs as of March 7, 2016 on the AMD Radeon Pro Duo, AMD Radeon R9 295X2 and Nvidia’s Titan Z, all dual GPU cards, on a test system comprising Intel i7 5960X CPU, 16GB memory, Nvidia driver 361.91, AMD driver 15.301 and Windows 10 using 3DMark Fire Strike benchmark test to simulate GPU performance. PC Manufacturers may vary configurations, yielding different results. At 1080p, 1440p, and 2160P, AMD Radeon R9 295X2 scored 16717, 9250, and 5121, respectively; Titan Z scored 14945, 7740, and 4099, respectively; and AMD Radeon Pro Duo scored 20150, 11466, and 6211, respectively, outperforming both AMD Radeon R9 295X2 and Titan Z.

According to AMD, the Radeon Pro Duo is undoubtedly the fastest card, at least in 3DMark Fire Strike. At Standard (1080p), the Pro Duo manages 134% of the Titan Z’s performance, a card that Nvidia priced at $2,999 at launch. The lead only grows at Extreme and Ultra, to 148% and 152% respectively.

Against the R9 295X2, the Pro Duo still manages a decent lead of roughly 120% across all settings. While lower than the 140% you might expect from a pure hardware standpoint, the 4GB of HBM1 and the limits of GCN do play a role. It does mean there won’t be any surprises for users running two Fury or Fury X cards in CrossFire, as they won’t have anything to worry about. The biggest question is whether the card is worth the premium over running your own CrossFire solution, a question many dual-GPU cards have faced.
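
Those percentages follow straight from the scores AMD quoted in the test disclosure above; the short check below simply divides the Pro Duo result by each competitor at every preset. The scores are AMD’s, the arithmetic is ours.

```cpp
#include <cstdio>

int main() {
    // AMD-quoted 3DMark Fire Strike scores at 1080p (Standard), 1440p (Extreme) and 2160p (Ultra)
    const char*  preset[3] = {"Standard", "Extreme", "Ultra"};
    const double proDuo[3] = {20150, 11466, 6211};
    const double titanZ[3] = {14945,  7740, 4099};
    const double x295[3]   = {16717,  9250, 5121};

    for (int i = 0; i < 3; ++i) {
        printf("%s: %.0f%% of Titan Z, %.0f%% of R9 295X2\n",
               preset[i],
               100.0 * proDuo[i] / titanZ[i], // ~135%, ~148%, ~152%
               100.0 * proDuo[i] / x295[i]);  // ~121%, ~124%, ~121%
    }
    return 0;
}
```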