Ashes of the Singularity DirectX 12 Graphics Performance Analysis

Introduction


Ashes of the Singularity is a futuristic real-time strategy game offering frenetic contests on a massive scale. The huge number of units scattered across a range of varied environments creates an enthralling experience built around complex strategic decisions. Throughout the game, you will explore unique planets and engage in spectacular air battles. This bitter war between the human race and a masterful artificial intelligence revolves around an invaluable resource known as Turinium. If you're into the RTS genre, Ashes of the Singularity should provide hours of entertainment. While the game itself is worthy of widespread media attention, the engine's support for DirectX 12 and asynchronous compute has become a hot topic among hardware enthusiasts.

DirectX 12 is a low-level API with reduced CPU overhead which has the potential to revolutionise the way games are optimised for numerous hardware configurations. In contrast, DirectX 11 is far less efficient, and many mainstream titles suffered from poor scaling that failed to properly utilise the potential of current graphics technology. On another note, DirectX 12 allows users to pair GPUs from competing vendors and run multi-GPU solutions without relying on driver profiles. It's theoretically possible to achieve widespread optimisation and leverage extra performance using the latest version of DirectX 12.
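
To make the multi-adapter point concrete, below is a minimal sketch (not taken from any shipping engine) of how a DirectX 12 application enumerates every GPU in the system via DXGI and creates a device on each one, regardless of vendor. Error handling is trimmed for brevity.

```cpp
// Minimal sketch: enumerate every GPU in the system and create a D3D12
// device on each one, the starting point for explicit multi-adapter.
// Link against d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#include <vector>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;

    // Walk every adapter DXGI reports, regardless of vendor.
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        // Skip the software rasterizer; we only want real GPUs here.
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            wprintf(L"Created D3D12 device on: %s\n", desc.Description);
            devices.push_back(device); // The app itself decides how to split work.
        }
    }
    return 0;
}
```

Under DirectX 11, driver profiles had to orchestrate multi-GPU behind the scenes; here the application owns every device and schedules work across them explicitly.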

Of course, Vulkan is another alternative which works across various operating systems and adopts an open-source ideology. However, the focus will likely remain on DirectX 12 for the foreseeable future unless there's a sudden reluctance from users to upgrade to Windows 10. Even though the adoption rate is impressive, there's a large number of PC gamers currently using Windows 7, 8 and 8.1. Therefore, it seems prudent for developers to continue with DirectX 11 and offer a DirectX 12 renderer as an optional extra. Arguably, the real gains from DirectX 12 will occur when its predecessor is disregarded completely. This will probably take a considerable amount of time, which suggests the first DirectX 12 games might show smaller performance benefits than later titles.

Asynchronous compute allows graphics cards to work on multiple workloads simultaneously and extract extra performance. AMD's GCN architecture has extensive support for this technology. In contrast, there's a heated debate questioning whether NVIDIA products can utilise asynchronous compute in an effective manner at all. Technically, AMD GCN graphics cards contain 2-8 asynchronous compute engines, each with 8 queues, depending on the model, to provide single-cycle latencies. Maxwell revolves around two pipelines, one designed for high-priority workloads and another with 31 queues. Most importantly, NVIDIA cards can only "switch contexts at draw call boundaries". This means the switching process is slower and gives AMD a major advantage. NVIDIA dismissed the early performance numbers from Ashes of the Singularity due to its development phase at the time. Now that the game has exited the beta stage, we can determine the performance numbers after optimisations were completed.
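
In API terms, asynchronous compute simply means feeding the GPU through more than one command queue. The sketch below, which assumes a valid ID3D12Device created as in the earlier example, shows how a Direct3D 12 application creates a dedicated compute queue alongside the usual direct queue; whether the two streams genuinely overlap on the hardware is then down to the GPU's scheduler, which is precisely where GCN and Maxwell differ.

```cpp
// Minimal sketch: create a dedicated compute queue alongside the graphics
// queue so compute work can be submitted asynchronously. Assumes a valid
// ID3D12Device* (see the earlier enumeration sketch).
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // The "direct" queue accepts graphics, compute and copy commands.
    D3D12_COMMAND_QUEUE_DESC graphicsDesc = {};
    graphicsDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&graphicsDesc, IID_PPV_ARGS(&graphicsQueue));

    // A compute-only queue: on hardware with spare compute front-ends
    // (such as GCN's ACEs), work submitted here can overlap rendering.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Command lists recorded for each queue execute independently;
    // ID3D12Fence objects synchronize the two streams where they must meet.
}
```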

Do AMD Drivers Really Deserve Such a Hostile Reception?

Introduction


AMD has a serious image problem with their drivers which stems from buggy, unrefined updates and a slow release schedule. Even though this perception began many years ago, it still impacts the company's sales and explains why their market share is so small. The Q4 2015 results from Jon Peddie Research suggest AMD reached a market share of 21.1% while NVIDIA reigned supreme with 78.8%. The Q4 data is at least more promising, because AMD accounted for a mere 18.8% during the previous quarter. On the other hand, respected industry journal DigiTimes reports that AMD is likely to reach its lowest ever market position for Q1 2016. Thankfully, the financial results will emerge on April 21st, so we should know the full picture relatively soon. Of course, the situation should improve once Polaris and Zen reach retail channels. Most importantly, AMD's share price has declined by more than 67% in five years, from $9 to under $3 as of March 28, 2016. The question is why?

Is the Hardware Competitive?


The current situation is rather baffling considering AMD's extremely competitive product line-up in the graphics segment. For example, the R9 390 is a superb alternative to NVIDIA's GTX 970 and features 8GB of VRAM, which provides extra headroom when using virtual reality equipment. The company's strategy appears to revolve around minor differences in performance between the R9 390 and 390X. This also applied to the R9 290 and 290X due to both products utilizing the Hawaii core. NVIDIA employs a similar tactic with the GTX 970 and GTX 980, but there's a marked price increase compared to their rivals.

NVIDIA's ability to cater to the lower-tier demographic has been quite poor because competing GPUs, including the 7850 and R9 380X, provided a much better price-to-performance ratio. Not only that, NVIDIA's decision to deploy ridiculously low video memory amounts on cards like the GTX 960 has the potential to cause headaches in the future. It's important to remember that the GTX 960 can be acquired with either 2GB or 4GB of video memory. Honestly, they should have simplified the line-up and produced only the higher-memory model, in a similar fashion to the R9 380X. Once again, AMD continues to offer a very generous amount of VRAM across various product tiers.

Part of the problem revolves around AMD's sluggish release cycle and reliance on the Graphics Core Next (GCN) 1.1 architecture, first introduced way back in 2013 with the Radeon HD 7790. Despite its age, AMD deployed the GCN 1.1 architecture on their revised 390 series and didn't do themselves any favours when denying accusations that the new line-up was a basic re-branding exercise. Of course, this proved to be the case, and some users even managed to flash their 290/290X into a 390/390X with a BIOS update. There's nothing inherently wrong with product rebrands if they remain competitive in the current market. It's not exclusive to AMD, and NVIDIA has used similar business strategies on numerous occasions. However, I feel it's up to AMD to push graphics technology forward and encourage their nearest rival to launch more powerful options.

Another criticism of AMD hardware, which seems to plague everything they release, is the perception that every GPU runs extremely hot. You only have to look at certain websites, social media and various forums to see this is the main source of people's frustration. Some individuals even produce images of AMD graphics cards set ablaze. So is there any truth to these suggestions? Unfortunately, the answer is yes, and a pertinent example comes from the R9 290 range. The 290/290X reference models utilized one of the most inefficient cooler designs I've ever seen and struggled to keep the GPU core below 95°C under load.

Unbelievably, the core was designed to run at these high thermals, and AMD created a more progressive RPM curve to reduce noise. As a result, the GPU could take 10-15 minutes to return to idle temperature levels. The Hawaii temperatures really hurt the company's reputation and forged a viewpoint among consumers which I highly doubt will ever disappear. It's a shame, because the upcoming Polaris architecture built on the 14nm FinFET process should exhibit significant efficiency gains and end the notion of high thermals on AMD products. There's also the idea that AMD GPUs have a noticeably higher TDP than their NVIDIA counterparts. For instance, the R9 390 has a TDP of 275 watts while the GTX 970 is rated at just 145 watts. The gap narrows at the high end, though: the Fury X is rated at 275 watts compared to the GTX 980 Ti's 250 watts.

Eventually, AMD released a brand-new range of graphics cards utilizing the first iteration of high bandwidth memory. Prior to its release, expectations were high, and many people expected the Fury X to dethrone NVIDIA's flagship graphics card. Unfortunately, this didn't come to fruition, and the Fury X fell behind in various benchmarks, although it fared better at high resolutions. The GPU also encountered supply problems, and early samples emitted a loud whine from the pump. Asetek even threatened to sue Cooler Master, who created the AIO design, which could have forced all Fury X products to be removed from sale.

The rankings alter rather dramatically when the DirectX 12 renderer is used, which suggests AMD products have a clear advantage. Asynchronous compute is the hot topic right now, and in theory it allows for greater GPU utilization in supported games. Ashes of the Singularity has implemented this for some time, and it makes for some very interesting findings. Currently, we're working on a performance analysis for the game, but I can reveal that there is a huge boost for AMD cards when moving from DirectX 11 to DirectX 12. Furthermore, there are reports indicating that Pascal might not be able to use asynchronous shaders, which makes Polaris and Fiji products more appealing.

Do AMD GPUs Lack Essential Hardware Features?


When selecting graphics hardware, it's not always about pure performance, and some consumers take exclusive technologies such as TressFX hair into account before purchasing. At this time, AMD's latest products incorporate LiquidVR, FreeSync, Vulkan support, HD3D, Frame Rate Target Control, TrueAudio, Virtual Super Resolution and more! This is a great selection of hardware features to create a thoroughly enjoyable user experience. NVIDIA adopts a more secretive attitude towards their own creations and often uses proprietary solutions. The Maxwell architecture has support for Voxel Global Illumination (VXGI), Multi-Frame Sampled Anti-Aliasing (MFAA), Dynamic Super Resolution (DSR), VR Direct and G-Sync. There's a huge debate about the benefits of G-Sync compared to FreeSync, especially when you take into account the pricing difference when opting for a new monitor. Overall, I'd argue that the NVIDIA package is better, but there's nothing really lacking from AMD in this department.

Have The Drivers Improved?


Historically, AMD drivers haven't been anywhere close to NVIDIA's in terms of stability or user-interface polish. Back in the old days, AMD, or ATI if we're going way back, had the potential to cause system lock-ups, software errors and more. A few years ago, I had the misfortune of updating a 7850 to the latest driver, and after rebooting, the system's boot order was corrupt. To be fair, this could be coincidental and have nothing to do with that particular update. On another note, the 290 series was plagued with hardware bugs causing black screens and blue screens of death while watching Flash videos. To resolve this, you had to disable hardware acceleration and hope the issues subsided.

The Catalyst Control Center always felt a bit primitive for my tastes although it did implement some neat features such as graphics card overclocking. While it’s easy enough to download a third-party program like MSI Afterburner, some users might prefer to install fewer programs and use the official driver instead.

Not so long ago, AMD appeared to have stalled in releasing drivers to properly optimize graphics hardware for the latest games. On 9th December 2014, AMD unveiled the Catalyst 14.12 Omega WHQL driver and made it ready for download. In a move which still astounds me, the company then decided not to release another WHQL driver for six months! Granted, they were working on a huge driver redesign and still produced the odd beta update. I honestly believe this was very damaging and prevented high-end users from considering the 295X2 or a CrossFire configuration. It's so important to have a consistent, solid software framework behind the hardware to allow for constant improvements. This is especially the case when using multiple cards, which require profiles to achieve proficient GPU scaling.

Crimson's release was a major turning point for AMD due to the modernized interface and enhanced stability. According to AMD, the software package involves 25 percent more manual test cases and 100 percent more automated test cases compared to AMD Catalyst Omega. Also, the most reported bugs were resolved, and they're using community feedback to quickly apply new fixes. The company hired a dedicated team to reproduce errors, which is the first step to providing a more stable experience. Crimson apparently loads ten times faster than its predecessor and includes a new game manager to optimize settings to suit your hardware. It's possible to set custom resolutions, including the refresh rate, which is handy when overclocking your monitor. The clean uninstall utility proactively works to remove any remaining elements of a previous installation, such as registry entries, audio files and much more. Honestly, this is such a revolutionary move forward, and AMD deserves credit for tackling their weakest elements head-on. If you'd like to learn more about Crimson's functionality, please visit this page.

However, it's far from perfect, and some users initially experienced worse performance with this update. Of course, there are going to be teething problems whenever a new release occurs, but it's essential for AMD to do everything they can to forge a new reputation for their drivers. Some of you might remember the furore surrounding the Crimson fan bug which limited the GPU's fans to 20 percent. Some users even reported that this caused their GPU to overheat and fail. Thankfully, AMD released a fix for this issue, but it shouldn't have occurred in the first place. Once again, it hurt their reputation and ability to move on from old preconceptions.

Is GeForce Experience Significantly Better?


In recent times, NVIDIA drivers have been the source of some negative publicity. More specifically, users were advised to ignore the 364.47 WHQL driver and instructed to download the 364.51 beta instead. One user said:

“Driver crashed my windows and going into safe mode I was not able to uninstall and rolling back windows would not work either. I ended up wiping my system to a fresh install of windows. Not very happy here.”

NVIDIA’s Sean Pelletier released a statement at the time which reads:

“An installation issue was found within the 364.47 WHQL driver we posted Monday. That issue was resolved with a new driver (364.51) launched Tuesday. Since we were not able to get WHQL-certification right away, we posted the driver as a Beta.

GeForce Experience has an option to either show WHQL-only drivers or to show all drivers (including Beta). Since 364.51 is currently a Beta, gamers who have GeForce Experience configured to only show WHQL Game Ready drivers will not currently see 364.51

We are expecting the WHQL-certified package for the 364.51 Game Ready driver within the next 24hrs and will replace the Beta version with the WHQL version accordingly. As expected, the WHQL-certified version of 364.51 will show up for all gamers with GeForce Experience.”

As you can see, NVIDIA isn't immune to driver delivery issues, and this was a fairly embarrassing situation. Despite this, it didn't appear to have a serious effect on people's confidence in the company or make them reconsider their views of AMD. While there are some disgruntled NVIDIA customers, they're fairly loyal and distrustful of AMD's ability to offer better drivers. The GeForce Experience software contains a wide range of fantastic inclusions such as ShadowPlay, GameStream, Game Optimization and more. After a driver update, the software can feel a bit unresponsive and takes some time to close. Furthermore, some people dislike the notion of Game Ready drivers being locked into the GeForce Experience software. If a report from PC World is correct, consumers might have to supply an e-mail address just to update their drivers through the application.

Before coming to a conclusion, I want to reiterate that my allegiances don't lie with either company, and the intention was to create a balanced viewpoint. I believe AMD's previous failures are weighing on the company's current product range, and it's extremely difficult to shift people's perceptions of the company's drivers. While Crimson is much better than CCC, it has also been the source of a horrendous fan bug resulting in a PR disaster for AMD.

On balance, it's clear AMD's decision to separate the Radeon group and CPU line was the right thing to do. Also, with Polaris around the corner and more games utilizing DirectX 12, AMD could improve their market share substantially. Although, from my experience, many users are prepared to deal with slightly worse performance just to invest in an NVIDIA product. Therefore, AMD has to encourage long-term NVIDIA fans to switch with reliable driver updates on a consistent basis. AMD products are not lacking in features or power; it's all about drivers! NVIDIA will always counteract AMD releases with products exhibiting similar performance numbers. In my personal opinion, AMD drivers are now on par with NVIDIA's, and it's a shame that they appear to be receiving unwarranted criticism. Don't get me wrong, the fan bug is simply inexcusable and going to haunt AMD for some time. I predict that despite the company's best efforts, the stereotypical view of AMD drivers will not subside. This is a crying shame, because they are trying to improve things and release updates on a significantly lower budget than their rivals.

Far Cry Primal Graphics Card Performance Analysis

Introduction


The Far Cry franchise gained renown for its impeccable graphical fidelity and enthralling open-world environments. As a result, each release is incredibly useful for gauging the current state of graphics hardware and performance across various resolutions. However, Ubisoft's reputation has suffered in recent years due to poor optimization on major titles such as Assassin's Creed: Unity and Watch Dogs. This means it's essential to analyze the PC version in a technical manner and see if it's really worth supporting with your hard-earned cash!

Far Cry Primal utilizes the Dunia Engine 2, which was deployed on Far Cry 3 and Far Cry 4. Therefore, I'm not expecting anything revolutionary compared to the previous games. This isn't necessarily a negative, though, because the amount of detail is staggering and worthy of recognition. That said, Far Cry 4 was plagued by intermittent hitching, and I really hope this has been resolved. Unlike Far Cry 3: Blood Dragon, the latest entry has a retail price of $60. According to Ubisoft, this is warranted due to the lengthy campaign and amount of content on offer. Given Ubisoft's turbulent history with recent releases, it will be fascinating to see how each GPU of this generation fares and which brand the game favours at numerous resolutions.

“Far Cry Primal is an action-adventure video game developed and published by Ubisoft. It was released for the PlayStation 4 and Xbox One on February 23, 2016, and it was also released for Microsoft Windows on March 1, 2016. The game is set in the Stone Age, and revolves around the story of Takkar, who starts off as an unarmed hunter and rises to become the leader of a tribe.” From Wikipedia.

Sapphire Nitro OC R9 Fury Graphics Card Review

Introduction


The initial unveiling of AMD's Fury X was eagerly anticipated due to the advent of high bandwidth memory and its potential to revolutionize the size-to-performance ratio of modern graphics cards. This new form of stackable video RAM provided a glimpse into the future and a departure from the current GDDR5 standard. Although, this isn't going to happen overnight, as production costs and sourcing HBM on a mass scale have to be taken into consideration. On another note, JEDEC recently announced GDDR5X with memory speeds up to 14 Gbps, which helps to enhance non-HBM GPUs while catering to the lower-mid range market. The Fury X and Fury utilize the first iteration of high bandwidth memory, which features a maximum capacity of 4GB.
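
As a rough sanity check on those figures, peak memory bandwidth follows directly from bus width and effective data rate. Using the Fury's known 4096-bit HBM interface at 1 Gbps effective, against a hypothetical 256-bit card running GDDR5X at its quoted 14 Gbps ceiling:

```latex
\text{bandwidth} = \frac{\text{bus width (bits)} \times \text{effective data rate (Gbps)}}{8}
\qquad\Rightarrow\qquad
\frac{4096 \times 1}{8} = 512~\text{GB/s (HBM1)}, \quad
\frac{256 \times 14}{8} = 448~\text{GB/s (GDDR5X)}
```

In other words, HBM wins on sheer interface width while running at far lower clocks, which is also where much of its power saving comes from.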

There’s some discussion regarding the effect of this limitation at high resolutions but I personally haven’t seen it cause a noticeable bottleneck. If anything, the Fury range is capable of outperforming the 980 Ti during 4K benchmarks while it tends to linger behind at lower resolutions. AMD’s flagship opts for a closed-loop liquid cooler to reduce temperatures and minimize operating noise. In theory, you can argue this level of cooling prowess was required to tame the GPU’s core. However, there are some air-cooled variants which allow us to directly compare between each form of heat dissipation.

Clearly, the Fury X's water-cooling apparatus adds a premium and isn't suitable for certain chassis configurations. To be fair, most modern case layouts can accommodate a CLC graphics card without any problems, but there are also concerns regarding reliability and the possibility of leaks. That's why air-cooled alternatives which drop the X branding offer great performance at a more enticing price point. For example, the Sapphire Nitro OC R9 Fury is around £60 cheaper than the XFX R9 Fury X. This particular card has a factory-overclocked core of 1050MHz and an astounding cooling solution. The question is, how does it compare to the Fury X and GTX 980 Ti? Let's find out!

Specifications:

Packing and Accessories

The Sapphire Nitro OC R9 Fury comes in a visually appealing box which outlines the Tri-X cooling system, factory overclocked core, and extremely fast memory. I’m really fond of the striking robot front cover and small cut out which provides a sneak peek at the GPU’s colour scheme.

On the opposite side, there’s a detailed description of the R9 Fury range and award-winning Tri-X cooling. Furthermore, the packaging outlines information regarding LiquidVR, FreeSync, and other essential AMD features. This is displayed in an easy-to-read manner and helps inform the buyer about the graphics card’s functionality.

In terms of accessories, Sapphire includes a user’s guide, driver disk, Select Club registration code, and relatively thick HDMI cable.

AMD Bundles Hitman with GPUs and CPUs

Freebies are something we all like, and AMD is now bundling the new Hitman game with some of their graphics cards and processors, as well as systems prebuilt with these components. AMD has partnered with IO Interactive again to bring this deal, and the studio has also joined the AMD Gaming Evolved program in order to get the best out of the hardware with top-flight effects and performance optimizations for PC gamers.

The bundle deal runs from February 16th, and it is valid with the purchase of selected products from participating retailers, as it always is. In this round, AMD bundles Hitman with their Radeon R9 390 and 390X graphics cards as well as their six- and eight-core FX processors (PIB). The bundle will last until April 30th, 2016, or while supplies last. Vouchers can be redeemed until June 30th, 2016.

The new Hitman game is offered in a seasonal fashion, with a base game and periodic add-ons that continue the story, and it is handled in the best possible way: the full experience, with the full season of new missions, won't cost more than other games cost on their own without DLC. This AMD bundle also includes the full game rather than just the initial release. You will also get access to the Hitman beta, which runs from February 19th to 22nd.

Those that have upgraded to Windows 10 will have the best experience with this new game as it has been built to take advantage of DX12, a feature that will make a very noticeable difference for AMD CPU users.

“Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs, called asynchronous compute engines, to handle heavier workloads and better image quality without compromising performance. PC gamers may have heard of asynchronous compute already, and Hitman demonstrates the best implementation of this exciting technology yet.”

You can find all the fine print and redeem your game code on the official Hitman mini-site. The beta phase is almost here, so it might be time to make that upgrade you've been holding back on. The full hardware specifications and recommendations were also published a few days ago, in case you missed them.

XFX Brings Back Blower Style R9 390X

When AMD first launched their R9 290 and 290X GPUs back in 2013, many had mixed feelings about the blower-style cooler. While the cooler was one of AMD's best efforts yet, it was not enough for the hot Hawaii chips, leading to high temperatures, throttling and noisy operation. In the end, many opted for custom coolers which were not blowers and did a better job at cooling. Two years later, it looks like XFX is planning to release 390/390X series cards equipped with what appears to be the original 290X cooler.

Using the Grenada core, the R9 390X is fundamentally the same as the 290X, with perhaps better binning and process improvements to differentiate them. XFX is also using the older cooler and not the revamped one AMD launched with the R9 390X a while ago. That newer 390X blower cooler takes its design cues from the Fury X and Nano. Given XFX's choice of the 2013 cooler over the 2015 model, either XFX has a lot of stock left or there is little difference between the two. You can check out the 2015 model below.

There is undoubtedly a market for blower-style GPUs, as they tend to exhaust more of the GPU's heat out of the case. This is especially important for small form factor (SFF) systems and builds with poor case cooling. If the cooler is still lacking, though, there won't be many users who will pick it up. The biggest advantage is that with a reference board, water-cooling blocks will be easier to source. It will be interesting to see how well the blower card does, both performance- and sales-wise.

PowerColor Launches Devil 13 Dual Core R9 390 Graphics Card

It was just a matter of time before we got it, and here it is: the fastest R9 390 series graphics card. The just-launched PowerColor Devil 13 Dual Core R9 390 is packed with dual Grenada cores running at 1GHz and 16GB of GDDR5 RAM running at 1350MHz via a high-speed 512-bit x2 memory interface.
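
Assuming the quoted 1350MHz is the usual GDDR5 base clock (quad data rate, so 5.4 Gbps effective), each GPU's 512-bit interface works out as follows:

```latex
\text{per-GPU bandwidth} = \frac{512 \times (1.35 \times 4)}{8} = 345.6~\text{GB/s}
```

Keep in mind that, as with all dual-GPU designs, the 16GB is really 8GB per core rather than one shared pool.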

The new Devil 13 card isn't a small one; it is a monster. The oversized card extends well beyond the I/O shield despite already being a triple-slot design. That really isn't a surprise considering the cooling required to keep two such chips running at peak efficiency.

The card features a massive 15-phase power delivery system with PowerIRstage components, Super Caps and Ferrite Core Chokes, providing the stability and reliability such a high-end graphics solution demands. Three Double Blades fans sit on top of an enormous aluminium-fin heatsink connected to a total of ten heat pipes and two large die-cast panels. This should create a perfect balance between thermal performance and noise reduction.

The PowerColor Devil 13 Dual Core R9 390 has LED backlighting that glows a bright red, with the Devil 13 logo pulsating slowly. It also comes with dual BIOS, and it requires four 8-pin PCI-E power connections. The graphics card even comes with a mouse included, and it isn't just a no-name one: you get the Razer Ouroboros, just because. The output connections are two DL-DVI-D, one HDMI, and one DisplayPort.

PowerColor did not announce an MSRP at this time, but it surely won't come cheap.

EXCLUSIVE: AMD 300 Series & Fiji Slideshow Leaked

With the impending launch upon us, we thought that it was only right that we shared some information that was anonymously sent to us by an eTeknix reader.

Enjoy the slides from this AMD presentation. We can only assume that these will be used at any press briefings in the upcoming day or so in Germany and the USA.

AMD R9 300 Series of Cards is Full of Surprises

We've had quite a few leaks and rumours for some time when it comes to AMD's new Radeon R9 300 series graphics cards, stretching all the way back to the first shots of a cooler shroud that hinted at a hybrid cooling system. But now sources tell TweakTown that AMD's newest generation of graphics cards won't arrive as they are portrayed in the current rumours and leaks.

The source didn't want to go into too much detail when talking to our friends at TT, but did say that "the new Radeon R9 390X will arrive with specifications and possibly features that are different to what the rumors currently suggest." The most interesting part of the source's comment is that the new HBM1 memory will actually deliver in real life the performance it promises on paper. If that is true, then Nvidia could be in some serious trouble down the road, at least until they can adapt their own processes and parts to match.

To summarize HBM, the first version to be released will have around 640GB/s bandwidth and the second generation will double that up to 1.2TB/s. Current cards provide an average of 300GB/s bandwidth, so even the first generation of HBM will double that.

There hasn’t really been much change on the memory side of graphics since the introduction of GDDR5 memory, so I can see how this could become a game changer and it’s hopefully something that will get AMD back on track so we see some more competition on the market. Competition is the best thing for us as consumers as it results in more effort in the R&D department as well as lower prices.

Thank you TweakTown for providing us with this information.

GeForce GTX 980 Ti Is Ready, but We’ll Have to Wait a Little

AMD is getting ready to launch their new 300 series very soon, and Nvidia isn't just standing on the sidelines to watch; they want to be prepared. According to the latest leaks coming through Sweclockers, who have an impressive track record of being spot on with Nvidia rumours, Nvidia already has their new GeForce GTX 980 Ti ready; they just don't want to release it yet.

The timeframe for the GTX 980 Ti is still set to the end of Q2 or Q3 2015, but with the option to switch things up and release it earlier in case AMD's new flagship GPUs kick their butts. I know that many people are waiting for the card, based on comments on our previous articles, but this is also good news. It will give Nvidia time to optimise and tweak the card, give board partners more time to create better custom PCB and cooler solutions, and allow for improvements in case AMD's 300 series cards surpass the leaked performance figures.

The sad side of the news, however, is that it looks pretty much like the Titan X with half the memory. The GTX 980 Ti will feature the full version of the GM200 core with 3072 CUDA cores, 192 texture units and a 384-bit memory interface for the 6GB of VRAM. Where the Titan X comes in at the $999 price tag, the GTX 980 Ti will most likely cost around $699.
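
If the card also inherits the Titan X's 7 Gbps GDDR5, which is an assumption at this stage since memory clocks haven't leaked, the 384-bit interface implies the same bandwidth as its bigger sibling:

```latex
\text{bandwidth} = \frac{384 \times 7}{8} = 336~\text{GB/s}
```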

Thank you Sweclockers for providing us with this information.

AMD's Next Generation Only Brings One New Chip

We've seen rumours about the AMD Radeon R9 300 series for quite some time now, and with the release dates getting closer each day, more and more of these rumours are being compiled into more reliable information.

The 390 and 380 series are confirmed for a Q2 2015 release. The other release windows are more or less speculation based on history and leaks, but they seem very likely.

One of the almost sad things about this is the use of GCN 1.1 (Graphics Core Next) in the 380 and 380X, and it shows us that we'll only really get one new chip in this generation: the Fiji used in the 390 and 390X cards.

Where the R9 380 series will be a rebranded R9 290, the 370 will be a rebrand of the current R9 285 – but when we say rebrand it just means that they will use the same chip architecture. Clock speeds and other aspects might have been tuned and optimised.

So, the wait is almost over for those who want to get their hands on AMD’s next gen cards with HBM memory.

Thanks to 3Dcenter for providing us with this information.

Image courtesy of MyDrivers