Ashes of the Singularity is a futuristic real-time strategy game offering frenetic contests on a large scale. The huge number of units scattered across varied environments creates an enthralling experience built around complex strategic decisions. Throughout the game, you will explore unique planets and engage in gripping air battles. This bitter war between the human race and a masterful artificial intelligence revolves around an invaluable resource known as Turinium. If you’re into the RTS genre, Ashes of the Singularity should provide hours of entertainment. While the game itself is worthy of widespread media attention, the engine’s support for DirectX 12 and asynchronous compute has become a hot topic among hardware enthusiasts.
DirectX 12 is a low-level API with reduced CPU overheads which has the potential to revolutionise the way games are optimised for numerous hardware configurations. In contrast, DirectX 11 is far less efficient, and many mainstream titles suffered from poor scaling that didn’t properly utilise the potential of current graphics technology. On another note, DirectX 12 allows users to pair GPUs from competing vendors and utilise multi-GPU solutions without relying on driver profiles. It’s theoretically possible to achieve widespread optimisation and leverage extra performance using the latest version of DirectX.
Of course, Vulkan is another alternative which works across various operating systems and adopts an open-source ideology. However, the focus will likely remain on DirectX 12 for the foreseeable future unless there’s a sudden reluctance from users to upgrade to Windows 10. Even though the adoption rate is impressive, a large number of PC gamers are still using Windows 7, 8 and 8.1. Therefore, it seems prudent for developers to continue with DirectX 11 and offer a DirectX 12 renderer as an optional extra. Arguably, the real gains from DirectX 12 will occur when its predecessor is disregarded completely. This will probably take a considerable amount of time, which suggests the first DirectX 12 games might offer reduced performance benefits compared to later titles.
Asynchronous compute allows graphics cards to calculate multiple workloads simultaneously and extract extra performance. AMD’s GCN architecture has extensive support for this technology. In contrast, there’s a heated debate questioning whether NVIDIA products can even utilise asynchronous compute in an effective manner. Technically, AMD GCN graphics cards contain 2-8 asynchronous compute engines, each with 8 queues, varying by model, to provide single-cycle latencies. Maxwell revolves around two pipelines: one designed for high-priority workloads and another with 31 queues. Most importantly, NVIDIA cards can only “switch contexts at draw call boundaries”. This means the switching process is slower and gives AMD a major advantage. NVIDIA dismissed the early performance numbers from Ashes of the Singularity due to its development phase at the time. Now that the game has exited the beta stage, we can determine the performance numbers after optimisations were completed.
AMD has a serious image problem with their drivers which stems from buggy, unrefined updates and a slow release schedule. Even though this perception began many years ago, it’s still impacting the company’s sales and explains why their market share is so small. The Q4 2015 results from Jon Peddie Research suggest AMD reached a market share of 21.1% while NVIDIA reigned supreme with 78.8%. However, the Q4 data is more promising because AMD accounted for a mere 18.8% in the previous quarter. On the other hand, respected industry journal DigiTimes reports that AMD is likely to reach its lowest ever market position in Q1 2016. Thankfully, the financial results will emerge on April 21st, so we should know the full picture relatively soon. Of course, the situation should improve once Polaris and Zen reach retail channels. Most importantly, AMD’s share price has declined by more than 67% in five years, from $9 to under $3 as of March 28, 2016. The question is why?
Is the Hardware Competitive?
The current situation is rather baffling considering AMD’s extremely competitive product line-up in the graphics segment. For example, the R9 390 is a superb alternative to NVIDIA’s GTX 970 and features 8GB VRAM which provides extra headroom when using virtual reality equipment. The company’s strategy appears to revolve around minor differences in performance between the R9 390 and 390X. This also applied to the R9 290 and 290X due to both products utilizing the Hawaii core. NVIDIA employs a similar tactic with the GTX 970 and GTX 980, but there’s a marked price increase compared to their rivals.
NVIDIA’s ability to cater towards the lower-tier demographic has been quite poor because competing GPUs including the 7850 and R9 380X provided a much better price-to-performance ratio. Not only that, NVIDIA’s decision to deploy ridiculously low video memory amounts on cards like the GTX 960 has the potential to cause headaches in the future. It’s important to remember that the GTX 960 can be acquired with either 2GB or 4GB of video memory. Honestly, they should have simplified the process and produced only the higher-memory model, in a similar fashion to the R9 380X. Once again, AMD continues to offer a very generous amount of VRAM across various product tiers.
Part of the problem revolves around AMD’s sluggish release cycle and reliance on the Graphics Core Next (GCN) 1.1 architecture, first introduced way back in 2013 with the Radeon HD 7790. Despite its age, AMD deployed GCN 1.1 on their revised 390 series and didn’t do themselves any favours by denying accusations that the new line-up was a basic re-branding exercise. Of course, this proved to be the case, and some users managed to flash their 290/290X to a 390/390X with a BIOS update. There’s nothing inherently wrong with product rebrands if they remain competitive in the current market. It’s not exclusive to AMD, and NVIDIA have used similar business strategies on numerous occasions. However, I feel it’s up to AMD to push graphics technology forward and encourage their nearest rival to launch more powerful options.
Another criticism of AMD hardware, which seems to plague everything they release, is the perception that every GPU runs extremely hot. You only have to look at certain websites, social media and various forums to see this is the main source of people’s frustration. Some individuals have even produced images showing AMD graphics cards ablaze. So is there any truth to these suggestions? Unfortunately, the answer is yes, and a pertinent example comes from the R9 290 range. The 290/290X reference models utilized one of the most inefficient cooler designs I’ve ever seen and struggled to keep the GPU core below 95C under load.
Unbelievably, the core was designed to run at these high thermals, and AMD created a more progressive RPM curve to reduce noise. As a result, the GPU could take 10-15 minutes to reach idle temperature levels. The Hawaii temperatures really impacted the company’s reputation and forged a viewpoint among consumers which I highly doubt will ever disappear. It’s a shame, because the upcoming Polaris architecture, built on the 14nm FinFET process, should exhibit significant efficiency gains and dispel the notion of high thermals on AMD products. There’s also the idea that AMD GPUs have a noticeably higher TDP than their NVIDIA counterparts. For instance, the R9 390 has a TDP of 275 watts while the GTX 970 only consumes 145 watts. On the other hand, the Fury X utilizes 275 watts compared to the GTX 980 Ti’s rating of 250 watts.
Eventually, AMD released a brand new range of graphics cards utilizing the first iteration of high bandwidth memory. Prior to its release, expectations were high and many people expected the Fury X to dethrone NVIDIA’s flagship graphics card. Unfortunately, this didn’t come to fruition and the Fury X fell behind in various benchmarks, although it fared better at high resolutions. The GPU also encountered supply problems, and early samples emitted a loud whine from the pump. Asetek even threatened to sue Cooler Master, who created the AIO design, which could force all Fury X products to be removed from sale.
The rankings alter rather dramatically when the DirectX 12 renderer is used, which suggests AMD products have a clear advantage. Asynchronous compute is the hot topic right now, which in theory allows for greater GPU utilization in supported games. Ashes of the Singularity has implemented this for some time, and it makes for some very interesting findings. Currently, we’re working on a performance analysis for the game, but I can reveal that there is a huge boost for AMD cards when moving from DirectX 11 to DirectX 12. Furthermore, there are reports indicating that Pascal might not be able to use asynchronous shaders, which makes Polaris and Fiji products more appealing.
Do AMD GPUs Lack Essential Hardware Features?
When selecting graphics hardware, it’s not always about pure performance; some consumers take into account exclusive technologies, such as TressFX hair, before purchasing. At this time, AMD’s latest products incorporate LiquidVR, FreeSync, Vulkan support, HD3D, Frame Rate Target Control, TrueAudio, Virtual Super Resolution and more! This is a great selection of hardware features to create a thoroughly enjoyable user experience. NVIDIA adopts a more secretive attitude towards their own creations and often uses proprietary solutions. The Maxwell architecture has support for Voxel Global Illumination (VXGI), Multi-Frame Sampled Anti-Aliasing (MFAA), Dynamic Super Resolution (DSR), VR Direct and G-Sync. There’s a huge debate about the benefits of G-Sync compared to FreeSync, especially when you take into account the pricing difference when opting for a new monitor. Overall, I’d argue that the NVIDIA package is better, but there’s nothing really lacking from AMD in this department.
Have The Drivers Improved?
Historically, AMD drivers haven’t been anywhere close to NVIDIA’s in terms of stability and providing a pleasant user interface. Back in the old days, AMD, or ATI if we’re going way back, had the potential to cause system lock-ups, software errors and more. A few years ago, I had the misfortune of updating a 7850 to the latest driver and, after rebooting, the system’s boot order was corrupt. To be fair, this could be coincidental and have nothing to do with that particular update. On another note, the 290 series was plagued with hardware bugs causing black screens and blue screens of death whilst watching Flash videos. To resolve this, you had to disable hardware acceleration and hope the issues subsided.
The Catalyst Control Center always felt a bit primitive for my tastes although it did implement some neat features such as graphics card overclocking. While it’s easy enough to download a third-party program like MSI Afterburner, some users might prefer to install fewer programs and use the official driver instead.
Not so long ago, AMD appeared to have stalled in releasing drivers to properly optimize graphics hardware for the latest games. On 9th December 2014, AMD unveiled the Catalyst 14.12 Omega WHQL driver and made it ready for download. In a move which still astounds me, the company then decided not to release another WHQL driver for six months! Granted, they were working on a huge driver redesign and still produced the odd beta update. I honestly believe this was very damaging and prevented high-end users from considering the 295X2 or a CrossFire configuration. It’s so important to have a consistent, solid software framework behind the hardware to allow for constant improvements. This is especially the case when using multiple cards, which require profiles to achieve proficient GPU scaling.
Crimson’s release was a major turning point for AMD due to the modernized interface and enhanced stability. According to AMD, the software package involves 25 percent more manual test cases and 100 percent more automated test cases compared to AMD Catalyst Omega. Also, the most reported bugs were resolved, and they’re using community feedback to quickly apply new fixes. The company hired a dedicated team to reproduce errors, which is the first step to providing a more stable experience. Crimson apparently loads ten times faster than its predecessor and includes a new game manager to optimize settings to suit your hardware. It’s possible to set custom resolutions, including the refresh rate, which is handy when overclocking your monitor. The clean uninstall utility proactively removes any remaining elements of a previous installation, such as registry entries, audio files and much more. Honestly, this is such a revolutionary move forward, and AMD deserves credit for tackling their weakest elements head on. If you’d like to learn more about Crimson’s functionality, please visit this page.
However, it’s far from perfect, and some users initially experienced worse performance with this update. Of course, there are going to be teething problems whenever a new release occurs, but it’s essential for AMD to do everything they can to forge a new reputation for their drivers. Some of you might remember the furore surrounding the Crimson fan bug which limited the GPU’s fans to 20 percent. Some users even reported that this caused their GPU to overheat and fail. Thankfully, AMD released a fix for this issue, but it shouldn’t have occurred in the first place. Once again, it’s hurting their reputation and ability to move on from old preconceptions.
Is GeForce Experience Significantly Better?
In recent times, NVIDIA drivers have been the source of some negative publicity. More specifically, users were advised to ignore the 364.47 WHQL driver and instructed to download the 364.51 beta instead. One user said:
“Driver crashed my windows and going into safe mode I was not able to uninstall and rolling back windows would not work either. I ended up wiping my system to a fresh install of windows. Not very happy here.”
NVIDIA’s Sean Pelletier released a statement at the time which reads:
“An installation issue was found within the 364.47 WHQL driver we posted Monday. That issue was resolved with a new driver (364.51) launched Tuesday. Since we were not able to get WHQL-certification right away, we posted the driver as a Beta.
GeForce Experience has an option to either show WHQL-only drivers or to show all drivers (including Beta). Since 364.51 is currently a Beta, gamers who have GeForce Experience configured to only show WHQL Game Ready drivers will not currently see 364.51
We are expecting the WHQL-certified package for the 364.51 Game Ready driver within the next 24hrs and will replace the Beta version with the WHQL version accordingly. As expected, the WHQL-certified version of 364.51 will show up for all gamers with GeForce Experience.”
As you can see, NVIDIA isn’t immune to driver delivery issues, and this was a fairly embarrassing situation. Despite this, it didn’t appear to have a serious effect on people’s confidence in the company or make them reconsider their views of AMD. While there are some disgruntled NVIDIA customers, they’re fairly loyal and distrustful of AMD’s ability to offer better drivers. The GeForce Experience software contains a wide range of fantastic inclusions such as ShadowPlay, GameStream, Game Optimization and more. After a driver update, however, the software can feel a bit unresponsive and take some time to close. Furthermore, some people dislike the notion of Game Ready drivers being locked into the GeForce Experience software. If a report from PC World is correct, consumers might have to supply an e-mail address just to update their drivers through the application.
Before coming to a conclusion, I want to reiterate that my allegiances don’t lie with either company and the intention was to create a balanced viewpoint. I believe AMD’s previous failures are impacting the company’s current product range, and it’s extremely difficult to shift people’s perceptions of the company’s drivers. While Crimson is much better than CCC, it was also the source of a horrendous fan bug which resulted in a PR disaster for AMD.
On balance, it’s clear AMD’s decision to separate the Radeon group and CPU line was the right thing to do. Also, with Polaris around the corner and more games utilizing DirectX 12, AMD could improve their market share significantly. However, from my experience, many users are prepared to deal with slightly worse performance just to invest in an NVIDIA product. Therefore, AMD has to encourage long-term NVIDIA fans to switch with reliable driver updates on a consistent basis. AMD products are not lacking in features or power; it’s all about the drivers! NVIDIA will always counteract AMD releases with products exhibiting similar performance numbers. In my personal opinion, AMD drivers are now on par with NVIDIA’s, and it’s a shame that they appear to be receiving unwarranted criticism. Don’t get me wrong, the fan bug is simply inexcusable and going to haunt AMD for some time. I predict that despite the company’s best efforts, the stereotypical view of AMD drivers will not subside. This is a crying shame, because they are trying to improve things and release updates on a significantly lower budget than their rivals.
The Far Cry franchise gained notoriety for its impeccable graphical fidelity and enthralling open-world environments. As a result, each release is incredibly useful for gauging the current state of graphics hardware and performance across various resolutions. However, Ubisoft’s reputation has suffered in recent years due to poor optimization on major titles such as Assassin’s Creed: Unity and Watch Dogs. This means it’s essential to analyze the PC version in a technical manner and see if it’s really worth supporting with your hard-earned cash!
Far Cry Primal utilizes the Dunia Engine 2, which was deployed in Far Cry 3 and Far Cry 4. Therefore, I’m not expecting anything revolutionary compared to the previous games. This isn’t necessarily a negative though, because the amount of detail is staggering and worthy of recognition. Saying that, Far Cry 4 was plagued by intermittent hitching and I really hope this has been resolved. Unlike Far Cry 3: Blood Dragon, the latest entry has a retail price of $60. According to Ubisoft, this is warranted due to the lengthy campaign and amount of content on offer. Given Ubisoft’s turbulent history with recent releases, it will be fascinating to see how each GPU from this generation fares and which brand the game favours at various resolutions.
“Far Cry Primal is an action-adventure video game developed and published by Ubisoft. It was released for the PlayStation 4 and Xbox One on February 23, 2016, and it was also released for Microsoft Windows on March 1, 2016. The game is set in the Stone Age, and revolves around the story of Takkar, who starts off as an unarmed hunter and rises to become the leader of a tribe.” From Wikipedia.
Rise of the Tomb Raider originally launched on November 10th and received widespread critical acclaim from various press outlets. Unfortunately, the game flew under the radar because Fallout 4 released on the same day. This was a strategic error which hindered the game’s sales and prevented consumers from giving it their undivided attention. It’s such a shame, because Rise of the Tomb Raider is a technical marvel when you consider the Xbox One’s limited horsepower. Even though it’s not technically an exclusive, PC players had to wait until after the Christmas period to enjoy the latest exploits of everyone’s favourite heroine.
The PC version was created by Nixxes Software, who worked on the previous Tomb Raider reboot as well as a number of other graphically diverse PC games. The studio is renowned for creating highly polished and well-optimized PC versions featuring an astonishing level of graphical fidelity. Prior to release, NVIDIA recommended a GTX 970 for the optimal 1080p experience and a 980 Ti for 1440p. Since then, there have been performance patches from the developer and driver updates to help with scaling across various hardware configurations. This means it will be fascinating to see the performance numbers now that the game has matured and gone through a large number of post-release hotfixes.
“Rise of the Tomb Raider is an action-adventure video game developed by Crystal Dynamics and published by Square Enix. It is the sequel to the 2013 video game Tomb Raider, which was itself, the second reboot to its series. It was released for Xbox One and Xbox 360 in November 2015 and for Microsoft Windows in January 2016. It is set to release for PlayStation 4 in late 2016.
The game’s storyline follows Lara Croft as she ventures into Siberia in search of the legendary city of Kitezh, whilst battling a paramilitary organization that intends on beating her to the city’s promise of immortality. Presented from a third-person perspective, the game primarily focuses on survival and combat, while the player may also explore its landscape and various optional tombs. Camilla Luddington returns to voice and perform her role as Lara.” From Wikipedia.
So, let’s get to it and see how some of the latest graphics cards on the market hold up with the latest from Crystal Dynamics!
Here at eTeknix, we strive to give the consumer the best possible advice in every aspect of technology. Today is no different and we are excited to bring you the CrossFireX review of the newly released R9 380X graphics cards.
Based on the R9 380, which was itself based on the R9 285, the R9 380X was designed to fill the obvious gap between the R9 380 and R9 390. Priced at just under £200, sales have proven strong in the first weeks, and board partners have given their models the usual overclocking treatment, with an average clock speed of around 1030MHz, roughly 50MHz higher than the ‘reference’ design.
Through our testing of both the XFX DD and Sapphire Nitro models, it was evident that performance wasn’t as high as I hoped and still left a gap under the R9 390. Looking back at the R200 series lineup, the R9 285 was an extremely late arrival. It was based on an architecture we were familiar with, but it introduced GCN 1.2, which is the foundation of the brand new R9 Fury range. To me, this leaves a gap for an R9 385 to be introduced to the market as the next step in the graphics card race for late 2016.
When we test in CrossFireX, we aim to use two identical graphics cards to ensure that everything is as similar as possible. When using the same cards, you can almost guarantee the same cooling capabilities, power draw, core clock and other variables. This then gives us the best possible outcome for maximum performance, as the computer does not need to compensate for any differences.
If you’ve been reading up on the latest R9 380X range, you may have seen stories, and possibly even reviews, suggesting the performance wasn’t as high as many expected. I was one of those who was very disappointed with the performance compared to the R9 380 and R9 390 graphics cards, as it came in only around 10% faster than the R9 380. I really shouldn’t be complaining; at a sub-£200 price point, the R9 380X is poised as a great 1440p graphics card, albeit with some settings lowered to medium or high from ultra.
Today on the test bench we have the Sapphire Nitro R9 380X. Fitting nicely into the current Nitro range from Sapphire at just £199.99, this card features an identical cooling design to the R9 380 but with one huge visual improvement, which I will disclose later. The card features 16K capacitors for ultra-long life and increased core and memory clocks of 1040MHz and 6000MHz respectively, so we should expect improved performance over what we have been witnessing so far.
Before anyone starts chanting “rebrand”, stop! As much as I agree that this is technically a rebrand, I’m going to re-word that to remanufacturing. A rebrand would be taking the R9 285, putting a new cooling design on there and calling it an R9 380 with no other changes. However, the R9 380 and R9 380X are remanufactured with a much more precise manufacturing procedure to squeeze as much performance as possible from the Antigua GPU core.
Packaging and Accessories
The Nitro box has changed for this card compared to the rest of the R9 Nitro range. We now have a simple portrait style box without a window to show off the contents.
I really think Sapphire wants everyone to know that this card has a backplate fitted.
Accessories are simple: general paper-based material, a driver disk, a case sticker and a DVI-to-VGA adapter.
According to Japanese sources, AMD is set to unveil a multitude of new products in the coming week. At an event on November 20th, AMD will be showing off their new Crimson software suite, various partner products, as well as a new GPU.
While Crimson is undoubtedly a major and important release for AMD, the graphics card launch will probably take center stage. After the complete Fiji lineup launched, there are only two main candidates for a new GPU: the Fury Gemini, which is expected to be a Fiji X2 part, or the much-leaked 380X, which we pretty much already know all about.
For the past month or so, the leaked launch dates for the 380X have come and gone many times. This time around though, given the multitude of leaks we have seen, the card may finally launch. With Black Friday coming up soon, as well as the rest of the holiday season, a 380X launch could capitalize on the increased spending in the United States. On the other hand, the flagship dual-Fiji part will probably sell well enough on its own given its premium status. We could see the Fiji X2 part before the end of the year, but given past timeframes on AMD launches, that is unlikely.
Just last month, we heard that the AMD R9 380X was on its way, as the card’s specifications, as well as a picture of the card from XFX, leaked online. The new AMD card, although admittedly I use the term “new” lightly, looks set to topple the Nvidia GeForce GTX 970, offering impressive performance at a mighty affordable price, which should make it ideal for 1440p gaming.
The new card features a 28nm chip with a clock of up to 1100MHz, 4GB of GDDR5 at 5500MHz-6000MHz and a 256-bit bus. Of course, the specifications seem decent enough, and no doubt a few AMD partners such as XFX, Sapphire, Gigabyte and PowerColor will put their own touch of magic in there to get the most out of the card using custom cooling and PCB solutions; my money is on Sapphire putting out the best card of the bunch, as we’ve seen so many times with AMD cards in the past.
The card is expected to launch to the general public in just a few days’ time, November 15th to be exact. Of course, this is just a rumour at this time, but Hardware Battle has proven a reliable source of leaks in the past.
What’s more exciting is that the card is expected to retail at just $249, much lower than the GTX 970, which is often north of $300.
From the GPU-Z screenshot, we get a pretty good idea of the card’s performance. The 2048 shader cores, 128 TMUs and 32 ROPs all clock in at a healthy 1070MHz. The pixel fill rate comes in at 34.2 GPixel/s, which is pretty much expected given the Tonga configuration. The texture fill rate is 137 GTexel/s, which is much better than what the R9 380 and R9 280X boasted. 4GB of 6125MHz GDDR5 VRAM wraps it up, giving 172GB/s via the 256-bit bus. Overall, these specs place the card solidly between the R9 380 and R9 290/390.
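The headline fill-rate figures are easy to sanity-check, since peak throughput is simply the number of functional units multiplied by the core clock. A quick sketch, using the unit counts and 1070MHz clock from the leak above:

```python
def fill_rate(units, core_mhz):
    """Peak throughput in G-ops/s: functional units x core clock (MHz -> GHz)."""
    return units * core_mhz / 1000

# 32 ROPs at 1070MHz -> pixel fill rate
pixel_rate = fill_rate(32, 1070)     # 34.24, i.e. the 34.2 GPixel/s reported
# 128 TMUs at 1070MHz -> texture fill rate
texture_rate = fill_rate(128, 1070)  # 136.96, i.e. the ~137 GTexel/s reported
print(round(pixel_rate, 1), round(texture_rate, 1))
```

Both results line up with the GPU-Z readout, which lends some credibility to the leaked clock speed.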
In 3DMark 11 Extreme, the card managed to score 4024 overall, with a relatively weak Intel Core i5, and 3768 in graphics. The R9 290 scores around the 4200 mark and the R9 280X about 3300. Based on our estimates, extrapolating Tonga/GCN 1.2’s improvements over the R9 280X/GCN 1.0, we would expect the 380X to fall a bit short of the R9 290 but still surpass the GTX 780 in most cases. This is despite the 780 scoring about 3600 in 3DMark 11, since that test tends to favour Nvidia cards.
Overall, AMD looks to have a winner in the midrange with this card. Depending on the price, the 380X can steal some market share back from Nvidia, which has a sizable performance gap between the 970 and 960. Given some of the limitations of the 960, Nvidia may want to consider a cut-down 970 that is not memory-bottlenecked in order to do battle. As one of the last 28nm GCN cards, AMD is making sure to go out with a bang.
With just over a month until the expected launch, more information on AMD’s Radeon R9 380X has surfaced. Last time around, we got a glimpse of the XFX Double Dissipation model, and today we’re treated to the full specifications.
Unlike the R9 285/380, the 380X will feature the full Tonga die. Tonga originally launched last year cut down to 1792 shader units, 112 TMUs and 32 ROPs over a 256-bit GDDR5 bus. Keeping the same 256-bit bus, the 380X will feature 2048 shader units, 128 TMUs and 32 ROPs, hardware parity with the aged 280X. Given the architectural improvements GCN 1.2 introduced with Tonga, the 380X should be at least 10% faster than its predecessor. Clock speeds also get a boost to between 1000-1100MHz, while GDDR5 speeds will stay about the same at around 5500MHz-6000MHz.
With better performance, DX12 support and other advantages over the GTX 770, AMD has a good chance to dominate the $150 gap between Nvidia’s GTX 960 and 970. The 380X may also come standard with 4GB of VRAM, as 2GB is probably a bit too low for this tier of performance. If the 380X does well in the market, it will be interesting to see if Nvidia will respond with an even more cut-down GM204 in the form of a GTX 960 Ti, cut prices on the 970, or simply wait it out till Pascal.
Thank you HWBattle for providing us with this information
Ever since AMD debuted Tonga Pro in the R9 285, everyone has been waiting for the full Tonga XT die. Earlier this week, we got our first hint with a glimpse of the XFX Double Dissipation R9 380X. Today, we’re getting word that the R9 380X will finally arrive in late October, a little over a month from now. This will fill the relatively large gap between the R9 380 and 390.
With GCN 1.2, the R9 380X will bring the efficiency gains first demonstrated in the R9 285. The card will feature 2048 Stream Processors, 128 TMUs and 32 ROPs connected to 4GB of GDDR5 across a 256-bit bus. With GCN 1.2’s improved architecture, the 380X should perform about 10% faster than the 280X at the same clocks. Despite a drop in raw bandwidth compared to the 280X, the introduction of delta color compression should alleviate any issues. The card should also feature good DX12 support with asynchronous compute as well as FreeSync.
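The bandwidth deficit is easy to quantify: raw peak bandwidth is the bus width in bytes multiplied by the effective memory clock. A rough sketch, using the 280X's 384-bit/6000MHz configuration and the lower end of the 380X's rumoured memory speeds; note the 30% compression gain at the end is purely an illustrative assumption, as the real saving from delta color compression varies by workload:

```python
def bandwidth_gbps(bus_bits, effective_mhz):
    """Raw peak memory bandwidth in GB/s: (bus width in bytes) x effective data rate."""
    return bus_bits / 8 * effective_mhz / 1000

r9_280x = bandwidth_gbps(384, 6000)  # 288.0 GB/s on the old 384-bit bus
r9_380x = bandwidth_gbps(256, 5500)  # 176.0 GB/s raw on the 256-bit bus

# Illustrative assumption only: suppose delta color compression recovers
# ~30% effective bandwidth; the actual figure depends on the workload.
effective_380x = r9_380x * 1.3
print(r9_280x, r9_380x, round(effective_380x, 1))  # 288.0 176.0 228.8
```

Even with an optimistic compression factor, the 380X falls short of the 280X's raw figure, which is why the architectural gains matter so much here.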
Unlike the earlier R9 370X, which was limited to China, the 380X will be available worldwide. With full Tonga on tap, AMD should be able to strike at the hole Nvidia has left between the 960 and 970. This should hopefully help AMD make some more revenue, gain some market share and be more competitive overall. The only spoiler would be if Nvidia somehow introduced a GTX 960 Ti.
Thank you Fudzilla for providing us with this information
Chinese technology site Expreview has released images from a reliable “insider source” which could be the first glimpse of AMD’s 380X graphics card. This particular sample appears to be XFX’s Double Dissipation model and features a dual-fan design. Apparently, the 380X is built on a 384-bit interface, which could signify a 3GB or 6GB configuration.
The 380X will also fully utilize the Tonga GPU and could retail for around $250. According to the source, this card is already scheduled for mass production in China but it’s unknown if it will be sold worldwide. Unfortunately, there’s a distinct lack of information from the pictures and it’s difficult to analyze the card’s potential performance and overall build.
Despite this, initial impressions are fairly promising and the gaming-orientated theme should suit a wide array of custom builds. On a more practical note, the protruding heatpipes should provide enough cooling to prevent any thermal throttling. Early speculation about the card’s performance is fairly conflicting, so it’s impossible to predict the 380X’s price-to-performance ratio. Given AMD’s delicate financial position, it seems sensible to target the lower-to-mid-range customer. Hopefully, the 380X can alleviate people’s concerns about AMD and demonstrate they are still more than capable of making fantastic GPUs for the mainstream gamer.
We’ve seen rumours about the AMD Radeon R9 300 series for quite some time now, and with the release dates getting closer each day, more and more of these rumours are being compiled into more reliable information.
The 390 and 380 series are confirmed for a Q2 2015 release, but the other release times are more or less speculation based on history and leaks, though they seem very likely.
One of the almost sad things about this is the use of GCN 1.1 (Graphics Core Next) in the 380 and 380X; it shows us that we’ll only really get one new chip in this generation: the Fiji used in the 390 and 390X cards.
Where the R9 380 series will be a rebranded R9 290, the 370 will be a rebrand of the current R9 285; but when we say rebrand, it just means they will use the same chip architecture. Clock speeds and other aspects might have been tuned and optimised.
So, the wait is almost over for those who want to get their hands on AMD’s next gen cards with HBM memory.
Thanks to 3Dcenter for providing us with this information
The new Nvidia GeForce GTX 980 and 970 have blown us away with their mixture of high performance and low power consumption; so where are the AMD cards to compete with them? We have the R9 285, but with AMD shooting down rumours about the R9 285X, we may now be waiting until 2015 before we see a new generation of cards from AMD.
The Radeon R300 series, also currently known as “Pirate Island” is still nowhere to be seen, but rumour suggests that AMD will launch their new high-end cards early next year – best estimate would be just inside 6 months. There are said to be three ranges in the Pirate Island series; the high-end “Bermuda”, the mid range “Fijian” or “Fiji” and the low-end “Treasure Island”. It is expected that the Radeon R9 380 “Fiji” will be the first to market.
It has been suggested that the cards will use a 20nm process, but with TSMC currently using their 20nm fabs for Apple products, it seems unlikely that AMD will get to market with 20nm hardware. Nvidia pushed forward with 28nm for good reason, and both AMD and Nvidia are likely to stay with it and skip to 16nm when the time for the next-next generation comes. However, if AMD can pull off a range of cards at 20nm, it could prove a big win for the company in this never-ending battle with Nvidia.
Pirate Island will incorporate other new AMD technologies such as GCN 2.0; we should also see improved connectivity such as HDMI 2.0 and an updated DisplayPort to stay competitive with Nvidia.
If you’re waiting for the new AMD cards, don’t hold your breath for too long; with any luck we may see something new on display at CES 2015 in January.
Thank you MyDrivers for providing us with this information.