Micron Starts Sampling GDDR5X to Customers

While much of the excitement around VRAM centres on HBM2, that technology isn’t quite ready for prime time. For now, HBM2 remains a ways away and a premium product. To hold the line, memory vendors have come up with GDDR5X, a significantly improved version of GDDR5. In what is unquestionably good news, Micron has just started sampling its GDDR5X modules to customers, well ahead of its original summer target.

GDDR5X has been moving along quickly since JEDEC finalized the specification back in January. It was only last month that Micron received its first samples back from the fab for testing and validation. The quick turnaround suggests that GDDR5X was easier to implement than expected and that the quality of the initial batch was good enough that little needed to change in the production process.

Micron will be offering GDDR5X in 1GB and 2GB ICs, allowing for 8GB and 16GB of VRAM on memory buses as narrow as 256-bit. The biggest advantage of GDDR5X is the doubling of data per access, from 32 bytes to 64 bytes. Combined with higher clock speeds that allow for up to 16Gbps per pin and improved power efficiency, the new memory will be a good match for Pascal and Polaris while we wait for HBM2.
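To put those per-pin figures into perspective, here is a minimal sketch of the usual peak-bandwidth arithmetic (the 7Gbps GDDR5 figure is a typical speed for current cards, used here purely as an assumption for comparison):

```python
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return data_rate_gbps * bus_width_bits / 8

# Typical GDDR5 today vs GDDR5X at its eventual 16Gbps ceiling, both on a 256-bit bus
print(peak_bandwidth_gbs(7, 256))   # 224.0 GB/s (GDDR5 at 7Gbps)
print(peak_bandwidth_gbs(16, 256))  # 512.0 GB/s (GDDR5X at 16Gbps)
```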

SK Hynix Details HBM2 Production Timeline

For the upcoming graphics generation, both Nvidia and AMD are set to use HBM2 for their Pascal and Polaris graphics cards. While Samsung has already revealed that it has kicked off mass production of HBM2, we’re now getting word on competitor SK Hynix’s plans for the new memory technology. According to Golem.de, the 4GB stacks will enter mass production in Q2, while the 8GB variants will follow in Q3.

With HBM2, the maximum capacity per stack jumps from 1GB in HBM1 to 8GB. This will allow GPUs to carry up to 32GB of VRAM using the 8GB stacks in a four-stack Fiji/Fury-style configuration, 16GB using the 4GB stacks, and 8GB using the smallest 2GB stacks. Each stack also doubles its bandwidth, from 128GB/s to 256GB/s. Since that bandwidth increase comes entirely from a clock rate boost to 1GHz, we can expect latency to improve as well.
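To put those per-stack numbers together, here is a quick sketch of the possible configurations (assuming the four-stack layout AMD used for Fiji/Fury, as mentioned above):

```python
def hbm_config(stacks: int, gb_per_stack: int, gbs_per_stack: int = 256):
    """Total capacity (GB) and aggregate bandwidth (GB/s) for an HBM2 layout."""
    return stacks * gb_per_stack, stacks * gbs_per_stack

# Four HBM2 stacks in a Fiji/Fury-style configuration
print(hbm_config(4, 8))  # (32, 1024): 32GB of VRAM at 1TB/s
print(hbm_config(4, 2))  # (8, 1024): the 2GB stacks likely destined for consumer GPUs
```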

Given that SK Hynix, along with AMD, pioneered HBM1, it is surprising that Samsung was the first to reach mass production of the 4GB stacks. On the other hand, SK Hynix may still be first to market with the 2GB stacks, the ones more likely to be used for consumer GPUs. With 4 stacks, those allow for 8GB of VRAM, plenty for 4K and VR GPUs, while 2 stacks will do fine at 4GB for mainstream cards. Hopefully, this means that HBM2 cards will arrive sooner rather than later.

Micron Declares GDDR5X Right on Track

Engineers at the Micron Development Center in Munich have announced that they have received their first samples of GDDR5X back from the fab ahead of schedule and have started testing. In addition, Micron expects to ramp up volume production of GDDR5X on its 20nm memory process sometime in mid-2016. GDDR5X is an evolution of GDDR5 rather than a new memory technology and is expected to tide the industry over until HBM2 and HMC come online.

In early testing, some of the GDDR5X samples have already hit 13Gbps, just short of the eventual 14Gbps goal for the production modules. Combined with an improved prefetch and a new quad data rate mode, GDDR5X is expected to double bandwidth over GDDR5 while increasing capacity and reducing power consumption. With progress going well, samples will begin to ship to partners (like AMD and Nvidia) in the spring, meaning we are unlikely to see any GDDR5X-based cards until fall 2016.
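The prefetch change is where the doubling comes from: GDDR5 fetches 8 words per access on each 32-bit channel, while GDDR5X fetches 16. A quick back-of-the-envelope check (the 8n and 16n prefetch depths come from the respective JEDEC specifications; the rest is plain arithmetic):

```python
def bytes_per_access(prefetch_n: int, channel_width_bits: int = 32) -> int:
    """Data returned per access: prefetch depth times the 32-bit channel width."""
    return prefetch_n * channel_width_bits // 8

print(bytes_per_access(8))   # 32 bytes per access (GDDR5, 8n prefetch)
print(bytes_per_access(16))  # 64 bytes per access (GDDR5X, 16n prefetch)
```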

While GDDR5X will still fall short of HBM2 bandwidth, it will undoubtedly be cheaper. It will also allow GPUs to be built with narrower buses while maintaining the same overall bandwidth, making for cheaper, faster and less power-hungry cards. We can expect the mainstream and even performance segments to utilize GDDR5X, while budget cards stick with GDDR5 and enthusiast cards use HBM2. For more on GDDR5X, check out our write-up here.

Nvidia May Drop 2GB Model of GTX 960

The GTX 960 originally launched in both 2GB and 4GB variants, but Nvidia is reportedly planning to discontinue the lower-capacity model. By offering only a 4GB tier, Nvidia hopes to make the card more attractive to buyers, as they will only see the 4GB version. At this point, there is no word on whether the 4GB 960 will keep its current price or drop to fill the void left by the departing 2GB model.

The GTX 960 features the full GM206, Nvidia’s budget Maxwell die. While the card holds its own against AMD’s R9 380, it falls slightly behind in overall performance. With the launch of the GTX 950, the 960 has become even more of a niche product. The 950 features only 256 fewer shaders and 16 fewer TMUs, not a large margin by any means, placing its performance near 960 levels. With such competition, it is understandable why Nvidia would try to differentiate the card by offering only a 4GB model.

The biggest question is whether the GTX 960 actually needs 4GB of VRAM. While 4GB might be needed for 1440p, the 960 is solidly a 1080p card. That resolution has historically been the domain of 2GB cards, and by the time 4GB is required for 1080p, the GPU core of the 960 may well be lacking. One must also consider that the 950 has a 4GB model too and would age about the same as the 960. Both cards are also limited by their 128-bit memory interface, which may hinder the use of such a large frame buffer.
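To illustrate that last point, a rough sketch of the arithmetic (assuming the 960’s reference 7Gbps effective memory speed):

```python
# How long would the 960 take just to read all 4GB once at peak bandwidth?
bandwidth_gbs = 7 * 128 / 8                    # 112.0 GB/s on the 128-bit bus
frame_buffer_gb = 4
print(frame_buffer_gb / bandwidth_gbs * 1000)  # ~35.7ms, more than two 60FPS frames
```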

Undoubtedly though, the extra frame buffer would make the 960 more future-proof, if only just. It will be interesting to see if Nvidia does follow through with this move. We will follow this story as it develops and bring you more information as it arrives, so stay tuned!

Thank you HWBattle for providing us with this information

PowerColor R9 390 PCS+ 8GB Graphics Card Review

Introduction


Opinions on the R9 300 series were mixed upon release; its rebranded nature led some to see it as the fall of AMD and a short-changing of consumers. However, while they are in fact rebrands, these are great cards that deliver an excellent performance boost over the previous generation and provide a solid foundation for the Fiji range to build on.

Today on the test bench is the PowerColor R9 390 PCS+. This is the only version of the R9 390 that PowerColor offers, which keeps things simple: consumers don’t have to choose between different models. As with all other R9 390s, it features 8GB of VRAM, a 6000MHz effective memory clock and a core clock of over 1000MHz.

This R9 390 PCS+ edition in particular features a triple-fan metal cooling shroud hugging a large heatsink, ideal for 0dB operation at low load levels. The design of this card is deceiving: the shroud is wide at the top and tapers inwards, making the card look a lot larger than it actually is. In fact, it is 10mm shorter than the Gigabyte G1 Gaming and 7mm shorter than the Sapphire Tri-X cooler.

Packaging and accessories

The outer skin of the box is plain, but still extremely attractive to the eye. The trio of colours and simple design show that this is a no-fuss card, and the specifications along the bottom show that it means business.

The back of the box highlights some key features, accompanied by images to make them more appealing.

The accessory bundle isn’t bursting at the seams: PowerColor offers just a driver disc, installation manual and PCI-e power adapter.

Gigabyte G1 Gaming GTX 980Ti 6GB Graphics Card Review

Introduction

The GTX 980Ti has been around for a few months now, and in that time manufacturers have had the chance not only to design and create their own versions but also to perfect the product. Gigabyte is one of the many manufacturers to give the 980Ti the magic treatment, with a revamped version of the G1 Gaming that we previously glimpsed at Computex 2015. It definitely looks the part, but until now we didn’t know the specification beyond a memory clock of 7010MHz and a cooling capacity of 600W, similar to previous G1 Gaming cards.

Well, now I can finally and happily announce that we have the Gigabyte G1 Gaming GTX 980Ti on our test bench, and it is BIG; you forget just how big this card is thanks to the insane Windforce x3 cooling design. Like most of the gaming-series graphics cards released lately, this card comes with multiple clock speed presets, available only through Gigabyte’s own OC Guru II software:

  • ECO mode: 1060MHz base, 1151MHz boost
  • Gaming mode: 1090MHz base, 1241MHz boost
  • OC mode: 1190MHz base, 1279MHz boost

I don’t really understand why there are multiple presets, as most users will obviously want the best possible performance out of the box and will then overclock the graphics card to its maximum potential anyway.

The G1 Gaming cooling shroud has been given a new lick of paint and a small LED upgrade. As before, the Silent and Fan Stop LEDs illuminate when the fans have stopped.

New this time is customisation of the LED through the bundled OC Guru II app.

Onto the specifications of this beast.

With those specifications and that price point, this is poised to be one of the best GTX 980Ti graphics cards on the market, so let’s find out, shall we?

Packaging and accessories

The outer box skin resembles the latest G1 Gaming designs: simple, yet with key features detailed to catch the potential customer’s eye.

The back of the box shows a breakdown of the card and slightly more detailed information than what is shown on the front.

Gigabyte offers a ‘no-frills’ accessories package, simply including a quick installation guide, driver disc and PCIe power cable adapter.

Nvidia GTX 950 Round-Up Review: Three Cards Go Head to Head

Introduction


With all of the hype surrounding the GTX 900 series recently, it has been hard to imagine what the lower end of the graphics card market would hold for the refined Maxwell architecture. We originally saw Maxwell in the mighty GTX 750Ti, but it was only when the GTX 900 series was released that we received the Maxwell we know today. Our reviews and news have focused heavily on the GTX 980Ti and Titan X graphics cards, so information on the GTX 950 has been scarce to say the least; that is about to change. In today’s review, there is not one, not two, but three GTX 950s in for punishment: the ASUS STRIX GTX 950 2GB, Inno3D iChill AIRBOSS ULTRA GTX 950 2GB and the MSI GAMING GTX 950 2GB.

The GTX 950 is hot off the manufacturing line and features modest but pokey specifications; knowing NVIDIA, less is more, and we should see a stormer of a graphics card here regardless. Most of the options will feature 2GB of VRAM due to the card’s market positioning, but we will see some 4GB models, which should make for a very capable SLI configuration for not a great deal of money. With a price tag of around £120 depending on the manufacturer, performance isn’t going to be outstanding compared to the bigger options. However, the estimated performance and price tag make this an extremely attractive option for 1080p and online gamers. Personally, I feel that the GTX 950 will be the final piece in the puzzle for NVIDIA; it will then have a great graphics option at almost every price point.

Now, just because the GTX 950 is aimed at the lower end of the market, do not assume that you are not getting the full NVIDIA treatment. As with almost every NVIDIA GPU, you will get the following:

  • NVIDIA Surround
  • NVIDIA SLI
  • GPU Boost 2.0
  • NVIDIA G-Sync
  • DX12 Support

The GM206 GPU core makes its second appearance here after the GTX 960; however, it has been given a shave to bring performance, and price, down. Will the trimmed core be enough, or will we see another Titan X and GTX 980Ti scenario here? Let’s find out, shall we?

MSI Gaming GeForce GTX 980Ti ‘OC’ 6GB Graphics Card Review

Introduction and Packaging


MSI is one of those names you think of when compiling a list of computer components. You know that the amount of testing and quality control that goes into each product will shine through in performance and longevity. MSI produces some of the best-looking, best-cooled, quietest and best-performing graphics cards on the market, thanks to the military-grade components that go into every card.

Today on the test bench is the MSI Gaming GTX 980Ti OC edition. MSI has given this card the magic touch with a custom PCB and the epic Twin Frozr V cooling solution. The cooler features a brand new fan design called the TORX fan, with blades that force air down into the heatsink and out faster than conventional fan blades.

Since its launch, the GTX 980Ti has been adored the world over by enthusiasts of both AMD and NVIDIA. Based on a cut-down version of the GM200 GPU found in the Titan X, the GTX 980Ti seemed like it would lag behind the Titan X by around 10%; however, that never was the case. This graphics card pretty much dethroned the Titan X overnight, thanks to sub-vendors’ ability to customise the cooling designs and the huge price difference.

The outer box resembles the rest of MSI’s GeForce range of graphics cards: a small amount of information on the front, with the MSI Gaming dragon dominating in terms of colour.

The back of the box is filled with key MSI features, mainly focusing on the Twin Frozr V cooling solution.

Inside the main box, you’ll find a slimmer box containing the accessories.

The accessories include a user guide, product series ‘catalogue’, driver disc, DVI to VGA adapter and a 6-pin to 8-pin PCIe power adapter.

AMD Has Priority Access over Nvidia For HBM2

AMD made history earlier this month by becoming the first major GPU vendor to ship HBM, with its top-end Fury and Fury X graphics cards. Nvidia, however, has been absent so far, waiting for HBM2, a more advanced version of the HBM1 shipping with the Fury(X), before getting into the new tech. According to a report, though, AMD is leveraging its deal with SK Hynix to get priority access to HBM2 in time for its upcoming Arctic Islands GPUs.

While HBM1 is limited to 4GB and 512GB/s, HBM2 increases those numbers significantly, with up to 16/32GB of VRAM and over 1024GB/s. Like HBM1, HBM2 is expected to be in limited supply at launch. If AMD has priority for HBM2 and stocks are low, Nvidia may be practically unable to use HBM2 until supply improves beyond what AMD can consume. This might create a de facto exclusivity for AMD, offering the underdog a chance to dominate with HBM2 GPUs.

If the supply of HBM2 is limited, it could complicate things for Nvidia. Its Pascal architecture is set for 2016 and could be designed for either GDDR5 or HBM2, which vary widely in implementation. Nvidia could go with GDDR5, but it would risk losing its lead over AMD and being unable to refresh with HBM2 later on. If Nvidia does go with HBM2, supply might be heavily constrained, allowing AMD a chance to grab market share. It will be interesting to see both sides’ offerings in early 2016 and the choices they make for their lineups.

Thank you WCCFTech for providing us with this information

HBM Can be Overclocked Thanks to a Glitch?

The recent launch of the brand new AMD R9 Fury X graphics card brought with it a new architecture in the form of Fiji XT and a brand new memory technology, High Bandwidth Memory. During the reviews and briefings, it was made evident that HBM would not support overclocking, the technology being so immature compared to GDDR5. If you have been living under a rock for the last few weeks, here is our coverage of the lack of overclocking capability.

The overclocking features were locked in all third-party software and even in AMD’s own Catalyst Control Centre; however, one reviewer found a small glitch after a routine of system restarts. uk.hardware.info apparently managed a 20% memory overclock, going from 500MHz to 600MHz, alongside a core clock boost to 1145MHz.

As we can see from the Fire Strike results below, the card managed a score of 16,963; compared to our own overclocked Fire Strike score of 16,771 (core at 1139MHz), that is an extra gain of 192, although the CPU had also been overclocked in this instance.

Generally, VRAM overclocking yields very little gain, so was this more a stunt to prove it can be done than a quest for performance? Let us know your thoughts in the comments.

Thank you to WCCFTech for providing us with this information.

Update: AMD has now confirmed that even though GPU-Z shows an increase in memory speed, the frequency never actually changed; the memory frequency is configured at the hardware level and no software is able to alter it.

AMD Radeon R9 370 and M390X Become Standard at Alienware

Alienware announced on their Twitter feed earlier this week that the R9 370 and the R9 M390X GPUs are now standard options. The R9 M390X is a high-end mobile graphics card for laptops. It will most likely be based on the M295X used in the 2014 iMac 5K, meaning it should be built on the third-generation GCN Tonga chip of the desktop cards. The shader core count is identical, but the core clock is around 15% slower at 723MHz, and the memory is also clocked lower at 5000MHz effective. Performance will therefore sit a bit below the M295X, somewhere between the GTX 965M and the 970M, making it suitable for high-detail 1080p gaming.

The announcement on Twitter stated that the card will also have 4GB of VRAM.

Notebookcheck.net have a list of stats as follows:

Codename: Tonga
Architecture: GCN 3
Pipelines: 2048 (unified)
Core Speed*: 723MHz (Boost)
Memory Speed*: 5000MHz
Memory Bus Width: 256-bit
Memory Type: GDDR5
Max. Amount of Memory: 4096MB
Shared Memory: no
DirectX: DirectX 11.2, Shader 5.0
Power Consumption: 125 Watt
Transistors: 5,000 million
Technology: 28nm
Features: DirectX 12, OpenCL 1.2, OpenGL 4.3, Vulkan, Mantle
Notebook Size: large
Date of Announcement: 09.06.2015

Sounds good, right? With a TDP of around 100 watts, the card will be used in larger and more powerful gaming laptops such as the Alienware M17x.

The specs for the R9 370 were leaked earlier this year too, with an impressive 4GB of GDDR5 VRAM and a 130-watt TDP. It will have the Curacao Pro chip doing all the hard work and will pack a whopping 179GB/s of available bandwidth.
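That bandwidth figure lines up with a 256-bit bus; here is a quick sanity check, assuming the 5.6Gbps effective memory speed typical of Curacao-based cards:

```python
# 179GB/s implies roughly 5.6Gbps effective memory on a 256-bit bus
print(5.6 * 256 / 8)  # 179.2 GB/s
```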

Will you be getting one of these new cards? Let us know!

Sapphire R9 290X Tri-X 8GB CrossFireX Review

Introduction


Here at eTeknix, we strive to give the consumer the best possible advice in every aspect of technology. Today is no different, as we have a pair of Sapphire’s amazing R9 290X 8GB Tri-X edition graphics cards to combine for some CrossFireX action. The dedicated review for this graphics card can be found here. When striving for the best results, it is preferable to test two of the same model to eliminate variation in clock speeds or integrated components, so today we should see some excellent results.

In the dedicated review, this graphics card had more than enough power to play most games at 4K resolution at 60FPS, faltering slightly only in the more demanding Metro: Last Light.

We installed both graphics cards in our Core i7 5820K and X99-based test system, ensuring adequate spacing for optimum cooling and that both have access to sufficient PCI-e bandwidth for CrossFire operation.

The typical ‘hot spot’ in a CrossFire or SLI configuration is the graphics card closest to the processor; as both of these cards are equipped with the Tri-X cooler, positioning isn’t an issue.

As these graphics cards have been given Sapphire’s treatment, they have slightly higher clock speeds than a reference model, but since both cards are identical, there should be little to no variation in clock speeds; this will result in maximum gains during testing.

ASUS to Release White GeForce GTX 970 Turbo

ASUS is releasing a rather fine-looking modified NVIDIA GTX 970 in white with red accenting. Though information is limited, we know that the card will use a blower-type cooler, non-reference display connectors (two DVI, an HDMI and a DisplayPort) and likely a modified PCB.

Despite its bad reputation after the VRAM allocation scandal, the GTX 970 is still a popular graphics card, especially since it has no direct competition from AMD.

ASUS teased the GPU on its PCDIY page, promising more information soon.

Source: VideoCardz

“We’ll Do a Better Job Next Time”: NVIDIA Admits Defeat

Over the last few weeks, we’ve all heard of the scandal relating to the 3.5GB VRAM buffer on the GTX 970 graphics card. Yes, the card comes with 4GB, but the last 512MB is extremely slow compared to the rest. Today, NVIDIA CEO Jen-Hsun Huang came forward to elaborate on this unfortunate turn of events.

“We invented a new memory architecture in Maxwell. This new capability was created so that reduced-configurations of Maxwell can have a larger framebuffer – i.e., so that GTX 970 is not limited to 3GB, and can have an additional 1GB. GTX 970 is a 4GB card. However, the upper 512MB of the additional 1GB is segmented and has reduced bandwidth. This is a good design because we were able to add an additional 1GB for GTX 970 and our software engineers can keep less frequently used data in the 512MB segment. 

Unfortunately, we failed to communicate this internally to our marketing team, and externally to reviewers at launch.”

So they tried to push the boundaries with as little silicon as possible, resulting in a very fast and usable 3.5GB of VRAM, but failed to tell anyone outside the company about this memory architecture; something that could have played to their advantage instead seems to have backfired.

“The 4GB of memory on GTX 970 is used and useful to achieve the performance you are enjoying. And as ever, our engineers will continue to enhance game performance that you can regularly download using GeForce Experience. This new feature of Maxwell should have been clearly detailed from the beginning. We won’t let this happen again. We’ll do a better job next time.”

Credit to NVIDIA for owning up; let’s hope future driver updates make better use of the last 512MB.

Have you returned or received any form of refund for your GTX 970? Are you content with the performance and think this has been blown massively out of proportion by a small population? Let us know on Facebook and our Forums.

Thanks to NVIDIA for sharing this with us.

Nvidia Reporting Less than 5% Returns on GTX 970s after VRAM Controversy

Amid all the controversy surrounding NVIDIA, the aftermath seems much less severe than expected. So how bad is it actually? Not that bad, according to Jon Peddie, president of Jon Peddie Research, who had this to say: “I have heard as many as 5 per cent of the buyers are demanding a refund from the AIB suppliers.” By comparison, retailers are reporting only 1-2% returns, even with two of the biggest in the UK offering full refunds through to the end of the month.

If you missed the controversy: although the GTX 970 does have a full 4GB of VRAM, the last 512MB is accessed differently and therefore runs at a slower rate, causing slowdowns and stuttering in games when that extra 512MB is used. The same cut-down configuration also reduces the ROPs (Raster Operations Pipelines) from 64 to 56, and the L2 cache falls from 2048KB to 1792KB.

Source: Tweaktown

Nvidia Releases Statement Regarding GTX 970 VRAM Issue

There’s been a hot debate over the last few days: why is the GTX 970 only showing 3.5GB of VRAM instead of the full 4GB?

As far as we know, the VRAM is split into two sections. The first houses 3.5GB of high-priority memory; the second holds 512MB. Games can still use all 4GB, but the card falls back to the 3.5GB section when the extra memory isn’t needed.

Nvidia moderator ManuelG made the following statement:

“The GeForce GTX 970 is equipped with 4GB of dedicated graphics memory. However the 970 has a different configuration of SMs than the 980, and fewer crossbar resources to the memory system. To optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section. The GPU has higher priority access to the 3.5GB section. When a game needs less than 3.5GB of video memory per draw command then it will only access the first partition, and 3rd party applications that measure memory usage will report 3.5GB of memory in use on GTX 970, but may report more for GTX 980 if there is more memory used by other commands. When a game requires more than 3.5GB of memory then we use both segments.

We understand there have been some questions about how the GTX 970 will perform when it accesses the 0.5GB memory segment. The best way to test that is to look at game performance. Compare a GTX 980 to a 970 on a game that uses less than 3.5GB. Then turn up the settings so the game needs more than 3.5GB and compare 980 and 970 performance again.”
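To make the segmentation concrete, here is a minimal sketch of the behaviour Nvidia describes. Note that the per-segment bandwidth figures (roughly 196GB/s for the 3.5GB section and 28GB/s for the 0.5GB section) come from third-party analysis of the 970’s memory layout, not from Nvidia’s statement, so treat them as assumptions:

```python
# Hypothetical model of the GTX 970's segmented VRAM, for illustration only.
SEGMENTS = [
    {"name": "fast", "size_gb": 3.5, "bandwidth_gbs": 196},  # 7 memory controllers
    {"name": "slow", "size_gb": 0.5, "bandwidth_gbs": 28},   # 1 memory controller
]

def placement(required_gb: float):
    """Fill the fast segment first and spill into the slow one, mirroring the
    driver's higher-priority access to the 3.5GB section per Nvidia's statement."""
    remaining, used = required_gb, []
    for seg in SEGMENTS:
        take = min(remaining, seg["size_gb"])
        if take > 0:
            used.append((seg["name"], round(take, 3), seg["bandwidth_gbs"]))
            remaining -= take
    return used

print(placement(3.0))  # [('fast', 3.0, 196)] - only the fast partition is touched
print(placement(3.8))  # [('fast', 3.5, 196), ('slow', 0.3, 28)] - spills over
```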

They also released the following chart, which suggests there is no disproportionate performance drop when addressing the extra 512MB of VRAM.

So there you have it: according to Nvidia, there’s no issue here and this is how the card is supposed to operate; it just looks a bit strange.

Thank you DSO for providing us with this information.

Images courtesy of DSO.

8GB XFX Radeon R9 290X Graphics Card Spotted

Eager to get your graphics-card-loving hands on more VRAM? Then check out one of the first 8GB R9 290X graphics cards, from XFX! We’ve already heard that Sapphire, MSI and PowerColor are working on similar hardware, but the more manufacturers giving their hardware a VRAM boost, the better.

Leaked images show the popular XFX R9 290X sporting an 8GB sticker. It still looks like the previous card, with the lovely Ghost2 cooler, but with double the VRAM of the old model. This is good news for those pushing for better 4K performance and should reap huge rewards for those running multi-GPU configurations.

Are you excited about 8GB cards, or are you currently happy with a 4GB (or less) GPU?

Thank you VideoCardz for providing us with this information.

Images courtesy of VideoCardz.