Sapphire Launches G2A Competition on Nitro GPUs

Sapphire is one of the most reputable graphics card vendors and produces a huge range of custom-cooled AMD solutions. The company has forged a reputation for exceptional cooling apparatus which manages to tame the GPU core while remaining quiet; this is a remarkable achievement, especially when you consider that AMD’s 290 series suffered from very high load temperatures. Their latest range, entitled NITRO, offers premium features at a very respectable price point. According to Sapphire, the:

 “….NITRO series boasts a range of features previously reserved for high-end cards, including long-life capacitors and award-winning Black Diamond Chokes, as well as our award-winning cooling solutions. Its sleek, elegant contours have been designed to suit any build. And the latest graphics architecture from AMD ensures fast, reliable gaming, performance tuned for any level of gamer.”

The company has just partnered with digital games store, G2A.com, and will:

“…give anyone who buys a SAPPHIRE NITRO Gaming Series card a chance to win a discount voucher from G2A.com.

NITRO R7 300 series = 20 USD discount voucher
NITRO R9 300 series = 40 USD discount voucher
NITRO R9 Fury series = 50 USD discount voucher

The draw will be made weekly with winners announced on the competition page. Winners will also be notified by email.”

To enter the competition, all you have to do is register your details here and hope to be selected to receive a voucher! G2A.com provides various games at discounted prices, and there’s been some concern about where they acquire their codes. Nevertheless, you shouldn’t encounter any problems redeeming purchases, and it’s always worth entering as many competitions as you can! This is a great promotion and provides additional value when selecting a NITRO series GPU.

Sapphire will also launch a new NITRO gaming series mini site around the 22nd of February. Here is a small taste of what’s to come.

GDDR5X Graphics Memory Standard Announced by JEDEC

The JEDEC Solid State Technology Association is one of the world leaders in the memory standards field, and today it published JESD232, the specification for GDDR5X graphics memory. With both sides of the graphics card battle seemingly set to adopt it going into 2016, the new standard should herald the release of graphics cards making use of the RAM.

GDDR5X graphics memory (or SGRAM) is derived from the commonplace GDDR5 used in the majority of current graphics cards, enhancing the existing standard in both design and operability to better handle applications that benefit from very high memory bandwidth. The aim for GDDR5X is to reach per-pin data rates in the region of 10 to 14 Gb/s, twice as fast as GDDR5. While this falls short of the enormous 256 GB/s per stack that HBM2 is meant to be capable of, GDDR5X should be suited to more affordable grades of graphics card where HBM is not cost-effective. GDDR5X should also ensure an easy switchover from the previous standard for developers, as it retains GDDR5’s pseudo open drain signaling.
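As a rough illustration of what those per-pin data rates mean for total bandwidth, here is a back-of-envelope sketch (the 256-bit bus width is our assumption for illustration; actual card configurations vary):

```python
def total_bandwidth_gbytes(per_pin_gbps, bus_width_bits):
    # Total bandwidth in GB/s = per-pin rate (Gb/s) * bus width, over 8 bits/byte
    return per_pin_gbps * bus_width_bits / 8

gddr5 = total_bandwidth_gbytes(7, 256)    # GDDR5 at 7 Gb/s per pin: 224.0 GB/s
gddr5x = total_bandwidth_gbytes(14, 256)  # GDDR5X at 14 Gb/s per pin: 448.0 GB/s
print(gddr5, gddr5x)
```

Doubling the per-pin rate on the same bus width simply doubles the card-level bandwidth, which is the whole appeal of GDDR5X as a drop-in style upgrade.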

How GDDR5X impacts Micron’s development of GDDR6 remains to be seen, with both technologies seemingly targeting the same area of the graphics card market. Regardless, with HBM2 for enthusiast grade cards and this newly standardized GDDR5X for the rest of the field, 2016 should be an exciting time for the GPU market whether you’re a fan of AMD or nVidia.

Rumours Suggest Upcoming NVIDIA GeForce Price Cuts

AMD has gained some ground on NVIDIA as of late, and we’ve also recently heard that the long-awaited new Fury-based dual-GPU card might make its appearance soon. That leaves the ball in NVIDIA’s court, and it is time for them to hit back. The latest rumours now suggest that NVIDIA might be preparing a series of price cuts on their GeForce GTX 900 series graphics cards. The price cuts, if the rumour is true, will affect the GTX 980 and below, so should you want a GTX 980 Ti, the price might stay the same for a bit longer.

The price cuts should make holiday shopping a little nicer for those who are in the market for a new NVIDIA-based graphics card. With the new pricing, you should be able to get the GeForce GTX 960 for $179, the GeForce GTX 970 for $299, and the GeForce GTX 980 for $449.

All three cards that are set to be discounted target their own segment of the market. The GTX 960 is perfect for League of Legends or DOTA gamers who don’t require a large amount of GPU horsepower, while the GTX 970 is perfectly suited for 1080p to 1440p gaming of all sorts. The GTX 980 is a really sweet card that can pull 1440p with the best of settings in most games and even makes some games playable in 4K.

A price cut could also tempt many people to add a second graphics card to their existing one and get a sweet SLI setup going. Would a price cut like this tempt you to get a new NVIDIA graphics card, or are you holding back a little longer? Let us know in the comments.

Sapphire Tri-X R9 390X 8GB CrossfireX Review

Introduction


Here at eTeknix, we strive to give the consumer the best possible advice in every aspect of technology. Today is no different and we are excited to bring you the CrossfireX review of the Sapphire Tri-X R9 390X graphics cards.

Based on the slightly aging Hawaii architecture, performance was expected to be fairly low; however, as we found in our standalone review, that really wasn’t the case. Alone, this card has the power to directly take on the GTX 980 and is positioned just below the brand new AMD R9 Fury range. At a price of £350, it is perfectly priced to fill the gap between the R9 390 and R9 Fury.

When we test in CrossFireX, we aim to use two identical graphics cards to ensure that everything is as similar as possible. When using the same cards, you can almost guarantee the same cooling capabilities, power draw, core clock, boost clock and so on. This gives us the best possible outcome for maximum performance, as the computer does not need to compensate for any differences.

AMD R9 Fury X 4GB Graphics Card Crossfire Review

Introduction


Here at eTeknix, we strive to give the consumer the best possible advice in every aspect of technology. Today is no different, and we are extremely excited to bring you the CrossFireX review of the recently released AMD Radeon R9 Fury X. As we all know, the R9 Fury X is AMD’s latest attempt to take the crown from NVIDIA in the top-end consumer GPU market. In some ways, AMD has succeeded, thanks to the introduction of a new GPU architecture and the innovative High Bandwidth Memory (HBM). With HBM, it has been shown that the quantity of VRAM isn’t the only issue; the quality of the connection and the bandwidth available for the VRAM to do its work matter just as much, although more VRAM certainly couldn’t hurt.

On the test bench today, we have the XFX version of the AMD R9 Fury X 4GB featuring HBM. As we previously saw in the standalone review, the card had more than enough power to deliver 30FPS at 4K; however, 30FPS isn’t enough. Adding another card into the mix should give us a very good chance of seeing 60FPS at 4K.

With the two cards in the test bench together, you can get a feel for their size compared to the Gigabyte G1 Gaming X99 motherboard. The attention to detail that has gone into each card is simply amazing; there isn’t a piece of cable sleeving or a cable tie out of place. All of the screws are perfectly inserted and the metal is buffed to a gorgeous shine.

A single card is a testament to AMD’s attention to detail. It’s a shame the heat shrink doesn’t go all of the way to the fan cowling, leaving about 1″ of coloured cables visible.

Out of the rig, here are the two cards in all their glory. If the comparison to the motherboard wasn’t enough, how about next to the 120mm radiators? With no metal heat sink inside the card itself, it weighs next to nothing compared to the radiators.


Up close to the cards, you can see that there isn’t a dimple on the cover plate out of place and there is no frayed cable sleeving protruding from the end of the cards.


We inserted both graphics cards into our Core i7 5820K and X99-based test system, ensuring adequate spacing for optimum cooling and that both have access to sufficient PCI-e bandwidth for CrossFire operation. These cards are the best possible option for configuring a CrossFire setup: both are the reference design, from the same sub-vendor, with exactly the same clock speeds and the same TDP. All of this means we can achieve the best possible scaling, with little to no variation due to mismatched graphics cards.

Unreleased AMD & Nvidia GPU Benchmarks Leaked

Benchmarks of AMD’s R9 390X and Nvidia’s GTX 980 Ti and Titan X have been leaked, plus an unconfirmed GTX 9xx has also appeared! The leaks give detailed performance stats for Nvidia’s GeForce GTX Titan X, GeForce GTX 980 Ti “GM200” and AMD’s Radeon R9 390X “Fiji XT”, along with an unconfirmed GTX 9xx, supposedly the GTX 960 Ti or 965, but don’t get your hopes up until there’s official word.

At 4K, the R9 390X and Titan X are closely matched, with the R9 390X’s figures slightly higher, whilst the GTX 980 Ti has a bit of catching up to do.

In the 2560×1600 and Fire Strike Extreme benchmarks, the R9 390X grabs gold, the Titan X silver and the 980 Ti bronze; the GTX 9xx sits in the lower order, between the 780 and 770.

Power consumption is a bit of a surprise, as AMD’s 4096-core GCN GPU consumes slightly less power than the 2816-core GCN chip, with power-efficiency technology from AMD’s Carrizo being the main driver of the reduction. The Titan X does better still, the GTX 980 Ti beats it to the punch, and the GTX 9xx beats them all with the lowest power consumption rating, finally winning a gold.

Here’s the final comparison of the AMD Radeon R9 390X, Nvidia GeForce GTX Titan X and Nvidia GeForce GTX 980 Ti.

If these stats excite you, feel free to drop us a line in the comments section.

Thank you chiphell for providing us with this information.

AMD Officially Announce Details of High Bandwidth Memory

We’ve been waiting for details on the new memory architecture from AMD for a while now, ever since we heard the possible specifications and performance of the new R9 390X, all thanks to the new High Bandwidth Memory (HBM) that will be utilised on this graphics card.

Last week, we had a chat with Joe Macri, Corporate Vice President at AMD, who has championed HBM since the original product proposal. As a little background, HBM has been in development for around seven years and was the idea of a then-new AMD engineer. Even seven years ago, they knew that GDDR5 was not going to be an everlasting architecture and that something else needed to be devised.

The basis behind HBM is to use stacked memory modules to save footprint and to integrate them into the same package as the CPU/GPU itself. This way, the communication distance within a stack of modules is vastly reduced, and the distance between the stack and the CPU/GPU core is reduced again. With the reduced distances, bandwidth increases and the required power drops.

When you look at graphics cards such as the R9 290X with 8GB of RAM, the GPU core and surrounding memory modules can take up a footprint around the size of a typical SSD, and then you also need all of the other components such as voltage regulators; this requires a huge card length to accommodate everything, and the communication distances are large.

The design process behind this, in theory, is very simple: decrease the size of the RAM footprint and get it as close to the CPU/GPU as possible. Take a single stack of HBM: each stack is currently only 1GB in capacity and only four ‘DRAM dies’ high. What makes this better than a conventional DRAM layout is the distance between the stack and the CPU/GPU die.

With the reduced distance, the bandwidth is greatly increased and also power is reduced as there is less distance to send information and fewer circuits to keep powered.

So what about performance figures? The actual data rate isn’t amazing, just 1Gbps per pin compared to GDDR5, but that shows just how refined the design is: the much wider interface delivers over three times the bandwidth at a lower voltage. It’s ticking all the right boxes.
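To see how a modest per-pin rate still yields huge bandwidth, here is a back-of-envelope comparison; the 1024-bit stack width and the per-chip GDDR5 figures are typical published values, used here purely for illustration:

```python
def bandwidth_gbytes(width_bits, per_pin_gbps):
    # Bandwidth in GB/s from interface width (bits) and per-pin rate (Gb/s)
    return width_bits * per_pin_gbps / 8

hbm_stack = bandwidth_gbytes(1024, 1.0)  # one HBM stack: 128.0 GB/s
four_stacks = 4 * hbm_stack              # four stacks: 512.0 GB/s
gddr5_chip = bandwidth_gbytes(32, 7.0)   # one GDDR5 chip: 28.0 GB/s
print(hbm_stack, four_stacks, gddr5_chip)
```

The wide-and-slow approach is the whole trick: a 1024-bit stack at 1Gbps per pin comfortably outruns a handful of narrow GDDR5 chips clocked seven times higher.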

There was an opportunity to ask a few questions towards the end, sadly only regarding HBM memory, so no confirmed GPU specifications.

Will HBM only be limited to 4GB due to only 4 stacks (1GB per stack)?

  • HBM v1 will be limited to just 4GB, but more stacks can be added.

Will HBM be added into APUs and CPUs?

  • There are thoughts on integrating HBM into AMD APUs and CPUs, but the current focus is on graphics cards.

With the current limitation being 4GB, will we see reduced performance in highly demanding games, such as GTA V at 4K, that require more than 4GB?

  • Current GDDR5 memory management is wasteful, so despite the lower capacity, it will perform like higher-capacity DRAM.

Could we see a mix of HBM and GDDR5, sort of like how an SSD and HDD would work together?

  • Mixed memory subsystems are set to become a reality, but nothing is planned yet; the main goal is graphics cards.

I’m liking the sound of this memory type; if it really delivers the performance stated, we could see some extremely high-powered GPUs enter the market very soon. What are your thoughts on HBM? Do you think that this will be the new format of memory, or will GDDR5 reign supreme? Let us know in the comments.

PC Gamers to be Represented at E3

E3, that glorious event where game creators come together to blow our minds with what they have in store for gamers in the upcoming year. Last year we had the likes of Battlefield Hardline gracing the stage, but it’s always targeted at the same audience: console gamers.

This year is slightly different. Yes, E3 has had stalls with some PC paraphernalia tucked in a corner, but this year creators have come together to offer a PC-gamers-only event. PC hardware has never been so bang-for-buck powerful; just look at the NVIDIA GTX 960, absolutely smashing mid-range 1080p gaming for under £200. Developers have come to understand that and have now embraced it.

The PC Gaming Show, presented by AMD and PC Gamer, will be hosted on June 16th in downtown Los Angeles. Kicking off at the Belasco Theater at 5PM PT, developers such as Blizzard, Square Enix, Devolver Digital and Bohemia Interactive will present some of their upcoming releases, alongside speakers like Cliff Bleszinski and Dean Hall. Don’t worry, we will have coverage of the event on our website, and there will be a live stream via Twitch so you can see exactly what is going on.

We’ve already heard an E3 rumour that AMD will be showcasing the hugely anticipated 300 series graphics cards, so maybe this event has been tuned to incorporate that.

Are you looking forward to any games or hardware being announced at E3? Will you be watching the live stream or just catching up on the news stories that interest you? Let us know in the comments.

Thank you to Rock, Paper, Shotgun for providing us with this information

AMD Drops R9 285 Price Ahead of R9 300 Series Release

When it comes to affordable yet powerful graphics cards, the AMD Radeon R9 285 is quite possibly one of the most fulfilling for its target demographic; with support for 4K resolution and DirectX 12, it’s a tough competitor for its Nvidia rivals. But now AMD has decided to reduce the price of the mid-range card in Europe, just one month before the R9 300 series is set to hit the market.

With the card already being a worthwhile purchase at the former price, the new price tag of €180 or lower makes the card essentially unbeatable in its class, especially since the Nvidia GTX 960 is currently selling for around €192 or more. With the new price change, it will be incredibly difficult to justify purchasing the Nvidia equivalent, considering it offers inferior power to that little AMD powerhouse, which packs an impressive 1,792 stream processors, 32 ROPs and 2GB of GDDR5 video memory. But don’t jump to buy one straight away, as we could also see Nvidia try to combat the price change with one of their own.

With new top-end cards coming out on both sides of the field, we can anticipate that we’ll be seeing even more price drops across all the previous generation of cards, making that R9 290 a little easier to afford.

Thank you to VR-Zone for providing us with this information.

Image courtesy of TechPowerUp.

AMD Greenland GPUs Might Go Straight to 14nm Process

AMD is putting the final touches on everything and preparing to launch their new Radeon R300 series graphics cards very soon, but before they even hit the market, we are already getting information about the next generation of AMD graphics cards. The R9 300 series is set to launch in June during Computex in Taipei, and it will continue to use the 28nm process, as the 20nm process just isn’t viable yet for these kinds of products; the costs are simply too high.

But the next generation of AMD cards from the Arctic Islands series, codenamed Greenland, will be built on 14nm FinFET technology. This means that AMD could skip the 20nm process entirely. Another detail revealed is that Greenland cards will use the second generation of HBM memory, with increased bandwidth and capacity. It is expected that AMD’s 14nm FinFET parts will be produced by GlobalFoundries.

This could mean some promising times ahead, with more powerful GPUs that use even less power than today’s, as well as heavily improved memory in both capacity and speed. I can hardly wait to see what AMD has to offer here, although we should be looking forward to the next-generation R9 300 cards first. Computex isn’t far away, so it will be an exciting summer.

Latest Zotac Graphics Cards @ CeBIT 2015

During the second day of CeBIT 2015, we met up with ZOTAC, most famous for their graphics cards and mini PCs. During the meeting, we had the opportunity to handle and fully examine some of their newest GTX 900 series products. Some very high-quality finishes were on show, and the best bit for me was ZOTAC’s constant questions about our initial impressions of the products’ looks. We appreciate a company that values everything you have to say about its products. We were personally asked about the aesthetics of the tri-fan, Extreme and AMP coolers fitted to the GTX 970 and GTX 980.

We gave our honest opinions, which were taken on board, so I wonder if our suggestion of an orange-accented AMP cooler will make it into production. Would you like to see some additional colours used in the cooler design? Subtle or in your face, let us know on Facebook or on our forums.

We look forward to seeing what other innovative products Zotac can bring us in the future. Whatever the news or events, we will keep you updated.


AMD R9 390X Benchmarks Surface

With Nvidia’s GTX 970 and 980 pretty much dominating the market now, we can’t wait for AMD’s next move. We will still have to be a little patient, though it does look like it will be worth the wait.

We’ve had rumours and leaks for a while now, starting with pictures of a possible hybrid reference cooler and the use of stacked High Bandwidth Memory (HBM). Time has passed, and we’re now seeing the first leaked benchmarks for the next generation of AMD cards.

Two separate benchmarks have surfaced during the last week, claiming to be from AMD R9 390X cards. The first is a 3DMark 11 score of X8121, where the current 290X only scores around X4700. Very impressive, and if true, it is without a doubt down to the new memory, as the GPU will still be built on the 28nm process.
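For context, a quick bit of arithmetic on those leaked scores (the figures come from the rumour, so treat them as illustrative):

```python
def pct_faster(new_score, old_score):
    # Percentage by which new_score exceeds old_score
    return (new_score / old_score - 1) * 100

r9_390x, r9_290x = 8121, 4700  # leaked 3DMark 11 X-preset scores from the post
print(f"{pct_faster(r9_390x, r9_290x):.0f}% higher")  # roughly 73% higher
```

If the leak is genuine, that is close to a three-quarters uplift on the same 28nm process, which would indeed point to the memory subsystem doing the heavy lifting.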

The second benchmark is from an R9 390X quad-CrossFire setup, scoring an impressive 38,875 in the Fire Strike Extreme test, about 33% more than a heavily overclocked GTX 980 quad-SLI setup. The user posting the quad benchmark also posted what is supposed to be the PCB of the card.

Please keep in mind that these are rumours. We do, however, know that the new AMD Rx 300 series will launch in Q2 2015, most likely during Computex. The R9 390X will double up to 4096 GCN 1.2 cores and use 4GB of stacked HBM memory, as was confirmed in an investor conference call following AMD’s Q4 2014 and fiscal year reports.

Thanks to MyDrivers for providing us with this information

Nvidia GeForce GTX 980 & 970 Specifications Revealed

The new Nvidia GeForce GTX 980 and GTX 970 graphics cards are set to launch very soon, bringing with them a wave of features that will make them the new flagship Nvidia GPUs. There has been a lot of speculation about the cards’ specifications in recent weeks, but now it looks like most of the final specs have been revealed.

The GeForce GTX 980 is set to feature 2048 CUDA cores via 16 Maxwell Streaming Multiprocessors (SMMs), while the GTX 970 features 1664 CUDA cores via 13 SMMs, leaving a 384 CUDA core difference between the two cards.
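Those totals fall straight out of Maxwell’s published layout of 128 CUDA cores per SMM; a small sketch of the arithmetic:

```python
# Maxwell packs 128 CUDA cores into each Streaming Multiprocessor (SMM)
CORES_PER_SMM = 128

def cuda_cores(smm_count):
    return smm_count * CORES_PER_SMM

gtx_980 = cuda_cores(16)   # 2048 cores
gtx_970 = cuda_cores(13)   # 1664 cores
print(gtx_980 - gtx_970)   # the 384-core gap between the two cards
```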

The GeForce GTX 980 (GM204) is being sold as the replacement for the current GK104 hardware and since the new architecture is set to offer improved power efficiency and performance, that shouldn’t be a difficult task.

We don’t have long to wait until these cards are fully revealed, but early indications are that they’re going to be a worthy step up from the current 7xx models.

Thank you VideoCardz for providing us with this information.

Image courtesy of VideoCardz.

Nvidia’s Flagship GTX 970 Photos & Specifications Leaked

Just one week from the expected launch date, and yet more details of Nvidia’s flagship GTX 970 GPU have leaked. The model displayed looks to be an engineering sample with a fairly standard-looking cooler fitted. I suspect the cooler on the retail release will be similar to the one seen on cards such as the 780 Ti and Titan models.

The card features a short PCB, similar to that of the GTX 670, with the cooler extending off the back of the card. There are two six-pin power connectors, and rumour has it that the card’s power consumption will not exceed 190W.

The dual SLI bridges are still in place, so three or four-way SLI seems like a possibility. At the back of the card you’ll find a large exhaust, dual DVI, HDMI 2.0 and DisplayPort.

The screenshots below are believed to contain the hardware specifications, but these are yet to be confirmed. GPU-Z shows 1664 stream processors and 138 texture units, which contradicts previous rumours that the card would feature 2048 stream processors. The card also features 4GB of GDDR5 on a 256-bit bus @ 1814MHz and a GPU core speed of 1103-1230MHz.

We refer to the GTX 970 as the flagship, as no concrete information on the GTX 980 has been released at the time of writing.

Thank you MyDrivers for providing us with this information.

Graphics Card Shipments Drop But Nvidia Still Holds Significant Lead

Not that long ago, we heard that graphics card shipments were set to take a tumble in Q2 of this year, that’s April through June. The reason was that Scrypt-based cryptocurrency miners were dumping their AMD graphics cards onto Amazon, eBay and other sale sites in favour of newly released ASIC Scrypt miners. This meant many prospective graphics card buyers were opting for the flood of cheap used mining cards instead of buying new ones at higher prices. That predicted drop has indeed happened across the entire discrete graphics market, which Jon Peddie Research says fell 17.5% compared to the previous quarter.

The overall decline of 17.5% for discrete graphics is surprising given that the overall PC market increased by 1.3%. Given the link to mining, you would think that AMD took the hardest hit; this wasn’t the case. Nvidia suffered a 21% quarter-to-quarter drop compared to AMD, who dropped by 10.7%, roughly half of that. However, there could be a link: a spillover effect of cheap used AMD graphics cards affecting demand for Nvidia’s more premium-priced products. Despite that, Nvidia still dominates the market with a 62% share compared to AMD’s 38%.
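As a sanity check, the per-vendor drops square neatly with JPR’s overall figure. Assuming (our assumption, not JPR’s) that the 62/38 split describes shipments after the drop, we can work backwards to the prior quarter:

```python
# Relative shipment units this quarter, using the quoted 62/38 share split
nvidia_now, amd_now = 62.0, 38.0
nvidia_prev = nvidia_now / (1 - 0.21)   # undo Nvidia's 21% drop
amd_prev = amd_now / (1 - 0.107)        # undo AMD's 10.7% drop
overall_drop = 1 - (nvidia_now + amd_now) / (nvidia_prev + amd_prev)
print(f"implied overall decline: {overall_drop:.1%}")  # ~17.4%, close to JPR's 17.5%
```

The small remaining gap is easily explained by rounding in the published shares and percentages.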

JPR didn’t specify when we might see a recovery, but we should expect to see a recovery once AMD and Nvidia release their new ranges of graphics cards at the end of this year.

Source: Jon Peddie Research (JPR), Via: TechPowerUp

Image courtesy of Techspot

The Rise and Rise of Radeon

Something is happening in the world of graphics and it is a sea change that is bringing benefits to everyone in the industry. AMD has always sought to offer the best possible GPU with our Radeon line of products but not since the purchase of ATI in 2006 has the company had the strength it has now.

Take a look at the chart below, which shows the performance of AMD Radeon vs NVIDIA GeForce as tested using the independent 3DMark Fire Strike benchmark. We can see that at every point in the stack, AMD Radeon offers a competitive performance lead. However, when we add currently available pricing from etailers like Newegg.com, Scan.co.uk, Alternate.de or JD.com, you will see that Radeon is also cheaper at every level. Why is this?

There are three factors making Radeon the best buy today:

  • 1) Performance. AMD’s advanced GPUs deliver extraordinary performance and spectacular efficiency, offering the best product across the board: better performance at every price point.
  • 2) Social. AMD has long had a strong fan base, but over the last 18 months our lead in gaming, following the wins with Xbox and PS4, the partnership with EA/DICE on Battlefield 4 and the hugely successful launch of Mantle, has seen ‘Team Red’ grow enormously. As measured by Sysomos Heartbeat, end-user sentiment for AMD has grown by more than 200% in the last 18 months, and our expectation is for it to increase further.
  • 3) Technological. A recent report from website eTeknix showed how far AMD has come in improving both the performance and reliability of our drivers. Today’s Radeon software drivers are not just good but world class, strong enough to be used by the world’s most demanding and stringent users like Boeing, Lockheed and Philips. Our success in professional graphics with our FirePro range is testament to this.

Today, many stores around the world are using the chart below to help guide customers to the right choice of GPU. We believe that the use of fair, independent benchmarks like 3DMark is crucial to establishing fair competition to the benefit of all. We encourage you to use the chart or even do your own testing and share your results. 3DMark can be downloaded from here.

This is a guest blog post written by AMD’s Vice President of Global Channel Sales, Roy Taylor.

Image courtesy of AMD

Target Components Now Exclusive Gaming Distributor For Club3D

Target Components are really hitting their stride recently, securing several major brand exclusives such as GeIL, KFA2, Chieftec and now Club3D. While they’re not covering the entire Club3D range, they have secured the most important part of it: their gaming products.

Club3D have a really exciting range of graphics cards available right now, and their Poker branding makes it really easy to work out the performance, cooling solution and factory overclocking level, with the Joker being the lower end and the Ace the highest.

The deal sees Target become sole distributor of Club 3D’s gaming graphics card range as well as the brand’s newly launched ‘SenseVision’ range of gaming accessories. Established 17 years ago, Club 3D’s success in supplying AMD graphics cards has led to an expanded product offering. Target’s main focus will be on Club 3D’s ‘PokerSeries’ graphics cards, which offer top performance at competitive prices, but they will also be expanding the range with Club 3D’s ‘SenseVision’ label of USB hubs, graphics adapters and multi-screen display products, many of which we saw earlier this year at CeBIT. This year also sees Club 3D launch the world’s first USB 3.0-to-4K graphics adapter.

Gerjan Blonet, Club 3D’s European Sales Executive, explains Target’s appointment: “With their uniquely customer-focused approach to the channel and extensive customer base of independent resellers, we’re excited by the partnership between Club 3D and Target.”

The Club 3D range has also been added to Target’s In-Store PC Builder configuration tool, with all products pre-checked for compatibility with all other components. Club 3D will be exhibiting at the Target Open Day 2014 on 19th September in Leeds (B2B only) and eTeknix will be there to provide coverage.

Thank you Target Components for providing us with this information.

Images courtesy of Target Components.

Asetek Granted Patents On Liquid Cooling Systems For GPUs In The USA

Today appears to be a very sad day for graphics card enthusiasts, as Asetek have patented a liquid cooling system for graphics cards. The patent, which applies in the USA, has the potential to force custom GPU water-block vendors out of the American GPU cooling market from what I can see. The likes of EKWB, Koolance and Aquacool may be forced to move production outside the USA and stop selling to the American market, in a similar way to how Swiftech had to when Asetek raised litigation claims against them for patent violations over the AIO CPU cooler design. It is not yet known whether the Asetek patent applies only to all-in-one liquid cooling solutions for GPUs or to all liquid cooling solutions for GPUs, so we will be working to get clarification on this. Asetek claim to have been granted a patent by the USPTO on their “Thermal interposer liquid cooling system designed for cooling graphic processing units (GPUs)”. Unsurprisingly, Asetek were delighted with the USPTO decision, making the following statement:

“As seen in the recently announced AMD Radeon R9 295X2, the graphics cooling market is one that we see as having tremendous growth potential for our desktop business,” said André Sloth Eriksen, Founder and CEO of Asetek. “We continue to see increasing interest from GPU and graphics card manufacturers due to increased power use and demands for lower acoustics. Given this interest, it is possible that the GPU cooling business could rival our CPU cooling business in the coming years.”

Source: Techpowerup

Image courtesy of Asetek

Titan Z Misses Launch Day, Has Nvidia Delayed It Indefinitely?

There is something strange going on in the Nvidia camp: not only have they already delayed the mighty Nvidia GeForce GTX Titan Z graphics card once, but now it appears to… well, actually, it hasn’t appeared, and that is what worries us. The card was due to launch yesterday, and with any GPU launch you expect some amount of commotion as people scramble to get one. Even at $3,000 a pop, we were still expecting this to fly off the shelves, as there are enough people with either a professional need for this card or just very deep pockets and a lot of spare time to kill.

So far, no retailers, system integrators or anyone else for that matter have the cards, so it looks like the Titan Z has missed its own launch party. Worse still, Nvidia have yet to say anything about it. To add more worry to the matter, ComputerBase.de lists the card’s release date as “shifted indefinitely”.

With the Titan Z pushing around 8 TFLOPS of performance, Nvidia were set to rule, but word has it that they’ve decided to redesign the card after AMD’s card delivered 11.5 TFLOPS, and at half the price. Nvidia need to come back with something more powerful and cheaper to compete, and that is a great thing for consumers.

It’s a shame the Titan Z didn’t launch; its 12GB of GDDR5 memory and general performance would still have AMD beaten in many benchmarks, just not all of them. It was poised to be a real powerhouse for 4K gaming, but now we’ll have to wait and see exactly what Nvidia are going to do.

Thank you KitGuru for providing us with this information.

Image courtesy of KitGuru.

Diamond Display Their Latest Graphics Solutions At CES 2014

Diamond is a brand that we’ve never really worked with in the past, but after meeting with them at CES 2014 this should all change in the near future. For those not in the know, they have a broad range of networking products, but their bread and butter, so to speak, has always been graphics cards, and after visiting their suite in the Mirage hotel we got to see exactly what was on offer. This included the R7 240 and 250, as well as the R9 270X and 280X offerings from AMD.

Also in the mix was the BizView 750, a card aimed at the business sector offering some of the key features that AMD caters for, including Eyefinity, but without the 3D performance that isn’t required for this market. Instead, this card is aimed at POS, digital signage and other similar functions; good examples are airports, travel agents and restaurants that could display their latest deals or flight times.

Other products on show included the MDS3900 dual-head mini dock with Gigabit Ethernet, which allows a digital display signal to be split across multiple outputs while also adding USB 3.0 and Gigabit connectivity. Also on display was the BV550X4 quad-output graphics card, which has an extremely small footprint on a PCI-Express interface but includes a single output connector with a 4-in-1 cable, allowing four displays to be driven using the supplied cable. This is a very cost-effective way of running multiple screens from a single output while keeping costs, heat and noise to a minimum.

Would any of Diamond’s graphics cards be something that you’d buy?

Removable CPU Confirmed In The New Mac Pro

When it comes to Apple computers, they have long been criticized for their lack of flexibility towards user upgrades. Basically, if you buy an iMac, Mac Pro or MacBook, what you get is what you’re stuck with unless you buy another one. That’s why most users tend to spend more and go for the high-end specs when buying Apple gear. But it seems times are changing, and Apple devices are changing with them.

The new Mac Pro is reported to have been built with more modding in mind, and according to Other World Computing, who performed a quick teardown of the machine, it has been confirmed that the CPU is removable, meaning users will be able to upgrade it whenever they want in the event that it starts to get a little old or worn out.

This is thanks to the fact that the processor is socketed to the motherboard, as opposed to being soldered directly, which is the case in most of Apple’s Mac computers, both laptop and desktop. Apple’s Mac Pros have typically been a little more customizable than the iMac, allowing users to swap out RAM and graphics cards for something better or newer. The CPU swapping feature might not be as flexible as what you can do in custom-built Windows-based PCs, but at least it is a step forward.

Thank you Ubergizmo for providing us with this information
Image courtesy of Ubergizmo

Radeon R9 290 Series Double Dissipation Released By XFX

XFX rolled out its first non-reference design Radeon R9 290 series graphics cards, the Radeon R9 290X Double Dissipation (model: R9-290X-EDFD), and the R9 290 Double Dissipation (model: R9-290A-EDFD).

Both the Radeon R9 290X DD and R9 290 DD look exactly the same; the difference lies in the clocks, and it is quite a disappointing one. XFX did not overclock these cards, so they come with the default clocks of 947 MHz and 1000 MHz respectively. The cards, however, are fully custom-built. They feature something known as XFactor, which stands for solid capacitors, ferrite core chokes and a dust-free IP-5X fan. At this point we don’t know how heavily modified the board of these cards is (I expect the PCB to be the same for both Hawaii PRO and XT).

XFX also released a teaser with an illuminated logo. I’m not sure if this is actually a new feature, because XFX does not mention it in their overview, but it does look different from the R9 280X cooling system, so it may well be a glowing logo.

Unfortunately XFX equipped both cards with the default power connectors (6+8-pin). No price or availability date was mentioned. Personally I think this is one of the best looking R9 290X cards on the market, so it’s definitely worth the wait.

Thank you VideoCardz for providing us with this information
Images courtesy of VideoCardz

Maxwell Powered GeForce Cards To Ship Before March 2014

Rumor has it that the new Maxwell based graphics cards are coming in Q1 2014 and will hit the shelves in the same quarter. Multiple sources have confirmed that Maxwell based cards will ship to retail before March 2014. This is quite big news for GPU lovers, as Maxwell should be much more power efficient than Kepler; according to Nvidia’s roadmap it is supposed to offer four times the double-precision GFLOPS per watt of Kepler. The same roadmap shows Maxwell pushed from 2013 to 2014, which implies a slight delay from the original plan.

Fudzilla’s sources have confirmed that the cards will start shipping in Q1 2014, but they are not aware whether this is another 28nm chip or the first 20nm graphics chip to hit the market. GPU performance lovers as well as Tesla compute enthusiasts will like this core, as it can offer much more performance per watt than any previous generation, including Kepler. The question remains whether Nvidia plans to officially introduce this card before March 24, 2014, the first day of its GPU Technology Conference, or if the launch happens a bit earlier.

If Nvidia wants to ship cards to customers, that must mean production is underway right now. Therefore, we will just have to wait and see whether Maxwell reaches customers in the form of retail graphics cards by the end of March.

Thank you Fudzilla for providing us with this information
Image courtesy of Fudzilla

Battlefield 4 Graphics Performance Overview With Current Generation GPUs

Introduction


Battlefield 4 has been one of the biggest game releases of the year so far for gamers on all platforms. The FPS title from EA and DICE got off to a relatively shaky start with numerous audio, graphical and gameplay problems across the various platforms it was released on. In fact, for many Battlefield 4 owners the game is still in a dysfunctional or buggy state, but you can expect (or hope) that EA and DICE will begin to patch and fix the majority of the problems within the coming weeks, as they have said they will. The shaky launch aside, what most PC gamers want to know, if they haven’t already found out, is how current generation GPUs perform in Battlefield 4 on the PC.

Today we put that question to the test with an extensive, albeit not entirely complete, range of current generation AMD and Nvidia GPUs. On the AMD side we have the R7 260X, R9 270, R9 270X, R9 280X, R9 290 and R9 290X, while on the Nvidia side we have a few more offerings with the GTX 650 Ti Boost, GTX 660, GTX 760, GTX 770, GTX 780, GTX 780 Ti and GTX Titan.

All of the aforementioned graphics cards are current offerings, and sharp-eyed readers will notice that some are missing. Mainly the current generation lower-end graphics cards from both AMD and Nvidia are absent, including the Nvidia GTX 650, GT 640 GDDR5 and GT 640 DDR3, and the AMD R7 250 and R7 240. The main reason for not testing these cards, other than that we didn’t have most of them, is that they simply aren’t capable of running such a demanding title well. That’s not to say they can’t run it at all, but given the resolutions we test (mainly 1080p or above) and the quality settings our readers like to see (very high or ultra), these GPUs simply aren’t cut out for the test; arguably they are aimed more at gamers with 1366 x 768 monitors tackling medium-high details, but I digress.

The system requirements for Battlefield 4 paint a similar picture: if you want a smooth gameplay experience then you need an AMD Radeon HD 7870 or Nvidia GTX 660 or better. However, those system requirements tell you very little about what you can expect at different resolutions. So without any further ado, let us show you our results and exactly how AMD and Nvidia’s offerings stack up!

Club3D SenseVision MST Hub CSV-5300 Review

Introduction And Feature Overview


The unique ability of the DisplayPort signal to be split into multiple streams has been around for a while, namely since DisplayPort 1.1 compatible graphics cards came to market. AMD’s HD 5000 series were the first to offer multiple display outputs from a single DisplayPort, but they were very much limited by the low bandwidth of DP 1.1. In terms of DisplayPort innovations, we haven’t really seen an MST hub from anyone up until now, which has been quite a shame.

Today we are looking at something that isn’t exactly glamorous but fills quite a large hole in the market. Club3D’s MST (Multi Stream Transport) DisplayPort Hub is one of the first of those elusive MST hubs, allowing you to split a DisplayPort compatible graphics card output into any combination of resolutions that fits within the maximum bandwidth of the link, the two link rates being DisplayPort 1.1 and 1.2, aka HBR and HBR2. You can see full bandwidth details below:

Most people will choose to use a trio of 1080p displays, as these are currently the most affordable solutions on the market. This MST hub from Club3D does support Eyefinity, but Nvidia Surround does not work due to a lack of driver support from Nvidia; if and when Nvidia fix that, the MST Hub will support it.
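A back-of-the-envelope check shows why three 1080p displays comfortably fit within a DP 1.2 link but not a DP 1.1 one. The sketch below assumes standard 1080p60 timing (a 148.5 MHz pixel clock, which includes blanking), 24-bit colour and DisplayPort’s 8b/10b link encoding, which costs 20% of the raw lane rate:

```python
def effective_link_gbps(lanes: int, lane_rate_gbps: float) -> float:
    """Usable payload rate: raw lane rate less 8b/10b encoding overhead (20%)."""
    return lanes * lane_rate_gbps * 0.8

def stream_gbps(pixel_clock_mhz: float, bits_per_pixel: int = 24) -> float:
    """Bandwidth one video stream needs, blanking included (pixel clock)."""
    return pixel_clock_mhz * 1e6 * bits_per_pixel / 1e9

three_1080p60 = 3 * stream_gbps(148.5)   # ~10.7 Gbps for a trio of 1080p60 panels
hbr = effective_link_gbps(4, 2.7)        # DP 1.1, 4 lanes: 8.64 Gbps usable
hbr2 = effective_link_gbps(4, 5.4)       # DP 1.2, 4 lanes: 17.28 Gbps usable

print(f"3x 1080p60 needs ~{three_1080p60:.1f} Gbps")
print(f"DP 1.1 (HBR):  {hbr:.2f} Gbps -> fits: {three_1080p60 <= hbr}")
print(f"DP 1.2 (HBR2): {hbr2:.2f} Gbps -> fits: {three_1080p60 <= hbr2}")
```

The same arithmetic explains why higher-resolution combinations work, so long as the total stays under the DP 1.2 payload ceiling.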

The ability to split a DisplayPort output into up to three displays of varying resolutions will also come in useful for mobile workstations, where you need more display real estate but simply can’t get it in a mobile solution, or when your graphics card supports more displays than it has ports. The Club3D MST Hub does require an external power source but draws only around 2.5-3.5 Watts.

The biggest rival to Club3D is the Matrox TripleHead2Go DP Edition, but at around £275+ that is mainly limited to the professional and business markets; most other people have made do with other, more affordable compromises and solutions. Club3D’s MST Hub, on the other hand, costs only around £90-100, making it about a third of the cost of its biggest rival, and unlike the Matrox product the Club3D MST supports a higher overall resolution and more bandwidth over DP 1.2, compared to the maximum of 5760 x 1080 supported by the Matrox. It is also worth noting that the Matrox unit processes the signal on-chip before sending it to the three monitors, so it isn’t capable of gaming, high frame rates or ultra high definition video playback, whereas the Club3D MST leaves the processing to the GPU and so supports everything the GPU would support.

The Club3D MST Hub, pictured above, serves a very functional purpose for desktop systems. With the vast majority of graphics cards only having three to four display outputs, yet supporting six displays, the only way to achieve more displays than the number of ports is to use one of these MST hubs. We will be testing the Club3D MST Hub’s capabilities in a triple display scenario through one port because, unfortunately, we do not have six displays or a second MST Hub.

Features

  • Standards compliance/support: DisplayPort v1.2, DisplayPort v1.1a, VESA DDM Standard, HDCP v2.0, DisplayID and EDID v1.4
  • Supports main link rates of 5.4 Gbps (HBR2), 2.7 Gbps (HBR) and 1.62 Gbps (RBR) from source
  • Supports 1/2/4 lanes of main link on the RX side
  • Supports three DP++ output ports, or two dual-link DVI ports, or a combination of ports
  • For DP 1.2 sources, supports DP 1.2 MST multi video/audio streams
  • For DP 1.1 sources, supports ViewXpand
  • Supports DP-DP bypass mode
  • AUX-CH support enables SBM and I2C mapping over AUX between the source/sink and device
  • Dedicated I2C slave for the main processor to access the device
  • Supported output resolution: up to 2560 x 1600 @ 60Hz per monitor in DP 1.2 MST, and up to FHD/1080p in DP 1.1 or DP 1.2 SST
  • Input pixel data depth of 6/8/10/12 bits; supports RGB444 output pixel format

Powercolor Reveal New Double Blade Fan Design For Future Cooling Solutions

TUL Corporation, who manufacture AMD graphics cards under quite a few brand names (the two biggest being Powercolor and VTX3D), have announced a new double-bladed fan design for their future cooling solutions. This new patented thermal solution adds a second, shorter blade at the centre of the fan, and TUL Corporation claim up to 20% more airflow over a traditional fan design. Apparently the extra blades draw airflow into the centre for a more focused stream.

The new bearing design is also apparently dust proof with a prolonged life cycle. We should expect to see such fans implemented on high end VTX3D and Powercolor AMD graphics cards in the future.

Image courtesy of Powercolor

Next Generation AMD Graphics Card Naming Is Revealed

A new report from TechPowerUp has revealed the naming structure of AMD’s next generation of graphics cards. For as long as I can remember, AMD/ATi graphics cards have been denoted by the Radeon HD xyz0 naming scheme, with x denoting the generation (e.g. HD 7000 series), y the market segment (e.g. HD 7900 series, HD 7800 series) and z the variant within that segment (HD 7970, HD 7950 etc.). However, with Volcanic Islands, the next generation of AMD graphics cards, this all changes.

The structure is now Ry xz0. As daunting as that looks due to the mish-mash of letters, let me break down exactly what it means: the market segment (y) is paired with an R, while the generation number (x) begins the second part and is paired with the segment variant (z). So if we translate some current graphics cards into this nomenclature, the HD 7970 would be the R9 770, the HD 7870 would be the R8 770, and the HD 6990 would be the R9 690.
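Treated as a purely mechanical rewrite, the reported translation can be sketched in a few lines of Python. This is just a toy illustration of the rule described above, applied to the article’s own examples, not an official AMD tool:

```python
def translate(old_name: str) -> str:
    """Map an 'HD gsv0'-style name to the reported 'R<s> <g><v>0' scheme,
    where g = generation, s = market segment, v = variant."""
    digits = old_name.split()[-1]               # e.g. "7970"
    gen, seg, var = digits[0], digits[1], digits[2]
    return f"R{seg} {gen}{var}0"

for name in ("HD 7970", "HD 7870", "HD 6990"):
    print(name, "->", translate(name))
# HD 7970 -> R9 770
# HD 7870 -> R8 770
# HD 6990 -> R9 690
```

In other words, the segment digit moves to the front (after the R) and the generation digit drops back to lead the model number.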

Of course, we don’t know if AMD will keep the same product segmentation, such as the 30/50/70/90 suffixes; they might use 40/60/80 instead, for example. The reports also indicate there will be another indicator, an “X”, appended to one of our examples: R9 770 X. This X would indicate a special feature of the product, although no one is sure exactly what.

TechPowerUp report examples of R9 280 X for next-generation desktop graphics and R9 M380 X for next-generation notebook graphics, so we presume these are models which have popped up in their GPU-Z database. While the new naming scheme isn’t totally straightforward, once we have details of the actual cards it should be much easier to explain.

Image courtesy of AMD