Nvidia May Launch 3 Pascal GP104 SKUs at Computex

For those of you hoping for a massive performance jump with the launch of Pascal, prepare to be disappointed. Every new generation tends to improve performance, but some more than others. According to previous rumours, Nvidia is using its GP104 die for the GTX 1080 and 1070, which will replace the GTX 980Ti. Now, the latest reports suggest that Nvidia will launch 3 different Pascal SKUs, all based on GP104, at Computex.

As the xx4 die, GP104 has traditionally been the smaller chip next to the larger x10 or x00 dies that power flagships. Because of this, don't expect GP104 to surpass the 980Ti by any large amount, and today's news furthers that impression. By splitting GP104 into 3 SKUs, we can expect performance between the 3 cards to be pretty close. It wouldn't make sense to have so many similarly performing cards right below the flagship, which suggests that GP104 won't be real flagship material.

By splitting GP104 into 3 SKUs, we will likely run into the same situation as the GTX 560Ti 448/570/580 and the 660Ti/670/680. If we take our past experience with those cards as the guideline, we can expect differentiation not just on the core but on memory bandwidth as well. This lends credence to the previous rumours that the GTX 1070 will use GDDR5 while the GTX 1080 will use faster GDDR5X. The 1060Ti, as I am calling it, may feature either a cut-down 192-bit bus or the same situation faced by the GTX 970, with a section of VRAM being slower.

Right now, all we have to differentiate the 3 SKUs are the model numbers: the GTX 1080 will be GP104-400-A1, the GTX 1070 GP104-200-A1, and the 1060Ti will use GP104-150-A1. It will be interesting to see how Nvidia differentiates the cards and how they compete against current Maxwell models. Computex can't come soon enough!

Nvidia Pascal GP104 Spotted?

So far, the rumours around GP104 and the GTX 1000 series have mostly been about release dates and specifications. The closest we've gotten to physical evidence has been the shrouds for the GTX 1080 and GTX 1070. Now, for the first time, we're getting a picture of the physical die and parts of the GPU board around it. According to ChipHell, the die shot you see below belongs to GP104, the mainstream Pascal GPU.

From the die shot, GP104 appears to be about 15.35mm x 19.18mm for a total of about 290 to 300mm². This is in line with GK104, which was also a die shrink and came in at 294mm², and much smaller than GM204, which was a relatively massive 398mm². This supports the idea that Nvidia is starting out with smaller dies for the GTX 1070 and 1080 and releasing a GP100/102-based Titan and 980Ti successor later on.

For now, we still don't know what GP104's internal configuration will look like, but it seems that most of the FP64 units found in GP100 will be stripped out and replaced by the more 'useful' FP32 ones. The leak also suggests that the total FP32 CUDA core count will be around the same as the Titan X, while the TMU and ROP counts seem closer to the GTX 980. I expect that clock for clock, GP104 won't be much faster than the Titan X, but it will be ahead and much more efficient.

Finally, we can see what appear to be Samsung 2GHz 1GB GDDR5 DRAM modules, for 8GB total. This suggests that GDDR5X either isn't ready in time or will be reserved for the GP100/102 consumer release. It follows the trend set by the GTX 680, which was more powerful than the 580 but featured a narrower memory bus paired with faster VRAM and more memory overall.

While the leak is promising, it is a leak after all, and I would take all of this with a shipload of salt. Given what we know, though, this leak may very well reflect reality.

Micron Starts Sampling GDDR5X to Customers

Even though much of the excitement about VRAM is coming from HBM2, that technology isn't quite ready for prime time. For now, HBM2 is still a ways away and still a premium product. To hold the line, memory vendors have come up with GDDR5X, a significantly improved version of GDDR5. In what is unquestionably good news, Micron has just started sampling their GDDR5X modules to customers, way ahead of their original summer target.

GDDR5X has been moving along quickly since JEDEC finalized the specifications back in January. It was also only last month that Micron got their first samples back from their fabs to test and validate. This means that GDDR5X was easier to implement than expected and the quality of the initial batch was good enough that there wasn’t much to change in the production process.

Micron will be offering GDDR5X in 1GB and 2GB ICs, allowing for 8GB and 16GB VRAM GPUs on memory buses as narrow as 256-bit. The biggest advantage of GDDR5X is the doubling of the access size from 32 bytes to 64 bytes per access. Combined with higher clock speeds that allow for up to 16Gbps per pin and improved power efficiency, the new memory will be a good match for Pascal and Polaris while we wait for HBM2.
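
To put those numbers in context, here is a minimal sketch of the arithmetic, assuming the 256-bit bus and the per-pin rates quoted above; none of these figures come from Micron directly.

```python
# Peak-bandwidth arithmetic for the figures quoted above (a sketch).

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: pins * per-pin rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# Common GDDR5 at 7 Gbps vs GDDR5X at its 16 Gbps ceiling, same 256-bit bus:
print(peak_bandwidth_gbs(256, 7))    # 224.0 GB/s
print(peak_bandwidth_gbs(256, 16))   # 512.0 GB/s

# The doubled access size comes from the prefetch: a 32-bit-wide chip
# fetching 8n (GDDR5) moves 8 * 32 bits = 32 bytes per access, while
# 16n (GDDR5X) moves 64 bytes.
print(8 * 32 // 8, 16 * 32 // 8)     # 32 64
```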

Micron Declares GDDR5X Right on Track

Engineers at the Micron Development Center in Munich have announced that they have received their first samples of GDDR5X back from the fab ahead of schedule and have started testing. In addition, Micron expects to ramp up volume production of GDDR5X on their 20nm memory process sometime in mid-2016. GDDR5X is an evolution of GDDR5 rather than a new memory technology and is expected to tide the industry over till HBM2 and HMC come online.

In early testing, some of the GDDR5X samples have already hit 13Gbps, just short of the eventual 14Gbps goal for production modules. Combined with an improved prefetch and a new quad data rate mode, GDDR5X is expected to double the bandwidth of GDDR5 while increasing capacity and reducing power consumption. With progress going well, samples will begin to ship to partners (like AMD and Nvidia) in the spring. This means we are unlikely to see any GDDR5X-based cards until fall 2016.

While GDDR5X will still fall short of HBM2 bandwidth, it will undoubtedly be cheaper. It will also allow GPUs to be made with narrower buses while maintaining the same overall bandwidth, allowing for reduced power consumption and cheaper, faster GPUs. We can expect the mainstream and even performance segments to utilize GDDR5X while the budget cards stick with GDDR5 and the enthusiast cards use HBM2. For more on GDDR5X, check out our write-up here.

GIGABYTE Server Lineup First To Be Tesla M40 Certified

Deep learning is redefining what is possible for a computer to do with the information it is provided. It is, however, a very compute-intensive task that requires specialized hardware for optimal performance. This is also the technology that may one day make true AI possible. Nvidia bills the Tesla M40 as the fastest deep learning accelerator, significantly reducing training time. GIGABYTE is the first server maker to have its lineup certified for these new NVIDIA cards. While certification isn't strictly necessary, it is one of those guidelines you shouldn't overlook.

Right now you are most likely wondering what deep learning is, and I could go into a lot of detail about its origins and progress, but I doubt anyone would read all that here. Wikipedia's definition probably sums it up best. In very basic terms, it allows software to draw its own conclusions based on what it already knows.

The Wikipedia definition reads: “Deep learning (deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures, or otherwise composed of multiple non-linear transformations.”

NVIDIA's Tesla M40 is quite an impressive card with its 3072 CUDA cores, 12GB of GDDR5 memory with a bandwidth of 288GB/s, and a single precision peak performance of 7 TFLOPS. That is just from one card, and we need to keep in mind that some of GIGABYTE's servers can handle up to 8 graphics cards each. That adds up to a lot of performance.
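
As a rough illustration of how those headline numbers hang together, here is a back-of-envelope sketch; the ~1.14 GHz boost clock is an assumption chosen to make the arithmetic match, not a figure from GIGABYTE or NVIDIA.

```python
# Back-of-envelope for the Tesla M40 figures quoted above (a sketch).

cuda_cores = 3072
ops_per_core_per_cycle = 2      # one fused multiply-add counts as 2 FP32 ops
boost_clock_ghz = 1.14          # assumption, chosen to match the 7 TFLOPS figure

sp_tflops = cuda_cores * ops_per_core_per_cycle * boost_clock_ghz / 1000
print(round(sp_tflops, 1))      # ~7.0 TFLOPS per card

# Scaled across one of GIGABYTE's 8-GPU servers:
print(round(8 * sp_tflops, 1))  # ~56.0 TFLOPS of FP32 per chassis
```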

If you already have a GIGABYTE server or plan to purchase one, then you'll most likely know the model number already. I've added a screenshot from the official compatibility list below, which will save you the trip to GIGABYTE's site. We can see that only the R280-G20 isn't certified for the M40, but that is because the system has a different field of operation than the rest.

So GIGABYTE has you well covered with regard to NVIDIA's impressive Tesla M40 deep learning GPU.

GDDR5X Graphics Memory Standard Announced by JEDEC

The JEDEC Solid State Technology Association, one of the world leaders in the memory standards field, today published JESD232, the specification for GDDR5X graphics memory. With both sides of the graphics card battle seemingly set to use this new standard going into 2016, its publication should herald the release of new graphics cards making use of the RAM.

GDDR5X graphics memory (or SGRAM) is derived from the commonplace GDDR5 used in the majority of current graphics cards, enhancing the existing standard in both design and operability to better handle applications that benefit from very high memory bandwidth. The aim for GDDR5X is to reach data rates in the region of 10 to 14 Gb/s per pin, twice as fast as GDDR5. While this falls short of the enormous 256 GB/s per stack that HBM2 is meant to be capable of, GDDR5X should be suited to more affordable grades of graphics card where HBM is not cost-effective. GDDR5X should also ensure an easy switchover from the previous standard for developers, with the new standard retaining GDDR5's pseudo open drain signaling.

How GDDR5X impacts Micron's development of GDDR6 remains to be seen, with both technologies seemingly targeting the same area of the graphics card market. Regardless, with HBM2 for enthusiast-grade cards and this newly standardized GDDR5X for the rest of the field, 2016 should be an exciting time for the GPU market whether you're a fan of AMD or Nvidia.

AMD Polaris Will Use HBM2 and GDDR5

Ever since HBM1 was revealed and launched with the Fury X, many have been looking forward to what HBM2 would bring in 2016. While HBM1 brought large power savings and a major boost in memory bandwidth, it was largely limited to a relatively low 4GB capacity. HBM2, however, is set to provide a boost in both capacity and bandwidth by increasing the number of stackable dies. We're now getting reports that AMD's upcoming Polaris chips will utilize HBM2.

As a major revamp of the GCN architecture, a Polaris flagship GPU would be the natural product to debut HBM2. A flagship GPU much more powerful than current generation chips due to the new architecture and process node would likely require more memory bandwidth to feed it and a high memory capacity as it would be meant for VR and 4K gaming. Being the largest chip in the lineup, the flagship would also benefit from the major power savings, helping offset its core power consumption. The confirmation of HBM2 also suggests that we will be getting high-end Polaris chips this year.

At the same time, AMD is also confirming that they will continue to use GDDR5, and likely GDDR5X as well. At CES, AMD showed off a low-powered Polaris chip using GDDR5 that was able to provide the same performance as Nvidia's GTX 950 at significantly lower power consumption. If GDDR5 already shows such massive gains, the HBM2-equipped chips will likely be light years ahead of current cards in terms of efficiency.

GDDR6 Memory Coming to Graphics Cards in 2016?

The graphics card market is full of interesting power struggles, and if recent reports are true, 2016 will see one of the biggest battles yet. AMD may have already put out some cards with HBM memory, and we've heard that Nvidia will be doing the same soon too, but don't count GDDR memory out just yet! It seems that an upcoming GDDR6 standard is being developed by Micron, which would power mid-range graphics cards, while HBM2 will likely remain for higher-end cards.

Of course, there's some confusion here, as JEDEC are already working on the GDDR5X standard, so where Micron fits in remains to be seen. GDDR5X is said to double the bandwidth, so is GDDR6 a new standard, or just a further refinement of 5X? Either way, we can expect it to adopt a smaller node, most likely starting from 20nm and working down from there, allowing for higher clocks and lower voltages, although these kinds of improvements are the obvious targets for any increase in performance these days.

Our guess is that the revised GDDR standards will be acting as a bridge until HBM matures enough to cover a wider range of cards and budgets. Either way, 2016 is shaping up to be an exciting time in the GPU market, with new memory, new architectures, new cards and so much more on the horizon.

Rumours Suggest NVIDIA Using GDDR5X on its Pascal GPUs

The tech world is abuzz with rumours regarding NVIDIA's next-generation architecture, Pascal, and the latest word (in German) is that the GPU producer will be introducing a successor to GDDR5 with it. Speculation says that NVIDIA will debut GDDR5X with its next-gen GeForce cards. While it will likely retain the familiar 256-bit memory interface, GDDR5X's bandwidth will stretch to 448GB/sec, blowing all its AMD rivals – bar the Fiji-based HBM cards, which are thought to be under threat due to reported supply problems – out of the water. HBM2 should also feature in NVIDIA's new graphics cards, with a 2048-bit memory bus at 1GHz and 512GB/sec of memory bandwidth.
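
Both of those bandwidth figures check out against the quoted bus configurations; here is a quick sketch of the arithmetic, using only the numbers from the rumour itself.

```python
# Sanity-checking the rumoured bandwidth figures (a sketch).

# Per-pin data rate a 256-bit bus needs to reach 448 GB/s:
print(448 * 8 / 256)        # 14.0 Gbps -- the top GDDR5X speed bin

# HBM2 as described: a 2048-bit bus at 1 GHz, double data rate
# (two transfers per clock), divided by 8 bits per byte:
print(2048 * 1.0 * 2 / 8)   # 512.0 GB/s, matching the quoted figure
```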

NVIDIA is thought to have been testing its Pascal architecture since last month, but the latest rumours suggest that the tests involve the GP100 and GP104 chips. To clarify, for those unfamiliar with NVIDIA's GPU naming convention, its GM204 chip powers the GTX 980 and the GTX 970, while GM200 is integral to the GTX Titan X and GTX 980 Ti. Therefore, the GP100 and GP104 will mark the high-end Titan cards and the consumer GeForce cards, respectively.

NVIDIA’s Pascal architecture with GDDR5X is expected to hit the market towards the end of 2016.

Image courtesy of Enterprise Tech.

Gigabyte Reveals New XTREME GAMING Series With a GTX 950 Card

Gigabyte revealed their brand new XTREME GAMING series with the announcement of the GB-N950XTREME-2GD graphics card, built around Nvidia's GTX 950 GPU. The new series is designed to deliver an extreme gaming experience for dedicated enthusiasts, with great overclocking performance and rock-solid durability.

The GB-N950XTREME-2GD comes with an impressive overclock. The base clock has been pumped up from 1016MHz to 1203MHz and the boost clock pushed from 1190MHz to 1405MHz. The 2GB of GDDR5 memory has also been overclocked to 7GHz effective.

The card comes with one dual-link DVI-I, one HDMI and three DisplayPort connectors. It supports the latest DirectX 12 and only requires a single 8-pin power connector. The card promises some great performance, with 4K gaming at 45 fps in Heroes of the Storm, 58 fps in Dota 2 and 186 fps in League of Legends, an easy 65% or more improvement over the popular GTX 750Ti graphics card.

Gigabyte built the GB-N950XTREME-2GD with their WINDFORCE 2X cooling system, featuring pure direct-touch copper heat pipes and 3D-Active fans that only power up when needed. The built-in LED shows when the card is running with its fans parked, and it also allows you to set your colour of choice. The card features a sturdy backplate as well, helping with cooling and increasing the overall structural integrity of the card.

The GTX 950 XTREME GAMING graphics card would make a great card for the casual MOBA gamer and it is the first of many gamer-focused products that Gigabyte will be rolling out this year.

Nvidia Announces Quadro M4000 and M5000 Maxwell-Based GPUs

NVIDIA announced the successors to the Quadro K5200 and K4200 graphics cards, and the two new cards have been dubbed the Quadro M5000 and the Quadro M4000. These new cards are based on the Maxwell 2 architecture, just as the Quadro M6000 announced back in April was, and they aren't just washed-up rebrands of previous cards but should offer double the performance of their predecessors.

NVIDIA isn't always quick to share all the details on these new cards, but we can predict the missing information based on what was released and general knowledge of the chips. The first thing we don't fully know is the actual GPU, but based on the CUDA core count it is safe to assume that the M5000 uses a fully enabled GM204 with 2048 CUDA cores and the full 256-bit memory bus. The M4000 uses the same GPU, but with only 1664 active CUDA cores. Both cards are almost equal when it comes to memory, as both feature 8GB of GDDR5. The M5000's memory is clocked slightly higher and also features software-based ECC support.

One thing that didn't change much over the predecessor K4200 and K5200 Quadro cards is power consumption. They all draw power from a single 6-pin PCIe power connector, and the M5000 comes with a TDP of 150W, the same as the K5200. The Quadro M4000 got a slight bump up to a 120W TDP over the previous 105W.

The new-generation Maxwell GPU has a lot of benefits over the older Kepler, one of them being native support for up to four 4K monitors, and both of these cards can drive that many with their four DisplayPort connectors. The Quadro M4000 is a single-slot card that doesn't have room for more connectors, but the Quadro M5000, being a dual-slot card, also comes with a DVI connector.

NVIDIA never discloses the prices of these cards; they leave that up to the card partners. Still, it's safe to assume that they will be around the same as the predecessor cards: $2000 and $1000 for the Quadro M5000 and Quadro M4000, respectively.

AMD Has Priority Access over Nvidia For HBM2

AMD made history earlier this month by being the first major GPU vendor to ship HBM, with their top-end Fury and Fury X graphics cards. Nvidia, however, has been absent so far, waiting on HBM2, a more advanced version of the HBM1 shipping with the Fury(X), before getting into the new tech. According to a report, though, AMD is leveraging their deal with SK Hynix to get priority access to HBM2 in time for their upcoming Arctic Islands GPUs.

While HBM1 is limited to 4GB and 512GB/s, HBM2 increases those numbers significantly, with up to 16/32GB of VRAM and over 1024GB/s. Like HBM1, HBM2 is expected to be in limited supply at launch. If AMD has priority for HBM2 and stocks are low, it may mean that Nvidia practically won't be able to use HBM2 until supply improves beyond what AMD can consume. This might create a de facto exclusivity for AMD, offering a chance for the underdog to dominate with HBM2 GPUs.

If the supply of HBM2 is limited, it could complicate things for Nvidia. Their Pascal architecture is set for 2016 and could be designed for either GDDR5 or HBM2, which vary widely in implementation. Nvidia can choose to go with GDDR5 but risk losing its lead over AMD and being unable to refresh with HBM2 later on. If Nvidia does go with HBM2, supply might be heavily constrained, allowing AMD a chance to grab market share. It will be interesting to see both sides' offerings in early 2016 and the choices they make for their lineups.

Thank you WCCFTech for providing us with this information

AMD Announces New FirePro S9170 Server GPU

AMD might just have released their latest Radeon-branded graphics cards, but that isn't the only upgrade they have in store for us. Today they are releasing the new FirePro S9170 server GPU with an industry-leading 32GB of memory for high-performance compute.

The server card is designed for DGEMM-heavy double-precision workloads and comes with support for OpenCL 2.0. It is based on the second-generation AMD Graphics Core Next (GCN) GPU architecture, and this new monster is able to deliver up to 5.24 TFLOPS of peak single precision compute performance while enabling full-throughput double precision at up to 2.62 TFLOPS. That is some serious performance.

FirePro graphics cards aren't something for gamers, and if you came here to find an amazing gaming card, you'll be disappointed. GPUs are great at parallel workloads and can be used to offload those tasks from the CPU while achieving great performance per watt. In fact, AMD is the market leader in that field, ranked number 1 on the Green500 list for performance per watt.

The new AMD FirePro S9170 isn't a replacement for the S9150, but rather a complementary card for those who need the extra memory to solve extra-big problems. It is the new flagship of server graphics cards and comes with ECC support.

We don't find the new HBM memory here, which is a given considering the 32GB capacity and HBM's current limitations. Instead, it is the good old and well-performing GDDR5 that is used. The card is passively cooled and requires an airflow of 20 cubic meters per minute, which any current server setup should deliver. Users shouldn't experience any thermal restrictions or throttling if the card is cooled according to specifications.

The GPU clock of the S9170 got a slight boost over the S9150 and it now runs at 930MHz vs the old 900MHz. Shipments are scheduled for Q3 2015 and the street price will be slightly higher than the current S9150.

  • Passively cooled solution for server environments
  • AMD Graphics Core Next architecture
  • 2,816 stream processors (44 compute units)
  • 5.24 TFLOPS peak single-precision floating point
  • 2.62 TFLOPS peak double-precision floating point
  • Full throughput double-precision
  • Error-correcting code memory support (external only)
  • 32GB ultrafast GDDR5 memory
  • 512-bit memory interface
  • Up to 320GB/s memory bandwidth
  • 275W maximum power consumption
  • Support for SMBus temperature reporting at boot-up
  • AMD PowerTune technology
  • AMD STREAM technology
  • OpenCL, OpenGL support
  • PCIe® x16 bus interface, PCIe 3.0 compliant
  • Full-height/full-length dual-slot form factor
  • Headless display support
  • Linux OS support (64- and 32-bit)
  • FCC, CE, C-Tick, BSMI, KCC, UL, VCCI, RoHS, and WEEE compliance
  • Designed, built, and tested by AMD
  • Planned minimum three-year life cycle
  • Limited three-year warranty
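
The headline TFLOPS figures in the spec list above follow directly from the stream-processor count and clock; here is a minimal sketch of the derivation, assuming GCN's usual one fused multiply-add per stream processor per cycle.

```python
# Deriving the FirePro S9170's listed peak figures (a sketch).

stream_processors = 2816   # 44 compute units * 64 SPs, as listed above
clock_ghz = 0.930          # the 930 MHz clock quoted above

# One FMA per SP per cycle counts as 2 FP32 operations:
sp_tflops = stream_processors * 2 * clock_ghz / 1000
dp_tflops = sp_tflops / 2  # full-throughput FP64 here runs at half the FP32 rate
print(round(sp_tflops, 2)) # 5.24
print(round(dp_tflops, 2)) # 2.62
```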

Gigabyte Launches Tiny GeForce GTX 960 ITX Windforce

Gigabyte has just announced its low-end GTX 960 model, the GeForce GTX 960 ITX, boasting a Windforce 2X cooling solution. This looks to be dedicated to people who use their PCs mostly for office and multimedia activities, though the card can also handle some casual games that don't require a powerhouse rig just to start.

The card features a reasonable 2GB of GDDR5, a 128-bit memory interface and a core base clock of 1127 MHz, going up to 1178 MHz in boost mode. By the looks of it, Gigabyte also plans on rolling out an OC version of the card with a base clock of 1152 MHz and a boost clock of 1203 MHz. Taking into account the latter and the GeForce GTX 960 ITX already on the market, I'm fairly certain that Gigabyte will roll out a 4GB model soon enough, should 2GB not be enough for what you have in mind.

Taking a look at the Windforce 2X, the cooling solution looks promising for keeping the 'little monster' cool under full load. The blades are specially designed with triangular shapes at the edges and 3D stripe curves to enhance airflow and keep the card cool. In addition, the pure copper heat-pipe direct touch (HDT) helps keep the card cool at an extremely low noise level, so you don't have to worry about it buzzing your ears off when you put it to the ultimate test.

There is no official confirmation of any price for it, but EXPReview puts it at ¥1499, which is roughly £155.

Images courtesy of Gigabyte

EVGA Announces Their GeForce GTX 980 Ti Classified ACX 2.0+ Graphics Card

Whether you are looking for an upgrade to your gaming rig or want to build a new one, it’s important to know what’s available on the market. EVGA knows this and hopes to offer everything you need to improve your gaming experience. That’s why they just revealed their own GeForce GTX 980 Ti Classified graphics solution.

The card comes with 6GB of GDDR5 memory and 2816 NVIDIA CUDA cores, so it will undoubtedly have your back in the latest and upcoming game titles. In addition, the 384-bit memory bus, 7010 MHz memory clock and 336.5 GB/s bandwidth specs speak for themselves in terms of performance. The Maxwell GPU clocks look promising too, with the base clock at 1140 MHz and the boost clock going up to 1228 MHz. But the best part about it? It can go up to 4-way SLI, so even if you're an extreme gamer, you can get everything you need out of this baby.

EVGA says that the 14+3 power phases on the card bring improved efficiency, higher power capacity and a lower average operating temperature, while the dual 8-pin power inputs deliver increased power capability. When it comes to the cooling solution, EVGA's ACX 2.0+ cooling technology along with its MOSFET cooling plate shows a 13% temperature reduction, while the straight heat pipes are said to decrease the card's temperature by a further 5ºC. This all adds up to a 20% temperature reduction compared to other cards. The ACX 2.0+ cooling solution also helps deliver more airflow while eating up less power, with the help of its optimized swept fan blades, double ball bearings and low-power motor.

The card certainly looks great and from what I see, EVGA is really pushing a lot of juice to deliver the necessary performance while also providing the right cooling for the card. More information about the EVGA GeForce GTX 980 Ti Classified ACX 2.0+ can be found over at its website here.

Images courtesy of EVGA

AMD Officially Announce Details of High Bandwidth Memory

We've been waiting for details on the new memory architecture from AMD for a while now, ever since we heard the possible specifications and performance of the new R9 390X, all thanks to the new High Bandwidth Memory (HBM) that will be utilised on that graphics card.

Last week, we had a chat with Joe Macri, Corporate Vice President at AMD. He is a firm believer in HBM and has been behind it since the product proposal. Here is a little background information: HBM has been in development for around 7 years and was the idea of a then-new AMD engineer. They knew, even 7 years ago, that GDDR5 was not going to be an everlasting architecture and that something else needed to be devised.

The basis behind HBM is to use stacked memory modules to save footprint and to integrate them onto the same package as the CPU/GPU. This way, the communication distance within a stack of modules is vastly reduced, and the distance between the stack and the CPU/GPU core is reduced again. With the reduced distances, bandwidth increases and the required power drops.

When you look at graphics cards such as the R9 290X with 8GB of RAM, the GPU core and surrounding memory modules can take up a footprint around the size of a typical SSD, and then you also need all of the other components such as voltage regulators; this requires a huge card length to accommodate everything, and the communication distances are large.

The design process behind this, in theory, is very simple: decrease the size of the RAM footprint and get it as close to the CPU/GPU as possible. Take a single stack of HBM: each stack is currently only 1GB in capacity and only four 'DRAM dies' high. What makes this better than a conventional DRAM layout is the distance between the memory and the CPU/GPU die.

With the reduced distance, the bandwidth is greatly increased and also power is reduced as there is less distance to send information and fewer circuits to keep powered.

So what about performance figures? The actual data rate isn't amazing, just 1Gbps per pin compared to GDDR5's 5Gbps or more, but that shows just how refined the design is: the extremely wide bus more than makes up for it. Over three times the bandwidth per watt and a lower voltage; it's ticking all the right boxes.
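
To see why a 1Gbps pin rate still wins, compare the bus widths. This is a rough sketch; the 290X-class GDDR5 figures are typical retail numbers, not ones from the interview.

```python
# Bus width vs pin speed: HBM1 against a 290X-class GDDR5 setup (a sketch).

# HBM1: each stack exposes a 1024-bit interface at ~1 Gbps per pin.
per_stack_gbs = 1024 * 1.0 / 8     # 128 GB/s per stack
print(4 * per_stack_gbs)           # 512.0 GB/s for a four-stack card

# R9 290X-style GDDR5: a 512-bit bus at 5 Gbps per pin.
print(512 * 5 / 8)                 # 320.0 GB/s, and at a higher voltage
```

The raw ratio works out to about 1.6x; the "over three times" figure from AMD's marketing refers to bandwidth per watt, once HBM's power savings are factored in.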

There was an opportunity to ask a few questions towards the end, sadly only regarding HBM memory, so no confirmed GPU specifications.

Will HBM only be limited to 4GB due to only 4 stacks (1GB per stack)?

  • HBM v1 will be limited to just 4GB, but more stacks can be added.

Will HBM be added into APUs and CPUs?

  • There are thoughts on integrating HBM into AMD APUs and CPUs, but the current focus is on graphics cards.

With the current limitation being 4GB, will we see degraded performance in highly demanding games such as GTA V at 4K that require more than 4GB?

  • Current GDDR5 memory usage is wasteful, so despite the lower capacity, it should perform like higher-capacity DRAM.

Could we see a mix of HBM and GDDR5, sort of like how a SSD and HDD would work?

  • Mixed memory subsystems are set to become a reality, but nothing yet; the main goal is graphics cards.

I'm liking the sound of this memory type; if it really delivers the performance stated, we could see some extremely high-powered GPUs enter the market very soon. What are your thoughts on HBM memory? Do you think that this will be the new format of memory or will GDDR5 reign supreme? Let us know in the comments.

AMD's Official Roadmap Reveals the Company's Plans for the Next 5 Years

AMD has revealed what the company plans to do with its GPUs and CPUs over the next 5 years at the PC Cluster Consortium event in Osaka, Japan, where AMD's Junji Hayashi presented the company's roadmap.

During the event, AMD focused on its graphics IP and the products that involve it, including discrete Radeon graphics cards and Radeon-powered Accelerated Processing Units. There was also talk of AMD's upcoming K12 ARM core as well as the x86 Zen CPU core, including a strategy for how the company plans to introduce both x86- and ARM-powered SOCs to the market on a pin-for-pin compatible platform code-named SkyBridge.

It is said that both CPU cores are 64-bit capable parts coming in a 14nm FinFET 'shell', but one is based on the ARMv8 architecture while the other is based on the more traditional x86 AMD64 architecture, targeting the server, embedded, semi-custom and client markets.

AMD has also talked about "many threads", revealing that the K12 will come with Simultaneous Multi-Threading (SMT) technology, in contrast to the Clustered Multi-Threading (CMT) technology seen in the Bulldozer family. SMT essentially takes the various underutilized resources in a core and dedicates them to an additional, slower execution thread for added throughput. In contrast, CMT looks for opportunities to share resources between two different CPU cores, instead of doing it inside a single core.

Hayashi also revealed AMD's GPU roadmap, which shows the company employing a two-year cadence for updating the GPU architecture inside its APUs. The roadmap also reveals that AMD plans to introduce what it described as a High Performance Computing APU carrying a 200 to 300 watt TDP, with the company stating that the APU in question will excel in HPC applications.

AMD apparently had not attempted such an APU before because it was not viable in terms of memory bandwidth. Instead, the company's stacked High Bandwidth Memory will be used as an alternative, making the design far more effective. The second generation of HBM is said to be 9 times faster than GDDR5 memory and 128 times faster than DDR3 memory.

The company has not revealed any code names for the GPU architectures, but a previous leak pointed out that the architecture will debut on 16nm FinFET and will be code-named Arctic Islands. More specific details about AMD’s products will be revealed in May at the Financial Analyst Day event.

Thank you WCCF for providing us with this information

€2,900 12GB Asus GTX Titan Black Card Makes An Appearance

Nvidia are no strangers to creating ultra-high-end graphics cards with prices fit for a king, and today is no different. A mysterious new Asus graphics card has appeared online: a 12GB edition of the GTX Titan Black with a price tag of €2,900!

The normal GTX Titan Black comes equipped with 6GB of memory and costs around €900, so this is a significant price increase over the standard model. To put this into perspective, only the Titan Z comes equipped with 12GB of memory, to cater to its dual-GPU design, so this is either a crazily overpriced Titan Black with double the memory, or it's a dual-GPU Titan Black.

The Asus GTX Titan Black-12GD5 features 12GB of memory, a standard dual DVI, HDMI and DisplayPort interface at the back, and a price tag of €2,900. Unfortunately, that's all the information we have right now, but we have to wonder who would be prepared to pay this given recent hardware price drops following the launch of GTX 9xx series cards, as well as price drops on competing AMD graphics hardware such as the R9 295X2.

Thank you Chiphell for providing us with this information.

Image courtesy of Chiphell.

ASUS STRIX GeForce GTX 970 Design Revealed

After much anticipation, the first images of Nvidia GeForce GTX 9xx cards have started leaking online. One of the first images to come our way is of the new ASUS STRIX GeForce GTX 970; unfortunately, it doesn’t look all that different from the current GTX 780 series cards.

The new design, or should I say old design, has been tweaked a little to better meet the needs of the new graphics card. There are fewer heat pipes near the PCI-E interface; this is no surprise given that the new card has significantly reduced power consumption compared to the current generation of hardware.

The card features the popular DirectCU II cooling fans; these only start spinning when the card reaches 65°C to help keep things nice and quiet when you're not gaming, and the fans can even start as low as 500RPM.

On the box you can see a few of the specifications for the card, such as the DIGI+ VRM, 4GB of GDDR5 memory and a factory overclock, although the new clock speeds are still unknown.

Prices are estimated at between $400 and $450, with a release date of September 19th.

Thank you VideoCardz for providing us with this information.

Image courtesy of VideoCardz.

AMD Radeon R7 250XE Starts Appearing In Japanese Stores

It looks like AMD have launched a card but neglected to tell anyone about it. The new Radeon R7 250XE may not be the most exciting card in AMD's range, in fact it's pretty much destined to be one of the least exciting, but it's still important nonetheless. The new card appears to have been launched to counter the Nvidia GeForce GT 730/740 range.

The entry-level card features a low-profile, single-slot design, so it may be a tempting option for compact HTPC and office-style systems, or just those who need to upgrade from an on-board GPU solution. The card is said to be based around a 28nm Cape Verde refresh, featuring 640 stream processors, a 128-bit wide GDDR5 memory interface and 1GB of memory. The core clock is 860 MHz, with the memory at 4.5GHz effective. The card is priced around $60-70, and we're uncertain whether it'll be launching to a wider market, since AMD haven't even told anyone about the original launch.

Thank you TechPowerUp for providing us with this information.

Image courtesy of TechPowerUp.

Intel Reveals Details Regarding Intel’s “Knights Landing” Xeon Phi Coprocessor

Intel announced the 'Knights Landing' Xeon Phi coprocessor late last year, having released very few details about the lineup back then. As time passes, details are bound to be revealed, and Intel is said to start shipping the series next year. This is why Intel has apparently decided to reveal some more details regarding the 'Knights Landing' Xeon Phi coprocessor.

The announcement from last year points to Knights Landing taking the jump from Intel's enhanced Pentium (P54C) x86 cores to the more modern Silvermont x86 cores, significantly increasing single-threaded performance. Furthermore, the cores are said to incorporate AVX units, allowing AVX-512F operations and providing the bulk of Knights Landing's compute power.

Intel is said to offer 72 cores in Knights Landing CPUs, with double-precision FP64 performance expected to reach 3 TFLOPS, and the CPUs built on 14nm technology. While this is somewhat old news, Intel revealed some more insights at ISC 2014.
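
For a sense of where that 3 TFLOPS figure could come from, here is a hedged back-of-envelope; the two AVX-512 units per core and the ~1.3 GHz clock are assumptions on my part, not figures Intel has published.

```python
# Possible route to ~3 TFLOPS of FP64 on 72 cores (a sketch; the VPU
# count per core and the clock are assumptions, not Intel figures).

cores = 72
vpus_per_core = 2             # assumed two AVX-512 vector units per core
fp64_lanes = 512 // 64        # 8 doubles fit in one 512-bit vector
ops_per_lane = 2              # a fused multiply-add counts as 2 ops
clock_ghz = 1.3               # assumed clock

dp_tflops = cores * vpus_per_core * fp64_lanes * ops_per_lane * clock_ghz / 1000
print(round(dp_tflops, 1))    # ~3.0 TFLOPS, in line with the quoted target
```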

During the conference, Intel stated that the company needed to move away from the 512-bit GDDR5 memory setup present in the current Knights Corner series. This is why Intel and Micron have apparently struck a deal to work on a more advanced memory variant of the Hybrid Memory Cube (HMC) with increased bandwidth.

Also, Intel and Micron are said to be working on a Multi-Channel DRAM (MCDRAM) specially designed for Intel's processors, with a custom interface best suited to Knights Landing. This is said to help scale its memory support up to 16 GB of RAM while offering up to 500 GB/s of memory bandwidth, a 50% increase compared to Knights Corner's GDDR5.

The second change made to Knights Landing is said to be the replacement of the True Scale Fabric with the Omni Scale Fabric, in order to offer better performance than the current fabric solution. Though Intel is currently keeping this information on the down-low, traditional Xeon processors are said to benefit from this fabric change in the future as well.

Lastly, compared to Intel's Knights Corner series, Knights Landing will be available in both PCIe and socketed form factors, mainly thanks to the MCDRAM technology. This is said to allow the CPU to be installed alongside Xeon processors on specific motherboards. The company has also emphasised that the socketed Knights Landing version will be able to communicate directly with other CPUs with the help of QuickPath Interconnect, as opposed to the current PCIe interface.

In addition, having Knights Landing socketed would also allow it to benefit from the Xeon's NUMA capabilities, being able to share memory and memory spaces with the Xeon CPUs. Also, Knights Landing is said to be binary compatible with Haswell CPUs, meaning programs can be written once and run across both types of processors.

Intel is expected to start shipping the Knights Landing Xeon Phi coprocessor somewhere around Q2 2015, with the company already lining up its first Knights Landing supercomputer deal with the National Energy Research Scientific Computing Center, involving around 9,300 Knights Landing nodes.

Thank you Anandtech for providing us with this information
Image courtesy of Anandtech

FirePro W8100 Professional Graphics Solution Announced by AMD

AMD has just launched the AMD FirePro W8100 professional graphics card, bringing a new generation of the AMD Graphics Core Next architecture as well as even greater workstation performance, claimed to be 38x better than any other competing product on the market.

Featuring OpenCL support, a best-in-class 8 GB of GDDR5 memory, 4.2 TFLOPS of single-precision compute performance and 2.1 TFLOPS of double-precision compute performance, the AMD FirePro W8100 is said to be designed for Media & Entertainment workflows, as well as engineering analysis and supercomputing applications.

In terms of productivity, the FirePro W8100 is said to be the ideal professional-grade solution for 4K Computer Aided Design (CAD) and other applications dedicated towards video editing, colour correction, compositing, design visualisation and GPU-accelerated compute tasks. Thanks to its super-fast memory and vast capacity, users are said to be able to load large datasets or handle ultra-HD video frames on the GPU’s internal memory for processing purposes, resulting in reduced lag and improved overall responsiveness.

Users are also said to have the option to combine up to four AMD FirePro W8100 cards in a single system in order to have over 16 TFLOPS compute performance, in addition to increased productivity with up to four 4K display support and certified application performance with differentiated graphics features.

The FirePro W8100 is said to deliver support for increasing real-time 4K productivity with single-GPU and multi-GPU configurations from various system integrators, including Armari, CARRI, Colfax, Exxact, Mouse Computers, PSSC Labs, Scan International, Tarox, Versatile Distribution Services, Workstation Specialists and Wortmann.

AMD has stated that the card will start shipping in July from SAPPHIRE Technology, AMD FirePro Ultra Workstation providers, as well as major workstation providers.

Thank you TechPowerUp for providing us with this information
Image courtesy of TechPowerUp

Zenbook NX500 Ultrabook Announced by ASUS, Featuring 4K UHD Touchscreen

ASUS announced the latest addition to its multiple-award-winning Zenbook Ultrabook series, the NX500, stated to come in a sleek and elegant all-aluminium chassis with state-of-the-art components. In addition, the NX500 is said to pack the new ASUS VisualMaster display technology and the world's first 15.6-inch 4K UHD touchscreen, using 3M QDEF technology to deliver vivid and natural colour.

In terms of performance, the Zenbook NX500 boasts Intel’s Core i7 Quad-Core CPU and NVIDIA’s GeForce GTX 850M graphics, featuring 2 GB of GDDR5 video memory. ASUS also provides customers with the ability to choose between one 512 GB PCI Express x4 SSD or up to two SATA 3 SSDs, configurable as a RAID 0 array, when it comes to the NX500’s disk space capabilities. Other aspects of the NX500 include three USB 3.0 ports, next-generation Broadcom dual-band three-stream 802.11ac Wi-Fi and integrated Bluetooth 4.0.

The NX500 also comes with the SonicMaster Premium audio solution, featuring Bang & Olufsen ICEpower technology. The sound solution is said to deliver deep bass and rich, crystal-clear vocals, with a wide frequency range and high volume levels. Together with MaxxAudio Master by Waves, recipient of a Technical GRAMMY award, it brings professional-level sound processing and an enhanced listening experience to the NX500.

The VisualMaster display technology is said to provide the NX500 with incredible clarity, accuracy and vibrant lifelike colour, delivering stunning detailed images to the 4K UHD 15.6-inch IPS touchscreen display. The 3M QDEF technology is also said to use quantum dots in order to provide an ultra-wide colour gamut of 100% NTSC, 108% Adobe RGB and 146% sRGB, while having a wide viewing angle of 178-degrees.

ASUS' Zenbook NX500 is said to boast a new, refined slim shape, carved from a single block of aluminium and featuring tapered slim edges. The keyboard is also said to be a work of art, with a one-piece frameless construction, a chiclet design and backlighting controlled by an ambient-light sensor.

Thank you TechPowerUp for providing us with this information
Image courtesy of TechPowerUp

Galaxy Launches 6 GB Variant of its GeForce GTX 780 Hall Of Fame Edition Graphics Card

Galaxy has just launched another variant of its NVIDIA GeForce GTX 780 Hall of Fame graphics card, bearing the GF-GTX780-E6GHD/SOC model name and featuring the company's signature white PCB and performance-oriented cooling solution.

In terms of specifications, the card comes with 6 GB of memory, with the rest being similar to the original Galaxy GeForce GTX 780 HOF Edition released in July last year. The 6 GB variant also comes with a big factory overclock, with the core GPU clocked at 1019 MHz compared to the original variant's 863 MHz, and its boost speed capped at 1071 MHz compared to the predecessor's 928 MHz.

The GDDR5 memory has apparently been left untouched, clocked at 6008 MHz like the original Galaxy GeForce GTX 780 HOF. Other aspects of the 6 GB GTX 780 HOF variant include the 28 nm GK110 silicon, offering 2,304 CUDA cores, 192 TMUs, 48 ROPs and a 384-bit memory interface. The graphics card also features a 10-phase VRM, drawing power from two 8-pin PCIe power connectors.

The graphics card is said to feature a dual-BIOS configuration as well, with the toggle switch located near the I/O bracket. The card offers four display outputs in a two dual-link DVI, one HDMI and one DisplayPort configuration, while its cooling solution features a hybrid vapor chamber that directs heat to a large aluminium fin-stack heatsink with the help of four nickel-plated copper heat pipes. The heatsink is ventilated by a pair of 90mm fans, with a base plate and heatsink fins cooling the VRM and memory.

Thank you TechPowerUp for providing us with this information
Image courtesy of TechPowerUp

First Images of NVIDIA’s GeForce GT 740 Revealed, Confirmed to be Entry-Level GPUs

The GeForce GT 740 has been expected since Q2 2013, but no word about it had been heard so far. That is, until now, with the first pictures of two GeForce GT 740 models released by EXPReview.com (via WCCFTech). From the image at hand, it is clearly visible that the two have the PCB of an entry-level GPU, one in standard blue and the other black with red stripes and a matching cooler.

In terms of performance, however, we can only speculate at the moment. No official information about the GeForce GT 740 has been released, so it is not quite clear what these cards are capable of. Many articles and news sources, including databases, point to the cards being of the Kepler architecture. There are also rumours of them being Maxwell-based, but the probability is fairly small. What is clear at the moment is that the GPU is based on 28nm; anything else is up for speculation.

The two graphics cards in the image are the Galaxy GT 740 and the Gainward GT 740 Zhao Edition, one requiring a 6-pin external connector while the other does not require any auxiliary power. The rumour is that the power connector is present only to keep the graphics card stable while overclocked, since the PCIe slot outputs 75W or less. The low power consumption does not necessarily mean they are Maxwell, but that is where the Maxwell speculation comes from.

Another confirmed characteristic of these graphics cards is the memory and bus width, which is 1 GB GDDR5 and 128-bit. In terms of connectivity, the Galaxy GeForce GT 740 is seen to feature one DVI-D, one DVI-I, one HDMI and one Display Port while the Gainward version will feature one DVI-D, one VGA and one HDMI port.

Thank you WCCFTech for providing us with this information
Images courtesy of WCCFTech and EXPReview.com

ASUS Announces 0dB Strix R9 280 and GTX 780 Graphics Cards

ASUS announced their upcoming Strix R9 280 and GTX 780 graphics cards, which are claimed to be 15% faster, 20% cooler and 3x quieter. The Strix series is also said to expand this year to incorporate other products such as exceptionally agile mice, super-cool gaming graphics cards, boundary-pushing keyboards and awesome gaming headsets.

The ASUS Strix versions of the R9 280 and GTX 780 feature DirectCU II technology, giving the user cooler, quieter and faster performance for high-end gaming. In addition, the 0dB cooling technology allows the graphics cards to offer total silence during lighter gaming sessions.

ASUS DIGI+ voltage regulation modules are also featured on the graphics cards, providing enhanced stability and efficiency while the GPU Tweak feature provides simple overclocking and online streaming.

Both the Strix R9 280 and GTX 780 graphics cards present a new exclusive fan design, which is said to allow gamers to enjoy the latest titles in full HD (1920 x 1080) in total silence by stopping the fans entirely as long as the GPU stays below 65˚C. The temperature is maintained with the help of highly conductive 10mm copper cooling pipes in direct contact with the card's GPU, along with a heatsink that delivers a heat-dissipation area 220% larger than reference.

In terms of specs, the ASUS Strix R9 280 comes with 3GB of GDDR5 memory and a 384-bit memory interface, having the GPU clocked at 980 MHz and memory clocked at 5200 MHz. Looking at the output ports, the graphics card presents one DVI-D, one DVI-I, one HDMI and one Display Port.

The ASUS Strix GTX 780, however, features 6GB of GDDR5 memory and a 384-bit memory interface, with the GPU clocked at 889 MHz base, boosting up to 941 MHz, while the memory is clocked at 6008 MHz. In terms of connectivity, the output ports are similar to the R9 280's: one DVI-D, one DVI-I, one HDMI and one Display Port.

No release date or price has been confirmed for the two graphics cards, but more details can be found on ASUS' official website.

Thank you VideoCardz for providing us with this information

Images courtesy of VideoCardz

Palit GeForce GTX 780 JetStream 6GB OC Graphics Card Released

Palit are kings of design in my opinion; their graphics cards look absolutely stunning, and this is especially true of their JetStream cooler design. Of course, the black and gold look isn't to everyone's taste, but style will always remain a subjective quality. Now Palit Microsystems Ltd have rolled out their JetStream series once again with the release of their latest card, which enters the GeForce GTX 780 lineup. The new card is the 6GB OC edition of their popular Palit GeForce GTX 780 JetStream, which means you can expect big improvements in performance, resolution capabilities and more.

“GeForce GTX 780 features a massively powerful NVIDIA Kepler-based GPU with 2304 cores, 50% more than its predecessor. Plus, Palit design the 6GB of high-speed GDDR5 memory on GeForce GTX780 to offer rich, realistic and explosive gaming performance under maximum Ultra HD 4K resolution settings. This Palit GTX780 JetStream 6GB is the best choice for gamers who want to experience ultimate 3D gaming by NV 3D Vision Surround technology,” said Palit in a recent press release.

The JetStream cooler may look stunning, but it has also proven a very capable device, largely thanks to its triple-fan design, which features two 80mm fans on either side of a larger central 90mm fan. The cooler also features a large copper base and extra heat pipes, as well as fan control technologies such as TurboFan Blade.

The cooler is certainly needed to tame the raw power of the new card, which runs an 8-phase PWM, DrMOS components and of course that thundering Kepler GPU. The card should start showing up at popular online retailers over the next few days. Unfortunately, there was no information on price, but expect the card to come in around 10-15% over the standard Palit 780, which is already in excess of £400.

Thank you TechPowerUp for providing us with this information.

Image courtesy of TechPowerUp.