Kepler Space Telescope Recovered from Emergency Mode

It was recently reported that the deep-space telescope Kepler had run into trouble and placed itself into emergency mode. Thankfully, the planet-hunting spacecraft has since returned to a stable state.

NASA’s Kepler team engineers were able to point the spacecraft’s communications array back towards Earth and have begun the long process of downloading the data that could reveal the cause of the emergency. With the spacecraft 75 million miles from Earth, a signal takes roughly 13 minutes to make the round trip.
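That 13-minute figure is easy to sanity-check from the distance alone; a quick back-of-the-envelope calculation (assuming the quoted 75 million miles) gives roughly 6.7 minutes each way:

```python
# Light-travel time to a spacecraft ~75 million miles away.
MILES_TO_METERS = 1609.344
SPEED_OF_LIGHT_MS = 299_792_458  # metres per second

distance_m = 75e6 * MILES_TO_METERS
one_way_min = distance_m / SPEED_OF_LIGHT_MS / 60
round_trip_min = 2 * one_way_min
print(f"one-way: {one_way_min:.1f} min, round trip: {round_trip_min:.1f} min")
```

In other words, the 13 minutes covers the full there-and-back exchange, not a single one-way transmission.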

When the telescope was first found to have run into issues late last week, NASA had declared a mission emergency, providing the Kepler team with priority access to the Deep Space Network, which is used to contact distant spacecraft. Due to emergency mode consuming vastly more fuel than normal, restoring Kepler’s functionality was a race against time before it would be unable to complete its mission. Now that Kepler has returned to a stable state, access to the DSN has returned to normal priorities.

Whether Kepler will be returned to “science mode” is yet to be decided by the mission engineers, who are currently performing health checks on all data received from the craft. At the time of the failure, the telescope was only 14 hours away from beginning the next phase of its ongoing mission; should it be deemed fit to return to full operation, it has until July 1st to complete this stage.

Planet Hunting Spacecraft Kepler Enters Emergency Mode

NASA engineers have declared a mission emergency for the exoplanet-hunting spacecraft Kepler, which has unexpectedly entered its emergency mode 75 million miles from Earth. This mode is the craft’s lowest level of operation and, worryingly, also the one that burns the most fuel.

NASA last communicated with Kepler on April 4th, when it was still fully operational and reporting no issues. Despite this, by the 7th Kepler was reporting that it had been in emergency mode for a day and a half. This is certainly not good, but as communication with the spacecraft is still possible, recovery from whatever went wrong may yet be achievable.

Getting Kepler back on track won’t be easy, though: due to the enormous distance from Earth, a message takes roughly 13 minutes to reach the craft and return. To give the team the best chance of restoring normal operation, mission support has been granted priority access to NASA’s deep space telecommunications system and will provide updates on the craft’s status as the situation develops.

Kepler is no stranger to technical difficulties, and its mission team have proven themselves capable of recovering the craft in the past. In July 2012 and then May 2013, Kepler lost two of its four reaction wheels, which are used to steer the craft. Being down to half of these wheels should have proven fatal for a telescope that requires precise pointing control to search for planets. Despite this, a workaround using the pressure of sunlight on the craft was found, and Kepler has continued its mission in this configuration for almost three years.

Take a Look at the New Nvidia Pascal Architecture

With the reveal of the Tesla P100, Nvidia has taken the wraps off its new Pascal architecture. Originally set to debut last year, delays with the 16nm process kept Pascal from becoming a reality, leading to Maxwell on 28nm instead. Now that Pascal is finally here, we are getting an architecture that combines the gaming abilities of Maxwell with much-improved compute performance. The new Unified Memory and Compute Preemption features are the main highlights.

First off, Pascal changes the SM (Streaming Multiprocessor) configuration yet again. Kepler featured 192 CUDA cores per SM, Maxwell had 128, and Pascal now has 64. Reducing the number of CUDA cores per SM increases fine-grained control over compute tasks and ensures higher efficiency. Interestingly, 64 is also the number of cores GCN has in each CU, AMD’s equivalent of the SM. The TMU-to-CUDA-core ratio remains the same as Maxwell’s, with 4 TMUs per SM instead of 8, in line with the drop in cores per SM.

For compute, the gains mostly come from increasing the number of FP64, or double-precision, CUDA cores. DP matters for scientific and compute workloads, though games rarely make use of it. Kepler started cutting back FP64 units and Maxwell went even further, with virtually no FP64 even in the Teslas. This was one reason Maxwell cards were so efficient, and Nvidia only managed to hold onto its compute leadership thanks to CUDA and its single-precision performance.

With Pascal, the ratio of SP to DP units goes to 2:1, a far larger FP64 allocation than Maxwell’s 32:1 or Kepler’s 3:1. GP100 in particular has about 50% of its compute die space dedicated to FP32, about 25% to DP and the last 25% split between LD/ST units and SFUs. This suggests that Pascal won’t change much in terms of gaming performance; the only gains will come from a slight increase in efficiency due to the smaller SMs and the die shrink to 16nm FF+. GeForce variants of Pascal may have their FP64 units trimmed to cram in more FP32 resources, but again, most of the gains will be down to increased density.
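To see what that 2:1 ratio means in throughput terms, here is a rough peak-FLOPS calculation using GP100’s published figures (3584 FP32 cores, ~1480 MHz boost); the numbers are illustrative, not measured:

```python
# Peak FLOPS = cores x 2 (one FMA counts as 2 ops) x clock.
fp32_cores = 3584
fp64_cores = fp32_cores // 2      # Pascal's 2:1 SP:DP ratio
boost_clock_hz = 1480e6           # published GP100 boost clock

fp32_tflops = fp32_cores * 2 * boost_clock_hz / 1e12
fp64_tflops = fp64_cores * 2 * boost_clock_hz / 1e12
print(f"FP32 ~{fp32_tflops:.1f} TFLOPS, FP64 ~{fp64_tflops:.1f} TFLOPS")
```

For comparison, at Maxwell’s 32:1 ratio the same FP32 budget would leave only about 0.33 TFLOPS of FP64, which is why Pascal’s change matters so much for compute.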

Lastly, Pascal brings forward unified memory to allow threads to better share information. This comes alongside larger L2 caches and a more-than-doubled register file. P100, the first Pascal chip, also uses HBM2, with 16GB of VRAM on a 4096-bit bus for a peak bandwidth of 720 GB/s. For CUDA compute tasks, the new Unified Memory model allows Pascal GPUs to address the entire system memory pool with global coherency. This is one way to tackle AMD’s advances with HSA and GCN, and Intel’s Xeon Phi.
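Those HBM2 numbers are internally consistent: on a 4096-bit bus, a 720 GB/s peak implies a per-pin data rate of about 1.4 Gbps. A quick check:

```python
# Peak bandwidth (GB/s) = bus width in bits / 8 x per-pin rate (Gbps).
bus_width_bits = 4096
peak_gbs = 720                                  # quoted peak bandwidth

pin_rate_gbps = peak_gbs * 8 / bus_width_bits   # implied per-pin data rate
print(f"implied per-pin rate: {pin_rate_gbps:.3f} Gbps")
```

That modest per-pin rate is the whole point of HBM: a very wide, relatively slow interface rather than GDDR5’s narrow, fast one.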

Overall, Pascal looks to be an evolutionary update for Nvidia. Perhaps Nvidia has reached the point Intel has, making incremental progress. In other ways, though, the reduction in SM size has great potential and provides a more flexible framework for building GPUs. Now all we are waiting for is for the chips to finally drop.

Astronomers Observe Exploding Star for the First Time

For the first time ever, astronomers have recorded the moment at which a star began to explode – known as the “shock breakout” – in visible light, using NASA’s Kepler space telescope.

The astronomy team, led by Peter Garnavich, Professor of Astrophysics at the University of Notre Dame in Indiana, USA, monitored the light emitted by sources within 500 galaxies – around 50 trillion stars – over a three-year period in order to detect the early signs of a supernova. The observations led to Kepler monitoring two stars in particular – both red supergiants – that were on the verge of exploding. The massive stellar bodies, KSN 2011a and KSN 2011d, were around 1.2 billion light-years away.

“To put their size into perspective, Earth’s orbit about our sun would fit comfortably within these colossal stars,” Garnavich explained on the NASA website.

KSN 2011a and KSN 2011d were then observed exploding. While the Type II supernovae of both stars matched known mathematical models of the phenomenon, the explosion of KSN 2011a was not preceded by the expected shock breakout.

“In order to see something that happens on timescales of minutes, like a shock breakout, you want to have a camera continuously monitoring the sky,” Garnavich said. “You don’t know when a supernova is going to go off, and Kepler’s vigilance allowed us to be a witness as the explosion began.”

The video below shows KSN 2011d exhibiting the shock breakout prior to its supernova:

“That is the puzzle of these results,” Garnavich added. “You look at two supernovae and see two different things. That’s maximum diversity.”

Nvidia May Launch Confusing GT 930 in Early 2016

For the longest time, both AMD and Nvidia have taken to rebranding their low-end cards in order to present something “new” at low cost. While rebranding has become the norm, Nvidia’s GT 930 may be taking the practice to a new extreme. Set to launch in Q1 2016, the GeForce GT 930 will reportedly come in three widely different flavours, spanning three GPU generations released over six years.

From what we know right now, the 930 will use either a Fermi, Kepler or Maxwell based chip. These will be paired with either GDDR5 or DDR3 VRAM, accessed over either a 64-bit or 128-bit interface, meaning a lot of variation in performance. Because of the different chips used, the features offered and the power consumption characteristics will vary widely as well.
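To illustrate how wide that performance spread could be, here is the peak memory bandwidth for the two extremes; the effective memory clocks used are assumptions for illustration, not confirmed specs:

```python
# Peak memory bandwidth = bus width in bytes x effective transfer rate.
def bandwidth_gbs(bus_bits: int, eff_mtps: int) -> float:
    """Bus width in bits, effective rate in MT/s -> GB/s."""
    return bus_bits / 8 * eff_mtps / 1000

low = bandwidth_gbs(64, 1800)     # 64-bit DDR3 at an assumed 1800 MT/s
high = bandwidth_gbs(128, 5000)   # 128-bit GDDR5 at an assumed 5000 MT/s
print(f"DDR3 64-bit: {low:.1f} GB/s, GDDR5 128-bit: {high:.1f} GB/s")
```

That is a better-than-fivefold bandwidth gap between cards sold under the same name.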

The oldest chip is the Fermi-based GF108, released back in 2010 with 96 CUDA cores. Slightly newer is the Kepler-based GK208, which was released in 2013 and features 384 CUDA cores. Finally, there is the newest chip, the Maxwell-based GM108, also with 384 CUDA cores but offering the most features and performance. With such variation, it won’t be surprising if consumers end up confused, unsure of which GT 930 they are getting until they start gaming.

NASA’s Kepler Telescope Finds New Earth-Like Planet

NASA’s Kepler Space Telescope (above) has discovered an Earth-like planet within our galaxy. The planet, given the designation Kepler-452b, is thought to be older and larger than the Earth, but with a similar atmosphere and gravity that could be capable of sustaining life.

The Kepler Space Telescope was launched in March 2009, and during its lifespan has discovered over 1,000 new planets, but the discovery of Kepler-452b is considered to be its most important discovery to date.

A NASA statement reads, “Today, and thousands of discoveries later, astronomers are on the cusp of finding something people have dreamed about for thousands of years – another Earth.”

Kepler-452b appears to be what NASA calls a ‘Goldilocks’ planet, positioned within a zone not too close and not too far from the star that it orbits, ideal for sustaining life. It has twice the land mass of the Earth, has a 380-day year, and its atmosphere is capable of producing cloud cover and a rain cycle.

Jeff Coughlin, Kepler research scientist at SETI Institute in Mountain View, California, thinks that this discovery is just the beginning. “Is this the end? Hell no!” he said at the press conference announcing the discovery.

Thank you The Independent and NASA for providing us with this information.

Examining Nvidia’s Driver Progress Since Launch Drivers: GTX 780 Ti & GTX 680


Just over a month ago we published our AMD driver analysis article, looking at the progress two generations of AMD flagship GPUs, the HD 7970 and R9 290X, had made through driver updates. We compared each flagship card’s launch drivers to the latest drivers and measured the improvements across a variety of games and benchmarks. Having done AMD, it is of course time to do the same for Nvidia. We will take Nvidia’s last two GTX-series flagships, the GTX 680 for the GTX 600 series and the GTX 780 Ti for the GTX 700 series, and compare their launch drivers to the current latest drivers. For both graphics cards we will therefore test two scenarios on identical test systems: the first with the drivers each card shipped with at launch, and the second with the most recent driver release available at the time of testing.

Nvidia’s GTX 680 launched on March 22nd 2012, a few months after AMD’s HD 7970. At launch the GTX 680 shipped with Nvidia ForceWare 301.10 drivers. The GTX 780 Ti launched on the 7th of November 2013 with the special press driver package 331.70, which was identical to the 331.65 package except for “official” GTX 780 Ti support; we used the 331.65 package, since it supports the GTX 780 Ti and is identical to the unavailable 331.70. The latest driver package for both of these Nvidia graphics cards is 340.52. The story with Nvidia is similar to AMD’s: the flagships of the last two generations are based on the same microarchitecture, in Nvidia’s case 28nm Kepler. The GTX 680 has thus had 2.5 years of driver updates, while the GTX 780 Ti hasn’t even had a year. And because the GTX 780 Ti shares the GTX 680’s Kepler architecture, many of the largest performance optimisations had already been made by the time it was released. We therefore expect the GTX 680 to show greater performance improvements than the GTX 780 Ti. If you read our equivalent AMD driver analysis, you will see that the story with the HD 7970 and R9 290X is nearly identical.

One thing I would like to note before we dive into the results: you shouldn’t use these results to compare the GTX 680 to the HD 7970, or the R9 290X to the GTX 780 Ti. Why? Because for our Nvidia tests we used Gigabyte’s GHz Edition GTX 780 Ti and MSI’s Lightning Edition GTX 680, both heavily overclocked out of the box, while for the AMD tests we used two cards that were close to, if not at, stock speeds. This was not done intentionally to favour Nvidia; these were simply the only non-reference cards we had to hand, and we only use non-reference cards because we want to avoid the inconsistencies associated with thermal throttling that both AMD and Nvidia cards experience. In short, Nvidia’s results carry roughly a 10% or greater advantage over the AMD ones due to the factory overclocks. For those wondering why we didn’t just downclock the Nvidia cards to stock speeds: we could have, but the GPU Boost behaviour baked into Kepler cards means no two cards perform the same even when clocked at the same speed.

Nvidia’s Tegra K1 Coming To HP’s Chromebook 14

Nvidia’s Tegra K1 processor has already hit the market in Nvidia’s Shield Tablet and Xiaomi’s MiPad, and according to the latest reports it will now start arriving in Chromebooks.

HP’s Chromebook 14 is set to be the second Chromebook to get Nvidia’s Tegra K1 chip, after Acer recently unveiled a Tegra K1-powered notebook. HP’s Chromebook 14 will have a 14-inch 1366 x 768 display, 2GB of RAM, 16GB of storage, a 3-cell battery and, of course, Nvidia’s Tegra K1 system-on-chip, which includes a quad-core ARM processor and 192 Kepler GPU cores. Nvidia’s power-frugal Tegra K1 should be able to sustain better battery life too – around 12-14 hours seems likely.

Current pricing of HP’s Chromebooks equipped with Intel Haswell Celerons is around $299 so we should expect the Tegra K1 variant to be similarly priced if not slightly cheaper.

Expect more details soon.

Source: Mobile Geeks (de), Via: Softpedia

Image courtesy of HP

Nvidia Maxwell GM204 GPU Gets Detailed

More details have emerged on Nvidia’s upcoming 20nm Maxwell GPUs, which will form the new GTX 800 series. As always, these are very premature and somewhat vague details, so we encourage readers to remember they could be untrue, inaccurate or subject to change. The GM204 is the part in question, and it is believed to be the direct successor to the GK104 Kepler GPU that currently powers the GTX 770, GTX 760, GTX 680, GTX 670 and GTX 660 Ti. Like GK104, the GM204 uses a 256-bit wide memory bus, which implies the card will come as standard with 2GB of GDDR5 VRAM, with an option for 4GB models if Nvidia’s vendors choose to double up. We’ve also heard about the existence of a flagship GM210 GPU, which will succeed the GK110. As a result we expect the GM210 to lead the top of the GTX 800 series (for example the GTX 890, GTX 880 Ti and GTX 880), while the GM204 GPU will likely make up the mid-range (GTX 870, GTX 860 Ti and GTX 860). Speculation suggests the GM204 may use the Maxwell architecture but stick with the 28nm process to keep costs down, which further reinforces the idea that GM204 will be a mid-range Maxwell GPU.

Source: Expreview

Image credit to HQWallPapers.Org and Expreview

Maxwell Graphics Cards Rumored to Feature New Turbofan Cooling Solution

After successfully announcing the entry-level GeForce 740 and the grand (and expensive) GeForce Titan Z, NVIDIA seems to be ready to announce the long-awaited Maxwell-powered graphics cards later this year.

However, with great power also come great temperatures. The GeForce 700 series, the first lineup of graphics cards to use the GK110 core, had the NVTTM cooler specially designed to cope with the GPU’s thermals, and with it NVIDIA made good on its promise of low operating temperatures and noise. NVIDIA now seems to be planning something similar for the Maxwell lineup in terms of cooling solutions.

Rumors are that the company has recently patented a so-called Turbofan design, which appears to be an axial-radial hybrid fan similar to MSI’s cooling solution found on its Mini-ITX cards. The decision to move away from the current NVTTM design is said to have been made in order to deliver a balanced noise-thermal solution for the upcoming Maxwell architecture, since the new GPUs are said to be more energy-efficient and therefore run at lower temperatures to start with.

This does not mean, however, that the upcoming solution is less capable than the current one. Sources indicate that the Turbofan is more energy-efficient, offering lower temperatures while operating at lower noise levels. It is not currently known which Maxwell graphics cards will feature the Turbofan cooling solution, with only rumors pointing at the recently announced higher-end GM204 GPUs. Hopefully NVIDIA will shed some more light on this matter later on.

Thank you WCCFTech for providing us with this information
Image courtesy of WCCFTech

First Images of NVIDIA’s GeForce GT 740 Revealed, Confirmed to be Entry-Level GPUs

NVIDIA announced the GeForce GT 740 back in Q2 2013, but little has been heard about it since. That is, until now: the first picture of two GeForce GT 740 models has been released (via WCCFTech). From the image at hand, it is clearly visible that the two have the PCB of an entry-level GPU, one in standard blue and the other black with red stripes and a matching cooler.

In terms of performance, however, we can only speculate at the moment. No official information about the GeForce GT 740 has yet been released, so it is not quite clear what these cards are capable of. Many other articles and news sources, including databases, point to the cards being of the Kepler architecture. There are also rumors of them being Maxwell-based, but the probability is fairly small. What is clear at the moment is that the GPU is built on 28nm; anything else is up for speculation.

The two graphics cards in the image are the Galaxy GT 740 and the Gainward GT 740 Zhao Edition, one requiring a 6-pin external connector while the other needs no auxiliary power at all. The rumor is that the power connector is present only to keep the graphics card stable while overclocked, since the PCIe slot can supply at most 75W. The low power consumption does not necessarily mean these are Maxwell cards, but it is where that speculation comes from.

Another confirmed characteristic of these graphics cards is the memory and bus width: 1 GB of GDDR5 on a 128-bit interface. In terms of connectivity, the Galaxy GeForce GT 740 is seen to feature one DVI-D, one DVI-I, one HDMI and one DisplayPort, while the Gainward version will feature one DVI-D, one VGA and one HDMI port.

Thank you WCCFTech for providing us with this information
Images courtesy of WCCFTech and

Palit GeForce GTX 780 JetStream 6GB OC Graphics Card Released

Palit are kings of design in my opinion; their graphics cards look absolutely stunning, and this is especially true of their JetStream cooler design. Of course, the black and gold look isn’t to everyone’s taste, but style will always remain a subjective quality. Now Palit Microsystems Ltd have rolled out their JetStream series once again with the release of their latest card, which enters the GeForce GTX 780 lineup. The new card is the 6GB OC edition of their popular Palit GeForce GTX 780 JetStream, which means you can expect big improvements in performance, resolution capabilities and more.

“GeForce GTX 780 features a massively powerful NVIDIA Kepler-based GPU with 2304 cores – 50% more than its predecessor. Plus, Palit designed the 6GB of high-speed GDDR5 memory on the GeForce GTX 780 to offer rich, realistic and explosive gaming performance at maximum Ultra HD 4K resolution settings. This Palit GTX 780 JetStream 6GB is the best choice for gamers who want to experience ultimate 3D gaming with NV 3D Vision Surround technology,” said Palit in a recent press release.

The JetStream cooler may look stunning, but it’s also proven a very capable device and that is largely thanks to its triple fan design which features two 80mm fans on either side of the larger 90mm fan. The cooler also features a large copper base and more heat pipes, as well as fan control technologies such as TurboFan Blade.

The cooler is certainly needed to tame the raw power of the new card, which runs an 8-phase PWM, DrMOS and, of course, that thundering Kepler GPU. The card should start showing up at popular online retailers over the next few days. Unfortunately there was no information on price, but expect the card to cost around 10-15% more than the standard Palit 780, which is already in excess of £400.

Thank you TechPowerUp for providing us with this information.

Image courtesy of TechPowerUp.

NVIDIA GTX 750 Ti Rumored To Be Based On Maxwell GPU Chipset

Rumors are that NVIDIA plans to release the GeForce GTX 750 Ti in February, but what is more interesting is that it is said to be based on the Maxwell GPU architecture, rather than the previously reported Kepler architecture.

It is really interesting that NVIDIA is using the 700 series for the new Maxwell lineup instead of the 800 series, while the mobility parts are using the 800 series brand name. According to WCCF, NVIDIA might be using the 700 series lineup as an entry level for the Maxwell cards.

Currently, the 700 series consists of the GeForce 780 Ti, GeForce 780, GeForce 770 and GeForce 760. A GeForce 750 Ti would make sense as an early low-end Maxwell launch: it would be in range of the 700 series’ performance while demoing what users should expect from the later, more high-end models that will arrive under the 800 series lineup.

If the rumors prove true and the GeForce 750 Ti is based on a Maxwell chip, it will be the first GPU architecture to feature Unified Virtual Memory, which allows the GPU and CPU to share an address space. The hardware-level integration helps improve memory management and reduces overhead. In addition, Maxwell-based GPUs would be the first to integrate the Denver CPU: a custom 64-bit dual-core ARM CPU fused onto the PCB that would enhance GPGPU workloads by shifting load from the host CPU onto the custom ARM cores. It would also be useful when running NVIDIA’s next-generation FLEX (unified GPU PhysX) processes.

More reports state that the Maxwell desktop chip codename will be GM107 / GM117 and would replace GK106 in terms of performance. The GeForce GTX 750 Ti is rumored to launch in late February and would replace the GeForce GTX 650 Ti Boost for a similar price range.

Thank you WCCF for providing us with this information
Image courtesy of WCCF

Nvidia ShadowPlay Gameplay Recording Software Review

Introduction, Overview and Features

We don’t often take a look at software when it comes to graphics cards, but Nvidia’s latest (beta) software is something quite unique. They’ve attempted to take functionality that gamers use a lot – recording their in-game footage – and make it not only free for their customers but also incredibly easy to use and access. Enter Nvidia ShadowPlay, Nvidia’s (relatively) new beta software that utilises the built-in H.264 hardware encoder on Kepler GPUs. ShadowPlay is part of Nvidia’s free GeForce Experience software and is supported on GTX 650 or higher desktop graphics cards.

Nvidia’s ShadowPlay aims to beat its major rivals by not only offering a fuller suite of features, but also by taking advantage of GPU encoding, which is dramatically more efficient than traditional forms of game recording. ShadowPlay supports local recording via GPU encoding, works with DirectX 9, 10 and 11 titles, and will bring Twitch streaming support in the near future. Best of all, it is “free” – of course you have to buy a GTX 650 or better desktop graphics card, but once you have that it doesn’t cost you a thing.


  • Powered by Kepler’s dedicated hardware H.264 video encoder
  • Records up to the last 20 minutes of gameplay in Shadow Mode (originally 10 minutes on Windows 7; with the GeForce Experience 1.8 update, Windows 7 now matches Windows 8 at 20 minutes)
  • Records unlimited-length video in Manual Mode (originally capped at 3.8GB on Windows 7; with the GeForce Experience 1.8 update, Windows 7 now records unlimited-length video like Windows 8)
  • Outputs 1080p at up to 50 Mbps
  • Results in minimal performance impact (less than 10%)
  • GeForce GTX 650 or higher desktop GPU required (notebook GPUs are not supported at this time)
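Two of those numbers fit together neatly: at the 50 Mbps maximum bitrate, the old 3.8GB Windows 7 cap works out to roughly ten minutes of footage, while a full 20-minute Shadow Mode buffer needs about 7.5GB of disk. A quick sketch of the arithmetic:

```python
# File sizes at ShadowPlay's maximum bitrate (50 Mbps = 6.25 MB/s).
bitrate_mbps = 50
mb_per_second = bitrate_mbps / 8

shadow_buffer_gb = mb_per_second * 20 * 60 / 1000   # 20-minute buffer
minutes_in_3_8gb = 3800 / mb_per_second / 60        # old Windows 7 cap
print(f"20-min buffer: {shadow_buffer_gb:.1f} GB, "
      f"3.8GB holds ~{minutes_in_3_8gb:.0f} min")
```

Worth keeping in mind when deciding which drive ShadowPlay should record to.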

As the features suggest, the ShadowPlay software is more versatile on Windows 8 than Windows 7 – something gamers should consider before brushing off Windows 8, as it does offer some advantages. That said, on Windows 7 it still allows you to record enough in-game footage to make decent-length clips, or to live stream continuously.


ASUS Announce GTX 760 DirectCU Mini 2GB

ASUS has made the GTX 760 DirectCU Mini official, according to TechPowerUp. The ASUS GTX 760 DirectCU Mini (model: GTX760-DCMOC-2GD5) is the successor to the GTX 670 DirectCU Mini, and very little has changed: the graphics card is literally identical other than swapping the GTX 670 GPU for a GTX 760 GPU. The card has a 170mm-long PCB that makes it suited to small-form-factor cases and mini-ITX builds. The PCB features a 5-phase VRM that draws power from a single 8-pin PCIe connector. A vapor chamber plate draws heat away from the GPU and is cooled by ASUS’ new CoolTech fan design. On this particular unit, ASUS sticks to reference clock speeds of 980MHz core, 1033MHz GPU Boost and 6GHz memory.

If we draw a direct comparison to the GTX 670 DirectCU Mini, we can expect this card to be around 5-7.5% slower due to the weaker GPU and lack of an overclock. ASUS are expected to price the card at around $299, which is $50 more than a normal GTX 760. The ASUS GTX 670 DirectCU Mini currently costs around $330, though Newegg sells it for $305; if you can find the GTX 670 DirectCU Mini at a similar price, it would definitely be a better buy than this. This new card from ASUS is expected to compete directly with Galaxy’s GTX 760 Mini.

Image courtesy of ASUS

Nvidia’s Maxwell (GTX 800) Confirmed For Q1 Of 2014

We have already heard that we may be seeing an Nvidia GTX 790 dual GK110 GPU in the near future and now VideoCardz reports that we will also be seeing Nvidia’s Maxwell GPUs come as early as Q1 of 2014. VideoCardz cites manufacturer sources which state that the GTX 800 series will arrive at some stage in February/March.

The report states that it is unlikely we will see 20nm GPUs with Maxwell, as TSMC will not be ready for mass production in time. Samples of 20nm GPUs could be available to Nvidia by the end of this year, but if 20nm parts launch in February or March 2014 we should expect an effective paper launch and serious supply shortages, because mass production isn’t expected until June 2014. Since the GTX 700 series was largely a rebranded GTX 600 series, the GTX 800 series could instead be a revised version of the 28nm Kepler architecture in a “tick-tock” style, but again, the details are sketchy.

Either way the message is clear, expect something new from Nvidia’s desktop graphics card segment in early 2014. We expect many more details to emerge between now and then.

Image courtesy of Nvidia

MSI GTX 760 Gaming Series Graphics Card Pictured

With detailed specifications of the GTX 760 already known, as well as the new graphics card’s performance, it was only a matter of time before Nvidia partner versions of the card appeared. The first such GTX 760 to be spotted comes courtesy of MSI’s Gaming Series, although unfortunately we have nothing more than the box to look at.

We should expect to see the third of the GTX 700 series GPUs arrive by the end of June, as it appears Nvidia has now shipped the bulk of GTX 760s out to its AIB partners such as MSI, ASUS, EVGA, Gigabyte and so on. The graphics card pictured above is MSI’s GTX 760 Gaming Series variant, which likely carries a Twin Frozr cooler and comes with overclocked frequencies.

If you are interested in more specifications for the new GTX 760, then check that link. We expect Nvidia will probably attach a hefty price premium to the GTX 760, to the tune of around $300, while a GTX 760 Ti, if it arrives, will probably slot in at $349, with the GTX 770 holding the $400 price point.

Image courtesy of

Nvidia GTX 760 Specifications Detailed Further

According to a leaked report from Chiphell, the Nvidia GTX 760 will feature 1152 CUDA cores, unlike any other Nvidia GK104 card currently on the market, meaning it is not a rebranded version of the GTX 670, GTX 660 Ti or GTX 660. The GTX 760 is a completely “new” Kepler design, with its 1152 CUDA cores sitting between the 960 of the GTX 660 and the 1344 of the GTX 660 Ti/GTX 670. Furthermore, Nvidia is continuing its trend of aggressive clock speeds, opting for a 1072MHz stock base clock and a 1111MHz GPU Boost clock. The RAM is also clocked at an impressive 7012MHz effective.

The design is essentially a GK104 GPU with three quarters of its cluster units enabled, giving 1152 CUDA cores, 96 Texture Mapping Units, 32 Render Output Units and a 256-bit GDDR5 memory interface. Being a new card, it should feature GPU Boost 2.0, and expect its TDP to be in the 150-170W region. Another key point is that, unlike the GTX 660 and GTX 660 Ti, this card is not constrained by a 192-bit memory interface, so it should achieve better performance as a result.
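The cluster arithmetic checks out: GK104 carries eight SMX units, each with 192 CUDA cores and 16 texture units, so enabling six of the eight (three quarters) reproduces the leaked figures:

```python
# Kepler GK104: per-SMX resources.
CORES_PER_SMX = 192
TMUS_PER_SMX = 16
TOTAL_SMX = 8

enabled = 6                        # 3/4 of the full chip
cuda_cores = enabled * CORES_PER_SMX
tmus = enabled * TMUS_PER_SMX
print(cuda_cores, tmus)            # matches the leaked 1152 cores / 96 TMUs
```

The 32 ROPs, by contrast, hang off the memory controllers rather than the SMX units, which is why they survive the cut-down along with the full 256-bit bus.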

Image courtesy of Chiphell

GTX 760 3DMark 11 Performance Revealed

The latest specifications of the GTX 760 reveal that it sits somewhere between a GTX 660 and a GTX 660 Ti in terms of CUDA core count. Yet according to the latest 3DMark 11 leaks from Chiphell, it actually performs more or less identically to a GTX 670. The above screenshot – which for some reason has been edited to say P 830 rather than P8830 – suggests that the stock GTX 760 is right around GTX 670 levels and about 500-700 3DMark 11 points above a GTX 660 Ti.

This is interesting because it shows that with some memory bus tweaking and some aggressive clocking, Nvidia has been able to take an 1152 CUDA core card past the 1344 CUDA core GTX 660 Ti and onto an equal footing with the GTX 670. If the GTX 760 is priced right, we could see one of the most competitive graphics cards the marketplace has seen to date.

The performance breakdown of this 3DMark 11 test is as follows:

  • Overall score: P8830
  • Graphics score: 8897
  • Physics score: 9676
  • Combined score: 7435

If we look at some Graphics score results for comparison, a reference GTX 670 scores around 8600 points and a reference GTX 680 around 9500 points, so the GTX 760 sits just above the GTX 670 with its 8897. Of course, the GTX 670 will have greater overclocking headroom, so once both cards are pushed, the GTX 760 will likely only end up clearly ahead of the GTX 660 Ti.
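For reference, 3DMark 11's overall P score is a weighted harmonic mean of the three sub-scores. Using the commonly cited Performance-preset weights (0.85 graphics, 0.10 physics, 0.05 combined) – treat these as an approximation rather than official constants – the leaked sub-scores land within about 1% of the reported P8830:

```python
# Weighted harmonic mean of the leaked sub-scores.
# The weights are assumed/commonly cited, not taken from the leak itself.
graphics, physics, combined = 8897, 9676, 7435
w_g, w_p, w_c = 0.85, 0.10, 0.05

overall = 1 / (w_g / graphics + w_p / physics + w_c / combined)
print(f"estimated overall: P{overall:.0f}")  # close to the reported P8830
```

The harmonic weighting is why the Graphics score dominates the headline number: a weak sub-score drags the mean down far more than a strong one lifts it.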

Image courtesy of Chiphell.

Rumour: Nvidia GTX 760 Set For June 25th Release Date

Nvidia’s new GTX 700 series has already launched, dominated so far by the GTX 780 and GTX 770 video cards. Now the speculation moves to the unreleased graphics cards, and today we are addressing some rumours about the GTX 760 that have emerged in a new report. The report says we can expect the GTX 760 to be released on either Tuesday June 25th or Thursday June 27th. The launch was reportedly going to take place next week, as a mid-June release was originally expected, but AIBs apparently managed to delay it because they were not ready to transition to the new card that week.

Apart from that, details are still sketchy in terms of what the GTX 760 will actually be. Most reports seem to agree on the rebrand and “drop-down” concept: that the GTX 760 will be a rebranded GTX 660 Ti. However, there have been conflicting reports suggesting the GTX 760 will just be a rebadged GTX 660 (the 1152 CUDA core model, not the 1344 CUDA core GTX 660 Ti). In that case the GTX 660 Ti may vanish from Nvidia’s portfolio, with the GTX 670 forming the GTX 760 Ti while the GTX 660 Ti is phased out. There is also a third possibility: that both the GTX 760 Ti and the GTX 760 will be based on the GTX 670 GPU, with the GTX 760 Ti obviously getting a faster version of it.

Either way, even with all the conflicting rumours that have probably left you as confused as I am, Nvidia are reportedly planning just one GPU launch this month, and according to this report it is definitely going to be the GTX 760. We will have to wait and see, as most rumours have a tendency to be only partially correct.

Image courtesy of Nvidia

Nvidia launches GTX 700M Enthusiast Notebook GPUs

The run-up to Computex is always a busy time of year. There are new APUs from AMD in the form of Richland, new graphics card releases from Nvidia in the form of the GTX 700 series and new processors from Intel in the form of Haswell. Not all of these have been released yet, but they will be over the coming weeks. Nvidia is now adding another big release to the bunch by launching the GTX 700M series of graphics cards for notebooks.

The current launch of the GTX 700M series from Nvidia comprises the GTX 760M, GTX 765M, GTX 770M and GTX 780M. If you missed the launch of the earlier models, the GT 710M through the GT 750M, you can check the link for those. Based on these specifications, our previous article about the GTX 770M being a rebranded GTX 670MX does indeed seem to be true. As we can see, at a hardware level all of these graphics cards are more or less the same, using either GK104, GK106 or GK107 – all based on the Kepler architecture.

That said, these GPUs are not “identical”, as Nvidia has tweaked and tuned base clocks and memory clock speeds and added GPU Boost 2.0 to the GTX 700M series. Every part in the Nvidia GTX 700M series should therefore be much faster than its GTX 600M series counterpart, because each takes the previously higher-tier model (for example, the GTX 670MX drops down to become the GTX 770M) and then increases clocks further to add even more performance.

Anandtech are the ones who have revealed all this information about the GTX 700M series. They report figures for BioShock Infinite at maximum settings at 1080p: 41.5 FPS for the new card, against 35.6 FPS for the GTX 675MX and 45.3 FPS for the HD 7970M.

Image #1 courtesy of Nvidia and Image #2 courtesy of Anandtech 

Gigabyte GeForce GTX 770 OC WindForce 3x 2GB Graphics Card Review

With the release of the GTX 770 and the recent launch of the GTX 780, NVIDIA made one fundamental change to their cards compared with the GTX 690 and Titan. This change may seem simple, but it has a major impact on the graphics market and on each of NVIDIA’s partners: partners are now granted permission to change the PCB layout and, most importantly, the cooling on the cards. When the GTX Titan was released, NVIDIA put a halt on any non-reference designs, and in effect the only alteration partners could make was to put a sticker with their name on the card.

This is not the case with the 700 series, however. Whilst the GTX 770 and GTX 780 use the exact same cooler as seen on Titan, manufacturers are now allowed to deviate from the reference design and put their own mark on their cards to set them apart from the competition. In Gigabyte’s case, the cooler of choice is WindForce, which has been at the forefront of their marketing campaigns for a number of years now.

With the release of the 770, Gigabyte are keen to show off the latest revision of their well-known cooler, which now features a metal housing rather than the older plastic design. On top of this, Gigabyte have given the GK104 core the overclock treatment to take the 770 to the next level and let it stretch its legs a bit more.

When it comes to looking at this card, there is little more than the card in a box to look at as this is a review sample, meaning that Gigabyte have omitted the usual gubbins and accessories that we would normally see as part of a graphics card bundle.

NVIDIA GTX 770 2GB Graphics Card Review

Last week we saw the release of NVIDIA’s latest graphics range – namely the 700 series and its top model, the GTX 780. In many respects the GTX 780 brings a whole new level of performance to a greater audience and as I showed, there is only a small difference between the 780 and Titan on a single screen.

Working through the new 700 series line-up, NVIDIA are now lifting the lid on their next card, the GTX 770. Like the GTX 780, the GTX 770 has had many rumours surrounding its release, all relating to specifications, performance and, most of all, the GK104 core and the GTX 680. As with the GTX 780, I first of all want to put one of these rumours to rest and explain why. The rumour I am referring to is the speculation that GTX 680 owners would be able to turn their card into a GTX 770 through a BIOS update. Simply put, this CANNOT be done. Whilst both cards share the same GK104 GPU core, there are a number of factors that make this impossible. As with the 780 to Titan comparison, the GTX 770 has a slightly different revision of the GK104 core with varying numbers of CUDA cores and texture units; however, the most significant factor behind the inability to ‘convert’ the GTX 680 lies with the on-board memory.

One of NVIDIA’s major shouting points with the GTX 770 is the inclusion of memory that runs at a whopping 7Gbps at stock. These are no overclocked ICs either; they are entirely new. So unless you have the ability to unsolder and resolder the ICs onto a GTX 680, as well as change the PCB layout slightly, there is no possibility of changing your card from one to the other.

GTX 770 Details Revealed – Higher Clocks And Lower Price Than Expected

We’ve had a lot in the way of GTX 770 speculation, yet we haven’t heard much lately. That said, a report from Hermitage Akihabara has revealed new details about the GTX 770’s price and specifications.

Nothing has changed on the “GTX 680” rebrand side – we still have hardware specifications identical to a GTX 680, as predicted several times before. However, the clock speeds are radically different from what we expected, with an expected base clock of 1046MHz and a 1085MHz boost. Traditionally, AMD have been the only graphics card vendor in recent times to brag about smashing the 1GHz boundary.

They also released pricing information which suggests that the card will sell for 40,000 Japanese Yen, which translates into roughly $380-400 and €380-400, and therefore about £340-£350. This would be a massive shake-up to the video card market, as the GTX 680 currently costs around $500 in the US and £385 in the UK. The GTX 770 would totally undercut this and probably force Nvidia to cut prices on GTX 600 series video cards. In addition, we can expect AMD to suffer in terms of sales while the HD 7970 GHz Edition still costs $450 and £350, yet Nvidia would have a marginally faster counterpart for around 20% less.
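For the curious, the dollar figure roughly checks out against exchange rates of the time. A quick sketch (the ¥100-to-the-dollar rate here is an assumed round number close to mid-2013 levels, not a figure from the report; the quoted sterling price likely reflects typical retail mark-up rather than a direct conversion):

```python
def jpy_to(amount_jpy: float, jpy_per_unit: float) -> float:
    """Convert a JPY amount at a given exchange rate (JPY per target currency unit)."""
    return amount_jpy / jpy_per_unit

price_jpy = 40_000

# Assumed rate: roughly 100 JPY to the dollar
usd = jpy_to(price_jpy, 100)
print(f"40,000 JPY ~= ${usd:.0f}")  # lands at the top of the quoted $380-400 range
```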

There’s no guarantee that this pricing will actually carry over; it is possible there could be an extra mark-up for the European and North American markets. Only time will tell. Would you buy a GTX 770 if it costs around $380/€380/£340?

Image courtesy of Hermitage Akihabara

Nvidia GeForce GTX 780 3GB Graphics Card Review

It’s that time of year again where NVIDIA have a new series of cards in the pipeline, and as we have seen in the run-up to today, the number of rumours and leaks flying about is as profound as ever. For some this leads to pure confusion as to what is true and what is complete rubbish. For people like myself it leads to pure frustration, as I know the true facts and figures, meaning that when I see the rumours and false claims floating around I can do nothing but sit and wait until the NDA lifts to put a number of these claims to rest with the real specifications and performance figures behind the new cards.

So here we have it: the GTX 780, the first in the new line of Kepler based 700 series cards. Before we get too far into the nitty gritty of what’s new in the 700 series, I want to make the following fact clear – the GTX 780 CANNOT be flashed in any way to effectively turn it into Titan. There are a number of reasons for this. First off, whilst both cards share the same GK110 core, the 780 has far fewer CUDA cores, is a different revision of the core chip and has fewer texture units on-board. On top of this, there is also half the amount of video memory, and a number of components in the power region of the PCB are missing, as the 780 does not require them, unlike Titan.

With that point out of the way, NVIDIA’s new 700 series cards are here to replace the ever popular 600 series, although they are not simply a re-hash and re-brand of 6xx cards as some may presume. Whilst the same Kepler GPU cores may feature on both 600 and 700 series cards, they have subtle variances, mainly in CUDA core count, texture filters and so forth.

So what is the 780 in relation to the 600 series cards? Whilst it may look like Titan, it is a slightly lower performing card. Titan is more geared towards users with multiple high resolution displays, hence the 6GB of GDDR5 memory it carries. The 780, whilst still home to 3GB of GDDR5, is aimed more at users who are going to be gaming on a single screen at high resolutions with all the settings turned up to 11. Over its predecessor, the GTX 680, the 780 has 50% more CUDA cores with a count of 2304, 50% more memory, up to 3GB from 2GB, and overall a 34% increase in performance. Interestingly enough, GTX 580 users who upgrade to a 780 will see a whopping 70% gain in performance between the two cards, and a 25-30% gain can also be found over AMD’s 7970.
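Those 50% figures are easy to sanity-check from the public headline specs of the two cards (2304 CUDA cores on the GTX 780 against 1536 on the GTX 680, and 3GB of GDDR5 against 2GB):

```python
def pct_increase(new: float, old: float) -> float:
    """Percentage increase of new over old."""
    return (new - old) / old * 100

cuda_gain = pct_increase(2304, 1536)  # GTX 780 vs GTX 680 CUDA cores
mem_gain = pct_increase(3, 2)         # 3GB vs 2GB GDDR5
print(f"CUDA cores: +{cuda_gain:.0f}%, memory: +{mem_gain:.0f}%")  # both +50%
```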

MSI’s GTX 770 Lightning Graphics Card Pictured

Nvidia’s GTX 770 will be a rebranded GTX 680 with a few performance tweaks. As a result, we are not surprised to see MSI ready with a GTX 770 Lightning Edition straight away, given that they had a GTX 680 Lightning Edition graphics card which is in effect the same product. You can see the performance of the GTX 770 here and the specifications of the GTX 770 here. It is worth noting that while the stock GTX 770 has clock speeds of 1046 MHz core, 1085 MHz boost and 7GHz effective memory, the MSI GTX 770 Lightning Edition will probably have much higher clock speeds – around 1100MHz core, 1150MHz boost and 7.4GHz memory is my best “guestimate”.

The MSI GTX 770 Lightning Edition uses the Twin Frozr IV cooler with a pair of 10cm PWM fans. There is a dense aluminium heatsink and a bunch of heatpipes to cool the GPU, VRM and memory. The fans are equipped with MSI’s “Dust Removal” technology which reverses the fans on start-up to expel dust. In addition the GPU is expected to ship with MSI’s GPU Reactor module which beefs up the VRM and reduces static noise, allowing for greater overclocking.

Expect a price point of around $449+ for this model, as it will be among the best of the best when it comes to GTX 770s on the market. For reference, the current MSI GTX 680 Lightning Edition costs $499.99 on Newegg. If rumours are to be believed, the GTX 770 will hit the market on the 30th of May; availability of this model will probably come in the following week, with pricing for the range expected at $399-$449.

Check out the pictures below and let us know what you think of it!


Nvidia GTX 770 Performance Revealed

While we already know the GTX 770 is just a rehashed GTX 680 with some slight performance tweaks, we haven’t really understood, until today, what that will translate into in terms of gaming performance. We won’t bore you again with the specifications of the GTX 770, as you can see those here.

Just recently the GTX 770 was benchmarked against the GTX 680. The GTX 770 in question had 1059 MHz core, 1076 MHz boost and 7GHz effective memory clocks; the GTX 680 had 1006MHz core, 1056MHz boost and 6GHz effective memory. This makes the GTX 770 around 5% faster in terms of core clock speed and 17% faster in terms of memory speed.
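Those percentages fall straight out of the quoted clocks. A quick sketch, using only the figures stated above:

```python
# Quoted clocks for the two benchmarked samples
gtx770 = {"core_mhz": 1059, "boost_mhz": 1076, "mem_ghz": 7.0}
gtx680 = {"core_mhz": 1006, "boost_mhz": 1056, "mem_ghz": 6.0}

def pct_faster(new: float, old: float) -> float:
    """Percentage increase of new over old."""
    return (new - old) / old * 100

core_gain = pct_faster(gtx770["core_mhz"], gtx680["core_mhz"])  # ~5.3%
mem_gain = pct_faster(gtx770["mem_ghz"], gtx680["mem_ghz"])     # ~16.7%
print(f"Core: +{core_gain:.1f}%  Memory: +{mem_gain:.1f}%")
```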

This is reportedly the “stock” configuration for all GTX 770s. Unlike the GTX 680, the GTX 770’s memory now tops out at an 8GHz effective clock rather than 7.2GHz, because it uses HY R2C memory chips rather than HY ROC chips. This means you will be able to achieve better memory overclocks on a GTX 770 than on a GTX 680.

The card was tested in a variety of gaming configurations and tests and yielded approximately a 10% performance boost over a stock reference GTX 680 – that is, stock GTX 770 vs stock GTX 680.

3DMark FireStrike (Extreme):

  • NVIDIA GeForce GTX 770: 3535 Marks
  • NVIDIA GeForce GTX 680: 3150 Marks

3DMark FireStrike (Performance):

  • NVIDIA GeForce GTX 770: 7078 Marks
  • NVIDIA GeForce GTX 680: 6331 Marks

3DMark 11 (Extreme):

  • NVIDIA GeForce GTX 770: 3840 Marks
  • NVIDIA GeForce GTX 680: 3411 Marks

3DMark 11 (Performance):

  • NVIDIA GeForce GTX 770: 10693 Marks
  • NVIDIA GeForce GTX 680: 9777 Marks

FarCry 3 (1920×1080):

  • NVIDIA GeForce GTX 770: 70.9 FPS
  • NVIDIA GeForce GTX 680: 79.8 FPS

CRYSIS 3 (1920×1080):

  • NVIDIA GeForce GTX 770: 42.9 FPS
  • NVIDIA GeForce GTX 680: 39.1 FPS

TOMB RAIDER (1920×1080):

  • NVIDIA GeForce GTX 770: 87.3 FPS
  • NVIDIA GeForce GTX 680: 78.3 FPS
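To put the claimed ~10% figure in context, the relative gain in each test can be computed directly from the scores listed above (a quick sketch; higher is better in every test):

```python
# (test, GTX 770 result, GTX 680 result) as quoted above
results = [
    ("3DMark FireStrike Extreme", 3535, 3150),
    ("3DMark FireStrike Performance", 7078, 6331),
    ("3DMark 11 Extreme", 3840, 3411),
    ("3DMark 11 Performance", 10693, 9777),
    ("FarCry 3 1080p (FPS)", 70.9, 79.8),
    ("Crysis 3 1080p (FPS)", 42.9, 39.1),
    ("Tomb Raider 1080p (FPS)", 87.3, 78.3),
]

gains = {name: (new - old) / old * 100 for name, new, old in results}
for name, gain in gains.items():
    print(f"{name}: {gain:+.1f}%")
```

Most tests land in the 9-13% range, consistent with the ~10% claim; the FarCry 3 numbers are the odd ones out, as the quoted figures there actually favour the GTX 680.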

The GTX 770 is expected on May 30th at a price point of $399-$449. What are your thoughts on these performance numbers? Is it a real performance increase, or just a result of the higher clocks the GTX 770 has over the GTX 680?

Source, Via #1 | #2