Many will remember playing RuneScape, a firm favourite back before massively multiplayer online games flooded the market. From the village of Lumbridge to the battlegrounds of Castle Wars, players have levelled and traded within their browser for the past fifteen years, but that is all set to change with the game’s relaunch, complete with a brand new game engine.
Built on a new visual engine and a dedicated game client, RuneScape will no longer run in your browser, instead sitting on your computer awaiting your adventures. The relaunch isn’t just changing where you play, either, bringing a wide range of technical improvements including support for DirectX 12 and Windows 10.
The graphics are on a whole different level to the pixels and blocks that once strained your eyes as you mined for copper and tin, with increased draw distances, water effects, and dynamic lighting and shadows now welcoming you into the world.
Jagex isn’t stopping at the new game client either, promising further enhancements to the game’s visuals, including volumetric lighting, improved animations, and higher-resolution textures.
I remember starting out on RuneScape many years ago, and the new graphics definitely bring the urge to boot it up again to the surface. If you’re interested, you can download the new game client here.
When new technology comes out, it tends to take time for systems and developers to get to grips with it, and its advertised benefits are normally offered at some price. But is that price too steep? Hitman’s developer says that if you want to experience DX12’s benefits, they are only achievable by dropping DX11 support completely.
Hitman was released earlier this year to favourable reviews, with an entire YouTube series putting people in command of Agent 47, including the likes of the Chuckle Brothers. The lead developer behind the game, Jonas Meyer of IO Interactive, has now stated that if you want the 20% CPU and 50% GPU gains that Microsoft promises with DX12, you will have to drop DirectX 11 support entirely. Hitman, by contrast, was more of a port from the older framework to DX12.
With more and more games being released with DX12 at their core, such as the remake of the classic Gears of War, suffering from less-than-amazing performance, the new graphics library doesn’t yet show off as much as was advertised.
AMD has a serious image problem with their drivers which stems from buggy, unrefined updates and a slow release schedule. Even though this perception began many years ago, it’s still impacting the company’s sales and helps explain why their market share is so small. The Q4 2015 results from Jon Peddie Research suggest AMD reached a market share of 21.1% while NVIDIA reigned supreme with 78.8%. The Q4 data is at least more promising than the previous quarter, when AMD accounted for a mere 18.8%. On the other hand, respected industry journal DigiTimes reports that AMD is likely to reach its lowest ever market position for Q1 2016. Thankfully, the financial results will emerge on April 21st, so we should know the full picture relatively soon. Of course, the situation should improve once Polaris and Zen reach retail channels. To make matters worse, AMD’s share price has declined by more than 67% in five years, from around $9 to under $3 as of March 28, 2016. The question is why?
Is the Hardware Competitive?
The current situation is rather baffling considering AMD’s extremely competitive product line-up in the graphics segment. For example, the R9 390 is a superb alternative to NVIDIA’s GTX 970 and features 8GB of VRAM, which provides extra headroom when using virtual reality equipment. The company’s strategy appears to revolve around minor differences in performance between the R9 390 and 390X. This also applied to the R9 290 and 290X, due to both products utilizing the Hawaii core. NVIDIA employs a similar tactic with the GTX 970 and GTX 980, but with a marked price increase over their AMD rivals.
NVIDIA’s ability to cater to the lower-tier demographic has been quite poor, because competing GPUs such as the 7850 and R9 380X provided a much better price-to-performance ratio. Not only that, NVIDIA’s decision to deploy ridiculously low amounts of video memory on cards like the GTX 960 has the potential to cause headaches in the future. It’s important to remember that the GTX 960 can be acquired with either 2GB or 4GB of video memory. Honestly, they should have simplified the line-up and produced only the higher-memory model, in a similar fashion to the R9 380X. Once again, AMD continues to offer a very generous amount of VRAM across various product tiers.
Part of the problem revolves around AMD’s sluggish release cycle and reliance on the Graphics Core Next (GCN) 1.1 architecture, first introduced way back in 2013 with the Radeon HD 7790. Despite its age, AMD deployed the GCN 1.1 architecture on their revised 390 series and didn’t do themselves any favours when denying accusations that the new line-up was a basic re-branding exercise. Of course, this proved to be the case, and some users managed to flash their 290/290X into a 390/390X with a BIOS update. There’s nothing inherently wrong with product rebrands if they remain competitive in the current market. It’s not exclusive to AMD, and NVIDIA has used similar business strategies on numerous occasions. However, I feel it’s up to AMD to push graphics technology forward and encourage their nearest rival to launch more powerful options.
Another criticism of AMD hardware, which seems to plague everything they release, is the perception that every GPU runs extremely hot. You only have to look at certain websites, social media and various forums to see this is the main source of people’s frustration. Some individuals have even produced images showing AMD graphics cards ablaze. So is there any truth to these suggestions? Unfortunately, the answer is yes, and a pertinent example comes from the R9 290 range. The 290/290X reference models utilized one of the most inefficient cooler designs I’ve ever seen and struggled to keep the GPU core below 95°C under load.
Unbelievably, the core was designed to run at these high thermals, and AMD created a more progressive RPM curve to reduce noise. As a result, the GPU could take 10-15 minutes to reach idle temperature levels. The Hawaii temperatures really impacted the company’s reputation and forged a viewpoint among consumers which I highly doubt will ever disappear. It’s a shame, because the upcoming Polaris architecture built on the 14nm FinFET process should exhibit significant efficiency gains and end the notion of high thermals on AMD products. There’s also the idea that AMD GPUs have a noticeably higher TDP than their NVIDIA counterparts. For instance, the R9 390 has a TDP of 275 watts while the GTX 970 only consumes 145 watts. On the other hand, the gap narrows at the high end: the Fury X is rated at 275 watts compared to the GTX 980 Ti’s 250 watts.
Eventually, AMD released a brand new range of graphics cards utilizing the first iteration of high-bandwidth memory. Prior to its release, expectations were high, and many people expected the Fury X to dethrone NVIDIA’s flagship graphics card. Unfortunately, this didn’t come to fruition, and the Fury X fell behind in various benchmarks, although it fared better at high resolutions. The GPU also encountered supply problems and emitted a loud whine from the pump on early samples. Asetek even threatened to sue Cooler Master, which created the AIO design, a dispute which could force all Fury X products to be removed from sale.
The rankings alter rather dramatically when the DirectX 12 renderer is used, which suggests AMD products have a clear advantage there. Asynchronous Compute is the hot topic right now; in theory, it allows for greater GPU utilization in supported games. Ashes of the Singularity has implemented this for some time and makes for some very interesting findings. Currently, we’re working on a performance analysis for the game, but I can reveal that there is a huge boost for AMD cards when moving from DirectX 11 to DirectX 12. Furthermore, there are reports indicating that Pascal might not be able to use asynchronous shaders, which would make Polaris and Fiji products more appealing.
Do AMD GPUs Lack Essential Hardware Features?
When selecting graphics hardware, it’s not always about pure performance, and some consumers take exclusive technologies such as TressFX hair into account before purchasing. At this time, AMD’s latest products incorporate LiquidVR, FreeSync, Vulkan support, HD3D, frame rate target control, TrueAudio, Virtual Super Resolution and more! This is a great selection of hardware features to create a thoroughly enjoyable user experience. NVIDIA adopts a more secretive attitude towards their own creations and often uses proprietary solutions. The Maxwell architecture has support for Voxel Global Illumination (VXGI), Multi-Frame Sampled Anti-Aliasing (MFAA), Dynamic Super Resolution (DSR), VR Direct and G-Sync. There’s a huge debate about the benefits of G-Sync compared to FreeSync, especially when you take into account the pricing difference when opting for a new monitor. Overall, I’d argue that the NVIDIA package is better, but there’s nothing really lacking from AMD in this department.
Have The Drivers Improved?
Historically, AMD drivers haven’t been anywhere close to NVIDIA’s in terms of stability and providing a pleasant user interface. Back in the old days, AMD, or even ATI if we’re going way back, had the potential to cause system lock-ups, software errors and more. A few years ago, I had the misfortune of updating a 7850 to the latest driver, and after rebooting, the system’s boot order was corrupt. To be fair, this could be coincidental and have nothing to do with that particular update. On another note, the 290 series was plagued with hardware bugs causing black screens and blue screens of death whilst watching Flash videos. To resolve this, you had to disable hardware acceleration and hope that the issues subsided.
The Catalyst Control Center always felt a bit primitive for my tastes although it did implement some neat features such as graphics card overclocking. While it’s easy enough to download a third-party program like MSI Afterburner, some users might prefer to install fewer programs and use the official driver instead.
Not so long ago, AMD appeared to have stalled in releasing drivers optimized for the latest games. On 9th December 2014, AMD unveiled the Catalyst 14.12 Omega WHQL driver and made it available for download. In a move which still astounds me, the company then decided not to release another WHQL driver for six months! Granted, they were working on a huge driver redesign and still produced the odd beta update. I honestly believe this was very damaging and prevented high-end users from considering the 295X2 or a CrossFire configuration. It’s so important to have a consistent, solid software framework behind the hardware to allow for constant improvements. This is especially the case when using multiple cards, which require profiles to achieve proficient GPU scaling.
Crimson’s release was a major turning point for AMD due to the modernized interface and enhanced stability. According to AMD, the software package involves 25 percent more manual test cases and 100 percent more automated test cases compared to AMD Catalyst Omega. Also, the most-reported bugs were resolved, and they’re using community feedback to quickly apply new fixes. The company hired a dedicated team to reproduce errors, which is the first step to providing a more stable experience. Crimson apparently loads ten times faster than its predecessor and includes a new game manager to optimize settings to suit your hardware. It’s possible to set custom resolutions, including the refresh rate, which is handy when overclocking your monitor. The clean uninstall utility proactively works to remove any remaining elements of a previous installation, such as registry entries, audio files and much more. Honestly, this is such a huge step forward, and AMD deserves credit for tackling their weakest elements head on. If you’d like to learn more about Crimson’s functionality, please visit this page.
However, it’s far from perfect, and some users initially experienced worse performance with the update. Of course, there are going to be teething problems whenever a new release occurs, but it’s essential for AMD to do everything they can to forge a new reputation for their drivers. Some of you might remember the furore surrounding the Crimson fan bug, which limited the GPU’s fans to 20 percent. Some users even reported that this caused their GPU to overheat and fail. Thankfully, AMD released a fix for this issue, but it shouldn’t have occurred in the first place. Once again, it’s hurting their reputation and ability to move on from old preconceptions.
Is GeForce Experience Significantly Better?
In recent times, NVIDIA drivers have been the source of some negative publicity. More specifically, users were advised to ignore the 364.47 WHQL driver and instructed to download the 364.51 beta instead. One user said:
“Driver crashed my windows and going into safe mode I was not able to uninstall and rolling back windows would not work either. I ended up wiping my system to a fresh install of windows. Not very happy here.”
NVIDIA’s Sean Pelletier released a statement at the time which reads:
“An installation issue was found within the 364.47 WHQL driver we posted Monday. That issue was resolved with a new driver (364.51) launched Tuesday. Since we were not able to get WHQL-certification right away, we posted the driver as a Beta.
GeForce Experience has an option to either show WHQL-only drivers or to show all drivers (including Beta). Since 364.51 is currently a Beta, gamers who have GeForce Experience configured to only show WHQL Game Ready drivers will not currently see 364.51
We are expecting the WHQL-certified package for the 364.51 Game Ready driver within the next 24hrs and will replace the Beta version with the WHQL version accordingly. As expected, the WHQL-certified version of 364.51 will show up for all gamers with GeForce Experience.”
As you can see, NVIDIA isn’t immune to driver delivery issues, and this was a fairly embarrassing situation. Despite this, it didn’t appear to have a serious effect on people’s confidence in the company or make them reconsider their views of AMD. While there are some disgruntled NVIDIA customers, they’re fairly loyal and distrustful of AMD’s ability to offer better drivers. The GeForce Experience software contains a wide range of fantastic features such as ShadowPlay, GameStream, game optimization and more. After a driver update, however, the software can feel a bit unresponsive and take some time to close. Furthermore, some people dislike the notion of Game Ready drivers being locked into the GeForce Experience software. If a report from PC World is correct, consumers might have to supply an e-mail address just to update their drivers through the application.
Before coming to a conclusion, I want to reiterate that my allegiances don’t lie with either company, and the intention was to create a balanced viewpoint. I believe AMD’s previous failures are impacting the company’s current product range, and it’s extremely difficult to shift people’s perceptions of the company’s drivers. While Crimson is much better than CCC, it was also the cause of a horrendous fan bug, resulting in a PR disaster for AMD.
On balance, it’s clear AMD’s decision to separate the Radeon group and CPU line was the right thing to do. Also, with Polaris around the corner and more games utilizing DirectX 12, AMD could improve their market share substantially. Although, from my experience, many users are prepared to deal with slightly worse performance just to invest in an NVIDIA product. Therefore, AMD has to encourage long-term NVIDIA fans to switch with reliable driver updates on a consistent basis. AMD products are not lacking in features or power; it’s all about the drivers! NVIDIA will always counteract AMD releases with products exhibiting similar performance numbers. In my personal opinion, AMD drivers are now on par with NVIDIA’s, and it’s a shame that they appear to be receiving unwarranted criticism. Don’t get me wrong, the fan bug is simply inexcusable and going to haunt AMD for some time. I predict that despite the company’s best efforts, the stereotypical view of AMD drivers will not subside. This is a crying shame, because they are trying to improve things and release updates on a significantly lower budget than their rivals.
As always, most of the focus on Polaris has been on the top-end chip, which has meant that much of the talk has centred on Polaris 10, the R9 390X/Fury replacement. Today, though, we’ve been treated to a leak of the mainstream Polaris chip, Polaris 11. Based on a CompuBench leak, we’re now getting a clearer picture of what Polaris 11 will look like as the Pitcairn replacement.
The specific Polaris 11 chip spotted features a total of 16 CUs, for 1024 GCN 4.0 stream processors. This puts it right where the 7850/R7 370 is right now. Given the efficiency gains from the move to GCN 4.0, though, performance should fall near the 7870 XT or R9 280. The move to 14nm FinFET also means the chip will be much smaller than Pitcairn currently is. Of course, this information is only for the 67FF SKU, so there may be a smaller or, more likely, a larger Polaris 11 in the works.
Other specifications have also been leaked, with a 1000MHz core clock. Memory speed came in at 7000MHz effective, with 4GB of VRAM over a 128-bit bus. This gives 112GB/s of bandwidth, a tad higher than the R7 370, before you consider the addition of delta colour compression technology. GCN 4.0 will also bring a number of other improvements to the rest of the GPU, most importantly FreeSync support, something Pitcairn lacks.
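Both leaked figures are easy to sanity-check, since GCN packs 64 stream processors into each compute unit, and GDDR5 bandwidth is simply the effective clock multiplied by the bus width. A quick back-of-the-envelope check (not an official spec sheet):

```python
# Sanity-check of the leaked Polaris 11 figures.
cus = 16
stream_processors = cus * 64             # GCN: 64 SPs per CU -> 1024, as reported

effective_clock_hz = 7000e6              # 7000MHz effective GDDR5 data rate
bus_width_bits = 128
# bandwidth = data rate * bus width, divided by 8 bits per byte
bandwidth_gbs = effective_clock_hz * bus_width_bits / 8 / 1e9

print(stream_processors, bandwidth_gbs)  # 1024 112.0
```

Both numbers line up with the CompuBench leak, which lends it some credibility.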
While we can’t guarantee the same SKU was used, Polaris 11 was the GPU AMD pitted against the GTX 950 back at CES. During the benchmark of Star Wars Battlefront, the AMD system drew only 84W compared to the Nvidia system’s 140W. For the many gamers who buy budget and mainstream cards, Polaris 11 is shaping up very well.
From the many leaks and rumours that have come out, the expected release of Pascal will come later this year at Computex. During the Taiwanese event, Nvidia will finally unveil the GTX 1000 lineup to the public. Today, we’re getting yet another report confirming this. In addition, Nvidia AiB partners like ASUS, Gigabyte and MSI will showcase their reference cards then as well. As revealed yesterday, mass shipments won’t begin till July.
As we’ve covered earlier, the sources appear to suggest that Nvidia will have a head start over AMD, launching Pascal ahead of Polaris. However, the lead might not amount to much. The report suggests that Nvidia won’t ship in large quantities until July. AMD, on the other hand, is also launching Polaris in June, the same month as Pascal. Given AMD’s previous history, we will probably see Polaris cards out by July as well. If Nvidia does have a lead, it won’t be for very long.
Right now, there is no word yet on whether GDDR5X will be utilized for top-end Pascal chips. While there are some reports suggesting GDDR5X, the timeline is very tight, as GDDR5X probably won’t reach sufficient capacity until May/June at the earliest. Perhaps this is why we won’t be seeing Pascal or Polaris in numbers until July.
Yesterday, we reported on AMD’s plans to supposedly launch their next graphics architecture, codenamed ‘Polaris’, in June. The details surrounding NVIDIA’s consumer-focused response with Pascal were unknown, but it seemed likely the range would arrive at a similar date. According to new information sourced by Digitimes, Pascal will be unveiled during Computex and enter mass shipments in July:
“The graphics card players will begin mass shipping their Pascal graphics cards in July and they expect the new-generation graphics card to increase their shipments and profits in the third quarter.”
Interestingly, the report claims that AMD might only unveil the Polaris range in June, and shipments could occur at a later date. Apparently, NVIDIA will have the advantage and be the first company to release their new line-up to the public:
“Meanwhile, AMD has prepared Polaris-based GPUs to compete against Nvidia’s Pascal; however, the GPUs will be released later than Nvidia’s Pascal and therefore the graphics card players’ third-quarter performance will mainly be driven by demand for their Nvidia products.”
Thankfully, both graphics card manufacturers look set to release brand new products and I cannot wait to see the performance improvements and pricing across various tiers. AMD appears to be focussing on performance per watt on Polaris and the demonstrations thus far have been impressive. Sadly, we haven’t really seen anything from NVIDIA’s new consumer line-up, so it will be fascinating when the samples finally arrive. It’s still unknown which products will opt for HBM2, if any. It’s clear that the majority of models from both companies are set to utilize GDDR5X. While this isn’t a patch on HBM2, it’s a good improvement from the older GDDR5 standard.
Recently, there were some murmurings about NVIDIA delaying mainstream Pascal until 2017. This doesn’t look to be the case at all; if anything, reports suggest they will be the first to market.
Asynchronous Compute has been one of the headline features of DX12. Pioneered by AMD in their GCN architecture, Async Compute allows a GPU to handle both graphics and compute tasks at the same time, making the best use of resources. Some titles such as Ashes of the Singularity have posted massive gains from this, and even titles with a DX11 heritage stand to see decent gains. In an update to Async Compute, AMD has added Quick Response Queue support to GCN 1.1 and later.
One of the problems with Async Compute is that it is relatively simple. It only allows graphics and compute tasks to run at the same time on the shaders, and as it stands it will prioritize graphics tasks, meaning compute tasks only get the leftover resources. This means there is no guarantee of when a compute task will finish, as it depends on the graphics workload. Quick Response Queue solves this by merging preemption, where the graphics workload is stopped entirely, with Async Compute.
With Quick Response Queue, tasks can be given special priority to ensure they complete on time, while the graphics task continues to run at the same time, albeit with reduced resources. By providing more precise and dependable control, this allows developers to make better use of Async Compute, especially for latency- or frame-sensitive compute tasks. Going forward, we may see greater gains from Async in games as AMD allows more types of compute workloads to be optimized. Hopefully, this feature will reach GCN 1.0 cards, but that depends on whether the hardware is capable of it.
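To make the difference between the three behaviours concrete, here is a deliberately simplified toy model. The unit counts, the reserved slice, and the mode names are all made up for illustration; real GCN scheduling happens in the hardware command processors and is far more involved:

```python
TOTAL_UNITS = 64  # pretend the GPU has 64 shader units to hand out

def allocate(graphics_demand, compute_mode):
    """Split shader units between one graphics task and one compute task.

    compute_mode: 'async'          -> compute only gets the leftover units
                  'preemption'     -> graphics is halted, compute gets everything
                  'quick_response' -> compute gets a guaranteed slice while
                                      graphics keeps running on the remainder
    """
    if compute_mode == "async":
        graphics = min(graphics_demand, TOTAL_UNITS)
        return {"graphics": graphics, "compute": TOTAL_UNITS - graphics}
    if compute_mode == "preemption":
        return {"graphics": 0, "compute": TOTAL_UNITS}
    if compute_mode == "quick_response":
        reserved = TOTAL_UNITS // 4  # hypothetical guaranteed slice
        return {"graphics": min(graphics_demand, TOTAL_UNITS - reserved),
                "compute": max(reserved, TOTAL_UNITS - graphics_demand)}
    raise ValueError(compute_mode)

# A heavy frame (wanting 60 of the 64 units) starves plain async compute...
print(allocate(60, "async"))           # {'graphics': 60, 'compute': 4}
# ...while Quick Response Queue guarantees the compute task its slice,
# and graphics carries on with reduced resources instead of being halted.
print(allocate(60, "quick_response"))  # {'graphics': 48, 'compute': 16}
```

The key property the sketch captures is that the quick-response task’s share no longer depends on how greedy the graphics workload is, which is exactly what a latency-sensitive compute job needs.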
These days, when a company releases a game, there tend to be two streams of development going on. Internally, the company looks to bring out expansions, DLC and fixes for the game, while externally, everyday people look to create a whole new experience by modding the game to open up a whole new world for the player. One such mod looks to bring your GTA V experience to a whole new level: the GTA Redux mod.
The Grand Theft Auto V Redux mod is the successor to the “Pinnacle of V” mod, designed to bring the game to a whole new level of graphics with a variety of new and updated features.
Looking for 4K resolution? They’ve got it. How about updated explosion graphics? Got it. Smoke and particle effects? Got it. The mod even adds new debris textures, making your gameplay experience even closer to that of a movie than an actual game. Redux will include newly worked physics and explosions alongside an overhauled weapon system, meaning that not only does the game look better it will play better as well.
Anyone else excited by this? The mod is made by Josh Romito, and since launching his website for the mod he’s received 987 requests for tweaks, ranging from something as fundamental as an overhaul of the physics engine to re-branding every vehicle and billboard advert found in the game. With so many features being brought to the game, it’s only a matter of time before the mod is released and we lose ourselves in the highly detailed world of Los Santos again.
One of the inevitable signs of an imminent release of new products is when the old model starts becoming hard to find. A seamless transition to the new version is a mark of good logistics and something Nvidia is known for. In line with the expectations for Pascal, Nvidia has reportedly stopped shipping GTX 980Ti’s to their AiB partners, which indicates that Nvidia is winding down the supply chain for the high-end card.
A stop in GTX 980 Ti production points to a Pascal chip coming soon to replace it. Usually, shipments to stores are ahead by a month, and production a month or so before that. If Nvidia stops supplies now, there will still be about 2-3 months before supplies run low. This puts the timeframe smack dab during Computex, where Pascal is expected to be launched. It seems like perfect timing for GTX 980 Ti supply to dry up just as Pascal launches and becomes available.
Given the move to 16nm FinFET+, we can expect the GTX 1080 to at the very least match the GTX 980 Ti. With a replacement product incoming, it makes sense for the GTX 980 Ti to cease production now. For now, it seems that Nvidia hasn’t started supplying their partners with Pascal just yet, but that should happen shortly if Pascal is to arrive at Computex. The leaked shrouds suggest that the AiB partners have already tooled up in expectation of Pascal. Of course, this is still an unconfirmed report, released on April 1st to boot, so take it with a fistful of salt.
Looking back, AMD missed a big opportunity to get into the mobile phone and tablet market. According to Raja Koduri, SVP for RTG, AMD may be contemplating getting back into the mobile graphics market, provided the circumstances are right.
Imageon, originally ATI’s mobile graphics division, was acquired by AMD along with its parent company. After running into severe financial hardship, AMD decided to sell the mobile division off to Qualcomm, which renamed it Adreno, an anagram of Radeon. Combined with their custom ARM CPUs, Qualcomm has managed to become the largest mobile SoC vendor, putting Adreno into millions of devices. The only other major competitors are Imagination’s PowerVR and ARM’s own Mali.
By considering the mobile GPU market if the right customer comes by, AMD is opening yet another market for them to enter. Right now, Adreno is still largely based on the VLIW architecture that ATI and AMD left in 2011. GCN, on the other hand, is a more complex and advanced architecture with arguably better performance per watt. With the rise of GPU based compute being used in gaming, GCN may be a potent force in tablets.
Seeking more custom chip customers makes sense for AMD, given that their console deals are helping keep the firm afloat as other sources of revenue are dropping. There is a large measure of risk, however, as Nvidia has demonstrated with their flagging Tegra line-up. By securing a customer first, AMD can pass on the risk and run a much safer course. Perhaps the next PSP or DS will be running GCN.
Dark Souls III is set to be another lesson in patience, putting you, your controller/keyboard/mouse and your temper to the test in another hard as nails adventure. Aside from the inevitable “You Are Dead” type messages every other minute throughout the game, we now have some idea of how hard the game is going to be on your system, as screenshots of the advanced graphics settings have surfaced online.
We already know that the PC version will be 60fps capable, not limited to 30fps as first rumoured, so that’s a good start. However, unlike the game’s console counterparts, you’ll also find a range of advanced options for tweaking in-game texture quality, shadows, depth of field, motion blur, lighting quality, effects, reflections, water, shaders and AA. That’s a lot of options, although all of them are still fairly common for a PC game. This means that you should be able to max things out and get a real visual treat over the console versions of the game, we hope, but it’s also good news for those of you with less powerful systems, as you can dial a lot of the settings down to keep in-game performance running smoothly.
Are you looking forward to Dark Souls III? Have you even completed the last two entries in the series yet?
The Division is already winning praise from gamers around the world, not only for being an Ubisoft game that apparently doesn’t suck but for also being a visual tour de force. Of course, when it comes to being a PC gamer, we all know that no matter how great a game is when it launches, the dedicated efforts of the modding community can always do better.
SweetFX is one of the most popular mods around, helping push even more detail in many popular games by enhancing everything from colour reproduction to post processing effects. The team at DSO Gaming used the Soetdjur 4K SweetFX mod and got snap happy in their latest screenshot gallery and the end results are certainly spectacular.
Of course, not everyone has got a system meaty enough to push this game at 4K resolutions, but you’ll still see great benefits to the visuals at other resolutions, as there are many other SweetFX profiles out there for you to download, or you can even create your own! Either way, after seeing these screenshots, you’ll never want to play The Division without SweetFX enabled ever again!
Check out the gallery below and let us know what you think in the comments!
Traditionally, if you wanted to play video games on a PC, you needed a full-sized case with all the components inside taking up space. In recent years, the craze for small gaming rigs has seen gaming laptops catch up in the level of power they offer for your daily gaming needs. This week, though, a new competitor may be entering the market, with Intel’s new Skull Canyon NUC preparing to take on gamers’ needs.
Typically, when dealing with NUCs (Next Unit of Computing, Intel’s term for its mini-PCs), you were offered limited power in exchange for a pocket-sized PC, but the Skull Canyon NUC looks to break this mould by offering an i7-6770HQ.
The difference between this and a traditional i7 or i5 is in the integrated graphics: most units contain 24 execution units, while the i7-6770HQ offers a whole new level with 72. The Skull Canyon will also support Razer Core, Razer’s latest creation that allows for external graphics card set-ups.
Supporting Thunderbolt 3, USB 3.1 and even HDMI 2.0, you could enjoy streaming 4K content with ease before switching over to your games.
In regards to price, you should be able to find one on the market as soon as May for $650 (approximately £450), though you will need to provide your own RAM, SSD and operating system. With the overall price for a system with 16GB of RAM and a 256GB SSD coming in under $1,000, miniature PCs could soon be showing up at gaming tournaments.
The Far Cry franchise gained renown for its impeccable graphical fidelity and enthralling open-world environments. As a result, each release is incredibly useful for gauging the current state of graphics hardware and performance across various resolutions. However, Ubisoft’s reputation has suffered in recent years due to poor optimization on major titles such as Assassin’s Creed: Unity and Watch Dogs. This means it’s essential to analyze the PC version in a technical manner and see if it’s really worth supporting with your hard-earned cash!
Far Cry Primal utilizes the Dunia Engine 2, which was deployed on Far Cry 3 and Far Cry 4. Therefore, I'm not expecting anything revolutionary compared to the previous games. This isn't necessarily a negative though, because the amount of detail is staggering and worthy of recognition. That said, Far Cry 4 was plagued by intermittent hitching and I really hope this has been resolved. Unlike Far Cry 3: Blood Dragon, the latest entry has a retail price of $60. According to Ubisoft, this is warranted due to the lengthy campaign and amount of content on offer. Given Ubisoft's turbulent history with recent releases, it will be fascinating to see how each GPU of this generation fares and which brand the game favours at various resolutions.
“Far Cry Primal is an action-adventure video game developed and published by Ubisoft. It was released for the PlayStation 4 and Xbox One on February 23, 2016, and it was also released for Microsoft Windows on March 1, 2016. The game is set in the Stone Age, and revolves around the story of Takkar, who starts off as an unarmed hunter and rises to become the leader of a tribe.” From Wikipedia.
In the days since AMD first demoed Polaris 10 to us at Capsaicin, more details about the upcoming graphics cards have been revealed. Set to be the big brother to the smaller Polaris 11, the better-performing chip will drop sometime after June this year.
First off, we're now able to bring you more information about the settings Hitman was running at during the demo. At Ultra settings and 1440p, Polaris 10 was able to hold a constant 60FPS, with VSync being possible. This means the minimum FPS did not drop below 60 at any point. This puts the card at least above the R9 390X and on par with, if not better than, the Fury and Fury X. Of course, the demo was done with DX12, but the boost is only about 10% in Hitman.
Another detail we have uncovered is the maximum length of the engineering sample. Based on the Cooler Master Elite 110 case used, the maximum card length is 210mm or 8.3 inches. In comparison, the Nano is 6 inches and the Fury X 7.64 inches. Given the small size, one can expect Polaris 10 to be as power efficient as Polaris 11 and potentially be using HBM. Given that Vega will be the cards to debut HBM2, Polaris 10 may be limited to 4GB of VRAM. Finally, display connectivity is provided by 3x DP 1.3, 1x HDMI 2.0 and 1 DVI-D Dual Link though OEMs may change this come launch unless AMD locks it down.
After Samsung and Nvidia had their recent legal spat, more light has been shed on the world of GPU patents and licensing. While Intel holds their own wealth of patents, no doubt some concerning GPUs, Nvidia and AMD, being GPU firms, also hold more important patents as well. With Intel’s cross-licensing deal with Nvidia set to expire in Q1 2017, the chip giant is reportedly in negotiations with AMD to strike up a patent deal.
Being one of the big two GPU designers, AMD probably has many important and critical GPU patents. Add in their experience with APUs and iGPUs, and there is probably quite a lot there that Intel needs. With the Nvidia deal expiring, Intel probably sees a chance to get a better deal while picking up some new patents as well. Approaching AMD also makes sense: as the smaller of the two GPU makers, AMD may be willing to share their patents for less. It's also a way for Intel to inject some cash into AMD and keep it afloat, helping stave off anti-trust scrutiny.
AMD also has a lot to offer with the upcoming generation. The GPU designer's GCN architecture is ahead of Nvidia's when it comes to DX12 and Asynchronous Compute, and that could be one area Intel is looking towards. Intel may also be forced into cross-licensing by the simple fact that, with so many patents out there, there have to be some they are violating. The biggest question will be whether AMD will consider allowing their more important and revolutionary patents to be licensed.
With the Nvidia deal being worth $66 million a quarter or $264 million a year, AMD has the chance to squeeze out a good amount of cash from Intel. Even though $264 million wouldn’t have been enough to put AMD in the black for 2015, it wouldn’t have hurt to have the extra cash.
2016 may well go down as the year VR finally takes off for real. Sony and Microsoft have both been making progress towards VR and augmented reality, while Oculus and HTC are set to launch the Rift and Vive respectively. Given the efforts and lengths AMD has gone to in pushing VR, it should come as no surprise that a report has revealed the company has a massive 83% lead in providing the hardware for VR-capable systems.
Hardware-wise, the lead over Nvidia is not surprising. While PC hardware is a large segment of the VR market, only higher-end systems are capable of producing the frames necessary for VR at 90fps with enough resolution for both eyes. The PS4 is also a viable candidate for VR adoption, and with the APU inside it coming from AMD, Nvidia stands no chance in terms of sheer hardware market share for VR.
As noted many times during the Capsaicin event, AMD has been working with many developers in both gaming and other forms of media with LiquidVR and GPUOpen. AMD has also been at the forefront with developments like VR cafes and partnering with Oculus and HTC to ensure that the Rift and Vive work seamlessly with Radeon. There is even a Radeon VR Ready Premium program to ensure consumers are informed.
With the VR market still in its growing stages, AMD has seen an opportunity to get in before its competitors have a chance and secure a bastion of developer support and integration. Considering the price of VR-capable hardware, AMD stands a good chance to reap a windfall when VR takes off. This can only bode well for AMD, as for once they are ahead and hopefully will be able to leverage their position to help the rest of their business grow.
Gears of War: Ultimate Edition has gotten off to a shaky start on PC, partly due to erratic performance on a wide range of systems, causing frame rate drops and other glitches; basically the same as every other game launch these days. Then there's the fact I've yet to meet anyone who actually bought it, so there's some food for thought.
So what's this game really got to offer anyway? Well, it's got massively improved graphics vs the original release, and that's certainly no bad thing for those who have yet to play the original, or even just for those keen to revisit the classic first entry in the franchise. PC gaming is known for being able to push great graphics, but how do the graphics settings fare when they're dialled all the way down to their lowest, vs how they look at their maximum? That's what these screenshots aim to capture.
The images are ordered max/low, max/low, etc throughout.
Some fairly mixed results here. On minimum, the game somehow manages to have worse textures than the original, which ran great on modern hardware with high frame rates; even a mid-range GPU can push it to 1440P and 4K without breaking a sweat. The new one, however, seems really muddy to me. Admittedly, the maximum images do look great, but they don't seem to change much more than a few textures; the lighting, decals, detailing and more seem fairly locked down in these images.
Overall, it's still a great improvement on maximum over the original game, but perhaps not the big upgrade people were expecting; nor does the PC version offer any visual advantages over the Xbox One release beyond higher frame rates, and with the current state of the engine, those aren't guaranteed either.
The problem you often find when you play any game with items is that you end up torn between looking good and actually being good. Some of the best-looking items I've ever found in games are mid-tier, meaning that while they look awesome, playing with them actually makes me weaker than I could or should be. This is a common problem in games, but Diablo III has you covered if you feel like you need a little more style, thanks to its new cosmetics patch.
Typically, without buying the collector's edition (something that is rather hard to find these days), you were limited in how amazing you could look thanks to the limited pets and wings available for your characters.
The latest patch (2.4.1) is a small one by many standards, but will add a wide range of graphical choices for your characters, from the Wings of Kokabiel making you look like a fairy demon to the Star Helm and Pauldrons calling you to battle in space.
Below you can find just a few of the new items and pets that you will be now able to quest for and upon receiving, look like a true warrior.
The lovely pet Galthrak
Cow King pet
Star Helm and Pauldrons
Wings Of Kokabiel
When you play a game, do you look for style, or are you one for the stats the items bring? Is there a way to find a balance? Give us your opinion on whether stats or looks are more important in games.
Earlier we reported that Dark Souls 3, the latest in the series that's known for making grown men weep as they die time and time again, would be limited to 30 FPS on PC. This has now been overturned, with the series' official Twitter account posting that the latest game will run at 60 FPS on PC.
Frames per second is a huge issue for developers and gamers alike. Some claim anything above 30 FPS doesn't matter, while others go so far as to say that anything below 60 FPS makes them feel uneasy and unwell. When the director of Fallout 4 claimed the game would run at 30 FPS on "everything", the community took up arms, asking if this included PC players, who keep upgrading their graphics cards and feel they shouldn't be penalised for consoles' slow catch-up in graphical power.
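The gap between the two camps is easier to appreciate in frame times, i.e. the rendering budget each frame actually gets. A quick sketch (the 144 FPS entry is just an extra illustrative rate):

```python
# Frame rate vs. frame time: the difference between 30 and 60 FPS is
# clearer when expressed as the time each frame has to be rendered in.
def frame_time_ms(fps: float) -> float:
    """Milliseconds available to render one frame at a given frame rate."""
    return 1000.0 / fps

for fps in (30, 60, 144):
    print(f"{fps:>3} FPS -> {frame_time_ms(fps):.1f} ms per frame")
```

At 30 FPS a frame lingers on screen twice as long as at 60 FPS, which is exactly the sluggishness the 60 FPS camp complains about.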
From my experience, you can see the difference when playing a game, with consoles normally stuck at 30 FPS and PCs able to go well above 60 FPS. For a game as graphically refined as Dark Souls 3 promises to be, the last thing you want is a slow or stuttering feel as you fight for your life in a brutal world.
AMD have been pushing hard to improve their software experience, as well as improving the frequency of graphic driver updates, helping them better compete with the relentless release of Nvidia’s Game Ready drivers. So far, their new system has been a big improvement and today is no exception, as AMD push the release of the latest Radeon Software Crimson Edition.
Version 16.2.1 is marked as a “non-WHQL” release, so basically a beta release. However, the driver comes with the CrossFireX profile for the latest AAA release, Far Cry Primal. On top of that big profile release, you can also expect some game specific bug fixes for Fallout 4 as well as Rise of the Tomb Raider.
FreeSync users have cause to celebrate too, we hope, as AMD is also including a bug fix for choppy display on systems that use both FreeSync and CrossFire at the same time, which I’m sure you can imagine is a frustrating issue for a system that’s meant to provide a smoother visual experience.
Radeon Software Crimson Edition 16.2.1 Highlights
Crossfire Profile available for
Far Cry Primal
A black screen/TDR error may be encountered when booting a system with Intel + AMD graphics and an HDMI monitor connected
Choppy gameplay may be experienced when both AMD Freesync and AMD Crossfire are both enabled
Display corruption may be observed after keeping system idle for some time
Fallout 4 – Flickering may be experienced at various game locations with the v1.3 game update and with AMD Crossfire enabled
Fallout 4 – Foliage/water may ripple/stutter when game is launched in High/Ultra settings mode
Fallout 4 – Screen tearing in systems with both AMD Freesync and AMD Crossfire enabled if game is left idle for a short period of time
Fallout 4 – Thumbnails may flicker or disappear while scrolling the Perk levels page
Far Cry 4 – Stuttering may be observed when launching the game with AMD Freesync and AMD Crossfire enabled
FRTC options are displayed on some unsupported laptop configurations with Intel CPU’s and AMD GPU’s
Radeon Settings may sometimes fail to launch with a "Context Creation Error" message
Rise of the Tomb Raider – Corruption can be observed at some locations during gameplay
Rise of the Tomb Raider – Flickering may be experienced at various game locations when the game is left idle in AMD Crossfire mode under Windows 7
Rise of the Tomb Raider – Game may intermittently crash or hang when launched with very high settings and AA is set to SMAA at 4K resolution
Rise of the Tomb Raider – Lara Croft’s hair may flicker in some locations if the Esc key is pressed
Rise of the Tomb Raider – A TDR error may be observed with some AMD Radeon 300 Series products after launching the "Geothermal Valley" mission
The AMD Overdrive memory clock slider does not show original clock values if memory speeds are overclocked
World of Warcraft runs extremely slowly in quad crossfire at high resolutions
A few game titles may fail to launch or crash if the Gaming Evolved overlay is enabled. A temporary workaround is to disable the AMD Gaming Evolved “In Game Overlay”
Star Wars: Battlefront – Corrupted ground textures may be observed in the Survival of Hoth mission
Cannot enable AMD Crossfire with some dual GPU AMD Radeon HD 59xx and HD 79xx series products
Fallout 4 – In game stutter may be experienced if the game is launched with AMD Crossfire enabled
XCOM 2 – Flickering textures may be experienced at various game locations
Rise of the Tomb Raider – The game may randomly crash on launch if Tessellation is enabled
Core clocks may not maintain sustained clock speeds resulting in choppy performance and or screen corruption
Gaming is a lot of fun, and small computer systems are a joy because they're easier to carry around, but what if you want graphics performance on demand? For that, we need proper external graphics card support, and it looks like AMD is working on just that, again.
We have seen multiple external graphics card solutions over time, such as Alienware's Graphics Amplifier or MSI's Gaming Dock, which solve the lack of graphics power in portable systems without increasing the size and weight of the ultrabook or laptop itself. However, they've been limited to specific devices, and that can hold the adoption rate back.
AMD's Robert Hallock teased us with something new via Facebook, indicating that AMD could be working on just such a solution. This isn't the first time AMD's graphics department has attempted something like this, back when they were still called ATI. Some might remember the ATI XGP (eXternal Graphics Platform) from seven years ago, which attempted the same thing but never gained much traction due to its inherent limitations.
Ultrathin notebooks are awesome for their portability, but nobody in their right mind would use them for gaming. Gaming notebooks, meanwhile, are heavy items that, while still portable, weigh as much as a stationary mITX system. This is where external graphics solutions come into play: we get the same graphics performance from a small portable system when needed, while keeping it light enough to take everywhere for normal usage.
“External GPUs are the answer. External GPUs with standardized connectors, cables, drivers, plug’n’play, OS support, etc.”
AMD is a big believer in open and free standards, and the way everything is worded points in the same direction, rather than towards a single locked-down product with AMD branding. Standardisation could make a huge difference here, as long as manufacturers adopt it and bring it to market. Oh, and just to clarify, none of the external graphics solutions shown here is the new deal. Robert himself teased a Razer Core enclosure equipped with an AMD Radeon Nano card.
The teaser ends with the words “More info very soon”, maybe GDC? Only time will tell, but we’ll stay on the ball and keep you informed.
NVIDIA's upcoming architecture, codenamed 'Pascal', is based on TSMC's 16nm manufacturing process and utilizes the latest iteration of high-bandwidth memory. HBM2 will feature bandwidth exceeding 1 Terabyte per second and ship in larger capacities compared to HBM1. Some of you might remember that HBM1 was a major talking point on AMD's Fiji XT line-up, but the memory had a limit of 4GB. Thankfully, HBM2 resolves this and could potentially allow for GPUs sporting a whopping 32GB. While this isn't confirmed, rumours suggest that NVIDIA could launch a flagship with a huge amount of video memory. However, I personally think a 32GB graphics card will target professionals requiring huge compute power.
It's important to remember that HBM is up to 9 times faster than the current GDDR5 standard and a huge revolution in memory technology. Pascal will also be the first architecture to utilize NVLink. This is an energy-efficient, high-bandwidth communications channel that uses up to three times less energy to move data on the node, at speeds 5–12 times that of conventional PCIe Gen3 x16. In theory, this enables faster communication between the CPU and GPU, as well as between multiple graphics cards. This is only a very brief insight into the potential of Pascal, and it could be the biggest step forward in graphics technology we've seen in a long time.
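To put those multipliers in context, here's a rough back-of-envelope calculation. The baseline figures (PCIe 3.0 x16 throughput per direction, and a typical 384-bit GDDR5 flagship's bandwidth) are my own illustrative assumptions, not numbers from NVIDIA:

```python
# Back-of-envelope numbers behind the bandwidth claims above.
# Baselines are illustrative assumptions, not official figures.
PCIE3_X16_GBPS = 15.75       # ~GB/s per direction for PCIe 3.0 x16
GDDR5_FLAGSHIP_GBPS = 336.0  # e.g. a 384-bit card with 7 Gbps GDDR5

# "HBM2 will feature bandwidth exceeding 1 Terabyte per second"
hbm2_gbps = 1000.0
print(f"HBM2 vs a GDDR5 flagship: ~{hbm2_gbps / GDDR5_FLAGSHIP_GBPS:.1f}x")

# NVLink at "5-12 times conventional PCIe Gen3 x16"
for factor in (5, 12):
    print(f"NVLink at {factor}x PCIe 3.0 x16: ~{factor * PCIE3_X16_GBPS:.0f} GB/s")
```

Even at the low end, that's an interconnect several times wider than the pipe today's multi-GPU setups squeeze through.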
Recent information from Zauba.com shows a shipment of Pascal graphics cards which suggests the launch date isn’t too far off. However, according to the Tech Times, Pascal might not launch until June at the earliest! The general consensus is the launch will occur sometime in Q2 this year. With AMD and NVIDIA preparing new architectures, it’s a very exciting time and I cannot wait to see the kind of performance gains compared to the previous generation. Hopefully, the new range offers superb performance at reasonable prices, and AMD keeps NVIDIA in check to drive costs down.
The Gears of War franchise is one of Microsoft’s most successful exclusives and continues to attract a very passionate fan base. Instead of focussing on an emotional narrative, the Gears of War series prioritizes fun, chaotic gameplay and creates a very memorable experience. Originally, the first entry launched on the Xbox 360 and received critical acclaim for its dark, atmospheric setting. Additionally, the graphical quality was absolutely sensational for the time, and a real showcase of the Xbox 360’s capabilities. Unfortunately, the previous title, Gears of War: Judgement felt a little uninspired and didn’t really offer anything new. There’s clearly been some stagnation in the franchise, but this is quite a common notion across almost every major game series. After a fairly long hiatus, Gears of War 4 is heading to the Xbox One and looks very impressive.
“Delivering [with Ultimate Edition] the first 60fps multiplayer experience in franchise history really taught us a lot about what it means to have a 60fps culture on the team and we’re leveraging that experience for Gears of War 4.”
“Like how the original Gears of War was a visual showcase for the Xbox 360, Gears of War 4 will be a graphical showcase for the Xbox One.”
Microsoft's latest console has been heavily criticized for its technical limitations and consistently struggles to match the PlayStation 4's performance in multi-platform releases. The company's initial entertainment focus and poor specification limits the amount of power developers have at their disposal. As a result, it's extremely difficult to attain 60 frames-per-second at 1080P and concessions usually have to be made. It's not uncommon for some Xbox One titles to run at 900P or even less! Hopefully Gears of War 4 helps restore the console's reputation and showcases the benefits of impressive optimization. However, I'm quite sceptical because it's impossible to ignore the console's very limited hardware.
Oh Gamespot, how you tickle me so. I must admit, I've nothing against the people at Gamespot; I visit their site every now and then for a bit of casual gaming news, as I do many parts of the web. However, this week they and a few others out there may have gone a little too casual by demonstrating something I find a little frustrating: they seem to think the graphics settings of the PC and console versions of The Division have more in common than they actually do.
Now, I'm all for additional graphics settings, even in console games. However, for a console-focused site, brainwashing console fans with this garbage does nothing to help the whole "peasant" and "PC master race" debate, as we'll have people saying it has "PC graphics" all over again, when it simply does not.
So here's what we've got in the graphics settings. Chromatic aberration: something I personally always turn off in a PC game anyway, as it provides no pleasant visual benefit as far as I'm concerned; it just blurs colours to simulate a camera lens. Then we've got Sharpen Image, which, as many of you will know, either makes things look blurry or jagged. Sharpen has little benefit for most, though it can help on some poor-quality displays, more often than not older TVs, so it's not the worst thing to have, but it's hardly "PC-like visual settings". I've seen some sites claiming this allows you to adjust the AA setting, but it does not; it's a much simpler upscale/downscale visual effect and even causes halo artefacts on nearby objects when maximised. I don't remember my graphics getting worse when I maxed AA.
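For anyone curious what a sharpen filter actually does under the hood, here's a minimal sketch. It's a generic 3x3 sharpen kernel (an assumption for illustration, not The Division's actual post-process), and it shows why edges come out exaggerated and "jagged":

```python
import numpy as np

# A generic 3x3 sharpen kernel: boost the centre pixel, subtract the
# neighbours. Illustrative only, not the game's real filter.
SHARPEN_KERNEL = np.array([
    [ 0, -1,  0],
    [-1,  5, -1],
    [ 0, -1,  0],
], dtype=float)

def sharpen(img: np.ndarray) -> np.ndarray:
    """Apply the sharpen kernel to a 2D grayscale image (borders untouched)."""
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            region = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.clip((region * SHARPEN_KERNEL).sum(), 0.0, 1.0)
    return out

# A soft edge between a dark region (0.0) and a bright one (0.8):
img = np.zeros((5, 5))
img[:, 3:] = 0.8
print(sharpen(img))
```

Running it, the bright side of the edge gets pushed up to full white while the dark side is pulled down, so the boundary pops much harder than in the source; that overshoot is exactly the haloing and jaggedness described above.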
So what should some PC-like graphics settings look like? Full antialiasing, particle detail, wind-affected snow, volumetric fog, reflection quality, sub-surface scattering, anisotropic filtering, and that’s just the tip of the iceberg. Just look at the graphics tweaking guide released today for Rise of the Tomb Raider for another example.
I’m not ripping on consoles, I honestly am not, but I would like to see a gaming community that is better educated on what options they’re actually being sold. Do you think we’ll ever see real PC like graphics tweaks on consoles, or do you think that’s a realm that will forever stay with PC gaming?
The Division is looking great on consoles and PC already and it’s certainly a lot of fun, have you been playing it this week? Let us know what you think in the comments section below.
Rise of the Tomb Raider is off to a flying start on the PC. The game's graphics are jaw-dropping, although, given the graphical powerhouse that was the last game, that's hardly surprising. Pushing the game to its limits is proving quite easy thanks to a well-optimised game engine, and Nvidia has already released a GPU performance tweaking guide, ensuring you're getting the best graphics and frame rates you can on Lara's latest adventure.
The only major issue so far, as is often the case with modern game releases, is that SLI isn't exactly working as we had hoped. If anything, SLI performance is a complete and utter mess in Rise of the Tomb Raider, at least for most users. Thankfully, 'Blaire' from 3D Center has found a few simple tweaks that not only get SLI back up and running as it should, but have even been reported to deliver 95% scaling between two Nvidia GPUs. I would test it right now, but I'm running a pair of AMD cards, so that may be a bit of an issue, although reports are already showing similar results.
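In case the jargon is new to you, "95% scaling" means the second GPU contributes 95% of a single card's performance on top of the first. A quick sketch of what that works out to (the 60 fps baseline is just an example figure, not a benchmark result):

```python
# "Scaling" in SLI: the second identical GPU adds scaling x 100% of a
# single card's performance on top of the first.
def sli_fps(single_gpu_fps: float, scaling: float) -> float:
    """Estimated frame rate for two identical GPUs at a given scaling factor."""
    return single_gpu_fps * (1 + scaling)

print(round(sli_fps(60, 0.95), 1))  # a 60 fps card pair lands near 117 fps
```

Compare that with a broken profile, where scaling can be zero or even negative, and it's clear why this tweak matters.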
To get it working, you need to create a custom profile using the Nvidia Inspector tool: search for the Rise of the Tomb Raider profile and change its SLI bits (DX11) to 0x080002F5. Once that's done, click the magnifier icon to reveal Nvidia's undefined options, then search for 0x00A0694B and set it to 0x00000001.
See, that wasn’t hard was it? If done correctly, which I’m sure you did, you should now have proper SLI support and see proper performance scaling. Now get back to your day/night of gaming and enjoy the full performance of your shiny gaming hardware.
Be sure to let us know in the comments if you got it working, or not!
Having trouble getting the performance or the best graphics out of the latest Tomb Raider game? If you’re running an Nvidia graphics card, you’ll be happy to hear that Nvidia has released a GPU performance guide, to help you get the most performance out of the game. Of course, most of this will be common knowledge to a lot of PC gamers, but not everyone out there is the graphics settings guru you or I may be.
Crystal Dynamics have done a great job on the graphics engine, and there’s little doubt that it’s the best looking Tomb Raider game to date.
Crystal Dynamics’ Foundation Engine returns for the sequel with upgrades galore. Physically Based Rendering gives materials a natural look under all conditions, HDR and adaptive tone mapping create stunningly bright zone transitions and highlights, deferred lighting with localized Global Illumination increases the realism of lighting, volumetric lighting adds God Rays and other shafts of light, dynamic color grading gives artists control over the appearance of individual areas, reactive water enables dynamic water ripples game-wide, physically correct forward lighting enables translucencies to be accurately lit, particle lighting enables particles to be dynamically lit by light from their surroundings, and Subsurface Scattering and Backscattering increases the quality of lighting on characters’ skin.
The visual effects in this game are pretty breathtaking, so long as you’ve got the settings right.
The guide is pretty extensive, so repeating all of it here would be pretty futile, to say the least. Nvidia has worked hard to bring you performance graphs for each setting, as well as side-by-side image comparisons to demonstrate what each effect is, how it looks, and what kind of performance impact you can expect on some of their most popular cards.
This is really handy for those of you who aren't sure what's what in the graphics settings, and while updated drivers have already been released, performance will likely improve a bit more with further tweaked drivers over the coming weeks.
Are you enjoying Rise of the Tomb Raider? Let us know in the comments section below.