Patriot Viper 4 DDR4 3200MHz 16GB (2x8GB) Dual Channel Memory Kit Review

Introduction


DDR4 memory kits are steadily superseding DDR3 DIMMs due to competitive pricing and the advent of Intel’s LGA1151 platform, which supports memory speeds in excess of 3200MHz. Furthermore, DDR4 modules require less voltage to remain stable despite the typical increase in memory bandwidth. Recently, professional overclocker Shamino set an astounding world record by overclocking G.Skill Ripjaws 4 memory to 4255MHz using a mere 1.3 volts. Clearly, this is an extreme case, and the majority of DDR4 kits available to consumers range between 2400MHz and 4000MHz. Furthermore, in gaming tasks the performance difference primarily comes down to your system’s graphics card and CPU. Nevertheless, it’s still important to select high-quality DIMMs to keep your PC perfectly stable and complement the other components.

The Patriot Viper series is synonymous with excellent memory speeds at an affordable price point. Here’s a brief description of the product directly from the manufacturer:

“Patriot Memory’s Viper 4 Series memory modules are designed with true performance in mind. Built for the latest Intel® Skylake processor utilizing the 100 series platform, the Viper 4 series provides the best performance and stability for the most demanding computer environments.

The Viper 4 series utilizes a custom designed high performance heat shield for superior heat dissipation to ensure rock solid performance even when using the most taxing applications. Built from the highest quality Build of Materials, Patriot Memory’s Viper 4 Series memory modules are hand tested and validated for system compatibility.

Available in dual kits, 8GB, 16GB and 32GB kits, Patriot Memory’s Viper 4 Series will be offered at speeds from 2400MHz up to 3600MHz and XMP 2.0 ready. Hand tested for quality assurance the Viper 4 series is backed by a lifetime warranty.”

As you can see, the latest version of the Viper range comes in a variety of capacities and memory speeds to suit a wide range of user requirements. Given the impressive 3200MHz speed, 16-16-16 timings and respectable voltage, I expect to see some superb numbers which legitimately rival the best dual channel kits we’ve tested!

Specifications

Packaging and Accessories

Patriot have adopted a clean, bold design for the memory’s packaging which makes it easy to read the key specifications while admiring the DIMMs’ colour scheme. Here we can see a visual rundown of the memory’s speed, capacity, XMP version and other essential statistics. Many kits on the market come in fairly plain blister packs which don’t exude a premium feel. In this case, the packaging draws you in and leaves a very positive initial impression.

On the rear section, there’s information about Patriot’s lifetime warranty, a brief synopsis of the product, and links to the company’s presence across various social media platforms.

A Closer Look

From an aesthetics standpoint, the DIMMs have a rather understated look and target the mainstream gaming audience. Any red and black heatspreader combination is going to be a popular choice, and the different shades combine quite nicely. Another striking touch is the contrast between the textured black finish and the matte section towards the PCB. I’m also quite fond of the sophisticated Viper logo and the small gap in the main heatspreader, which creates an impressive visual effect. Sadly, the green PCB is difficult to overlook and detracts from the attractive design. If a black PCB had been used instead, the memory would be the perfect choice for a high-end build. Despite these qualms, once the RAM is installed you’re not going to notice the PCB colour in an enclosed chassis.

Triple The Speed But US Still Lags In Internet Speed Comparisons

One of the most competitive things you can boast about is your internet speed. Downloading and watching the latest games and movies at the press of a button relies on a steady, speedy connection, something that few have. The Federal Communications Commission (FCC) has released a report that might upset some people, even though all it really says is that your internet is more than likely faster than it used to be.

The report states that between March 2011 and September 2014, average internet speeds within the US tripled, jumping from 10Mbps to 31Mbps. While this may sound impressive, in comparison to countries like Canada and Japan the US still ranked 25th out of 39 countries, as stated by a study in 2013.
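To put those averages in perspective, here’s a quick back-of-the-envelope calculation; the 50GB game download is just an illustrative assumption, not a figure from the report:

```python
def download_time_hours(size_gb, speed_mbps):
    """Hours to download size_gb gigabytes at speed_mbps megabits per second."""
    megabits = size_gb * 8 * 1000  # gigabytes -> megabits (decimal units)
    return megabits / speed_mbps / 3600

# A hypothetical 50GB game download:
print(round(download_time_hours(50, 10), 1))  # 2011 average: ~11.1 hours
print(round(download_time_hours(50, 31), 1))  # 2014 average: ~3.6 hours
```

Tripling the average speed cuts an overnight download to an afternoon, which is exactly why the video-heavy traffic trends below matter.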

While this all sounds impressive, you have to remember just how much extra bandwidth people are using on an everyday basis. With the FCC estimating that over 60% of internet traffic is video, expected to rise to over 80% by 2019, you may need that fiber optic broadband sooner than you think. Don’t forget that the next big thing is 4K; with such a data-consuming technology set (or at least many hope) to become the new standard, you could soon find today’s megabit speeds aren’t enough to catch up with your favourite shows or watch that new Netflix or Prime movie.

How to Stop Windows 10 From Sharing Your Bandwidth

The Windows 10 upgrade procedure is remarkably simple and easily reversed if you prefer an older operating system. To manage the bandwidth demands, Microsoft uses a peer-to-peer (P2P) system which allows your network to host update data for additional machines. However, a number of Reddit users discovered the P2P update delivery protocol extends to computers outside of your network and across the globe. Consequently, this can reduce your download and upload bandwidth as you seed the data to other Windows users.

Currently, there’s no substantial evidence which estimates the impact of the worldwide P2P delivery. It’s clear this has caused some concern and can be manually disabled via the following process:

1. Click the “Start Menu” and select the “Settings” tab.

2. In the window that opens, click the “Update & Security” sub-menu.

3. Navigate to the “Windows Update” option in the left side panel, and click “Advanced Options”.

4. Scroll to the bottom of the next menu and click “Choose how updates are delivered”.

5. Change the highlighted option to “PCs on my local network”. Doing so will disable sharing across the internet and restrict update delivery to your local network.

In real terms, I’m not entirely convinced the network sharing will have a major impact on the average user’s internet connection. However, some people may oppose the idea of Microsoft using their networks to manage update traffic instead of building out a greater networking infrastructure.

Thank you PCWorld for providing us with this information

ASUS Confirms DDR3 1.5-1.65v Works With Skylake

Budget shoppers looking for the best bang per buck will soon have more options opened up to them. According to ASUS, Skylake CPUs and their accompanying Z170 and other 100 series motherboards can support DDR3 running at 1.5 or 1.65 volts. There had been some confusion as to whether DDR3 would work at all, as Intel has only talked about DDR4 and DDR3L, which use 1.2v and 1.35v respectively.

For those on a budget, it means they will be able to reuse their DDR3 with the new lineup. With the right motherboard, budget users can avoid shelling out an extra $80+ for new DDR4 RAM. Given that DDR3 performs better with Skylake unless you have high-speed, low-latency DDR4, there isn’t much performance left on the table by not using the newer RAM. DDR4 also still costs a bit more than DDR3 for similar speeds, capacity and latency.

For those that might consider upgrading to DDR4 later and want a motherboard to support that, there are some options. The most notable is the Hi-Fi H170Z3 from Biostar, which has 2 DDR4 and 2 DDR3 slots. Given that DDR4 supports 16GB DIMMs, it’s possible to fit up to 16GB of DDR3 now and later upgrade to a maximum of 32GB of high-speed, low-latency DDR4 RAM. Hopefully, there will be more boards from other firms offering dual DDR3/DDR4 support.

Thank you Computerbase.de for providing us with this information

AMD Officially Announce Details of High Bandwidth Memory

We’ve been waiting for details on the new memory architecture from AMD for a while now, ever since we heard the possible specifications and performance of the new R9 390X, all thanks to the new High Bandwidth Memory (HBM) that will be utilised on that graphics card.

Last week, we had a chat with Joe Macri, Corporate Vice President at AMD, who has championed HBM since the original product proposal. Here is a little background information: HBM has been in development for around seven years and was the idea of a then-new AMD engineer. Even seven years ago, AMD knew that GDDR5 was not going to be an everlasting architecture and that something else needed to be devised.

The basis behind HBM is to use stacked memory modules to save footprint and to integrate them into the same package as the CPU/GPU itself. This way, the communication distance within a stack of modules is vastly reduced, and the distance between the stack and the CPU/GPU core is reduced again. With the reduced distances, bandwidth increases and the required power drops.

When you look at graphics cards such as the R9 290X with 8GB of RAM, the GPU core and surrounding memory modules can occupy a footprint around the size of a typical SSD, and then you also need all of the other components such as voltage regulators; this requires a huge card length to accommodate everything, and the communication distances are large.

The design process behind this, in theory, is very simple: decrease the size of the RAM footprint and get it as close to the CPU/GPU as possible. Take a single stack of HBM: each stack is currently only 1GB in capacity and only four ‘DRAM dies’ high. What makes this better than a conventional DRAM layout is the distance between the stack and the CPU/GPU die.

With the reduced distance, the bandwidth is greatly increased and also power is reduced as there is less distance to send information and fewer circuits to keep powered.

So what about performance figures? The per-pin data rate isn’t amazing, just 1Gbps compared to GDDR5’s much faster signalling, but the vastly wider interface shows just how powerful and refined the stacks are in comparison. Over three times the bandwidth per watt and a lower voltage; it’s ticking all the right boxes.
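The arithmetic behind that trade-off is simple: bandwidth is bus width times per-pin data rate. The GDDR5 figures below (a 512-bit card at 5Gbps per pin, roughly the R9 290X’s configuration) are assumptions for illustration:

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width times per-pin data rate."""
    return bus_width_bits * data_rate_gbps / 8

hbm_stack = bandwidth_gbs(1024, 1.0)   # one stack: 128 GB/s at just 1Gbps/pin
hbm_total = 4 * hbm_stack              # four stacks: 512 GB/s
gddr5_card = bandwidth_gbs(512, 5.0)   # 290X-class GDDR5 card: 320 GB/s
```

Despite the slow per-pin rate, the 1024-bit interface per stack is what lets four stacks outrun a whole GDDR5 card.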

There was an opportunity to ask a few questions towards the end, sadly only regarding HBM memory, so no confirmed GPU specifications.

Will HBM only be limited to 4GB due to only four stacks (1GB per stack)?

  • HBM v1 will be limited to just 4GB, but more stacks can be added.

Will HBM be added into APUs and CPUs?

  • There are thoughts on integrating HBM into AMD APUs and CPUs, but the current focus is on graphics cards.

With the current limitation being 4GB, will we see reduced performance in demanding games such as GTA V at 4K that require more than 4GB?

  • Current GDDR5 memory is wasteful, so despite the lower capacity, it will perform like higher-capacity DRAM.

Could we see a mix of HBM and GDDR5, rather like how an SSD and HDD work together?

  • Mixed memory subsystems are set to become a reality, but there’s nothing yet; the main goal is graphics cards.

I like the sound of this memory type; if it really delivers the performance stated, we could see some extremely powerful GPUs enter the market very soon. What are your thoughts on HBM memory? Do you think this will be the new memory format, or will GDDR5 reign supreme? Let us know in the comments.

AMD’s Official Roadmap Reveals the Company’s Plans for the Next 5 Years

AMD has revealed what the company plans to do with its GPUs and CPUs over the next 5 years at the PC Cluster Consortium event in Osaka, Japan, where AMD’s Junji Hayashi presented the company’s roadmap.

During the event, AMD focused on its graphics IP and the products built around it, including discrete Radeon graphics cards and Radeon-powered Accelerated Processing Units. There was also talk of AMD’s upcoming K12 ARM core as well as the x86 Zen CPU core, including a strategy for how the company plans to introduce both x86- and ARM-powered SoCs to the market on a pin-for-pin compatible platform code-named SkyBridge.

It is said that both CPUs are 64-bit parts built on a 14nm FinFET process, but one is based on the ARMv8 architecture while the other is based on the more traditional x86 AMD64 architecture, targeting the server, embedded, semi-custom and client markets.

AMD also talked about “many threads”, revealing that the K12 will come with Simultaneous Multi-Threading (SMT) technology, in contrast to the company’s Clustered Multi-Threading (CMT) technology seen in the Bulldozer family. SMT essentially takes advantage of underutilised resources within a core and dedicates them to an additional, slower execution thread for added throughput. In contrast, CMT looks for opportunities to share resources between two different CPU cores, rather than inside a single CPU core.

Hayashi also revealed AMD’s GPU roadmap, which shows the company employing a two-year cadence for updating the GPU architecture inside its APUs: Accelerated Processing Units with updated GPU architectures will arrive once every two years. The roadmap also reveals that AMD plans to introduce what it described as a High Performance Computing APU carrying a 200-300 watt TDP, with the company stating that the APU in question will excel in HPC applications.

AMD apparently did not attempt such an APU sooner because it was not viable in terms of memory bandwidth. Instead, the company’s stacked High Bandwidth Memory will be used as an alternative, making the design extremely effective. The second generation of HBM is said to be 9 times faster than GDDR5 memory and 128 times faster than DDR3 memory.

The company has not revealed any code names for the GPU architectures, but a previous leak pointed out that the architecture will debut on 16nm FinFET and will be code-named Arctic Islands. More specific details about AMD’s products will be revealed in May at the Financial Analyst Day event.

Thank you WCCF for providing us with this information

Official Titan X Specifications Leaked, Launch Set in 24 Hours

VideoCardz has just leaked the official specifications of NVIDIA’s GTX Titan X, and they match everything that has been leaked before.

As we all know, the Titan X houses the GM200 core, which is the successor of the GM204. One of the problems found in the GM204 was the lack of FP64 performance and if the GM200 is able to address that, we are looking at a worthy successor to the GK110-based original Titan chip.

There are two inconsistencies, however, one linked to the other. The memory clock was thought to be 2000 MHz (8000 MHz effective), but the table shows 1753 MHz. This in turn affects the memory bandwidth: a 2000 MHz memory clock would have given the card a 384 GB/s output, whereas only 336 GB/s results if the memory frequency turns out to be the one displayed in the table.
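The two figures are easy to reconcile: GDDR5 is quad-pumped, transferring four bits per pin per I/O clock, so bandwidth follows directly from the clock and the card’s 384-bit bus:

```python
def gddr5_bandwidth_gbs(io_clock_mhz, bus_width_bits):
    """GDDR5 is quad-pumped: 4 bits per pin per I/O clock. Result in GB/s."""
    return io_clock_mhz * 4 * bus_width_bits / 8 / 1000

print(gddr5_bandwidth_gbs(2000, 384))            # 384.0 -> the rumoured spec
print(round(gddr5_bandwidth_gbs(1753, 384), 1))  # 336.6 -> the leaked table
```

So the leaked 336 GB/s figure is exactly what a 1753 MHz memory clock implies; the two numbers in the table are internally consistent.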

Thank you WCCF and VideoCardz for providing us with this information

From A Router To The Cloud: How Gaming Companies Manage Online Bandwidth

The development of online streaming and multiplayer gaming has progressed rapidly in the past few years, which is great for gamers but has also meant that the hardware and software powering these advances has had to progress fast as well. Superfast broadband and souped-up fibre optic speeds have helped more and more people connect effortlessly with each other, and it is the providers who have felt the pressure of having to bump up their services to ensure that their customers get the very best from seamless streaming.

The rising popularity of massively multiplayer online role-playing games (MMORPGs) means that players can connect with millions of others from around the world and spend hours upon hours playing to their hearts’ content. This of course puts a terrific strain on the broadband connection, and if the rest of the family wants to hop on the web and do their own thing, they may have a few issues if you aren’t hooked up to a suitably speedy broadband service.

From the likes of StarCraft, to World of Warcraft, to online card rooms – which have millions of players every day – brands have to invest heavily to ensure that they can accommodate and function with such large volumes of players.

Coping With Volume

PokerStars, a site with over 50 million members, invests heavily in its platforms to deal with large surges of players. With over 700 hands dealt every second, and potentially almost half a million people seated at a time, bandwidth congestion could be a major issue. Yet at the PokerStars data centre, that issue is resolved with an incredible infrastructure similar to those in place at Google, Microsoft, and Amazon.

It’s almost like building an internet on top of an internet, with a sea of servers at their HQ in the Isle of Man keeping gameplay up to speed as well as avoiding the possibilities of a loss of connection whether it be via mobile or desktop.

Avoiding loss is of major importance to them. Alongside the servers that keep players connected and playing fluidly, the brand also has plenty of storage, which saves every hand played on the real money tables.

It isn’t just online casinos that have hugely advanced systems in place to keep gameplay free-flowing and users happy and communicating with each other in their multiplayer communities.

Gaming providers have to take the brunt of the millions of online gamers looking to piggy-back off their servers in order to play and compete in the colossal online arenas of MMORPGs. And let’s not forget game streaming services provided by the likes of Twitch.tv and Steam Broadcasting, which are becoming very popular amongst the gaming community. Twitch’s service in particular allows the live streaming of plenty of gaming-related content, including live coverage from some of the biggest esports tournaments taking place around the world. Twitch users can broadcast their own channels of gaming sessions, playthroughs, speedruns and more. Steam is now doing much the same and already has over 100 million users. With all this usage and live streaming, the impact on these providers’ servers is surely immense. But that is the power and the brilliance of cloud gaming.

Cloud gaming allows you to play high-quality games anywhere you have an internet or WiFi connection. Once you have the connection, you can tap into it with most modern devices effortlessly. Much like your music or podcast collection, you can access various games, new or saved, directly from the cloud’s library and play or continue your gameplay from whichever device you wish. What also makes this a popular choice for a lot of gamers is the storage they save on their tablet, smartphone or hard drive. There are no downloads, no installs and no need to constantly update games and your system with upgrades and patches.

This saves players a significant amount of storage and drive space on devices that are usually packed to the brim with apps and space-hungry data, leaving players struggling to squeeze in another game or two. There is no hardware needed and no overly complicated set-up involved to get going; you just log in and away you go. Now, what gamer in the world wouldn’t find that an attractive prospect?

A Little Closer To Home

Of course however, it’s important that a gamer has their own connection sorted too.

For avid gamers, the last thing you want is a monthly cap on the amount of bandwidth you use. There are providers out there with fixed restrictions on the bandwidth they offer: a certain amount is capped, and anything over is billed on top of your normal monthly rate. Some providers even go as far as to limit traffic at certain times of the day and offer unlimited broadband at others. These caps and unnecessary restrictions are certainly not ideal, especially if you’re in the middle of an intensive game of World of Warcraft or Call of Duty.

Typically there will always be particular periods of the day and of the week that are noticeably more traffic heavy than others. We often find a slight dip in broadband speeds at our offices when the schools empty out and most people return home after work. Characteristically, most people would be heading on to the internet on their computers or handheld devices in their downtime after a hard day’s work or during the weekends.

But periods such as Christmas will have another major impact, particularly for the online gaming industry. Christmas day and the week that follows is a particularly popular time for gamers to spend online. Most people have time off work during this period, not to mention it is the peak period for brand new and exciting games being gifted to each other – so it is the perfect time to try them out.

But with the likes of PokerStars having their ‘internet on top of the internet’, there are fewer issues, while World of Warcraft offers a host of different realms in which you can play, taking into account the current population, whether there is a queue, and the main language spoken there.

It’s a clever way to run what could otherwise be slow and lethargic gameplay. By including different rooms and realms, as well as investing millions to ensure that users get the best experience, coupled with a good connection at home from router to the cloud, we are enjoying the quickest and smoothest gameplay we’ve ever had.

Big brands will continue to push boundaries as demand for gaming soars. We want higher quality, high-definition titles, and with that come demands for faster CPUs and higher bandwidth. Brands with their hundreds upon hundreds of servers are constantly improving to make this happen; it’s just a question of whether we can get our own broadband connections up to date enough to keep up.

35% of All US Internet Traffic Comes from Netflix

A new report says that 35% of all US internet traffic, on average during peak hours, comes from Netflix. A study was conducted by a company called Sandvine, best known for building ISP equipment. They said that Netflix accounted for 35% of downstream traffic in peak hours during the second half of 2014.

Interestingly, YouTube only accounted for 14% of downstream traffic, but on mobile devices there was a different story, with YouTube topping out the scale at 20%, closely followed by Facebook with 19%.

The study also revealed that Netflix surprisingly comes second in upstream traffic, which is very high considering the service is all about downloading. BitTorrent came out on top at 25%.

The dominance of Netflix in internet traffic is yet another symbol of the website’s success and is perhaps also an example of how traditional television is facing ever growing competition from streaming websites.

Source: USA Today

Intel Reveals Details Regarding Intel’s “Knights Landing” Xeon Phi Coprocessor

Intel announced the ‘Knights Landing’ Xeon Phi coprocessor late last year, having released very few details about the lineup back then. As time passes, details are bound to be revealed, and Intel is said to start shipping the series next year. This is why Intel has apparently decided to reveal some more details regarding the ‘Knights Landing’ Xeon Phi coprocessor.

The announcement from last year points to Knights Landing making the jump from Intel’s enhanced Pentium P54C x86 cores to the more modern Silvermont x86 cores, significantly increasing single-threaded performance. Furthermore, the cores are said to incorporate AVX units, allowing AVX-512F operations and providing the bulk of Knights Landing’s compute power.

Intel is said to offer 72 cores in Knights Landing CPUs, with double-precision FP64 performance expected to reach 3 TFLOPS, and the CPUs built on 14nm technology. While this is somewhat old news, Intel revealed some more insights at ISC 2014.

During the conference, Intel stated that the company needed to replace the 512-bit GDDR5 memory interface present in the current Knights Corner series. This is why Intel and Micron have apparently struck a deal to work on a more advanced memory variant of the Hybrid Memory Cube (HMC) with increased bandwidth.

Also, Intel and Micron are said to be working on a Multi-Channel DRAM (MCDRAM) specially designed for Intel’s processors, with a custom interface best suited to Knights Landing. This is said to help scale its memory support up to 16 GB of RAM while offering up to 500 GB/s of memory bandwidth, a 50% increase compared to Knights Corner’s GDDR5.

The second change made to Knights Landing is said to include replacing the True Scale Fabric with Omni Scale Fabric in order to offer better performance compared to the current fabric solution. Though Intel is currently keeping this information on a down-low, traditional Xeon processors are said to benefit from this fabric change in the future as well.

Lastly, compared to Intel’s Knights Corner series, Knights Landing will be available in both PCIe and socketed form factors, mainly thanks to the MCDRAM technology. This is said to allow the CPU to be installed alongside Xeon processors on specific motherboards. The company has also emphasised that the socketed Knights Landing version will be able to communicate directly with other CPUs via the QuickPath Interconnect, rather than the current PCIe interface.

In addition, having Knights Landing socketed would also allow it to benefit from the Xeon’s NUMA capabilities, being able to share memory and memory spaces with the Xeon CPUs. Knights Landing is also said to be binary compatible with Haswell CPUs, with the company envisioning programs written once and run across both types of processor.

Intel is expected to start shipping the Knights Landing Xeon Phi coprocessor somewhere around Q2 2015, having already lined up its first Knights Landing supercomputer deal with the National Energy Research Scientific Computing Center, comprising around 9,300 Knights Landing nodes.

Thank you Anandtech for providing us with this information
Image courtesy of Anandtech

NVIDIA Launches Its $3000 GeForce GTX Titan Z

The latest news points to NVIDIA releasing its long-awaited GeForce GTX Titan Z graphics card, a true beast amongst graphics cards, boasting outstanding performance at an incredible price tag.

In terms of specifications, the NVIDIA GeForce GTX Titan Z is a dual-GPU graphics card based on the GK110 architecture, featuring 5760 CUDA cores, 480 Texture Mapping Units and 96 Raster Operation Units, with a total of 12 GB of GDDR5 memory across a 384-bit interface per GPU.

The Titan Z’s GPUs are clocked at 705/876 MHz, a 171 MHz difference between base and boost clock, while the memory is clocked at 7 GHz effective. This results in the graphics card having a total bandwidth of 672 GB/s.
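The quoted 672 GB/s checks out once you remember that each GPU drives its own 384-bit interface at the 7 GHz effective memory clock:

```python
# Per-GPU bandwidth in GB/s: effective memory clock (MHz) x bus width / 8 bits
per_gpu_gbs = 7000 * 384 / 8 / 1000   # 336.0 GB/s per GK110
total_gbs = 2 * per_gpu_gbs           # 672.0 GB/s across both GPUs
print(total_gbs)
```

As with most dual-GPU cards, the headline figure is simply the two memory subsystems added together rather than a single shared pool.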

The graphics card features a triple-slot design with a single fan in the center, while measuring the same as other high-end GeForce cards, specifically 4.38 x 10.5 inches. NVIDIA also states that the GeForce GTX Titan Z supports DirectX 12, although for now this amounts to a feature level exposed on top of current DirectX 11 hardware.

In terms of power consumption, the Titan Z draws its power from two 8-pin connectors, having its TDP measured at 375W, considerably less when compared to the 500W TDP from AMD’s R9 295X2.

NVIDIA was not kidding when talking about its price, which is set at $3000. More interestingly, manufacturers such as Gainward, Zotac, EVGA and others have started producing their own Titan Z variants. The real question is, who has the money to actually buy one?

Thank you VideoCardz for providing us with this information
Images courtesy of VideoCardz

BT And Alcatel-Lucent’s Fiber Optic Tests Reveal 1.4 Tb/s Speeds

BT and Alcatel-Lucent have reportedly been working on a way to address the internet bandwidth congestion which some parts of the UK are facing. In an experiment, the companies managed to achieve a speed of 1.4 Tb/s over an existing fiber optic connection using commercial grade hardware, the equivalent of transmitting 44 uncompressed HD films in one second.

The test was performed from the BT Tower in London to a research campus located 255 miles away. The team reportedly reached a spectral efficiency of 5.7 bits/second/Hertz as part of a “Flexgrid” infrastructure, which packs transmission channels more densely than the standard 50 GHz grid and allows a 42.5 percent gain in data transmission efficiency compared to common fiber optic networks. But the best part remains the means through which this was achieved: because the tests succeeded on the current fiber optic network, ISPs will be able to deploy the new system without the need for additional physical cables, drastically reducing costs.
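The reported figures fit together neatly if, as coverage of the test suggested, the link aggregated seven 200 Gb/s carriers packed into roughly 35 GHz of spectrum each instead of the conventional 50 GHz grid slots; note that the carrier count and spacing here are assumptions drawn from that coverage, not figures from BT’s own statement:

```python
carriers = 7
carrier_gbps = 200.0
spacing_ghz = 35.0                             # vs the conventional 50 GHz grid

total_tbps = carriers * carrier_gbps / 1000    # 1.4 Tb/s aggregate
efficiency = carrier_gbps / spacing_ghz        # ~5.7 bits/s/Hz
gain = efficiency / (carrier_gbps / 50.0) - 1  # ~43% over a 50 GHz grid
```

Squeezing the same carriers into narrower slots is the whole trick: the data rate per carrier is unchanged, but more of them fit into the fibre’s usable spectrum.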

Users on current copper broadband will not benefit directly, since the system requires a fiber optic connection, though it should improve available bandwidth for both broadband and fiber optic users further upstream. The breakthrough also opens some doors to the high-bandwidth internet service required for heavy traffic, such as streaming high quality audio and even 4K resolution videos in the future.

Thank you electronista for providing us with this information

Chrome Update To Reduce Data Consumption Up To 50% for iOS and Android Users

Google has announced that an upcoming update to Chrome for iOS and Chrome for Android will feature a way for you to cut your data usage by as much as 50%. According to a survey from Pew, 20% of U.S. adults do the majority of their surfing in a mobile browser. The only problem with depending on your mobile browser is the consumption of data when you’re not connected to a Wi-Fi network. If you’re depending on a 3G/4G connection from your mobile operator to hook up to the internet, you could find your monthly data allowance spent before you know it.

However, Google has reportedly found a way to deliver data more efficiently. Once you receive the update, you can use Chrome’s data compression and bandwidth management on your iOS or Android device, and cut your data usage in half. At the same time that you’re saving precious data, you are also protecting yourself against malicious websites by using Chrome’s Safe Browsing.

After the update, users can go to the Bandwidth Management section inside the application’s Settings and enable the Reduce Data Usage feature. Turn the toggle switch on and your browser will start saving data. This menu will also show you how much bandwidth you are saving each month by using Chrome.
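The savings come from routing traffic through Google’s servers, which transcode images and compress text before it reaches your phone. As a rough illustration of why text-heavy pages shrink so dramatically, here’s generic lossless compression applied to a repetitive HTML-like payload; the payload is made up, and this is a stand-in for (not a description of) Chrome’s actual proxy pipeline:

```python
import zlib

# A repetitive HTML-ish payload stands in for a typical web page.
page = b"<div class='item'><span>Hello, mobile web!</span></div>\n" * 200
compressed = zlib.compress(page, 6)
ratio = len(compressed) / len(page)
print(f"{ratio:.2%} of original size")       # far below 50% for text like this
assert zlib.decompress(compressed) == page   # lossless round trip
```

Markup compresses extremely well because it repeats the same tags over and over; images need lossy transcoding (to formats like WebP) to achieve comparable reductions.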

Google has also announced that it will be adding Google Translate to Google Chrome for iOS in the next few days. That will make translating foreign websites on your Apple iPhone and Apple iPad a snap. And finally, the upcoming update for the Android version of Chrome will allow you to create shortcuts to your favorite websites right from your homescreen.

Thank you Phonearena for providing us with this information