The race to develop self-driving cars is fiercely competitive, with a growing number of companies doing their utmost to beat the competition to the technology. One company in particular is taking self-driving technology very seriously: Tesla, who have already rolled out a very limited form of it to their popular Model S. Now, according to Electrek, Elon Musk’s car company have hired veteran processor designer Jim Keller to lead their Autopilot hardware engineering team.
Exactly what Keller will be working on at Tesla is currently undisclosed, and there is no indication that Tesla will be developing in-house microprocessors for their vehicles’ systems. Self-driving systems like Tesla’s Autopilot certainly require a lot of processing power in a tight space, however, and Tesla intends to make full use of Keller’s hardware engineering experience, especially his low-power design expertise.
Keller himself is a renowned and respected figure in his field, having worked on a number of AMD’s flagship processors over the years, including their upcoming Zen architecture. He also held a high-profile engineering role at Apple, where he played a crucial part in the development of the A4 and A5 processors, which went on to power most of Apple’s mobile devices and Apple TVs from 2010 to 2012.
Could Keller’s expertise be just what Tesla needs to transform their Autopilot technology from a limited option to a fully fledged driving assist or self-driving system? The right hardware could allow the company to solve the tough computational problems that come with such systems.
Microsoft has recently stated that security updates for Windows 8 will stop this month, which leaves people with only two fully supported Windows operating systems. The choice is now between Windows 10 and Windows 8.1, options that some people may not like. As Microsoft revealed on their blog yesterday, though, the decision between the two may not even be up to you if you want to upgrade your PC.
In the blog article, they start by listing the OEM partners that celebrated success at CES recently, then go on to explain that Skylake processors, the 6th generation of Intel Core processors, when combined with Windows 10, enable “up to 30x better graphics and 3x the battery life” compared to Windows 7.
To clarify, Skylake devices running Windows 7 or 8.1 will be supported until July 17, 2017; after that date, Skylake devices will only receive support on Windows 10.
As new generations of processors are introduced, Microsoft state “they will require the latest Windows platform at that time for support”. In other words, if you want to use upcoming processors such as Intel’s “Kaby Lake”, Qualcomm’s “8996” or AMD’s “Bristol Ridge”, you will need to be on Windows 10 to receive support.
Do you feel like choices like this mean you are forced to upgrade to Windows 10 if you want an up-to-date PC? Which operating system do you like using, and why? Please tell us your thoughts in the comments below.
We’ve been waiting for a new processor for a while; AMD recently released the A8-7850K, but there have been no new mainstream processor options. Well, Intel have finally announced the release date of their 14nm Broadwell processors: June 2nd.
Going by Intel’s ‘tick-tock’ CPU roadmap, these will be the ‘tick’ of the new 14nm process; since Ivy Bridge, the ‘tick’ releases have been slightly problematic. The original Ivy Bridge i5-3570K had thermal issues, with some users resorting to ‘de-lidding’; the same reportedly happened with these Broadwell CPUs, hence the exceptionally late release date.
The CPUs due to be released are the Core i7-5775K and the Core i5-5675K. These will be similar to the current i7 and i5 K-series chips; the i7 will be a quad core with Hyper-Threading, a 3.3GHz base clock and a 3.7GHz boost clock. The i5 will be a simple quad core with slightly lower base and turbo clocks of 3.1GHz and 3.6GHz respectively. Both chips will feature 6MB of last-level cache, Intel Iris Pro 6200 graphics and a TDP as low as 65W.
These processors will arrive extremely late, close to the release of Intel’s Skylake processors in early autumn, but what makes them appealing is their compatibility with the current Z97 chipset.
Are you looking forward to the release of the Broadwell chips? Will you be hanging on until the release of Skylake? Let us know in the comments.
Thank you to TechPowerUp for providing us with this information.
The 32nm “Vishera” processors from AMD have been around for a long while; since October 2012, to be exact. Vishera was the successor to AMD’s Zambezi, with Vishera based on the Piledriver architecture and Zambezi on Bulldozer. Since the first Vishera release, AMD has continued to refresh its FX product stack with new CPUs based on the same architectural design, and AMD’s most recent releases maintain that trend. On September 2nd 2014, AMD officially revealed three new CPUs for the FX line: the FX 8320E, the FX 8370 and the FX 8370E. We are looking at the FX 8370E, which is AMD’s attempt to tame the high TDP of their 8-core FX line down to 95W; previously the standard TDP stood at 125W.
There are two other releases which we will not be reviewing today. The first is the FX 8370 (4/4.3GHz), a new flagship part which sits under the FX 9370 (4.4/4.7GHz) and FX 9590 (4.7/5GHz) but improves slightly over the FX 8350 (4/4.2GHz) in clock speed. The second is the FX 8320E, an energy-efficient variant of the already-released FX 8320, a 3.5/4GHz part. All of the FX 8XXX and FX 9XXX parts sport 8 Piledriver cores divided over four modules.
For the AMD enthusiast these newest releases may disappoint, since they do not bring anything new to the market: instead they refresh existing technology. AMD is taking advantage of a matured production process instead of advancing the FX line onto their newest CPU architecture, “Steamroller”. Steamroller is what the CPU component of Kaveri APUs is based on, and it features improved IPC (Instructions Per Cycle) and greater power efficiency. The decision by AMD to stick with the same technology means we are unlikely to see any ground-breaking results; instead we should expect AMD to rely on lower prices to remain competitive against their main rival, Intel.
Interestingly, AMD’s PR pitch for their newest E-series energy-efficient FX CPUs relies on touting the cost advantage versus an Intel and Nvidia combination. AMD claim that by choosing an FX CPU and a Radeon GPU you can get better performance at the same price point. I think the R9 285 + FX 8370E is a smart combination, as the objective of both of those AMD products has been to improve power efficiency over some of their more power-hungry siblings.
In our review of the AMD FX 8370E we will not be overclocking. My reasoning is that there is no point in pitching an energy-efficient CPU if you’re going to throw those power savings away with an overclock; you might as well just buy the FX 8370 instead. You can still overclock the FX 8370E, but don’t expect the results to differ significantly from the FX 8350 or FX 8370, in terms of either performance or power consumption. You can find 5GHz OC results for the FX 8350 in our graphs.
Before we delve into the review, I would like to briefly explain how the FX 8370E’s power-saving mechanism works. Unsurprisingly, it manages power consumption through clock speed controls. At idle it will clock down to its lowest ratio, 7X, giving a frequency of 1.4GHz at around 0.85 volts.
Under a medium-to-high intensity multi-threaded workload, it clocks to around 3.6GHz.
Moving on to a high intensity load that utilises all the cores, we see it drop back to its base frequency of 3.3GHz. It simply cannot clock higher than this without exceeding its TDP specification of 95W.
The highest clock speed comes on single threaded applications. If you utilise only one core to its maximum you can clock up to 4.3GHz on that particular thread.
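The behaviour above boils down to simple multiplier scaling. As a rough sketch (not AMD’s actual firmware logic), each clock speed mentioned comes from multiplying the platform’s 200MHz reference clock by a per-state ratio; the non-idle multiplier values below are inferred from the reported frequencies, so treat them as illustrative:

```python
# Illustrative sketch of the FX 8370E's P-state clock scaling.
# Only the 7X idle ratio is quoted in the text; the other multipliers
# are back-calculated from the frequencies observed in the review.
REF_CLOCK_MHZ = 200

P_STATES = {
    "idle": 7.0,            # 1.4GHz at around 0.85V
    "all_core_load": 16.5,  # 3.3GHz base, held back by the 95W TDP
    "medium_load": 18.0,    # ~3.6GHz under medium-high threaded loads
    "single_thread": 21.5,  # 4.3GHz max turbo on one core
}

def frequency_mhz(state: str) -> float:
    """Core frequency = reference clock x multiplier for the given state."""
    return REF_CLOCK_MHZ * P_STATES[state]

for state, mult in P_STATES.items():
    print(f"{state:>14}: {mult:>4}x -> {frequency_mhz(state):.0f} MHz")
```

Running this reproduces the four frequencies discussed: 1.4GHz at idle up to 4.3GHz on a single thread.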
The Intel Core i7 5960X, codenamed Haswell-E, is probably 2014’s worst-kept secret. As I write this review, the full specifications, pricing and pictures of just about every X99 board in existence have already been made public, and the NDA is still a few days off. Product launches like this make me wonder what purpose NDAs even serve when they appear not to be worth the paper they are written on. Anyway, politics aside, today we can present you our Intel Core i7 5960X review: at least pretend to be surprised! Intel’s High End Desktop platform is about to get its first core-count upgrade since 2010, when Intel made the leap from 4 to 6 cores on the X58 platform. Nearly 4 years later, Intel’s HEDT is making the shift from 6 cores to 8 cores with Haswell-E.
What’s special about Haswell-E apart from the increased core count? Well, the X99 platform that accompanies Haswell-E brings support for DDR4, SATA Express and M.2 (just like Z97 offers), up to 40 PCIe 3.0 lanes and, of course, 8-core CPUs. If you’re in the market for an upgrade, this certainly isn’t going to be cheap: new memory, new storage drives, a new CPU, probably a new power supply… but I digress. Let’s dive straight into the goodness of the Core i7 5960X. Today we are chucking it on a brand new test system, powered by Gigabyte’s X99 Gaming 5 motherboard and 32GB of Crucial’s fresh-off-the-production-line DDR4-2133.
Comparing Intel’s Core i7 5960X to the Core i7 4960X and Core i7 3960X shows some striking similarities. They obviously all share the LGA 2011 package but there are subtle differences. Notably the Core i7 5960X uses a different integrated heat spreader design to the other two.
Moving on to the rear of the CPU, we actually see a steady decline in the number of built-in surface-mounted components with each newer CPU, alongside an increase in the number of pins. You can see this by comparing the size of the green spacing on the 3960X to that on the 5960X.
Being a new CPU with a new memory controller, Haswell-E is not compatible with X79 despite still being an LGA 2011 package. Haswell-E uses the LGA 2011-3 socket, while Sandy Bridge-E and Ivy Bridge-E use the original LGA 2011 socket. To prevent people from putting the wrong CPUs in the wrong boards, Intel has changed the locking points on the CPUs, as you can see below.
Intel’s Core i7 5960X comes with a 3GHz base frequency and up to 3.5 GHz with turbo. There’s also native DDR4 support for 2133MHz memory but we are hearing 3000MHz and more is possible with a little bit of tweaking. The other notable thing is a beefy 20MB of shared L3 cache, the most we’ve ever seen on a consumer Intel processor.
Looking at the processor die, we can see that it is very different to Haswell for two main reasons: there are 4 more cores and there are no integrated graphics. The new memory controller supports only DDR4; there’s no DDR3 and DDR4 combo support like the dual-standard memory arrangement some of our readers may remember from the AMD AM2+/AM3 era.
Intel’s main audience for the Core i7 5XXX series is existing HEDT customers, whether on X79 or X58. Compared to X79, Haswell-E and the new X99 chipset bring more cache, more cores, more PCIe lanes, a higher TDP, a different socket, more SATA ports, Thunderbolt support and chipset-level BCLK overclocking, a feature we also saw moving from Ivy Bridge to Haswell on the mainstream platform.
Like Intel’s previous Extreme Edition CPUs, the Core i7 5960X carries a $1000 price tag, while the Core i7 5930K and 5820K come in much cheaper. Unlike with Sandy Bridge-E and Ivy Bridge-E, going for the 5930K no longer gives you all the performance of the 5960X for less money: the Core i7 5930K has two fewer cores. The Core i7 5820K in turn offers fewer PCIe lanes than the 5930K, so each CPU has its own functional purpose; the model separation is better.
When it comes to x86 processors of pure unadulterated power, Intel’s high-end Xeon platforms are the only way to go. Intel’s “EP” (Efficient Performance) SKUs are often some of the most impressive processors to come to market, sitting between the high-end consumer and small business segments. Over the years we’ve seen the EP series grow in core count as Intel’s CPU architectures have become more efficient in terms of power consumption, thermal specifications and general design. For example, Westmere-EP had up to 6 cores, Sandy Bridge-EP had up to 8 cores and the current Ivy Bridge-EP series has up to 12 cores.

Today we are reviewing the flagship processor of the Ivy Bridge-EP series, the Intel Xeon E5-2697 v2. Thanks to Intel’s generosity we have the opportunity to test a pair of these monstrous CPUs. On paper these are the most powerful CPUs available on the market within the top-end consumer bracket: they will work with all consumer hardware such as “normal” graphics cards, unbuffered non-ECC RAM and consumer operating systems like Windows 7. Yet with 12 cores, 24 threads and a 130W TDP, it’s a heck of a lot of performance for any consumer, so Intel’s main targets are working professionals, small and medium-sized businesses and anyone who needs a serious amount of x86 computing power. However, the term “reasonable” cannot be applied to the pricing, as Intel expects consumers or businesses using these Xeon chips to pay a pretty penny: $2618, to be exact. That said, you get what you pay for: with 12 cores and 24 threads in an efficient 130W package, Intel’s Xeon E5-2697 v2 is unmatched by any other current-generation hardware.
Of course, Intel’s Haswell-EP is just around the corner, and with it we should expect the core limit to be increased again to a staggering 16 cores (if rumours are to be believed) within similar thermal envelopes. We will also see the jump to DDR4 memory, making Ivy Bridge-EP the last DDR3 EP platform from Intel.
Below you can find Intel’s key specifications of the E5-2697 v2 processor; you can get more detailed specifications on the product page here if you so desire. In today’s review we will be testing this CPU in single and dual configurations and putting it up against the Sandy Bridge-EP flagship, the Xeon E5-2670. Sadly, we only have one of those for testing since the other passed away (RIP!), so please consider that before commenting on why dual E5-2670 CPUs were not included in the results: we wanted to include them, but it simply wasn’t possible. Given the prosumer/business orientation of these products, we’ve tried to use more productivity-related benchmarks. If there are any relevant benchmarks readers think are missing, then please feel free to share your thoughts in the comments so we can improve future Xeon processor reviews. In a pre-emptive statement, I would also like to clarify that the gaming benchmarks are there not because I see these CPUs as relevant for gaming, but because we ALWAYS get LOTS of questions about how these CPUs perform in games. Anyway, enough rambling; let’s proceed to look at some testing results!
It’s become apparent that Intel is not quite finished yet when it comes to updating their Haswell lineup for desktop systems. Following another update on CPU-World, an additional eight processors have been detailed, all from the low end of the scale, and interestingly nothing below a Pentium is mentioned at this moment in time.
In the same manner that the first wave of refresh chips brought a slight boost in performance over the existing Haswell lineup, these Core i3 and Pentium G CPUs all see a boost of 100MHz over their older equivalents. Whilst this may seem like a good bump, we have to remember that this end of the market is not fought over in the same way that the i7 and i5 chips battle it out, each getting that slight bit ahead of the other in a bid to be the chip to get.
Whilst other reports suggest that the second wave of chips could appear in the flesh somewhere in Q3, there is no word on pricing. Once again, though, we are almost literally penny-pinching between chips, and the difference this makes may not be as profound as it is at the enthusiast end of the scale.
As we get closer to the full release of DDR4, this could be a last-ditch attempt to get OEM and retail users to buy into Haswell before the mass move over to Haswell-E begins on the top end of the performance ladder.
Today we are looking at AMD’s new AM1 platform, and given that I am writing within the realms of a traditional “tech enthusiast” website, you’ll either think this is a great platform with potential, or just too slow to add anything new to the market. I am in the former, not the latter, camp: I can see the massive potential of AMD’s socketed Kabini APU. I have always been keen on budget and small form factor computing solutions; the Raspberry Pi is a great example of something that caught my eye. Of course, at just $35 the Raspberry Pi is hardly comparable to AMD’s new socketed Kabini APUs, which cost a similar amount for the APU alone. However, you can build a quad-core Kabini system with a motherboard for just $64: less than twice the cost of a Raspberry Pi, but no doubt with way more than twice the performance. The ethos of AMD’s AM1 platform is to bring back the Athlon and Sempron product lines (orientated towards value for money and upgradeability) with a bang.
While the AM1 system may seem like it is catering to a small market, it isn’t! The majority of PCs are bought at entry-level and mainstream price points: below $200-300, shall we say. If we look at emerging markets in Latin America, the Middle East, Africa and so on, we find that the sub-$200 price point is even more popular there. As a result, the majority of Windows-orientated desktop systems delivered in the future are likely to be in the entry-level and mainstream categories. That logic is AMD’s justification for the AM1 platform: it will deliver Windows-capable PCs for a fraction of the cost of traditional desktop systems.
AMD is also looking to innovate to correct some of the deficiencies in the PC landscape. A lack of upgradeability, limitation to 32-bit operating systems and poor integrated graphics are commonplace in small form factor PCs. Intel’s latest-generation “Bay Trail” Atom SoCs are not upgradeable, are mainly limited to 32-bit operating systems and, with regard to graphics performance, are still largely incapable of anything but video playback and browser-based gaming. Of course, AMD’s Kabini Athlon APUs aren’t going to be powering high-end gaming PCs any time soon, but they do offer more graphics performance than Intel’s equivalent Atom parts.
AMD is keen to point out the advantages it has over Intel’s Bay Trail equivalents, because that is what AMD sees as its main rival at this price point.
AMD’s “AM1” moniker is effectively the “chipset” designation, although there is no chipset as such: all the “chipset” components are placed on-die with the APU. The socket of the AM1 platform is FS1b, and it currently offers an upgrade path across a choice of four Kabini APUs.
The FS1b APUs will be available with up to four CPU cores, 128 GCN cores and support for up to 1600MHz memory. On-die there are two USB 3.0 ports, eight USB 2.0 ports and two SATA III 6Gbps ports, so storage connectivity is modest, but for such a low-cost platform that is to be expected.
We have covered the basics of what the AM1 platform is, why it has been created and what it is designed to compete with so now let’s move on to cover the technical aspects of it in a little more detail.
AMD’s Kaveri APUs are dependent on system memory. The easiest way to explain this dependency is to think of the GPU on the Kaveri APU as a graphics card, but with one significant difference: on its own it has no video memory. The video memory the Kaveri GPU uses is your system memory, or RAM. In the case of Kaveri APUs, the CPU and GPU parts share the system memory resources equally as part of the hUMA design, though the GPU is more performance-dependent on this memory bandwidth than the CPU is. In effect, this means that as you increase your system memory frequency, Kaveri APUs have more bandwidth available to them and thus perform better. Taking your system RAM from 1866MHz to 2400MHz has the same effect on the Kaveri GPU as overclocking a DDR3 graphics card’s memory by the same amount.

AMD has really pushed the benefits of memory scaling for Kaveri, and as I’ve already alluded to above, these benefits are primarily realised by applications and processes that use the GPU. So this memory scaling article will focus largely on the gains in GPU-related performance, though we will consider CPU performance and power consumption as well.
AMD’s own internal figures show promising results. We should expect to see the biggest performance gains moving up to 2133MHz memory, with 2400MHz offering smaller additional gains; as with much computer hardware, there are diminishing returns on increasing clock speeds, and this is particularly the case with memory. Our testing will examine the differences in performance when transitioning from 1866 to 2133 to 2400MHz memory on the Kaveri flagship part, the A10-7850K. With Kaveri there is a requirement to consider memory more carefully than we would on most systems, where a discrete GPU and a high-end CPU are both relatively unaffected by memory speed. I think it is worth considering the optimal memory speed pairings in terms of cost, but also in terms of the type of user buying each APU. In my opinion, I’d expect to see the cheapest A8-7600 APU paired with 1600 or 1866MHz DDR3, the A10-7700K with 1866 or 2133MHz DDR3 and the A10-7850K with 2133 or 2400MHz. This is purely on the basis that if you’re buying a more affordable APU you’re unlikely to spend a lot on RAM, and if you’re buying the premium A10-7850K you’re unlikely to “cheap out” on low-frequency DDR3 when spending a little more on faster RAM can make such a large difference.
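To put rough numbers on why memory frequency matters so much to the Kaveri GPU, the theoretical peak bandwidth of a DDR3 setup scales linearly with transfer rate. Here is a quick sketch, assuming the usual dual-channel configuration with 64-bit channels (8 bytes per transfer per channel); real-world bandwidth will of course come in below these theoretical peaks:

```python
# Theoretical peak bandwidth for dual-channel DDR3.
# Each 64-bit channel moves 8 bytes per transfer, so:
#   peak GB/s = transfer rate (MT/s) x 8 bytes x channels / 1000
def peak_bandwidth_gbs(transfer_rate_mts: int, channels: int = 2) -> float:
    bytes_per_transfer = 8  # 64-bit channel width
    return transfer_rate_mts * bytes_per_transfer * channels / 1000

for speed in (1600, 1866, 2133, 2400):
    print(f"DDR3-{speed}: {peak_bandwidth_gbs(speed):.1f} GB/s peak")
```

The jump from DDR3-1866 (about 29.9 GB/s) to DDR3-2400 (38.4 GB/s) is roughly a 29% increase in the bandwidth the integrated GPU can draw on, which is why the faster kits pay off with Kaveri in a way they rarely do with a discrete graphics card.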
Researchers at Intel and the University of California, Berkeley are reported to have made a breakthrough in cooling microchips by using a combination of carbon nanotubes (which we previously reported on when Japanese researchers found a way to mass-produce them) and organic molecules to create a highly efficient connection between a chip and its heatsink.
As chips get smaller and faster, the heat they generate becomes an increasingly pressing issue for researchers to deal with. It was previously noted that carbon nanotubes could work as a highly efficient conduit for this heat, but the problem remained of getting the heat into the carbon nanotubes in the first place.
“The thermal conductivity of carbon nanotubes exceeds that of diamond or any other natural material, but because carbon nanotubes are so chemically stable, their chemical interactions with most other materials are relatively weak, which makes for high thermal interface resistance,” explained Frank Ogletree, a physicist at Berkeley Lab’s Materials Sciences Division and leader of the study.
This is where Intel comes in with an improvement to the plan. Nachiket Raravikar and Ravi Prasher, both Intel engineers on the project from its very beginning, were able to strengthen the contact between the carbon nanotubes and the surfaces of other materials, reducing thermal resistance and increasing heat transport efficiency. The method uses organic molecules to form strong covalent bonds between the carbon nanotubes and metal surfaces, much like using thermal paste between a heatsink and a CPU.
The new system allows for a six-fold increase in heat flow from the metal to the nanotubes, and the method uses nothing more than gas vapour or low-temperature liquid chemistry, meaning it can easily be integrated into the production process of modern chips. The work is not finished yet, though: tests currently show that only a small portion of the nanotubes connect to the metal surface, but it is progress nonetheless.
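The six-fold figure is easiest to picture with the standard thermal resistance relation Q = ΔT / R: for a fixed temperature difference, cutting the interface resistance to a sixth of its value passes six times the heat. The numbers below are purely hypothetical, chosen only to illustrate that relationship, not taken from the study:

```python
# Toy illustration with made-up numbers: heat flow across a thermal
# interface is Q = delta_T / R, so a six-fold drop in interface
# resistance yields a six-fold rise in heat flow at the same delta_T.
def heat_flow_w(delta_t_k: float, resistance_k_per_w: float) -> float:
    """Heat flow in watts for a temperature drop across a resistance."""
    return delta_t_k / resistance_k_per_w

untreated = heat_flow_w(30.0, 0.6)  # hypothetical bare nanotube contact
bonded = heat_flow_w(30.0, 0.1)     # covalently bonded interface, R/6

print(f"improvement: {bonded / untreated:.1f}x")
```

Lowering the interface resistance is attractive precisely because it improves cooling without needing a larger heatsink or a bigger temperature gradient.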
Thank you to Bit-Tech for providing us with this information. Image courtesy of Bit-Tech.
Google already has its own smartphones, tablets, the operating system to power them all and the best search engine to go with it, but it seems that’s not enough. According to Bloomberg, Google is considering designing its own processors using ARM technology. The news comes from a source with knowledge of the matter, who says the move would allow Google to better manage the relationship between hardware and software.
Although the possibility of Google-designed chips for phones and tablets would be a big step forward for the company, it could also mean Google won’t need to rely on Intel as much for server processors. According to Bloomberg, Google is currently Intel’s fifth-largest customer, spending around $500 million on Intel chips each year. That might change if Google starts designing its own processors.
In response to the Bloomberg report, a Google spokesperson said the company is “actively engaged in designing the world’s best infrastructure” and that this includes both hardware and software design. As always, the company refused to comment further than that.
“We are actively engaged in designing the world’s best infrastructure,” said Liz Markman, a spokeswoman for Google, in an e-mail. “This includes both hardware design (at all levels) and software design.” Markman declined to say whether the company may develop its own chips.
Intel’s Xeon E3-1230Lv3 CPU has been a hotly anticipated processor for a wide variety of target audiences – home users, office users, small business users and enterprise users. Today we’ve got an opportunity to put Intel’s enterprise Xeon E3-1230Lv3 CPU to the test in a professional home user or “prosumer” type of environment, by pairing it up with SuperMicro’s server-grade C7Z87-OCE motherboard. The Intel Xeon E3-1230Lv3 is an important CPU because it offers four cores, eight threads, a 1.8GHz base frequency, a 2.8GHz Turbo frequency and 8MB of cache all for a tiny TDP of just 25W. Below you can see some of those key specifications in more detail:
At $250, its tray price is roughly comparable to that of the Core i5 4670K, which sells for $242. We’ve only got an Intel Core i7 4770K, which sells for $339, to compare it to; a Core i5 4670K would be ideal, but both processors are quite similar in performance anyway. That said, the Xeon E3-1230Lv3 certainly looks like an impressive part for the money, and we want to find out exactly how well it performs! We will first walk you through our choice of motherboard and tell you why we made it, then get on to some benchmarks followed by detailed power and temperature figures, before letting you know our final thoughts on this processor.
Ever since Intel released Ivy Bridge on the LGA 1155 platform, LGA 2011 owners have been wondering when they would see High End Desktop (HEDT) processors based on the 22nm Ivy Bridge architecture. Up until today, the LGA 2011 platform lagged two generations behind the mainstream LGA 115X platforms, which have now moved as far forward as Haswell, two generations ahead of Sandy Bridge. However, today is a great day for all enthusiasts, because Intel are taking the covers off Ivy Bridge-E, bringing 22nm processors to the socket LGA 2011 platform and the X79 chipset. What can we expect to see? Well, similar things to the transition from Sandy Bridge to Ivy Bridge, except with bigger numbers, as we are working with six-core processors rather than quad cores. Of course there will be quad-core processors available too: Ivy Bridge-E brings to market the Core i7 4960X, the Core i7 4930K and the Core i7 4820K, the last of which is a quad core while the first two are hex cores.
Other than the change in architecture, there is actually a lot of continuity with Ivy Bridge-E: Intel keep the same socket pin-out and the same chipset, and the vast majority of current LGA 2011 system owners will be able to keep the same motherboard; all you’ll need is a BIOS update from your chosen motherboard vendor. In today’s review we are going to examine the performance of the new Core i7 4960X in a variety of benchmarks covering gaming, synthetic CPU performance, power consumption and much more. What we are mainly here to decipher is whether Intel’s Core i7 4960X is a worthy successor to the Core i7 3960X and, if so, where it triumphs over its predecessor. We’ll also be looking to see how well the Core i7 4960X stacks up against Intel’s best LGA 115X CPU, the Core i7 4770K, and how it fares against AMD’s budget Piledriver-based eight-core, the FX-8350.