35 Comments
Germanicus - Monday, September 12, 2011 - link
The Intel hype machine continues. Match the GPU of Llano, eh?
I'll believe it when I see it.
aegisofrime - Monday, September 12, 2011 - link
IMHO, Intel has usually managed to live up to its hype far more often than AMD can. I'm confident they will deliver again this round.
therealnickdanger - Monday, September 12, 2011 - link
This.
Centrino, Core, Core 2, SSDs, SNB, and now SNB-E. Despite Netburst (not THAT bad) and the singular motherboard flub, Intel has been on a roll. I really think they have matured as a company, publicly and realistically embracing their mistakes and triumphs. They're still struggling with GPU performance, but they are learning quickly.
therealnickdanger - Monday, September 12, 2011 - link
I should add - will IB finally bring true 24Hz playback? This is the ONE thing preventing SNB from being the perfect home theater PC CPU.
JarredWalton - Monday, September 12, 2011 - link
What I'd like is Intel IGP drivers that actually work as well as AMD/NVIDIA drivers. They're *better* than they were before, but still not up to the standards of the GPU companies.
StevoLincolnite - Monday, September 12, 2011 - link
Intel has only managed to live up to its hype with CPUs.
GPUs are a little different.
Remember when they first introduced the Intel X3100 IGP? Intel was lauding it as a gamer's IGP.
It didn't get released with TnL support, and it didn't get released with Shader Model 3. It took a good year for those to even be enabled in the drivers.
The end result was that TnL was only half done: half the time it would default to a software-rendered mode, and vertex shader performance was poor.
Basically new games ran like crap. Older games ran just like crap.
Don't get me started on the craptastic GMA 3150 either.
Sandy Bridge's IGP was a good step in the right direction, but the drivers still aren't up to AMD or nVidia's level and it still has a feature set a generation or two behind. (DirectX 10 only, I think.)
An Intel IGP could be twice as fast as AMD's or nVidia's and I would still not opt for it due to the really, really crap drivers.
MonkeyPaw - Monday, September 12, 2011 - link
While it seems Intel is trying to come a long way in the GPU world, I still have my doubts. As ATI and nVidia have shown for a decade, good graphics drivers just don't fall from the sky. And while AMD and nVidia have a history of long-term support for graphics products, can the same be said of Intel? Intel goes to a new CPU socket every product launch, so they obviously have no problem abandoning things of the past. I just wonder how much Intel is really committed to making games work through driver fixes, as in the past their IGPs had features that they never got around to enabling in the drivers.
Then there's the feature-stripping of different CPU models. Intel doesn't give the consumer what they want; instead they lay down a road of confusion as to which models have which features. In the grand scheme of things, it just amazes me that Intel takes away product advancements to have more SKUs. I'm not talking about MHz, cache, or GPU cores, but things like SpeedStep, 64-bit, VT, and others.
Anand,
Any news on whether AMD will have a cross-town demo of Bulldozer this week?
Tanclearas - Monday, September 12, 2011 - link
I agree with you wholeheartedly regarding Intel's confusing SKUs. I was actually really looking forward to both Sandy Bridge and Bulldozer. I was disappointed by both. Bulldozer is MIA, while Sandy Bridge presented the following options (at launch, so yes, I know Z68 resolved some of this):
1) IOMMU (VT-d)
2) Unlocked
3) Better IGP
If you wanted IOMMU, you had to give up both unlocked and better IGP options. If you wanted to actually use the better IGP, you had to go with a chipset that didn't support overclocking. If you wanted to take advantage of the unlocked models, you had to go with a chipset that didn't support the IGP. So, basically, for those three options, you could only choose one.
I will admit that I could be wrong on this, but as far as I know, there still isn't a Sandy Bridge model that lets you choose all three options (or even just unlocked with IOMMU).
KaarlisK - Monday, September 12, 2011 - link
Unlocked and IOMMU is never gonna happen.
Intel wants users using IOMMU to buy more CPUs, not overclock them.
maroon1 - Tuesday, September 13, 2011 - link
If you look at Intel's history, the performance of the IGP has been improving significantly since the GMA 4500.
Wolfpup - Tuesday, September 13, 2011 - link
Wonderful.
Hey, here's an idea! Instead of wasting 200 million transistors (or more now?) on worthless, redundant video, how about you put that towards MORE CACHE AND EXECUTION HARDWARE, eh Intel?
Leave the GPUs to Nvidia and AMD.
Arnulf - Monday, September 12, 2011 - link
So this would make Intel's IGP borderline usable for low-resolution gaming (assuming they can cobble together decent drivers)? They too are going to run into a bandwidth wall at some point, and I dare to speculate AMD will be able to pull out better performance at that point for some time still before Intel catches up.
FaaR - Monday, September 12, 2011 - link
Intel's IGP is already usable for low-resolution gaming; it runs Source engine games and WoW fast enough to be playable on my MacBook Pro's screen (1280*800), so 60% faster would just be cake on top as far as I'm concerned. :)
As for bandwidth, Ivy Bridge is supposed to bring official support for 2166MHz DDR3, is it not? That's a substantial bump above 1333MHz. Also, we can likely expect improved use of L3 buffering for graphics use in IB.
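To put rough numbers on that bump, here's a quick sketch assuming the usual dual-channel, 64-bit-per-channel DDR3 configuration and the two speed grades mentioned above (peak theoretical figures only):

# Peak theoretical DDR3 bandwidth: transfers per second x 8 bytes per transfer x channels.
# Real-world throughput is lower; the point is the relative jump.
def peak_bandwidth_gb_s(mt_per_s, channels=2, bytes_per_transfer=8):
    return mt_per_s * bytes_per_transfer * channels / 1000

for speed in (1333, 2166):  # the DDR3 grades discussed above
    print(f"DDR3-{speed}: ~{peak_bandwidth_gb_s(speed):.1f} GB/s peak, dual channel")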
Llano beats Intel's offerings handily on the graphics side, while being pretty "meh" on the processor side. That graphics advantage is of course mainly because of AMD's current lead, which they bought in the form of ATI some years ago now.
Without appearing as if I'm taking sides in some kind of adolescent graphics/CPU vendor flame war here, Intel's getting there. They've been saying they're getting there for years now without really making any real headway, but now they really are getting there.
AMD better watch out, or in a few more generations they'll be as far behind on the GPU side as they are on the CPU. That would be bad. There is nothing better than competition in the marketplace to make companies excel in their work. My first two PCs were powered by AMD CPUs, a K6 and a K6-III respectively, because Intel was too expensive for me.
meek29 - Monday, September 12, 2011 - link
AMD will never fall behind in the IGP part, because they have ATI. It's the processor part they need to worry about, and we'll be seeing some updates from them soon.
tuklap - Monday, September 12, 2011 - link
It is because they have the technology that ATI has... no question about the APUs being superior in graphics. If Intel can acquire nVidia, that will be a fair match ^_^
quiksilvr - Monday, September 12, 2011 - link
Pray to God that never happens. The last thing we need is a duopoly.
ZoZo - Monday, September 12, 2011 - link
Of course AMD can fall behind on the IGP. If Intel has a sufficiently better manufacturing process, they can cram more EUs in the die and overtake AMD both in absolute performance and performance per watt.
silverblue - Monday, September 12, 2011 - link
I will point out that Intel's IGPs are clocked significantly higher than AMD's. Remember - that's the only reason the original HD Graphics could compete with AMD's dated IGPs. Who's to say that they can maintain such clock speeds with more shaders on board as well as keep power usage in check, even with tri-gate transistors?
Without a lot of extra memory bandwidth, medium quality details will still hurt it just like it hurts Llano. HD 3000 makes more sense with lower details and I can't see that changing any time soon, not without putting memory on die.
BSMonitor - Monday, September 12, 2011 - link
Welcome to the game, noob.
Intel's IGP is a completely different architecture than ATI's, which is completely different from nVidia's CUDA cores.
Clock speed has little to do with comparing performance across any of these architectures.
Taft12 - Monday, September 12, 2011 - link
He's not talking about performance or architecture, he's talking about potential problems with Intel's high GPU clockspeed combined with increasing GPU complexity, and perhaps an impact on the TDP of the chip.
Noob.
silverblue - Monday, September 12, 2011 - link
Exactly. If you wanted to talk performance, this touted 60% increase in performance may be commendable, but let's consider for one moment that a) Trinity is on its way, and b) Llano is already that much ahead. I'm very happy to see Intel challenge in this area, but until their drivers improve, you have to ask what else they can do - add more hardware or pump up the clocks?
tipoo - Monday, September 12, 2011 - link
What about the ULV version of the Ivy Bridge GPU? In Sandy Bridge, it had the same name but different performance. It only clocked at 350MHz instead of 650 and turbo'd up to 1GHz instead of 1.2.
KPOM - Monday, September 12, 2011 - link
The new i5-2467M (1.6GHz) turbos up to 1.15GHz and the i7-2677M turbos up to 1.2GHz.
iwodo - Monday, September 12, 2011 - link
Intel should have used GT2 in all their processors. I was expecting performance to double; up to 60% is still not good enough.
Driver updates are still very slow - I don't expect monthly updates like Nvidia or ATI, but at least give us quarterly GPU driver updates.
Why no OpenGL 4?
2x QuickSync speed will finally make it faster than x264 at its fastest settings.
KPOM - Monday, September 12, 2011 - link
I'm actually slightly impressed. I was expecting a 30% improvement based on what Intel had previously said. Sandy Bridge is OK for many mainstream games in low detail settings. This would make it acceptable in medium detail.
MarkLuvsCS - Monday, September 12, 2011 - link
Performance increase will be an added benefit, but I think the best part is the expected quality increase. I know this is still a major concern for people trying to save best-quality backups of videos. GPU transcoding seems to be the worst of the three vs. QuickSync and x264.
shivoa - Monday, September 12, 2011 - link
Agreed with the above on the lack of OGL4.x support. This seems to be a trend of Intel ignoring OGL support for their GPUs, which, going by their DX versions, are probably only driver-locked to earlier OGL versions.
The SB support for only OGL3.1 (rather than the full-fat 3.2 or the backport 3.3) created additional work for graphics programmers by introducing a class of GPU that can support a new DX10 engine but cannot take an OGL3.3 Mac/Linux engine with equivalent features. Because OGL3.3 is a backport to the OGL3 hardware level, an OGL4/DX11 engine can be quickly converted to OGL3.3 support, while OGL3.0-3.2 would require a far more significant rewrite to the OGL3 conventions and language versions.
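To make the porting problem concrete, here's a minimal sketch assuming the pyGLFW bindings (an illustrative library choice, nothing specific to Intel's drivers): an engine written against an OGL3.3 core context simply fails to get a context on a driver that stops at 3.1, so it needs a separate, older-convention rendering path rather than a version-number tweak.

# Try to create an OpenGL 3.3 core context; fall back to 3.1 if the driver refuses.
import glfw

def try_context(major, minor, core=False):
    glfw.default_window_hints()
    glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, major)
    glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, minor)
    if core:
        glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE)
    try:
        return glfw.create_window(320, 240, "GL version probe", None, None) or None
    except glfw.GLFWError:  # pyGLFW may report an unsupported version as an exception
        return None

glfw.init()
window = try_context(3, 3, core=True)   # what an OGL3.3/DX11-class engine wants
if window is None:
    # e.g. a driver capped at OGL3.1: the whole 3.3 code path is off the table
    window = try_context(3, 1)
glfw.terminate()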
Doormat - Monday, September 12, 2011 - link
I'm also disappointed. Between the shrink from 32nm to 22nm, 3D gates, and a microarchitecture revision, I was expecting a 2x perf increase at least. 10% on the low end is useless. Looking forward, Intel is setting the "low end must work" bar, and GT1-2012 is incredibly weak.
iwodo - Tuesday, September 13, 2011 - link
You pointed out the maths. With performance at 2, doubling it would only make it 4. Compare to ATI or Nvidia, whose performance already starts at 6 or 8. Even 50% would be sufficient.
DanNeely - Monday, September 12, 2011 - link
It's disappointing on the mobile side; but with Intel being pushed hard to drop power consumption in its mobile parts (17W reportedly becoming the new standard level in Haswell vs. 35W now), a lot of the potential of 22nm is having to be spent on dropping power use, not on boosting performance.
OTOH, if they do offer the GPU for more of their mainstream desktop parts, it will be a 3.2x boost there.
jah1subs - Monday, September 12, 2011 - link
Does this design fix the shortcoming that Sandy Bridge's graphics had with 23.976 fps?
Did Intel perhaps finally deliver a software or firmware fix for the 23.976 fps shortcomings that the initial release version had?
Also, I never understood from the earlier write-ups what the user experience of the shortcomings was. This seems like an appropriate place to ask: what does the user actually experience because of the earlier shortcomings?
jah1subs - Monday, September 12, 2011 - link
I should add to this that I am thinking only of graphics for digital video transcoding and movie playback.
DanNeely - Monday, September 12, 2011 - link
The problem with playing back 23.976Hz video at 24Hz is that every 41.66 seconds the 24Hz playback gets one frame ahead of the 23.976 playback. To keep the video and audio in sync, a frame is shown twice, creating a stutter in the playback.
What I've occasionally wondered is if speeding up the audio slightly to adjust might be less noticeable. It's a 0.1% difference, so excepting people with perfect pitch I doubt it would be noticeable.
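For anyone who wants the arithmetic behind those figures, a quick sketch (treating "23.976" as exactly 24000/1001, which gives the same ~41.7 second and 0.1% numbers):

# Drift between 23.976fps content and a 24Hz display, and the audio speed-up that would hide it.
source_fps = 24000 / 1001     # NTSC-style "23.976" fps, taken as exactly 24000/1001
display_fps = 24.0

drift_per_second = display_fps - source_fps       # fraction of a frame gained each second
seconds_per_extra_frame = 1 / drift_per_second    # ~41.7 s between repeated frames
audio_speedup = display_fps / source_fps - 1      # exactly 0.1% faster audio to stay in sync

print(f"One repeated frame every {seconds_per_extra_frame:.2f} seconds")
print(f"Audio speed-up needed instead: {audio_speedup:.3%}")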
jah1subs - Monday, September 12, 2011 - link
Dan: Thank you
valkyrie743 - Tuesday, September 13, 2011 - link
They better fix the DXVA2 hardware acceleration issues ffmpeg currently has, as well as the 23.976 bug. I want a perfect HTPC CPU.