We've already talked to product managers representing the graphics industry. But what about the motherboard folks? We are back with ten more unidentified R&D insiders. The platform-oriented industry weighs in on Intel's, AMD's, and Nvidia's prospects.
If you read our recent Graphics Card Survey, then you already know another battle in the graphics war is looming. In fact, Nvidia recently fired off yet another salvo.
With 2011 right around the corner, an even more influential shootout is about to happen, as AMD and Intel both bring new weapons to the front line in the form of CPU/GPU hybrids. However, our first survey back in August was a bit one-sided, because it only sought out the voice of video card makers. Hybrids make this a topic for two industries. What about the motherboard guys?
So, while we were putting the call out to experts in the graphics business, we were also making the rounds on the motherboard side. We want to point out, though, that we have other reasons for wanting a second opinion. If you look at the company structure of the tier-one and -two motherboard manufacturers, the companies selling both motherboards and graphics cards keep those businesses in completely separate divisions. And while the two divisions collaborate on some marketing and technical matters, they are usually left to their own devices and operate independently of one another. After all, the technical people in the motherboard division have different goals and agendas. The worries and problems on one side don’t translate well to the other. What does the motherboard team care if its GPU-obsessed colleagues can’t find the right balance of performance and heat?
We think it is important to get the whole story. That is what a survey should be about. There is nothing wrong if the responses turn out the same; a universal answer means there is a universal opinion. However, for those who dig a little deeper, it becomes apparent that there is a little more “meat on the bones.” Similar answers are often similar for different reasons, and it is the reasoning we find important. “Yes” and “no” answers don’t sate our appetite, simply because they offer no context for understanding. This is why we will always try to solicit additional comments on all of our questions.
Background
We should make clear these are not marketing representatives sent to evangelize certain agendas. If they are, they’re pulling double duty as product managers. The primary duty of public relations is to get good press, and sometimes it is hard to get those folks out of that mode without having to resort to alcohol (Chris and I are both in agreement that it would probably be unwise to do so, anyway).
We specifically chose to talk to people in charge of the technical aspect of their company’s motherboard business. Depending on the organization, we carefully selected GMs, VPs, heads of departments, and R&D engineers. It is important to note that these are people from headquarters, meaning they bring us their ideas from a global perspective.
There were no barriers in our quest. If we needed to use another language to find the people we wanted, we used it (that’s the beauty of working for a global media company). Distance did not deter us, and if you saw our international phone bill, you’d understand the time we dedicated to this project. No stone was left unturned to find the people we needed. To our participants out there, we extend our most gracious thanks and sincerest apologies for the constant pestering.
Ultimately, we see this as a way to bring a better sense of industry dialog, answer a lot of your questions, end a lot of speculation, and provide insights on current and upcoming industry trends.
This is the first in what is to be a long series of surveys compiled from leading technical people in the motherboard business. This is going to mirror what we have been doing with the graphics survey. You're probably asking again, who is participating? What type of motherboard manufacturers are they: ODM, OEM, or some channel brand? The point of any survey is to get a variety of differing viewpoints, so it is a detriment if we leave anyone out. For that reason, if a company out there makes a motherboard, there is a very high chance we got them involved with our survey.
This specific survey is supposed to be the flip side of the Graphics Card Survey. There are two sides to the desktop graphics business: discrete and integrated. The vendors involved in discrete sales have already weighed in, and now we want the opinions of their peers.
- There is an increasing move away from emphasizing raw clock rates in favor of parallelized multi-core designs. Do you think this will change as CPU/GPU hybrids take advantage of GPGPU programming, such as DirectCompute, CUDA, and Stream, easing the CPU's role in tasks that once relied on threaded processors for their performance?
- Will there come a time when integrated graphics with programmable logic (like Sandy Bridge and Llano) make discrete graphics unnecessary?
- One year from now, do you continue to see Nvidia active in designing chipsets, or will the company focus on its core business (discrete graphics solutions)?
- The success of hybrid CPU/GPU designs like Sandy Bridge and Llano is closely tied to GPGPU programming. In the last major tech cycle, system integrators and consumers successfully adopted x86-64 processors and operating systems. Yet, potential benefits have been delayed because programmers, even today, are slow to adopt 64-bit programming. Do you think Intel and AMD can cause a major shift towards general purpose GPU programming within a year of their product launches?
- CPU/GPU hybrid designs like Sandy Bridge and Llano potentially mitigate the need for a separate graphics card. Historically, integrated graphics have been inadequate for everything above entry-level desktops. Do you think the integrated graphics from the first generation of CPU/GPU hybrids are powerful enough to drive workstations and high-end desktops?
Ground Rules
We are inevitably dealing with sensitive topics here, including industry trade secrets, proprietary company strategies, and nondisclosure agreements (NDAs) pertaining to unannounced products. We want to make it clear that we fully support and believe in the purpose of NDAs and the preservation of industry secrets and company strategies. These make our industry stronger, not weaker. For example, if Intel were able to change course early in its Tick-Tock cycle and develop a product specifically to counter the leaked specifications of an upcoming AMD processor, all of the investment capital behind that leaked project would become a sunk cost.
For this reason, information regarding industry trade secrets and proprietary company strategies is edited out, unless it's already considered common knowledge. It’s interesting information, of course, but it really doesn’t serve any purpose other than journalistic sensationalism. Information relating to specific products is generally withheld, with a few exceptions. Information regarding specific product releases is edited down to the quarter, rather than pointing at specific dates. First Amendment and Fourth Estate aside, we are not bound by any NDAs pertaining to what our sources are telling us (NDAs usually come into play when the press gets samples close to the date of announcement).
Additionally, Chris and I have made the executive decision to withhold all participant names and the names of their respective companies for the following reasons.
- The identity of our participants serves no real purpose for the sake of the article. It is what they say that matters.
- Our participants now have the freedom to say whatever is on their minds, free from their companies’ legal and media relations teams, without risking getting into trouble.
- We need this to be an ongoing survey. Anything that can get these people into hot water means an ongoing industry dialogue will be cut very short.
Question: There is an increasing move away from emphasizing raw clock rates in favor of parallelized multi-core designs. Do you think this will change as CPU/GPU hybrids take advantage of GPGPU programming, such as DirectCompute, CUDA, and Stream, easing the CPU's role in tasks that once relied on threaded processors for their performance?
- As more and more taxing applications arrive, multi-core designs become increasingly necessary for computing.
- GPGPU solutions are not truly general purpose; they're really optimized for doing a few applications extremely well. Besides, CPU makers still want to add more horsepower to their CPUs, and the best way to do that currently is by adding more cores.
- I foresee growth in both multi-core design and GPGPU programming.
At the moment, it seems like GPGPU is an eventuality, but it is by no means a short-term certainty. We've been hearing about CUDA for what, almost five years now? And while the technology has done amazing things in the scientific, financial, and medical fields, its utility on the desktop is still far less pervasive--to the point where we certainly wouldn't recommend one card over another specifically for its CUDA support.
During that same time frame, CPU architectures made a transition from cramming as much information through a single core as possible to parallelizing across multiple cores, and slowing clocks down to maintain manageable power envelopes given existing manufacturing technologies. And even then, we're still fighting an uphill battle to get developers onboard with threaded code. It's happening, though, and at least one of our respondents picked up on that fact.
Is it probable that we'll see a return to super-high clocks and minimal parallelism? Not likely. Will shrinking process technology enable more complex multi-core CPUs? We're almost sure of it. Will the next revolutionary move in this space involve integration along the lines of what AMD and Intel are planning? It seems increasingly probable. Sure, integration is a cost-saving strategy, but it also has the potential to enhance performance as latencies are slashed and very high-speed pathways between very bandwidth-hungry components are better utilized.
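To put that threading challenge in concrete terms, here is a minimal sketch of the kind of data-parallel loop (a simple SAXPY, y = a*x + y) that developers still have to split across cores by hand. This is our own illustration, not code from any respondent; the function names and the chunking scheme are arbitrary. We revisit the same loop as a CUDA kernel in the GPGPU discussion further down.

```cpp
// Illustrative sketch only -- not from any respondent. A SAXPY loop
// (y = a*x + y) split across CPU cores with std::thread. This is the kind
// of explicitly threaded code developers are still being coaxed to write.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Each worker handles one contiguous slice of the arrays.
static void saxpy_slice(float a, const std::vector<float>& x,
                        std::vector<float>& y, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        y[i] = a * x[i] + y[i];
}

void saxpy_threaded(float a, const std::vector<float>& x, std::vector<float>& y) {
    const std::size_t n = y.size();
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (n + cores - 1) / cores;

    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        const std::size_t begin = c * chunk;
        const std::size_t end = std::min(n, begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(saxpy_slice, a, std::cref(x), std::ref(y), begin, end);
    }
    for (std::thread& w : workers) w.join();  // wait for every slice to finish
}

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    saxpy_threaded(2.0f, x, y);               // every y[i] is now 4.0f
    return 0;
}
```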
Question: Will there come a time when integrated graphics with programmable logic (like Sandy Bridge and Llano) make discrete graphics unnecessary?
- High-end gamers would still require a more powerful 3D experience, which is difficult for the [CPU/GPU] hybrid architecture.
- … so far, the performance of integrated graphics is not strong enough to replace discrete. As for enthusiasts, they still need discrete graphics for fluid gaming performance.
- Integrated graphics is still not powerful enough for gamers.
- You need discrete graphics for multi-display setups.
- Discrete solutions provide more flexibility to upgrade graphics for DIY users.
- Gaming PCs still need discrete graphics solutions.
- Yes, for most users. But power users will always want discrete solutions.
- There will always be discrete graphics cards, but they'll be limited to the extreme gaming market.
Unlike the Graphics Card Survey, here we get a decent mix of people foreseeing the declining impact of discrete graphics. Right off the bat, one third of our experts responded “yes, integrated graphics will de-emphasize discrete,” and two-thirds of those added the caveat that the high-end market will exist in perpetuity. Meanwhile, 60% of those who responded "no" gave that same addendum. Put it all together and roughly 77% are effectively saying yes, integrated graphics can replace discrete graphics everywhere except the high-end space. Normalized to that caveat, compare this to 16% of the Graphics Card Survey's participants. That is a pretty big difference.
A lot of people are going to write this off because, in their minds, the motherboard business has more to gain from a potentially powerful hybrid. This isn’t really true. A video card user will always have a motherboard, but a motherboard user doesn’t necessarily need a video card, provided there is sufficient IGP performance. Remember that Sandy Bridge and Llano should come armed with integrated graphics across the entire processor lineup. If anything, this means that there is nothing to be gained in the number of motherboards sold. In this scenario, the company that makes both video cards and motherboards actually loses out on video card sales.
So what is the real reason behind the difference in opinions? There are a couple of factors at play we should consider:
- In order to design motherboards and get them manufactured on time, motherboard makers have access to technology at a much earlier stage for qualification purposes. For example, it looks like they had Sandy Bridge samples almost a full year before we were able to see a demo.
- We are asking about the future, not just Sandy Bridge and Llano. Motherboard makers are also privy to Intel's and AMD's long-term roadmaps.
If we are talking about the long term (say five years), it is possible that hybrids could corner the graphics market on everything but the $150+ range. The people that are most likely to be in the know (outside the walls of Intel and AMD) certainly think that this is the eventual end game. Intel already has a strong hold on over half of the graphics market, which basically translates into a domination of the integrated graphics space. AMD’s design looks more likely to deliver something similar to what a discrete solution might offer (which makes sense, given the core's pedigree). So, the fight for the mainstream market is going to be very competitive.
We aren’t discounting Nvidia, but it is hard to have a discussion on Nvidia in the context of hybrids because there is so much uncertainty. What is the company's future? We felt that deserved a question of its own, so read on.
Question: One year from now, do you continue to see Nvidia active in designing chipsets, or will the company focus on its core business (discrete graphics solutions)?
- Currently, Nvidia can only provide entry-level AMD chipsets, and they have always faced a product shortage. Ultimately, we believe Nvidia's motherboard chipset product will become an added service item for most vendors--i.e. 3D applications.
- Nvidia does not get as much profit from UMA solutions as it does from discrete VGA.
- We really don't know…
- It seems obvious Nvidia will not be active in the MCP business. But I suppose they will not and cannot count on discrete graphics alone; maybe they will drive toward applications relating to 3D solutions.
The current situation makes everyone a bit unsure about Nvidia’s future. While Nvidia is nowhere near out of the fight, the “loss” of its chipset business gives the impression that its revenue options outside of the graphics world are shrinking. The path Nvidia is on (at least on the desktop) seems limited to discrete graphics. This is reflected in what we are hearing from our sources in the motherboard business. Nvidia is keeping tight-lipped, since even its traditional motherboard partners are seemingly in the dark. The comment that best sums this up comes from one Nvidia technology partner: “I suppose they will not and cannot count on discrete graphics alone.”
About half feel that Nvidia is going to have no choice but to refocus on its graphics business. There are some out there who feel Nvidia may win its lawsuit and get the all-clear to develop chipsets for Intel's Core ix-series CPUs, but a third of our participants think there is going to be a different end game. Oddly enough, someone even brought up the idea of a merger. Another cited Nvidia’s focus on 3D entertainment, which our readers feel is more of a gimmick due to the price tag.
The situation with Nvidia and Core ix-based chipsets is basically written in stone until the 2009 chipset lawsuit gets a judgment later this year. For the moment, Nvidia's chipset business is forcibly idle, and there is no mention of its chipsets in any roadmap we've seen. Even so, we're unsure whether an Nvidia chipset for Intel CPUs makes sense anymore. At one time, superior memory controller performance and a relatively badass audio subsystem were real reasons to consider nForce as an alternative. But as integration pushes more functionality into the CPU itself, any third-party chipset vendor's chances to differentiate diminish substantially.
Rewind the clock a few years and the main reason Nvidia was able to acquire a chipset license in the first place was AMD’s CrossFire threat. It doesn’t seem to be in Intel’s interest to give up any of that reach now that it has its own graphics "solution," and now that Nvidia’s CUDA threatens to marginalize the performance delivered by its CPUs. Now that we all know SLI is simply a licensing matter, not something dependent on specific hardware, Intel's board partners are able to cover CrossFire and SLI on very enthusiast-friendly X58- and P55-based platforms.
Nvidia, meanwhile, proclaims it has shifted its focus to its Tegra processors, and that is by no means a bluff. Nvidia’s CEO recently reiterated this focus, as the company sees ARM as a huge growth opportunity. If you look at the financial statements, R&D has been given a big budget increase, despite a drop in the company’s year-over-year sales. We have mixed feelings here, if only because the ARM market, though loaded with potential, is dominated by Qualcomm and Texas Instruments. Nvidia’s Android-based tablet demos generated a lot of buzz, but it’s still not clear a product in that vein can seriously compete with Apple’s iPad.
The recent FTC settlement only adds more drama to the situation. The settlement effectively solidifies the PCI Express standard, as Intel must now provide bus support for another six years. However, this is as much a protection for AMD’s discrete graphics business as it is for Nvidia’s. The settlement in no way impacts the current lawsuit with Intel. Recently, the company issued the following statement regarding the FTC settlement: “Nvidia supports the FTC's action to address Intel's continuing global anticompetitive conduct. Any steps that lead to a more competitive environment for our industry are good for the consumer. We look forward to Intel's actions being examined further by the Delaware courts later this year, when our lawsuit against the company is heard.” Obviously, Nvidia wants to keep the chipset business if it can.
In Q2, Nvidia issued an earnings warning that lowered its outlook by 16%. This came as a surprise to many, if only because Intel and AMD both posted strong results. It is just another indicator of how dependent Nvidia has become on sales of its high-end cards. As discretionary spending decreased, AMD reaped the benefits of its timely Radeon HD 5000-series cards. The fact that Nvidia is only now releasing its DX11 Fermi-based cards for the mainstream and entry-level market segments means it has some catching up to do.
Question: The success of hybrid CPU/GPU designs like Sandy Bridge and Llano is closely tied to GPGPU programming. In the last major tech cycle, system integrators and consumers successfully adopted x86-64 processors and operating systems. Yet, potential benefits have been delayed because programmers, even today, are slow to adopt 64-bit programming. Do you think Intel and AMD can cause a major shift towards general purpose GPU programming within a year of their product launches?
- AMD will introduce its integrated graphics-equipped CPU in 2011. Intel will do so even earlier. But users still need more time to be educated about the GPU; only then can they really demand it. Consumers still think discrete graphics provide more performance and functionality.
- The 64-bit transition has been very slow and gradual. Software is always behind hardware, so we don't believe GPGPU will see any quantum leaps in the next year.
- Honestly, we have no clue. We are at the mercy of the big three: Intel, AMD, and Nvidia.
- Typically, our collaboration focuses mainly on implementing compatible hardware designs. We help drive demand through marketing, but driving the direction of demand is not within our scope.
- Hardware is always faster than software. I think that is what we are seeing with GPGPU [programming].
- I'm not really sure that it is necessarily slow. We are seeing more 64-bit programming about two years after full x86-64 adoption. If GPGPU [programming] follows suit, we should see more in 2012 or perhaps 2013.
The rise of hybrid processors brings new possibilities. Even on a system relying on integrated graphics, it is possible to see enhanced performance through the addition of some GPGPU programming. Specific tasks can be optimized to run on the graphics core, and even though the systems with the most to gain will be those with powerful discrete graphics solutions, the additional processing power can be a boon in environments that benefit most pointedly from parallelism.
By design, our question was meant to solicit opinions on the pace of GPGPU programming adoption. Lately, progress seems to have ground to a halt (or at least, we're not hearing as much momentum behind apps optimized for CUDA and DirectCompute). Frankly, it is frustrating to watch. Reading through the comments on our last survey, readers seem to agree. We are at a point where we have a lot of compute power on hand, but much of the time, we aren't using it.
We also mentioned in the last survey how frustrating it was to see the slow pick-up of 64-bit programming. If you recall the emergence of 64-bit as a feature, both Intel and AMD were actively leveraging that capability as a differentiating feature. Fast forward to today. We are still lacking a concerted effort by the software development community to adopt 64-bit programming--perhaps due to a perceived lack of benefit. We still don't have a 64-bit version of Firefox, and there is no ETA on a 64-bit Flash plug-in. While the benefits of 64-bit in these two scenarios may in fact be negligible, it shows how slow the software community has been in contrast to what today’s hardware provides. Only recently did Adobe update its suite of apps to support a 64-bit architecture, and we’ve already shown the effect of that decision to be massive.
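As a quick illustration of why the payoff varies so much from one application to the next, here is a trivial sketch of our own (not anything Adobe or Mozilla ships) showing what a 64-bit build actually buys: pointers wide enough to address a working set beyond the 32-bit 4 GB ceiling. A browser rarely needs that headroom; a video editor with a large working set does.

```cpp
// Illustrative sketch only. A 32-bit process is capped at a 4 GB address
// space (often only 2-3 GB usable), so the allocation below simply fails
// there; a 64-bit build of the same code keeps the whole working set in RAM.
#include <cstdio>
#include <exception>
#include <vector>

int main() {
    std::printf("pointer width: %zu bits\n", sizeof(void*) * 8);
    try {
        // Roughly 6 GB of floats -- routine for a media editor's working set.
        std::vector<float> frames(1536ull * 1024 * 1024);
        std::printf("allocated %zu MB\n", (frames.size() * sizeof(float)) >> 20);
    } catch (const std::exception&) {
        // bad_alloc or length_error on a 32-bit build (or a RAM-starved box)
        std::printf("allocation failed -- likely a 32-bit build\n");
    }
    return 0;
}
```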
One of the key problems has been the lack of a standardized programming layer. Nvidia went with its Compute Unified Device Architecture (CUDA). AMD went with Stream. And Microsoft sits in the middle with DirectCompute--an attempt to standardize general-purpose GPU computing across dissimilar architectures. Much like the 64-bit extension war, this fragmentation has delayed GPGPU programming adoption. CUDA was a fairly robust interface from the get-go; if you wanted to do any sort of scientific computational work, Nvidia's CUDA was the library to use. It set the standard. Unfortunately, as with many technologies the PC industry keeps proprietary, that has also limited CUDA's appeal beyond specialized scientific applications, where the software is so niche that it can demand a certain piece of hardware. That's not the case with a transcoding app or a playback utility. Even Adobe seems to have made a brave move in limiting its Mercury Playback Engine to a handful of CUDA-based GeForce and Quadro cards.
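For contrast with the threaded CPU loop sketched earlier, here is roughly what the same SAXPY looks like through CUDA. This is our own hedged sketch, not production code, and it is the lock-in problem in miniature: it builds only with Nvidia's toolkit and runs only on CUDA-capable hardware. An OpenCL or DirectCompute version would be structured similarly (a kernel plus host-side setup) but could, in principle, target AMD and eventually Intel graphics as well.

```cuda
// Illustrative sketch only. The same y = a*x + y loop as a CUDA kernel.
// It compiles with Nvidia's nvcc and runs only on CUDA-capable GeForce,
// Quadro, or Tesla parts -- the vendor lock-in discussed above.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one array element per GPU thread
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                          // 1M elements
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float* dx; float* dy;
    cudaMalloc(reinterpret_cast<void**>(&dx), n * sizeof(float));
    cudaMalloc(reinterpret_cast<void**>(&dy), n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);

    std::printf("y[0] = %.1f (expected 4.0)\n", hy[0]);
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```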
Nvidia no doubt wants to keep stressing the GPGPU capabilities rolled up into its Fermi architecture. It even hired the man (Dr. Mark Harris) who coined the term GPGPU, which stands for "General Purpose computation on Graphics Processing Units." Unfortunately, mainstream adoption isn't going to happen without support from Intel and AMD, which probably have the greatest ability to drive support for DirectCompute and OpenCL through their large development budgets.
We have been playing with some of the CUDA framework and would love to see more mainstream adoption, but we understand the lack of progress. Looking at the big picture, a software developer has to justify months (maybe even years) of extra programming in CUDA to get some of those GPGPU enhancements. And even then, the gains are going to depend on the application.
A single GPGPU programming framework would do a lot for adoption, since it would let developers target any properly-enabled graphics card, not just one from Nvidia. Again, this makes much more sense in the context of broad adoption. For the moment, CUDA remains the best solution if you have a lot of money, a very specific task able to benefit from parallelism, and the resources to develop with GPGPU in mind. Personally, we are enjoying Jacket for MATLAB. OpenCL and DirectCompute come close, but both give up lower-level hooks into the architecture in favor of compatibility.
Intel and AMD both need to get with the program--particularly AMD. Its much-hyped APUs are right around the corner, and it unquestionably has the advantage with regard to graphics. Intel's solution, at first blush, looks more like an evolutionary afterthought than anything that'll be capable of augmenting its processors. And to be frank, Intel's CPUs are its first priority.
Question: CPU/GPU hybrid designs like Sandy Bridge and Llano potentially mitigate the need for a separate graphics card. Historically, integrated graphics have been inadequate for everything above entry-level desktops. Do you think the integrated graphics from the first generation of CPU/GPU hybrids are powerful enough to drive workstations and high-end desktops?
- High-end gamers will still require a more powerful 3D experience, which is difficult for the CPU/GPU hybrid architecture.
- The performance of a first-generation integrated graphics platform is not powerful enough. It might be sufficient for Web browsing, Flash-based games, and other tasks in a simple user environment. However, its graphics performance is not suitable for gaming, 3D graphics, or HD video playback. Some people still don't even know about this new technology, so AMD and Intel need us to educate their customers about its intended uses.
- This question is really hard to answer. It depends on the game providers. If I were Nvidia, I would work with game providers to create a superior game that requires a discrete graphics card. Besides, Intel and AMD will continue improving their onboard graphics, so this is a seesaw battle.
- Historically, fully-integrated solutions have been powerful enough for general desktop use, but not for workstations and high-performance systems. I think those high-end users will still prefer discrete products that give better performance and are easier to upgrade.
- In the future, discrete graphics cards will be cornered into the extreme segment only.
This question was designed to mirror a similar question that we were asked in our graphics card survey. However, at the last minute, we changed it a bit. Instead of asking if the hybrid processors were “powerful enough to replace low-end to mid-range discrete graphic solutions,” we asked if they were “powerful enough to drive workstations and high-end desktops.” Admittedly, the workloads applied to workstations and high-end desktops can be quite large, so we probably should have asked two separate questions. Yet, our respondents seemed to understand that we were trying to gauge the performance that first-generation hybrids could deliver.
We intentionally made the question less loaded for the VGA-oriented survey. The motherboard teams have very little to fear from a struggling VGA division, so there was no need to pull any punches here. It was a bit surprising that we got similar responses back, though, especially when you consider that the graphics card and motherboard divisions largely work independently of one another. The only people with a more holistic picture are further up the corporate ladder. However, more than half of the respondents in our VGA survey work for graphics card-exclusive companies, so they don’t even have a motherboard team to converse with.
When it comes to workstations, it is possible to offload certain tasks, like transcoding, onto these hybrids. However, that is going to depend on both Intel and AMD locking down driver support. We're not so worried about this in AMD's case, but Intel has priors.
As it pertains to discrete solutions in the $150+ range, there is no way the first generation of hybrids can provide the performance necessary to compete. Looking at the roadmaps beyond 2011, we’re still skeptical because the performance demands in the mid-range and high-end market segments increase each time the fastest hardware is refreshed.
In essence, the high-end leads the way when it comes to new features and capabilities. We got DirectX 11 from flagship parts, then everything trickled down. Enthusiasts want that early access to the latest and greatest, and they're willing to pay for it. Processor-based graphics can't offer the same fix--nor will it ever be able to. A CPU with onboard graphics is going to evolve much more slowly because there are other subsystems in play.
Don't believe us? Look at what memory controller integration did to chipsets. Before AMD's Athlon 64, Intel, AMD, Nvidia, VIA, and SiS could all differentiate their core logic by improving memory performance. Controllers changed every product generation, and you'd see performance improvements in rapid succession, each new chipset adding support for the next-fastest memory standard. Once that memory controller migrated onto the CPU, the reason to even consider an nForce chipset nearly dried up (save SLI support). But notice that AMD couldn't make changes to that controller as often. The pickup of DDR2 and DDR3 was much slower as a result; specifically, Intel was quicker to the DDR2 punch than AMD. Fortunately, AMD's onboard controller delivered enough performance that a faster migration wasn't necessary.
That won't be the case with onboard graphics, though. Now we'll have integrated GPUs set in stone for extended periods, delivering middling performance--so the delay between generations will be more painfully felt. And that's why the vendors selling discrete graphics will continue to excel at the high end. Hell, Intel isn't even supporting DirectX 11 with Sandy Bridge. How long will it be before we see Intel make that jump? Oh, we know. You're saying, "Who cares in the entry-level space?" Indeed, there will be a contingent of business-oriented folks who do just fine with DX10 and fixed-function video decoding. There will even be gamers who might have previously bought $75 graphics cards who forgo the add-in board. But even with that encroachment on the entry-level space, we simply don't see hybrid processor architectures evolving quickly enough to keep up with graphics development, and our respondents seem to concur (albeit in their own ways).