Tuesday, May 31, 2011
OCZ has been at the forefront of each generation of SandForce SSD release since the debut of the SF-1500 based Vertex Limited Edition. More recently the Vertex 3 was the first client SSD to use SandForce's SF-2281 controller.
Many of you have written me asking if the Vertex 3 is worth the additional cost over the Vertex 2. Given that you can pick up a 120GB Vertex 2 for $210 ($180 after rebate), and a 120GB Vertex 3 will set you back $300 flat it's tough to recommend the latter despite the performance improvements. If you don't have a 6Gbps platform (e.g. Intel 6-series, AMD 8-series) the Vertex 2 vs. Vertex 3 decision is a little easier to make, otherwise the newer, faster Vertex 3 is quite tempting.
There's another issue holding users back from the Vertex 3: capacity. The Vertex 3 is available in 120, 240 and 480GB versions, there is no 60GB model. If you're on a budget or like to plan frequent but rational upgrades, the Vertex 3 can be a tough sell.
Enter the Agility 3, OCZ's mainstream SF-2281 drive.
Architecturally the Agility 3 is identical to the Vertex 3. You get the same controller running similar firmware, and as a result the two drives post similar peak performance specs (note the use of the word peak):
OCZ SF-2200 Lineup
Specs (6Gbps) Agility 3 60GB Agility 3 120GB Vertex 3 120GB Agility 3 240GB Vertex 3 240GB Vertex 3 480GB
Raw NAND Capacity 64GB 128GB 128GB 256GB 256GB 512GB
Spare Area ~6.3% ~12.7% ~12.7% ~12.7% ~12.7% ~12.7%
User Capacity 55.8GB 111.8GB 111.8GB 223.5GB 223.5GB 447.0GB
RAISE No Yes Yes Yes Yes Yes
Number of NAND Devices 8 16 16 16 16 16
Number of die per Device 1 1 1 2 2 4
NAND Type ONFI 1.0 ONFI 1.0 ONFI 2.0 ONFI 1.0 ONFI 2.0 ONFI 2.0
Max Read Up to 525MB/s Up to 525MB/s Up to 550MB/s Up to 525MB/s Up to 550MB/s Up to 530MB/s
Max Write Up to 475MB/s Up to 500MB/s Up to 500MB/s Up to 500MB/s Up to 520MB/s Up to 450MB/s
4KB Random Read 10K IOPS 20K IOPS 20K IOPS 35K IOPS 40K IOPS 50K IOPS
4KB Random Write 50K IOPS 50K IOPS 60K IOPS 45K IOPS 60K IOPS 40K IOPS
MSRP $134.99 $229.99 $249.99 $419.99 $499.99 $1799.99
Street Price ? ? $299.99 ? $559.99 $1799.99
OCZ has started publishing both peak and incompressible write performance data, but only on its product sheets. While peak performance isn't affected, incompressible performance is. Using AS-SSD as a benchmark, OCZ claims the Agility 3 is only able to muster about 200MB/s for peak sequential reads/writes on the 240GB drive - that's less than half the score the Vertex 3 gets in AS-SSD's read test. Our benchmarks, as you'll soon see, confirm the deficit.
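SandForce's compression engine is proprietary and undisclosed, but the underlying principle is easy to demonstrate with any general-purpose compressor (zlib here stands in purely for illustration): how much work the controller can save depends entirely on the data you feed it.

```python
import os
import zlib

# zlib is only a stand-in for SandForce's proprietary compression;
# the point is that compressibility is a property of the data itself.
def ratio(data: bytes) -> float:
    """Compressed size as a fraction of original size."""
    return len(zlib.compress(data)) / len(data)

compressible = b"The quick brown fox jumps over the lazy dog. " * 1000
incompressible = os.urandom(45_000)  # random bytes model already-compressed data

print(f"text:   {ratio(compressible):.3f}")    # far below 1.0 -> SandForce's best case
print(f"random: {ratio(incompressible):.3f}")  # ~1.0 -> AS-SSD-style worst case
```

AS-SSD writes incompressible data by design, which is exactly why it exposes the gap between the Agility 3 and Vertex 3 that peak (compressible) numbers hide.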
If it's not the controller causing this, and it's not the firmware - then it's the NAND. The Agility 3 (and Solid 3) both use asynchronous NAND. What does that mean? Let's find out.
Asynchronous NAND: An ONFi History Lesson
It takes 50µs to read 8KB from a 25nm Intel NAND die. That works out to a staggering 156MB/s, from a single NAND die. Even the old 50nm stuff Intel used in the first X25-M could pull 4KB in 50µs or ~78MB/s. The original X25-M had 10 channels of NAND, giving it the ability to push nearly 800MB/s of data. Of course we never saw such speeds; it's one thing to read a few KB of data from a NAND array and dump it into a register, but another thing entirely to transfer that data over an interface to the host controller.
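If you want to sanity check those figures, the arithmetic is just page size divided by array read time:

```python
# Peak read throughput of a single NAND die: page size / array read time.
def die_mbps(page_bytes: int, t_read_s: float) -> float:
    return page_bytes / t_read_s / (1024 ** 2)  # MB/s (binary megabytes)

print(die_mbps(8192, 50e-6))       # 25nm Intel, 8KB page  -> 156.25 MB/s
print(die_mbps(4096, 50e-6))       # 50nm,       4KB page  -> 78.125 MB/s
print(10 * die_mbps(4096, 50e-6))  # X25-M, 10 channels    -> ~781 MB/s
```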
ONFi 1.0 limited NAND performance considerably
Back in 2006 the Open NAND Flash Interface (ONFi) workgroup was formed with the task of defining a standardized interface for NAND Flash. Today, Intel and Micron are the chief supporters of ONFi while Toshiba and Samsung abide by a separate, comparable standard.
As is typically the case, the first standard out of the workgroup featured very limited performance. ONFi 1.0 topped out at 50MB/s, which was clearly the limiting factor in NAND transfer speed (see my example above). The original ONFi spec called for an asynchronous interface, as in one not driven by a clock signal. Most logic these days is synchronous, meaning it operates off of a host clock frequency. Depending on the architecture, all logic within a synchronously clocked system will execute/switch whenever the clock signal goes high, low or both. Asynchronous logic on the other hand looks for a separate signal, similar to a clock, but not widely distributed - more like a simple enable pin. In the asynchronous NAND world this is the role of the RE, WE and CLE (read/write/command-latch enable) signals.
ONFi 2.0 brought the move to source synchronous clocking, as well as double data rate (DDR) operation. Not only were ONFi 2.0 NAND devices tied to a clock frequency, transfers happened on both rising and falling edges of the clock - a similar transition was made in SDRAM over a decade ago. While ONFi 1.0 NAND was good for up to 50MB/s, ONFi 2.0 increased the interface speed to 133MB/s. Present day synchronous ONFi 2.1/2.2 NAND is no longer interface limited as the spec supports 166MB/s and 200MB/s operating modes. Work on ONFi 3.0 is being done now to take the interface up to 400MB/s.
The Agility 3
OCZ sent us a 240GB Agility 3 for review. Inside it looks like this:
You see the SF-2281 controller in its usual spot and to the right of it are eight 25nm Micron NAND devices. Flip the board over and you get another eight.
As always, we look at the part number to tell us what's going on. Micron's part numbers are a little different from Intel's, but the key things to pay attention to here are the 128G (128Gbit packages, 16GB per package) and characters 11 and 14. Character 11 here is an F, which corresponds to 2 die per package (2 x 8GB 25nm die in each NAND device), while character 14 is an A, indicating that this is asynchronous NAND. Until this drive I'd only encountered 25nm synchronous NAND (represented by the letter B), but as with any other silicon device there's always a cost savings to be had if you can sacrifice performance.
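Those two positions are easy to pull out programmatically. This is a hypothetical helper based only on the two fields the text calls out; Micron's full part-number grammar has many more fields, and the 1-die/4-die letters in the table below are assumptions, not something taken from the label on this drive.

```python
# Decode only the two part-number positions discussed above (1-indexed 11 and 14).
DIE_COUNT = {"E": 1, "F": 2, "G": 4}  # F = 2 die per package (from the drive); E/G assumed
NAND_MODE = {"A": "asynchronous", "B": "synchronous"}

def decode(part_number: str) -> dict:
    return {
        "die_per_package": DIE_COUNT.get(part_number[10], "?"),  # character 11
        "interface": NAND_MODE.get(part_number[13], "?"),        # character 14
    }

# Placeholder string with F in position 11 and A in position 14, like the Agility 3's NAND:
print(decode("XXXXXXXXXXFXXA"))  # {'die_per_package': 2, 'interface': 'asynchronous'}
```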
Equipped with asynchronous NAND, the Agility 3's max performance is limited to 50MB/s per channel compared to 200MB/s per channel in the Vertex 3. The Vertex 3 doesn't come close to saturating its per-channel bandwidth so there's a chance that this change won't make much of a difference. To further tilt things in the Agility 3's favor, remember SandForce's controller throws away around 40% of all of your data thanks to its real time compression/deduplication algorithms - further reducing the NAND bandwidth requirements. When a Vertex 3 pushes 500MB/s that's not actual speed to NAND, it's just how fast the SF controller is completing its tasks. In a typical desktop user workload without too much in the way of incompressible data access, the Agility 3 should perform a lot like a Vertex 3.
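A quick back-of-the-envelope check makes the argument concrete. Treating the SF-2281 as an 8-channel controller and using the per-channel interface speeds above (and the ~40% typical data reduction as a rough assumption), you can see why compressible workloads fit within the Agility 3's NAND bandwidth while incompressible ones don't:

```python
# Rough bandwidth budget for the Agility 3's NAND array (figures from the text).
CHANNELS = 8
ASYNC_PER_CHANNEL = 50  # MB/s, ONFi 1.0 asynchronous NAND

def nand_needed(host_mbps: float, reduction: float) -> float:
    """NAND-side bandwidth required for a given host-side write rate."""
    return host_mbps * (1 - reduction)

budget = CHANNELS * ASYNC_PER_CHANNEL             # 400 MB/s raw to NAND

print(nand_needed(500, 0.40) <= budget)  # typical (compressible): 300 <= 400 -> fits
print(nand_needed(500, 0.00) <= budget)  # incompressible: 500 > 400 -> NAND limited
```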
Cost Savings and a 60GB Drive
I mentioned that the only benefit to asynchronous NAND is cost savings. If we go by OCZ's MSRPs the savings don't look too great at 120GB: here the Agility 3 is $229.99 vs. $249.99 for the Vertex 3, according to OCZ. Street pricing tells a different (more expensive) story for the Vertex 3. The 120GB drive is more like $299.99, which would make the Agility 3 (if its MSRP is accurate) a full $70 cheaper. Move to 240GB and the gap likely widens.
With the Agility 3, OCZ is also introducing a 60GB model. SandForce's NAND redundancy technology, RAISE, requires that an entire NAND die be sacrificed and added to the spare area pool so the drive can recover from a NAND failure. At 25nm a single die is 8GB, which means a 64GB drive would lose 1/8 of its capacity to RAISE alone. Subtract another 6.3% of the drive for the standard spare area and you're looking at a pretty high cost per usable gigabyte.
One feature of the SF-2200 firmware however is the ability to disable RAISE. I've never advocated it simply because I like the idea of being able to recover from a failed NAND die in the array, but at the 60GB capacity OCZ felt it was better left turned off (otherwise the drive would have to be sold as a 56GB drive instead).
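The capacity math works out like this. This is a deliberately simplified model that just reproduces the two figures in the text (60GB without RAISE, 56GB with); real spare-area accounting is more involved than a flat percentage:

```python
DIE_GB = 8      # one 25nm MLC NAND die
SPARE = 0.0625  # the standard ~6.3% spare area

def sellable_gb(raw_gb: float, raise_on: bool) -> float:
    """Simplified sellable capacity for a small SF-2281 drive (per the text)."""
    if raise_on:
        # RAISE reserves a full die, so the drive would have to be sold smaller
        return raw_gb - DIE_GB
    # without RAISE, only the standard spare area is withheld
    return raw_gb * (1 - SPARE)

print(sellable_gb(64, raise_on=False))  # 60.0 -> sold as a 60GB drive
print(sellable_gb(64, raise_on=True))   # 56   -> would be a 56GB drive
```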
Entire NAND die failures are pretty rare but it's still possible that one could happen. The 60GB Agility 3, as a result, makes a potential reliability tradeoff for capacity. Personally I'd like to see OCZ offer the option to enable RAISE, although I'm not sure if any user accessible utilities exist that would allow you to do that easily.
Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)
Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011, AS SSD & ATTO
Intel DX58SO (Intel X58)
Intel H67 Motherboard
Intel X58 + Marvell SATA 6Gbps PCIe
Intel chipset drivers + Intel IMSM 8.9
Intel chipset drivers + Intel RST 10.2
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64
The launch of the Nehalem-EX a year ago was pretty spectacular. For the first time in Intel's history, the high-end Xeon did not have any real weakness. Before the Nehalem-EX, the best Xeons trailed the best RISC chips in either RAS, memory bandwidth, or raw processing power. The Nehalem-EX chip was well received in the market: in 2010, Intel's datacenter group reportedly brought in $8.57 billion, an increase of 35% over 2009.
The RISC server vendors have lost a lot of ground to the x86 world. According to IDC's Server Tracker (Q4 2010), the RISC/mainframe market share has halved since 2002, while Intel x86 chips now command almost 60% of the market. Interestingly, AMD grew from a negligible 0.7% to a decent 5.5%.
Only one year later, Intel is upgrading its top Xeon by introducing the Westmere-EX. Shrinking Intel's largest Xeon to 32nm allows it to be clocked slightly higher, gain two extra cores, and add 6MB of L3 cache. At the same time the chip is quite a bit smaller, which makes it cheaper to produce. Unfortunately, the customer does not really benefit from that fact, as the top Xeon has become more expensive. Still, the Nehalem-EX was a popular chip, so it is no surprise that the improved version has persuaded 19 vendors to produce 60 different designs, ranging from two up to 256 sockets.
Of course, this isn't surprising as even mediocre chips like Intel Xeon 7100 series got a lot of system vendor support, a result of Intel's dominant position in the server market. With their latest chip, Intel promises up to 40% better performance at slightly lower power consumption. Considering that the Westmere-EX is the most expensive x86 CPU, it needs to deliver on these promises, on top of providing rich RAS features.
We were able to test Intel's newest QSSC-S4R server, with both "normal" and new "low power" Samsung DIMMs.
Some impressive numbers
The new Xeon can boast some impressive numbers. Thanks to its massive 30MB L3 cache it has even more transistors than Intel's "Tukwila" Itanium: 2.6 billion versus 2 billion. Not that such numbers really matter without the performance and architecture to back them up, but they ably demonstrate the complexity of these server CPUs.
Processor Size and Technology Comparison
CPU Transistor Count (million) Process Die Size (mm²) Cores
Intel Westmere-EX 2600 32 nm 513 10
Intel Nehalem-EX 2300 45 nm 684 8
Intel Dunnington 1900 45 nm 503 6
Intel Nehalem 731 45 nm 265 4
IBM Power 7 1200 45 nm 567 8
AMD Magny-Cours 1808 (2x 904) 45 nm 692 (2x 346) 12
AMD Shanghai 705 45 nm 263 4
A $69.99 deal for a Pantech Crossover smartphone from AT&T with a two-year contract and data plan could encourage U.S. feature-phone users to move to smartphones. While AT&T is targeting first-time buyers, it also may hope to attract consumers from low-cost T-Mobile. That could ease AT&T's purchase of T-Mobile -- or make AT&T more competitive.
AT&T said Tuesday it will launch a new Android-based smartphone on June 5 that will be priced at $69.95 when buyers sign up for a two-year service contract and a minimum monthly data plan. The new Pantech Crossover's surprisingly low price suggests the U.S. mobile market may be on the verge of a major sea change.
According to Gartner, global smartphone shipments accounted for 23.6 percent of mobile handset sales overall in the first quarter -- an increase of 85 percent year on year. The U.S. market, however, may already be approaching the tipping point at which smartphones become the majority device segment, just as laptops eventually did with desktop PCs.
"In the U.S., I expect that [either] 2011 or 2012 will be the year that smartphones outsell feature phones," said Lisa Pierce, an independent analyst at the Strategic Networks Group. "As for the installed base, [it] will take until 2013-2014 for smartphones to [become] the majority of phones used."
Attracting First-Time Buyers
The Crossover sports a slide-out keyboard, a 3.1-inch touchscreen, a 600-MHz processor, and a three-megapixel camera with camcorder. Based on the Android 2.2 platform, the smartphone has access to the latest social-networking apps, games and other software from the Android Market, and also ships with the AllSport GPS app as well as mobile hot-spot support.
Subscribers who sign up for an AT&T tethering plan will be able to connect as many as five Wi-Fi-enabled devices to the new smartphone. What's more, Crossover users with qualifying data plans will gain unlimited access to AT&T's Wi-Fi hot-spot network, the company said.
AT&T is clearly pitching the Crossover at price-conscious first-time buyers who have been waiting for smartphone offerings to become more affordable. For this marketing tactic to work, however, carriers still need to encourage families to share data plans as they currently share voice plans, with small add-ons for each family member, Pierce observed.
"I don't think a surge in smartphone purchases necessarily means that 3G/4G usage will climb in lockstep," Pierce said. "Many parents limit the range of applications available to their children -- and thus also limit their usage of mobile data services."
A Strategic Move
The Crossover's low introductory price suggests AT&T is subsidizing the new smartphone for a strategic reason. Pierce speculates the wireless carrier's goal may be to attract consumers who would naturally gravitate toward T-Mobile. If successful, she said, this could help AT&T accomplish two things.
"On a superficial level, it could appear to take some of the wind out of the sails of merger opponents who say AT&T will kill off -- versus continue to sell -- low-price offers," Pierce explained. "If the merger doesn't go through, AT&T has acquired more share," she said, with T-Mobile emerging as a weaker player with "an even smaller share" of the U.S. mobile market.
In the global market, low-cost cell-phone shipments are expected to remain the dominant force for years to come -- especially in emerging markets where standard mobile phones with basic texting capabilities are available in the $20 to $25 price range.
"For a variety of reasons, U.S. smartphone/mobile data-usage trends play out in just a few countries," Pierce said. "Most countries won't be emulating these trends anytime soon."
Monday, May 30, 2011
Today Intel is holding their annual investors meeting at their Santa Clara headquarters. In true Intel fashion it’s being treated as a big event (ed: it’s so big they ran out of lunch), as this is the company’s primary vehicle for addressing the investors holding their 125 billion dollars in shares; in essence it’s a reprise of their current and future technology plans as a pep-talk for investors. As such it’s not really a technical event, but it’s not uncommon for a few new technical details to shake out during the presentations.
There are a number of presentations throughout the day, including keynotes from Paul Otellini, and presentations from a number of Intel groups including the architecture group, the data center group, and the manufacturing group. If something is going to shake out it’s bound to be the architecture group, so this is where we’ll start.
A big part of the architecture group’s discussion focused on Atom/SoC. The 32nm Medfield Atom is due this year, so Intel has been laying out their plans for what they’re going to be doing with Medfield. Unsurprisingly, a big push with Medfield is to break into the SoC space in a way that Moorestown could not. Intel never managed any major design wins for smartphones with Moorestown, which is something they want to correct with Medfield. To that extent Intel has been showing off Medfield concept phones to show investors that it’s a viable product and to try to drum up support.
Intel's Medfield Concept Phone
Intel is also spending some effort to dispel the idea that they can’t be competitive on a power consumption basis; in fact for the amount of effort they put into that message and the number of times they repeated it, they seem to be seriously concerned about being perceived as uncompetitive. Intel did some power consumption testing on Medfield and put together a slide showing their findings that Medfield is competitive with some current leading phones, but as always take this with a pinch of salt. Ultimately this is a comparison of 32nm Medfield with 4Xnm ARM SoCs, so it’s only applicable so long as Intel’s still ahead of the ARM producers on manufacturing technology.
Moving on, one thing Intel has been dealing with as Atom has evolved is how to consolidate all the disparate parts of a system onto a SoC, given the wide variety of uses for a SoC. With discrete components manufacturers could simply build a device out of the parts necessary for the features they need, but with Atom SoCs much of that gets shifted onto Intel. For Intel this means they will be focusing on producing a wider variety of SoCs, based on building up individual SoC designs on a modular basis. Intel isn’t going in-depth on how many specific SoC variants they’re planning on offering, but our impression is that there will be many variants, similar to how Intel offers a number of different desktop/laptop/server CPUs based on a common architecture.
Finally, Intel published a new generalized roadmap for Atom through 2014. Unfortunately they aren’t going into any significant detail on architecture here – while Silvermont is named, nothing is confirmed besides the name and manufacturing process – but it’s a start, and it ends with a shocker. We will see Silvermont in 2013 on Intel’s 22nm process, likely hand-in-hand with Intel’s aforementioned plans for additional SoC variations.
Far more interesting however is that Intel didn’t stop with Silvermont on their Atom roadmap. Intel’s roadmap goes out to 2014 and includes Silvermont’s successor: Airmont. We know even less about Airmont than we do Silvermont, but a good guess would be that it’s the tick in Intel’s tick-tock cadence for Atom. The biggest news here is that with a move to tick-tock for Atom, Intel is finally accelerating the production of Atom parts on their newer fab processes. Currently Atom processors are a year or more behind Core processors for using a new process, and even with Silvermont that’s still going to be the case. But for Airmont that window is shrinking: Airmont will be released on Intel’s forthcoming 14nm process in 2014, the same year as their respective Core processor. Intel hasn’t specified when in 2014 this will be, and it’s likely it will still be months after 14nm launches for Core processors, but nevertheless it’s much sooner than it has been before.
By accelerating their production of Atom on new processes, this should allow Intel to capitalize on their manufacturing advantages over the rest of the fabs. With Intel’s existing Atom schedule, they only have a year or less before other fabs catch up with them, so by putting Atoms on new processes sooner, they increase that lead time. So far Intel’s ARM SoC competitors have really only had to deal with Intel’s threats on an architectural level, so having Intel challenge them at a manufacturing level too will make Intel a much more threatening competitor.
Meanwhile, the rest of the architecture group’s presentation was largely a catch-all for all of Intel’s consumer technologies. Much of the talk focused on where Intel plans to be in the next few years, based on where they expect to be thanks to their newly announced 22nm process. Intel considers their 22nm process to be a significant advantage for their products, so a great deal of their plans in the consumer space involve exploiting it in some manner or another.
Ivy Bridge, Intel’s first 22nm product, is being shown off in a few sample systems with Intel reiterating that it will be launching at the beginning of next year – we’d guess at CES. Longer term, Intel wants to get laptops closer to netbooks/tablets in terms of size and battery life, so that they can push 10 hours on a laptop (something the C2D-based Macbook Air can already get very close to). The catalyst for this will be Haswell, Intel’s new microarchitecture on their 22nm process scheduled for 2013.
Intel also used the occasion to show off a couple new technologies that they’re working on for Ivy Bridge generation computers. We’ve heard the name Fast Flash Standby mentioned before, but as far as we know this was the first time it has been demoed. In a nutshell, Fast Flash Standby is hibernating to SSDs, another product Intel has a significant interest in. The basis for Fast Flash Standby is that while going into sleep is fast, it requires leaving the RAM powered up to hold its contents, which is why sleep is only good for a few days of standby versus weeks for hibernation. Hibernating to a SSD, particularly one with a very high sequential read and write throughput, allows hibernation to take place much quicker and to resume much sooner. Intel is doing more here than just writing a hibernation file to a SSD, but the concept is similar.
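The appeal is easy to quantify with a rough model: resume time is dominated by reading the hibernation image (roughly the RAM in use) back sequentially. This deliberately ignores image compression and other tricks Intel may be layering on top, so treat the numbers as illustrative only:

```python
# Rough resume-time model: hibernation image (~RAM in use) / sequential read speed.
def resume_seconds(ram_gb: float, seq_read_mbps: float) -> float:
    return ram_gb * 1024 / seq_read_mbps

print(f"{resume_seconds(8, 100):.1f}s")  # hard drive class (~100MB/s): ~82s
print(f"{resume_seconds(8, 500):.1f}s")  # SF-2281-class SSD (~500MB/s): ~16s
```

The gap is why pairing the feature with a high sequential throughput SSD matters: the faster the drive, the closer hibernation gets to feeling like sleep.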
Longer term Intel is looking at what kind of markets they want to go after, and what architectures they need to reach them. Intel is talking – albeit nebulously – about a new 10-20W notebook design target to sit right above their sub-10W target for Atom/SoC. Currently Intel offers CULV Sandy Bridge processors in the 10-20W range, but Intel appears to want to go beyond CULV with this new target. Whether this is a bulked up Atom, or a further trimmed IB/Haswell/Skylake remains to be seen. Intel is throwing around some performance targets however: they’re looking to improve iGPU performance by 12x over today’s performance in that 10-20W envelope.
NVIDIA’s GF104 and GF114 GPUs have been a solid success for the company so far. 10 months after GF104 launched the GTX 460 series, NVIDIA has slowly been supplementing and replacing their former $200 king. In January we saw the launch of the GF114 based GTX 560 Ti, which gave us our first look at what a fully enabled GF1x4 GPU could do. However the GTX 560 Ti was positioned above the GTX 460 series in both performance and price, so it was more an addition to their lineup than a replacement for GTX 460.
With each GF11x GPU effectively being a half-step above its GF10x predecessor, NVIDIA’s replacement strategy has been to split a 400 series card’s original market between two GF11x GPUs. For the GTX 460, on the low-end this was partially split off into the GTX 550 Ti, which came fairly close to the GTX 460 768MB’s performance. The GTX 460 1GB has remained in place however, and today NVIDIA is finally starting to change that with the GeForce GTX 560. Based upon the same GF114 GPU as the GTX 560 Ti, the GTX 560 will be the GTX 460 1GB’s eventual high-end successor and NVIDIA’s new $200 card.
GTX 570 GTX 560 Ti GTX 560 GTX 460 1GB
Stream Processors 480 384 336 336
Texture Address / Filtering 60/60 64/64 56/56 56/56
ROPs 40 32 32 32
Core Clock 732MHz 822MHz >=810MHz 675MHz
Shader Clock 1464MHz 1644MHz >=1620MHz 1350MHz
Memory Clock 950MHz (3800MHz data rate) GDDR5 1002MHz (4008MHz data rate) GDDR5 >=1001MHz (4004MHz data rate) GDDR5 900MHz (3.6GHz data rate) GDDR5
Memory Bus Width 320-bit 256-bit 256-bit 256-bit
Frame Buffer 1.25GB 1GB 1GB 1GB
FP64 1/8 FP32 1/12 FP32 1/12 FP32 1/12 FP32
Transistor Count 3B 1.95B 1.95B 1.95B
Manufacturing Process TSMC 40nm TSMC 40nm TSMC 40nm TSMC 40nm
Price Point $329 ~$239 ~$199 ~$160
The GTX 560 is basically a higher clocked version of the GTX 460 1GB. The GTX 460 used a cut-down configuration of the GF104, and GTX 560 will be doing the same with GF114. As a result both cards have the same 336 SPs, 7 SMs, 32 ROPs, 512KB of L2 cache, and 1GB of GDDR5 on a 256-bit memory bus. In terms of performance the deciding factor between the two will be the clockspeed, and in terms of power consumption the main factors will be a combination of clockspeed, voltage, and GF114’s transistor leakage improvements over GF104. All told, NVIDIA’s base configuration for a GTX 560 puts the card at 810MHz for the core clock and 4004MHz (data rate) for the memory clock, which compared to the reference GTX 460 1GB is 135MHz (20%) faster for the core clock and 404MHz (11%) faster for the memory clock. NVIDIA puts the TDP at 150W, which is 10W under the GTX 460 1GB.
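The uplifts quoted above fall straight out of the spec table, as does the memory bandwidth GDDR5's data rate implies on a 256-bit bus:

```python
# Recompute the clock uplifts quoted in the text from the spec table.
def uplift(new: float, old: float) -> float:
    return (new - old) / old * 100

print(f"core:   +{810 - 675}MHz ({uplift(810, 675):.0f}%)")      # +135MHz (20%)
print(f"memory: +{4004 - 3600}MHz ({uplift(4004, 3600):.0f}%)")  # +404MHz (11%)

# GDDR5 memory bandwidth = data rate (transfers/s) x bus width in bytes
bandwidth_gbps = 4004e6 * (256 // 8) / 1e9
print(f"{bandwidth_gbps:.1f} GB/s")  # ~128.1 GB/s at the base 4004MHz data rate
```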
With that said, this launch is going to be more chaotic than usual for an NVIDIA mid-range product launch. While NVIDIA and AMD both encourage their partners to differentiate their mid-range cards based on a number of factors including factory overclocks and the cooler used, these products are always launched alongside a reference card. However the GTX 560 is going to be a reference-less launch: NVIDIA is not doing a retail reference design for the GTX 560. This is a fairly common situation for the low-end, where we’ll often test a reference design that is never used for retail cards, but it’s quite unusual to not have a reference design for a mid-range card.
As a result, in lieu of a reference card to refer to we have a bit of chaos in terms of the specs of the cards launching today. As long as you’re willing to spend a bit more on power, GF114 clocks really well, something we’ve seen in the past on the GTX 560 Ti. This has led to partners launching a number of factory overclocked GTX 560 Ti cards and few if any reference clocked cards, as the retail market does not have the stringent power requirements of the OEM market. So while OEMs have been using reference clocked cards for the lowest power consumption, most retail cards are overclocked. Here are the clocks we're seeing with the GTX 560 launch lineup.
GeForce GTX 560 Launch Card List
Card Core Clock Memory Clock
ASUS GeForce GTX 560 Top 925 MHz 4200 MHz
ASUS GeForce GTX 560 OC 850 MHz 4200 MHz
Palit GeForce GTX 560 SP 900 MHz 4080 MHz
MSI GeForce GTX 560 Twin FrozrII OC 870 MHz 4080 MHz
Zotac GeForce GTX 560 AMP! 950 MHz 4400 MHz
KFA2 GeForce GTX 560 EXOC 900 MHz 4080 MHz
Sparkle GeForce GTX 560 Calibre 900 MHz 4488 MHz
EVGA GeForce GTX 560 SC 850 MHz 4104 MHz
Galaxy GeForce GTX 560 GC 900 MHz 4004 MHz
This is why NVIDIA has decided to forgo a reference card altogether, and is leaving both card designs and clocks up to their partners. As a result, we expect every GTX 560 we’ll see on the retail market will have some kind of a factory overclock, and all of them will be using a custom design. Clocks will be all over the place, while designs are largely recycled GTX 460/GTX 560 Ti designs. This means we’ll see a variety of cards, but there’s a lack of anything we can point to as a baseline. Reference clocked cards may show up in the market, but even NVIDIA is unsure of it at this time. The list of retail cards that NVIDIA has given us has a range of core clocks between 850MHz and 950MHz, meaning the performance of some of these cards is going to be noticeably different from the others. Our testing methodology has changed some as a result, which we’ll get to in depth in our testing methodology section.
With a wide variety of GTX 560 card designs and clocks, there’s also going to be a variety of prices. The MSRP for the GTX 560 is $199, as NVIDIA’s primary target for this card is the lucrative $200 market. However with factory overclocks in excess of 125MHz, NVIDIA’s partners are also using these cards to fill in the gap between the GTX 560 and the GTX 560 Ti. So the slower 850MHz-900MHz cards will be around $199, while the fastest cards will be closer to $220-$230. Case in point, the card we’re testing today is the ASUS GTX 560 DirectCU II Top, ASUS’s highest clocked card. While their 850MHz OC card will be $199, the Top will be at $219.
For the time being NVIDIA won’t have a ton of competition from AMD right at $200. With the exception of an errant card now and then, Radeon HD 6950 prices are normally $220+; meanwhile Radeon HD 6870 prices are between $170 and $220, with the bulk of those cards being well under $200. So for the slower GTX 560s their closest competition will be factory overclocked 6870s and factory overclocked GTX 460s, the latter of which are expected to persist for at least a few more months. Meanwhile for the faster GTX 560s the competition will be cheap GTX 560 Tis and potentially the 1GB 6950. The mid-range market is still competitive, but for the moment NVIDIA is the only one with a card specifically aligned for $199.
May 2011 Video Card Prices
NVIDIA Price AMD
GeForce GTX 590
$700 Radeon HD 6990
GeForce GTX 580
GeForce GTX 570
$320 Radeon HD 6970
$260 Radeon HD 6950 2GB
GeForce GTX 560 Ti
$230 Radeon HD 6950 1GB
GeForce GTX 560
$180 Radeon HD 6870
GeForce GTX 460 1GB
$160 Radeon HD 6850
GeForce GTX 460 768MB
$150 Radeon HD 6790
GeForce GTX 550 Ti
Finally, I’d like to once again make note of the naming choice of a video card. I’m beginning to sound like a broken record here and I know it, but video card naming this last year has been frustrating. NVIDIA has a prefix (GTX), a model number (560), and a suffix (Ti), except when they don’t have a suffix. With the existence of a prefix and a model number, a suffix was already superfluous, but it’s particularly problematic when some cards have a suffix and some don’t. Remember the days of the GeForce 6800 series, and how whenever you wanted to talk about the vanilla 6800, no one could easily tell if you were talking about the series or the non-suffixed card? Well we’re back to those days; GTX 560 is both a series and a specific video card. Suffixes are fine as long as they’re always used, but when they’re not these situations can lead to confusion.
A Mac downloads store has been opened by Amazon.com even as it faces an Apple lawsuit for using the "app store" name for Android apps. Amazon's Mac offerings are a direct challenge to Apple's own Mac App Store, though with fewer offerings. An analyst said Amazon's Mac store was a "surprise," and Apple is likely to watch closely.
Even before a court decides whether Amazon.com can sell apps for Android devices and call it an app store, the biggest U.S. online retailer opened a second front this week by offering downloadable software for Apple's Macintosh computers.
That's a direct challenge to Apple's own Mac App Store, first announced in October at the company's Back To Mac event and opened Jan. 6 with 1,000 offerings, mostly games, compared to about 250 for Amazon's store.
The Mac offerings aren't new but were assembled without fanfare as a Mac Software Downloads section of the site on Thursday, with another section for games, but Amazon avoided calling it an app store -- a phrase Apple wants to trademark.
In March Apple filed suit against Amazon, alleging that its App Store for Android software is used illicitly to solicit developers. Amazon insists "app store" is a generic term.
Amazon told news media the "Mac download store features an install-less download process where the customer gets just the product without any unwanted extras, making for faster and easier purchases. Plus, downloads are conveniently backed up in your Games and Software Library where you can download an unlimited number of times for personal use."
The Mac download store's initial offerings include Office Mac for home ($115) and businesses ($199), Aspyr's Call of Duty: Modern Warfare ($24.95), Individual Software's Logo Designer ($19.99), and H&R Block At Home Deluxe Federal and State eFile for 2010 ($31.98). Visitors can "celebrate the grand opening with a $5 discount on game or software products through June 1st."
Amazon, which had more than $34.2 billion in sales last year, will need to give loyal Mac customers an incentive to shop there.
The company kicked off its Android store by offering one free app per day for visitors, regardless of whether they buy anything.
Let the Battle Begin
"It's going to be interesting to see how consumers are going to react to this app store," said Michael Gartenberg of Gartner Research. "Perhaps they'll try to undercut Apple's prices with promotions."
He said the emergence of the Mac download store "came as a little bit of a surprise. There was certainly no advance warning, it just kind of appeared, and now we're seeing Mac apps in direct competition, although they can't sell anything for the iPhone because that's a closed ecosystem."
After reporting more than a million downloads on the first day of the Mac App Store, Apple can probably expect some continued loyalty, but Gartenberg said Amazon's move is bound to be watched closely.
"It's something that if you're Apple, you don't want to ignore; you want to pay attention because Amazon has a lot of credit cards and it's fairly easy to purchase there," he said, referring to cards that award points toward gift cards for the site. "But their music sales haven't taken much business from iTunes. So [Apple] may not lose a lot of sleep, but they probably want to keep an eye on it."
Sunday, May 29, 2011
Today we continue with the second part of our series of 750W power supplies. The Corsair TX V2 is the second 80 Plus Bronze certified PSU with non-modular cables that we're looking at for this range.
Corsair might be a leading manufacturer of RAM modules and SSDs, but power supply quality depends largely on the ODM and their design. The question of who actually builds the PSU is easy to answer: Seasonic is the company behind many Corsair products, and they're definitely a good choice, much better than the CWT platform used in the original TX750. But what about the internal design and components?
Corsair's TX V2 750W includes a power cord for the American power grid, four screws, one Corsair sticker (for your case), and a manual. The latter is more like a warranty agreement than a helping hand. There is an installation guide, but the warranty part shows some interesting limitations. "[Corsair] shall not be liable for any special, incidental, indirect or consequential damages [...] including [...] loss of profits, revenue, or data." So the 5-year warranty won't help you, other than getting a new PSU should the TX750 go belly up. Don't worry, though; these are common terms and are present with nearly every PSU.
While it is not stated directly anywhere, the sticker indicates Corsair is using a single rail for +12V. We'll see later that this is not the case, so we'd like to see Corsair list both 12V rails on the sticker to avoid confusion. The 744W combined power rating also indicates this is another PSU with DC-to-DC VRMs. +3.3V and +5V are rated at 25A each and should be able to provide enough power for HDDs/SSDs and/or other peripheral components. 3A on +5VSB is also relatively strong.
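As a quick sanity check on the sticker's figures (the wattage and amperage values come from the label; the arithmetic below is ours):

```python
# Back-of-the-envelope check of the TX750 V2 label figures.
# 744W of combined +12V power implies this much current on the 12V side:
combined_12v_watts = 744
amps_12v = combined_12v_watts / 12        # 62.0 A total on +12V
# The minor rails are each rated at 25A:
minor_rails_watts = 3.3 * 25 + 5.0 * 25   # roughly 207.5 W for +3.3V and +5V
print(amps_12v, round(minor_rails_watts, 1))
```

62A of 12V current is plenty for a single high-end GPU system, which is the market this unit targets.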
Corsair uses a 140mm ball-bearing fan from Yate Loon. The product number is D14BH-12 and it draws 0.70A. A Sanyo Denki fan would be better, but the price could be a problem, and such fans remain one reason to buy the more expensive Corsair AX instead. Corsair differentiates between the two lines through modifications like this, which helps the TX series reach lower-cost markets.
Today Lenovo brings thin and Sandy Bridge to your desks and your laps. Leaked last month, the Lenovo ThinkPad X1 will make a strong case for itself to corporate road warriors while also packing some features that might appeal to consumers.
Lenovo also has the newly revealed ThinkCentre Edge 91z, which introduces the Edge aesthetic to the ThinkCentre all-in-one (AIO) line. The ThinkCentre's space-saving form factor and mix of desktop and mobile components should appeal to IT-at-home users, and it makes a nice addition to your desk. As review units roll in we will see just how well these measure up against their competition.
We'll start with a look at the Lenovo ThinkPad X1, which we saw leaks of recently. Leaks or no leaks, this ultraportable notebook turns out to pack some surprises in its svelte frame. Measuring 16.5mm at its thinnest (the front edge) and 21.5mm at its thickest (the back edge), the X1 will not compete for thinnest laptop on the market, but it may just be the thinnest with a full voltage Sandy Bridge mobile processor. The 13.3" screen is optimized for travelers, with a pane of Corning's Gorilla Glass covering its TN panel (we confirmed this with Lenovo Product Manager Jason Parrish) and peaking at a reported 350 nits at a resolution of 1366x768. We'll reserve judgment on the panel till we get our hands on it; truth is, in a market bereft of IPS displays, even a 768p TN panel can land near the top of our display charts. In a move that many of you have been asking for on Apple's MacBook Pro line, Lenovo provides both mini-DisplayPort and HDMI for connectivity, along with Intel's Wireless Display technology.
The X1 adopts the chiclet keyboard we first saw in the ThinkPad Edge 13; ThinkPad keyboards have been lauded since before Lenovo acquired the line from IBM, so it's refreshing to see that Lenovo has managed to drastically alter the keyboard's form factor without diminishing its effectiveness. We're also excited to note that Lenovo has brought a backlight to the keyboard while keeping it spill resistant. Carried over from the rest of the line is the Trackpoint "nubbin," with mouse buttons featured just below the space bar, just above an otherwise buttonless trackpad.
Though not a unibody design, the laptop is built around an interior roll cage with a magnesium body, contributing to a rigidity that meets a variety of military specifications. The layer of Gorilla Glass contributes to the device's overall rigidity, particularly in the display portion--an area where ultraportables have often been a bit flimsy. Enclosed within this rigid frame is the battery technology that makes it possible to power a standard voltage i5 without obliterating battery life. The new lithium-ion chemistry Lenovo is using provides higher power density, faster charging, and excellent durability. Lenovo is quoting an 80% charge in just 30 minutes and 5.5 hours of life on a full charge. For all-day on-the-go computing, Lenovo will provide an external slice battery that doubles battery life to over 10 hours.
When it goes on sale today the X1 will pack an i5-2520M processor (dual-core 2.5GHz, 3MB L3) and up to 8GB of DDR3. We'll have to confirm this, but based on materials provided it appears the RAM sits in a single DIMM slot, so the 8GB configuration will likely carry a hefty premium; how this affects performance will also be interesting to see. Before the end of summer we will see i7 quad-core CPUs inside the X1's chassis, easily making it the thinnest quad-core computer we've encountered.
When we first laid hands on Lenovo's Edge 13, we weren't wowed by the aesthetic; hopefully the design will grow on us, as Lenovo intends to bring the Edge aesthetic to more than just a subnotebook. This begins with the ThinkCentre Edge 91z. Slotting in between the entry-level ThinkCentre A-series and the workstation-class M-series AIOs, the 91z brings a curious blend of desktop and mobile components to provide a midrange specification. From the press pictures provided, it seems the key characteristics brought over from the Edge 13 are the large screen bezel and matte black finish to everything aside from the screen. And that screen? It's a 21.5" 1920x1080 LED backlit TN panel. We will once again reserve judgment on the screen quality till we get our hands on it, but it will be hard to compete with a certain IPS-packing AIO that also comes in a 21.5" size.
Quad-core processors will be a fixture on these models, with the options list peaking at the i7-2600S (2.8GHz, 8MB L3, 3.8GHz Max Turbo). Dual-core i3 options will start at $699; the pricing of the quad-core models could play a key role in their competitiveness. The VESA-mountable chassis is designed to provide some user serviceability, so we expect to be able to access the RAM and hard drive without much difficulty.
An optional 80GB mSATA SSD is available, though we were not able to confirm the source of the drive. Lenovo is optimizing its boot times and hopes to hit a sub-10 second target when paired with the SSD. Graphics are driven by either Intel's integrated HD Graphics 3000, or the optional AMD Radeon HD 6650A, a desktop Turks variant sporting 1GB of DDR3. HDMI out is provided for external monitor support along with a treat for those worried about what happens when you want to upgrade from an AIO: video in is provided in the form of a VGA input, giving the 91z an afterlife when the pace of technology leaves the processor and graphics behind. (Some form of digital input would have been preferable, but at least it's something.)
Contrary to its name, the ThinkCentre Edge 91z is not quite on the bleeding edge of computer technology. So what makes this more than just a product reveal? Lenovo will bring several Edge-branded devices to the ThinkCentre line. Small-form-factor and mini-tower computers will be released carrying the Edge design language, most likely targeting the enterprise space. The 91z will be joined by other all-in-one devices, though the previously mentioned A- and M-series computers will maintain their own style. Lenovo sees 21.5" as the sweet spot for AIOs, but we can expect up to a 23" device to join the line at some point, and Lenovo is actively working on bringing touch to its Edge AIOs. HP has had some success with its TouchSmart line using a touch-friendly skin; given Lenovo's traditional focus on enterprise clients, it will be interesting to see what the company does to make touch a more welcome feature in a desktop computer.
There is no need to shell out hundreds of dollars for commercial applications when perfectly good freeware often works as well as or better than the paid alternatives. One example is office suites: LibreOffice is an open-source replacement for Microsoft Office, and it includes some impressive extras, including compatibility with Microsoft's latest file formats.
Your PC is expensive. Windows is expensive. Your software doesn't have to be. Almost every task you need to accomplish can be done with free software that's as good as or better than commercial applications. What are these gems? Read on to learn about a few.
Q: I've tried OpenOffice.org as a replacement for Microsoft Office, but I'm not satisfied. Is there an alternative?
A: Some of the developers of OpenOffice.org, the free, open-source office suite, left the OpenOffice project to create LibreOffice (http://www.libreoffice.org), another worthy contender in the open source office suite arena. Because LibreOffice has its roots in OpenOffice, you'll notice some similarities, but the newer suite offers impressive extras, including compatibility with Microsoft's latest file formats (docx, for example). LibreOffice is also available in many languages. The suite includes a word processor, spreadsheet, presentation tool, drawing package, an equation editor, and even a database. With LibreOffice, you can also export to PDF without installing a full-blown PDF creation package. LibreOffice bears watching, too, for it is backed by Google, Red Hat, and some other heavyweights.
Keep in mind that if you have fairly regular access to the Internet, Microsoft Office is free via its Web Apps suite, which is compatible with all of the latest Office formats. The Microsoft Web Apps offering contains slimmed-down versions of the desktop suite's big three applications: Word, PowerPoint, and Excel.
Q: Are there good, free image editing programs that work with both photographs and vector drawings?
A: Vector-based graphics, for those who might not know, are built upon geometric primitives rather than composed of millions of pixels, as digital photographs are. The latter are typically referred to as raster-based graphics. Because vector-based drawings are mathematically defined, they can be resized without quality degradation. Raster-based drawings (and photographs) look best at the size at which they were created.
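A tiny sketch makes the distinction concrete. The triangle and scale factor below are our own illustration, not tied to any particular package: scaling a vector shape is exact arithmetic on its coordinates, whereas upscaling a raster image has to invent pixels that were never captured.

```python
# A vector shape is just coordinates; scaling multiplies them exactly.
# The triangle and factor here are made-up illustration values.
triangle = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

def scale(points, factor):
    # Scale every point about the origin; no information is lost or invented.
    return [(x * factor, y * factor) for x, y in points]

print(scale(triangle, 2.5))  # exact at any factor, unlike pixel resampling
```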
The top raster-based open source packages are Gimp (http://gimp-win.sourceforge.net) and Paint.net (http://www.getpaint.net), while for vector-based drawings, you'll want to try out Inkscape (http://inkscape.org) and Sodipodi (http://www.sodipodi.com). You'll probably be pleasantly surprised to find that the features in these applications rival those of their commercial counterparts.
Q: I need to upgrade my hard drive to a larger model. Is there a free program that will copy all of the data so that I don't have to reinstall everything?
A: What you're looking for is called disk cloning software, which essentially copies the contents of an entire hard drive, bit for bit, to another, larger hard drive. You will find this feature in a number of commercial programs, but the free Clonezilla (http://clonezilla.org) does a wonderful job, and the price is right.
Clonezilla does not have a graphical user interface, and it does not run under Windows, although it can clone a Windows hard drive as well as one used for just about any other purpose. Download the Clonezilla ISO file (http://clonezilla.org/downloads.php), and burn it to a recordable CD using a tool such as ImgBurn (http://www.imgburn.com) or Active ISO Burner (http://www.ntfs.com/iso-burning.htm). Windows 7, too, has a built-in ISO burning tool. If you run Windows 7, just insert a blank, recordable CD, right-click the ISO file, and select Record.
Once you've created the Clonezilla CD, which is bootable, shut down your computer and connect your new, larger hard drive. Leave in place the drive you wish to clone. With both source and target drives connected to your computer, restart it with the Clonezilla CD inserted into your drive. Clonezilla should load from the CD and begin prompting you for input. Just choose the easy mode, and select drive to drive copying. One caveat: The new drive must be larger than the old drive. When the process is finished, you can remove the old drive and install the new one in its place, and Windows should boot up with the new drive, giving you more space than you had before.
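To make the bit-for-bit idea concrete, here is a minimal sketch of what cloning software does under the hood. Ordinary files with hypothetical names stand in for drives here; Clonezilla itself operates on whole block devices.

```python
import filecmp
import os

# Conceptual sketch of disk cloning: a bit-for-bit copy from source to
# target, one block at a time. The .img file names are hypothetical.
def clone(src, dst, block_size=1024 * 1024):
    with open(src, "rb") as s, open(dst, "wb") as d:
        while block := s.read(block_size):
            d.write(block)

# Create a fake 4MB "source drive", clone it, and verify the copy.
with open("source.img", "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))
clone("source.img", "target.img")
print(filecmp.cmp("source.img", "target.img", shallow=False))  # True
```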
Q: What are the best free tools for cleaning up after programs that leave behind lots of scraps after you uninstall them?
A: You should probably take a multi-pronged approach to this problem. First, either supplement or replace Windows' inconsistent built-in uninstaller with the free Revo Uninstaller (http://bit.ly/236r). Revo is not only better at removing programs than Windows' uninstaller is; it also allows you to more selectively uninstall certain components of applications, even if you've installed Revo after the installation of those programs.
Even Revo, though, can leave traces of applications in folders and the Windows registry. To clean up even more, download and run CCleaner (http://bit.ly/236r), which can scan your Windows system and remove all kinds of clutter, potentially freeing up significant amounts of hard drive space in the process.
Saturday, May 28, 2011
At a macro level, there really aren’t all that many viable gaming notebook options. These days, Sandy Bridge processors rule the roost in notebooks, with the quad-core variety handling everything from single-threaded to multi-threaded workloads with aplomb.
On the graphics side, you can try to get by with midrange mobile GPUs, but if you’re serious about mobile gaming you’ll want at least something from NVIDIA’s GTX line or AMD’s 6900M alternatives. Take the CPU and GPU; match them up with reasonable memory, storage, display, and other accoutrements; and you’re all set. That all works very well in the desktop world, even if it glosses over many of the finer points that separate the contenders from the pretenders. In the mobile world, however, the little things matter.
Modern computers are very modular by design. We have standards for power, memory, storage, and peripherals and you can generally choose what fits your needs. With notebooks, however, a lot of flexibility gets sacrificed in the name of making a reasonably sized chassis. Not coincidentally, profit margins tend to be quite a bit higher for notebooks than desktops, which is why so many companies want a piece of the pie. While it’s still pretty easy to upgrade memory and storage options, swapping out the CPU for something faster is more difficult and you need to make sure the cooling setup can handle any additional heat. Upgrading your GPU on the other hand is difficult at best, and frequently impossible. The issue with mobile GPUs is that despite MXM being something of a standard, chip locations are left up to the implementation, so there’s no guarantee that, for example, an HD 6770M could be installed in place of a GT 540M. And as far as the LCD, keyboard, touchpad, motherboard, and chassis are concerned, you’re stuck with whatever you buy with no chance of upgrading individual parts in the future. (Okay, perhaps you could upgrade the LCD panel in some cases, but you get what we’re saying.)
The point of all this is that you can’t simply compare notebooks based purely on features, components, and performance. Today’s head-to-head matchup between CyberPower’s Xplorer X6-9300 (aka the Clevo P151HM) and MSI’s GT680R (also available as the CyberPower Xplorer X6-9400 and X6-9500) is a perfect example. On a pure performance and feature level, the two notebooks are very similar. Both came with an i7-2630QM processor, GTX 460M graphics, and 8GB of DDR3-1333 memory. The GT680R comes with two 500GB HDDs in a RAID 0 set while the X6-9300 supports a single 500GB HDD, but that’s the only major difference in terms of performance potential. Elsewhere, you get a 15.6” 1080p LCD, two USB 3.0 ports, and then all the miscellaneous bits like the keyboard, touchpad, speakers, chassis, etc.
If you just sit down and compare specs, MSI comes out on top, mostly by virtue of the second 2.5” HDD bay. In practice, however, determining which notebook is “best” requires a lot more work. Assuming potential buyers will actually use these as notebooks rather than portable boxes that they plug into an external LCD, keyboard, mouse, and speakers, the areas that often get the merest of lip service from the design departments matter most. The build quality and materials are frequently the difference between something that feels good in your lap and can last several years, or a cheap plastic notebook that can start to creak and wear out in less than two years. While I’d like to say LCDs are next in importance, the reality is that many users focus more on price and thus sacrifice quality in the one element that you look at constantly while using a computer. Last, there’s the rest of the user interface, the keyboard and touchpad. As someone who types a lot, this area matters as much as anything else in my day-to-day impressions of a notebook. If a keyboard is unpleasant for me to type on, all of the other elements end up being meaningless.
So with that introduction, let’s meet the two latest notebooks to cross our notebook test bench. Then we’ll investigate performance and other objective test results before wrapping up with our subjective evaluation. Will one of these laptops float to the surface of the mobile gaming ocean, or will both sink together? Perhaps they might be seaworthy, as long as you steer clear of the occasional iceberg or two. (Okay, no more sea analogies, I promise.)
I have to admit that Intel's Z68 launch was somewhat anti-climactic for me. It was the chipset we all wanted when Sandy Bridge first arrived, but now four months after Sandy Bridge showed up there isn't all that much to be excited about - save for one feature of course: Smart Response Technology (aka SSD caching).
The premise is borrowed from how SSDs are sometimes used in the enterprise space: put a small, fast SSD in front of a large array of storage and use it to cache both reads and writes. This is ultimately how the memory hierarchy works - hide the latency of larger, cheaper storage by caching frequently used data in much faster, but more expensive storage.
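Conceptually, such a cache works on a least-recently-used (LRU) basis: serve hits from the fast store, and once the cache is full, evict the stalest block on a miss. The toy model below is our own illustration with made-up block addresses and capacity, not Intel's actual caching algorithm.

```python
from collections import OrderedDict

# Toy model of an SSD read cache: a small, fast store in front of a big,
# slow one, with least-recently-used (LRU) eviction. Block addresses and
# the two-block capacity are made up purely for illustration.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # address -> cached data

    def read(self, addr, slow_read):
        if addr in self.blocks:              # hit: serve from the fast store
            self.blocks.move_to_end(addr)
            return self.blocks[addr]
        data = slow_read(addr)               # miss: fall back to the HDD
        self.blocks[addr] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used
        return data

cache = LRUCache(capacity=2)
hdd = {0: "a", 1: "b", 2: "c"}               # the big, slow store
for addr in (0, 1, 0, 2):                    # re-reading 0 keeps it "warm"
    cache.read(addr, hdd.get)
print(sorted(cache.blocks))                  # [0, 2]: block 1 was evicted
```

The eviction step is exactly why cache capacity matters so much in practice, as we'll see with the SSD 311 below.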
I believe there's a real future with SSD caching, however the technology needs to go mainstream. It needs to be available on all chipsets, something we won't see until next year with Ivy Bridge. Even then, there's another hurdle: the price of the SSD cache.
Alongside Z68, Intel introduced the SSD 311, codenamed Larson Creek. The 20GB SSD uses 34nm SLC NAND, which prices the drive more like a 40GB MLC SSD at $110. Intel claims that by using SLC NAND it can deliver the write performance necessary to function as a good cache. Our benchmarks showed just that. The 20GB SSD 311 performed a lot like a 160GB Intel X25-M but with half the NAND channels, thanks to SLC NAND's faster write speed and some firmware tweaks. In fact, the only two complaints I had about the 311 were its limited capacity and price.
The capacity issue proved to be a problem as I found that after almost a dozen different application launches it wasn't too hard to evict useful data from the cache. The price is also a problem because for $100 more you can pick up a 120GB Vertex 2 and manage your data manually with much better performance overall.
Yesterday a friend pointed me at a now defunct deal at Newegg. For $85 Newegg would sell you a 40GB SF-1200 based Corsair Force SSD. That particular deal is done with and all that remains is the drive for $110, but it made me wonder - how well would a small SandForce drive do as an SSD cache? There's only one way to find out.
CPU: Intel Core i7 2600K running at 3.4GHz (Turbo & EIST disabled) - for AT SB 2011, AS SSD & ATTO
Motherboard: Intel Z68 motherboard
Chipset Drivers: Intel 22.214.171.1245 + Intel RST 10.5
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: Intel HD Graphics 3000
Video Drivers: Intel GMA Driver for Windows 126.96.36.1992
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64
An ARM-based tablet running a new version of Windows may be shown by Microsoft executives. Windows 8 is expected to be for tablets, and Microsoft CEO Steve Ballmer may have spilled the beans. Microsoft retracted his "misstatement" that Windows 8 could be out next year. Microsoft still needs to compete with Apple and other tablet makers.
Get ready for Windows tablets. According to a new report, next week Microsoft will show a tablet-oriented version of its Windows operating system.
Citing three unnamed sources with knowledge of the plans, Bloomberg News said the device will contain Nvidia's ARM-based Tegra chip and have a touchscreen interface. Although the location for the demonstration isn't known, Windows President Steven Sinofsky will be presenting at the All Things D conference in California next week, and Vice President Steve Guggenheimer is giving a speech at the Computex show in Taipei.
At the Consumer Electronics Show in January, Microsoft announced it was developing a version of Windows for ARM-based devices. ARM chips are hugely popular in the world of mobile devices, used in Apple's iPad and Android tablets, among other products.
Microsoft CEO Steve Ballmer may have jumped the gun earlier this week when he said machines running Windows 8 would come out next year. The company, eager to avoid having buyers wait till next year, then issued a retraction.
"It appears there was a misstatement," the software giant wrote to the Los Angeles Times on Wednesday. It said "we are eagerly awaiting the next generation of Windows 7 hardware that will be available in the coming fiscal year," and added that "we have yet to formally announce any timing or naming of the next version of Windows."
But Windows 7 doesn't run on ARM chips, so an OS for ARM devices would need to be a new version.
In a note to its clients, Citigroup has predicted that Microsoft could release a tablet version of its new OS before it comes out with the same OS for PCs. Citigroup analysts expect a beta version by September, with shipping to start in 2012 or early 2013.
'Room for Microsoft'
Whether the new tablet-oriented OS can put a dent in the tablet category currently dominated by Apple remains to be seen. Citigroup noted Microsoft still has to deliver a "competitive operating system," competitive pricing, and, most important, a compelling user interface.
Sarah Rotman Epps, an analyst with industry researcher Forrester, noted that Microsoft's challenges also include convincing OEM partners to invest in Windows tablets when many of them are already releasing Android tablets.
In the tablet category, she pointed out, Apple "owns that world," although Research In Motion, Google's Android, Hewlett-Packard, and others are trying to get a significant piece. But, Epps said, there is "absolutely room for Microsoft at this party."
A Forrester survey found that 46 percent of consumers would consider buying a tablet that runs Windows, compared to 16 percent for Apple's iOS platform and 9 percent for Android. Epps pointed out that "consumers are familiar with Windows, and compatibility is very important to them."
The survey indicated that a big concern for buyers is that they want their tablet "to work with the stuff they already have," she said, like printing to existing printers.
"So far," Epps added, "we haven't seen any tablet that really does that."
Friday, May 27, 2011
Brooke Crothers broke a very important story today - he published the name Silvermont. Atom's first incarnation came to us in 2008 as a Pentium-like dual-issue in-order microprocessor. The CPU core was named Bonnell, after the tallest point in Austin at around 750 feet. Small mountain, small core. Get it?
Bonnell and the original Atom were developed on a 5-year cadence, similar to how Intel ran things prior to the Core 2 revolution (the P6 to Netburst/Pentium 4 move took 5 years). With the original chip out in 2008, five more years would put the next major architecture shift at 2013, which happens to be exactly when the Cnet report mentions Silvermont will be introduced.
When I first met with the Atom design team they mentioned that given the power budget and manufacturing process, the Bonnell design would be in-order. You get a huge performance boost from going to an out-of-order architecture, but with it comes a pretty significant die area and power penalty. I argued that eventually Intel would have to consider taking Atom out of order, but the architects responded that Atom was married to its in-order design for 5 years.
Intel's Moorestown - same Atom core, just more integrated
Since 2008, Atom hasn't had any core architecture changes. Sure, Intel integrated the GPU and memory controller, but the CPU still communicates with both of them over an aging FSB. The CPU itself remains mostly unchanged from what we first saw in 2008. Even Intel's 32nm Atom, due out by the end of this year, doesn't change the architecture; it's the same dual-issue in-order core that we've been covering since day 1. The 32nm version just runs a bit quicker and is paired with a beefier GPU.
Intel Atom "Diamondville" Platform 2008
Intel Atom "Pine Trail" Platform 2009-2010
Silvermont however changes everything. It is the first true redesign of the Atom architecture, and it marks the beginning of Atom moving to a tick-tock cadence. Say goodbye to 5-year updates; say hello to a new architecture every 2 years.
Given what Intel said about Atom being in-order for 5 years, I think it's safe to say that Silvermont is an out-of-order microprocessor architecture. The other big news is that Silvermont will be built using Intel's 22nm transistors. What may not have been possible at 45nm gets a lot easier at 22nm. Assuming perfect scaling, a chip built on Intel's 22nm process would be a quarter the size of the same chip built at 45nm. With Apple paving the way for 120mm2+ SoCs, Silvermont can be much more complex than any Atom we've seen thus far.
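That quarter-size figure is straightforward geometry; under the idealized assumption of perfect scaling, linear dimensions shrink by the node ratio and die area by its square:

```python
# Idealized die-area scaling from 45nm to 22nm: linear dimensions shrink by
# the node ratio, and area shrinks by the square of that ratio.
linear_shrink = 22 / 45
area_ratio = linear_shrink ** 2
print(round(area_ratio, 3))  # 0.239: roughly a quarter of the 45nm area
```

Real processes never scale perfectly, but even a looser shrink leaves Silvermont an enormous transistor budget relative to today's 45nm Atom.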
Intel's 22nm transistors offer huge gains at low voltages, perfect for Silvermont
By 2013 Intel's 22nm process should be very mature, which maintains Intel's sensible design policy of moving to a new architecture or a new process, but never both at the same time, in order to minimize risk. With 22nm debuting in Ivy Bridge at the end of this year (with availability sometime in 1H 2012), this puts Silvermont a full year behind Intel's first 22nm parts.
Intel isn't talking core counts at this point, but for 2013 I'd expect both monolithic dual and quad-core variants. If we use history as any indicator, Intel will likely drop the FSB in favor of a point-to-point bus interface between Silvermont and its cohorts.
The big question is about GPU technology. Intel has historically used GPUs licensed from Imagination Technologies in its smartphone/tablet/MID line, while opting for its own in-house GPU solutions for nettops/netbook versions of Atom. At 32nm the rumor is that may change to an all Imagination lineup, but at 22nm I do wonder if Intel will keep licensing 3rd party IP or switch to its own.
Intel is expected to announce more details about its Atom roadmap at an analyst event next week. While the expectation is that we'll see Atom based Android smartphones this year, I'm personally quite interested in Silvermont.
Single and dual-core 32nm Atom designs should be able to hold their own in a world dominated by dual-core ARM Cortex A9s, but an out-of-order Atom on an aggressive roadmap is something to be excited about.
By 2013 we should be seeing smartphones based on Tegra 3 and 4 (codename Wayne and Logan) and ARM's Cortex A15. GPU performance by then should be higher than both the PS3 and Xbox 360 (also implying that Silvermont needs Sandy Bridge level graphics performance to be competitive, which is insane to think about).
When Microsoft launched Windows Phone 7 late last year, it was readily evident that they had a solid platform on their hands, but it was missing some critical details. Chief among them?
Copy-and-paste, bringing back a frequent grievance from the early iPhone days. The platform’s first update, codenamed “NoDo” (there’s a story behind that), is now out, with copy-paste support in tow, along with much improved application loading and some general performance tune-ups in the UI.
At the same time, we decided to take a look at HTC’s HD7, since it’s been my day-to-day phone for a couple of months now. It’s still the only Windows Phone you’ll find in a T-Mobile store, so it’s worth taking a look at, especially now that a very similar variant is due to hit AT&T in the coming weeks.
Meet the HD7
When HTC launched the HD2 in late 2009, it was one of the most technologically advanced phones on the market, with an 800x480 4.3” display and a 1GHz Qualcomm Snapdragon processor. This was well before everyone had a 1GHz phone—the HD2 was just the second Snapdragon-based device to hit market, after the European Toshiba TG-01, and was the first 1GHz phone on the American market. Unfortunately, for all the amazing hardware, it was saddled with Windows Mobile 6.5. Even with HTC’s advancements to WM (capacitive touch, multitouch, the first implementation of Sense UI) it simply wasn’t a viable platform at the time. A good comparison would be the N8—great hardware running a dead-in-the-water OS.
When Microsoft started releasing details about Windows Phone 7, it became clear that while fully capable of running the new OS, the HD2 would not be getting an upgrade. Microsoft indicated that the button layout did not conform to the guidelines given for WP7 handsets, which was true, but Microsoft really wanted to have a clean break from the previous generation of products, and that left the HD2 in a cold place (officially, at least—XDA and other developers have gotten everything from Froyo to WP7 to Ubuntu and Windows XP running on the HD2). HTC did the next best thing, and relaunched its flagship Windows device as the HD7, with 7 indicating its status as a WP7 device. In the process, it got a mild ID refresh and a few more features, but overall, it’s very, very similar.
Gallery: HTC HD7
The first noticeable difference is the buttons—the five button array has been replaced by three backlit capacitive buttons—back, Windows, and search, as dictated by the WP7 design guide. The speaker and mic have been expanded to span the width of the device at the top and bottom edges, with an attractive metal grille covering them. The matte plastic on the sides of the phone has been replaced with a dark chrome band, which has an attractive but understated look. It’s much more subtle than normal chrome brightwork.
The curved back of the HD7 is rendered in a mildly rubberized plastic, with a brushed aluminum stand surrounding the camera. The stand, while a clever detail, is not actually that great—like the EVO 4G, it’s prone to tipping over if not on a perfectly level surface, due to the location and shape of the stand itself. I like the Thunderbolt’s stand better; it’s sturdier and works in both portrait and landscape modes. But the biggest problem here is the battery cover. The soft touch plastic is great, but the cover itself feels cheap, and for good reason. The plastic itself is ridiculously thin, and there are large panel gaps—with the battery cover mounted, you can actually see a bit of the SIM card. In addition, the panel gaps let in a lot of dust. It’s a blemish on an otherwise well designed device.
The interesting part hidden in all this is that given HTC’s current lineup, the HD7 feels a little bit last generation. HTC has moved to a very consistent design language across all of its devices, as noted by Brian in his Thunderbolt review. It started with the Desire HD, which was an update of the Nexus One/Desire industrial design, and HTC has since expanded the design language to (long list alert!) the Mozart, the Surround, the Trophy, the Thunderbolt, the Sensation, and the Desire S (or the Incredible S for VZW). The HD7 is a holdover from the old days, when HTC had more slab sided designs, with fewer curves and less metal. Most of the recent HTC designs have basically pointed towards aluminum unibody construction, and the HD7 unashamedly differs from that. I’d expect the next revision of the HD7 to fall much more in line with the rest of HTC’s recent phones, possibly based on the Desire HD chassis.
HTC HD7 HTC Surround LG Optimus 7 Samsung Focus Dell Venue Pro
Height 122 mm (4.80") 119.7 mm (4.71") 125 mm (4.92") 122.9 mm (4.84") 121.0 mm (4.76")
Width 68 mm (2.68") 61.5 mm (2.42") 59.8 mm (2.35") 65 mm (2.56") 64.4 mm (2.54")
Depth 11.2 mm (0.44") 12.97 mm (0.51") 11.5 mm (0.45") 9.9 mm (0.39") 14.9 mm (0.59")
Weight 162 g (5.71 oz) 165 g (5.82 oz) 157 g (5.54 oz) 119 g (4.20 oz) 176 g (6.21 oz)
CPU 1GHz Qualcomm QSD8250 1 GHz Qualcomm QSD8250 1 GHz Qualcomm QSD8250 1 GHz Qualcomm QSD8250 1 GHz Qualcomm QSD8250
GPU Adreno 200 Adreno 200 Adreno 200 Adreno 200 Adreno 200
RAM 576MB LPDDR1 (512 system, 64 GPU) 512 MB LPDDR1 (448 system, 64 GPU) 512 MB LPDDR1 (448 system, 64 GPU) 512 MB LPDDR1 (448 system, 64 GPU) 512 MB LPDDR1 (448 system, 64 GPU)
NAND 16GB integrated 512 MB integrated, 16 GB (Internal Class 4 microSD) 16 GB integrated 8 GB integrated 8 or 16 GB integrated
Camera 5MP with autofocus, LED flash, 720P video recording 5 MP with autofocus, LED flash, 720P video recording 5 MP with autofocus, LED flash, 720P video recording 5 MP with autofocus, LED flash, 720P video recording 5 MP with autofocus, LED flash, 720P video recording
Screen 4.3" LCD 800 x 480 3.8" LCD 800 x 480 3.8" LCD 800 x 480 4" Super AMOLED 800 x 480 4.1" AMOLED 800 x 480
Battery Removable 4.55 Whr Removable 4.55 Whr Removable 5.55 Whr Removable 5.55 Whr Removable 5.18 Whr
At 162 grams (5.7 ounces), the HD7 is a pretty hefty phone, and the size and weight give it a substantial feel in-hand. Battery cover notwithstanding, it feels very well put together and very familiar at the same time. The HD7’s similarities with the EVO and HD2 are not just limited to physical attributes; all three share variants of Qualcomm’s first generation Snapdragon SoC. The GSM-only HD7 has the QSD8250, as with all other WP7-based handsets, to go along with 576MB RAM, 512MB ROM, and a built-in 16GB SD card. It’s no longer cutting edge, but Microsoft is fairly restrictive about what it allows handset makers to use—you can get a WP7 device with any CPU as long as it’s a 65nm Scorpion, Ford Model T-style. We’ll get to the application performance in a bit, specifically the NoDo update and the performance improvements it brings, but for now, let’s take a look at the display and camera performance.
A lawsuit filed by PayPal alleges two Google executives who formerly worked at PayPal used trade secrets to create Google Wallet. Google pledged to fight PayPal's suit. The Google executives, Osama Bedier and Stephanie Tilenius, are also accused of luring away PayPal employees. An analyst said Google Wallet will probably get more lawsuits.
While Google was outlining Google Wallet for mobile payments, PayPal was readying a lawsuit against the search giant. eBay-owned PayPal filed suit against Google and two executives for allegedly stealing trade secrets that helped Google develop Google Wallet and push for a piece of the multibillion-dollar mobile-payments pie.
PayPal sued Google in the Superior Court of the State of California in Santa Clara County. The suit names Google and former PayPal employees and now Google execs Osama Bedier and Stephanie Tilenius.
Bedier previously was vice president of PayPal's mobile platform and came aboard as Google's vice president of payments in January 2011. Tilenius served in various executive roles at PayPal and eBay, including vice president of PayPal Merchant services, from 2001 to 2009, when she joined Google as vice president of commerce.
"Silicon Valley was built on the ability of individuals to use their knowledge and expertise to seek better employment opportunities, an idea recognized by both California law and public policy," Google said in a published statement. "We respect trade secrets, and will defend ourselves against these claims."
The PayPal complaint argues that Bedier had intimate knowledge of PayPal's capabilities, strategies, plans and market intelligence regarding mobile payments and related technologies -- information constituting in part PayPal's trade secrets. "In the course of his work at Google, Bedier and Google have misappropriated PayPal trade secrets by disclosing them within Google to major retailers," the complaint alleges.
PayPal also asserts that Tilenius solicited and recruited Bedier to Google. By doing so, PayPal argues, Tilenius violated her contractual obligations to eBay. PayPal said Bedier also violated his obligations to eBay by soliciting and recruiting PayPal employees to jump ship to Google.
"In addition, from 2008 to 2011, Google and PayPal were negotiating a commercial deal where PayPal would serve as a payment option for mobile-app purchases on Google's Android Market. During that time, PayPal provided Google with an extensive education in mobile payments," PayPal said in the complaint.
"Bedier was the senior PayPal executive accountable for leading negotiations with Google on Android during this period," it added. "At the very point when the companies were negotiating and finalizing the Android PayPal deal, Bedier was interviewing for a job at Google -- without informing PayPal of this conflicting position. Bedier's conduct during this time amounted to a breach of his responsibilities as a PayPal executive."
The movement of executives and other workers from company to company is hardly big news in Silicon Valley, said Charles King, principal analyst at Pund-IT, but if Tilenius sent Bedier the alleged Facebook note before she left PayPal, he's not sure how that could violate an order that was only enforceable after she left PayPal for Google.
"The note could suggest that Tilenius was acting for Google prior to actually joining the company. But the reality of the workplace is people who are working for companies are actively talking about working for other companies before they leave. A lot of that has to do with the dates of when these events occurred and whether or not the no-solicitation order was enforced at the time that she sent this Facebook note."
The bottom line: Talk of electronic wallets and portable payments has been around for the better part of a decade. No company has made a truly viable business based on the concept so far, but PayPal has made its moves and is clearly intent on protecting its intellectual property.
"As Google expands its area of interest and expertise beyond online advertising to enable electronic purchases of one sort or another, it's going to be touching live wires with any number of competitors with a major footprint in this area," King said. "It would be hardly surprising if future suits related to different kinds of electronic payments were filed against Google."
Thursday, May 26, 2011
Two days ago I flew out to VIA's Centaur headquarters in Austin, Texas to be briefed on a new CPU. When I wrote about VIA's Dual-Core Nano I expected the next time we heard from VIA about CPUs to be about its next-generation microprocessor architecture.
While Nano still holds a performance advantage over Atom and Bobcat, it's still missing a number of key architectural innovations that both Intel and AMD have adopted in their current generation hardware (e.g. GPU integration, power gating).
VIA's dual-core Nano, faster than AMD's E-350 and Intel's Atom
Much to my surprise, the meeting wasn't about VIA's next-generation microprocessor architecture but rather the last hurrah for Nano: a quad-core version simply called the VIA QuadCore.
VIA's QuadCore architecture is nothing too surprising. At a high level the chip is composed of two dual-core die connected by a shared 1333MHz FSB, very similar to the old dual-core Pentium D processors. Each dual-core die has two 1MB independent L2 caches, for a total of 4MB of L2 on-package.
VIA's QuadCore, in production today
Going a little deeper there's an AMD-like 64KB L1 instruction and 64KB L1 data cache per core. The Nano is of course fully 64-bit x86 compatible, supporting up to SSE4. Each core is a 3-issue out-of-order design, giving it a general throughput and performance advantage over Intel's Atom and AMD's Bobcat. Remember that Atom is a 2-issue in-order architecture and Bobcat is 2-issue out-of-order. The wider front end for Nano gives VIA the ability to perform well in more complex workloads.
In the past power consumption has been an issue for VIA's Nano, however the QuadCore is built on a 40nm process which helps reel in power consumption. At 1.2GHz, VIA's QuadCore still carries a 27W TDP. Add another 5W for the integrated graphics chipset and you're talking about 32W, nearly double the 18W TDP of AMD's dual-core E-350 Brazos platform. VIA claims that at lower clock speeds it can significantly reduce TDP, however the 1.2GHz QuadCore is the only part being announced today.
VIA is calling the 1.2GHz part a 1.2GHz+ QuadCore since it can use available TDP headroom to overclock itself by up to another two bins (133MHz per bin - 1.46GHz max). The chip doesn't support power gating, just aggressive clock gating.
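The bin arithmetic works out as a trivial sketch, using only the figures quoted above (1.2GHz base, 133MHz per bin, up to two bins):

```python
# Turbo-bin math for VIA's 1.2GHz+ QuadCore, using the article's figures.
BASE_MHZ = 1200  # nominal clock
BIN_MHZ = 133    # one "bin" of opportunistic overclock

def max_turbo_mhz(bins):
    """Peak clock after applying the given number of turbo bins."""
    return BASE_MHZ + bins * BIN_MHZ

print(max_turbo_mhz(2))  # 1466 MHz, which VIA rounds to the quoted 1.46GHz max
```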
Like all Nano parts, the QuadCore features a hardware AES encryption engine. VIA has added support for SHA-384 and SHA-512 as well.
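For a sense of what those additions cover, here is a software-only illustration of SHA-384 and SHA-512 using Python's standard hashlib. This is not the PadLock hardware path, just the same hash functions computed in software; the digest lengths show where the names come from:

```python
import hashlib

# Compute the two hashes VIA's engine now accelerates and report their sizes.
msg = b"hello world"
for name in ("sha384", "sha512"):
    digest_hex = hashlib.new(name, msg).hexdigest()
    bits = len(digest_hex) * 4  # each hex character encodes 4 bits
    print(name, bits, "bits")   # 384 and 512 bits respectively
```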
Although there are still a considerable number of dual-core platforms sold in the market today, designs with four beefy processor cores seem to be where the world as a whole is headed. With its 2011 15/17-inch MacBook Pro and iMac updates, Apple no longer offers a dual-core option in those systems. By the time we move to 22nm I wouldn't be too surprised if the 13-inch MacBook Pro was quad-core only as well.
VIA moving to four cores makes sense and the QuadCore design was an obvious step. Even Intel used a dual-die approach to make the most of its existing microprocessor design before starting from scratch for Nehalem.
As odd as it sounds, VIA's QuadCore actually has a small but viable position in the market. At 27W the TDP is too high for a tablet like the iPad, and its performance will be too low to compete with ultra portable Sandy Bridge designs. What VIA could offer however is a higher-performing alternative to Brazos at a better price than an ultraportable Sandy Bridge notebook.
The bigger issue VIA has to face is the lack of OEM adoption. The QuadCore will launch with whitebox and motherboard designs, not with slick design wins from companies like ASUS or Samsung. With less than 1% of the x86 market, VIA can't command the sort of attention that Intel or even AMD can. That being said, I do believe there's a small window of opportunity here. A clever OEM could put out a system priced similarly to a Brazos system (if not lower), with better performance based on VIA's QuadCore. I haven't looked at the current state of VIA's graphics drivers but when we previewed the dual-core Nano I came away pleasantly surprised. I suspect there will still be issues there going forward, but I remember something an old friend once told me: there are no bad products, just bad pricing. At the right price, in the right system, VIA's QuadCore could work.
Just recently we had a chance to lay hands on SilverStone's FT03 enclosure, and it was impressive enough to earn a Bronze Editors' Choice award. It wasn't the quietest case we've ever reviewed, but it had strong thermal qualities and a slick-looking design.
Now DigitalStorm has taken SilverStone's eye-catching little number, custom-painted the grills, and turned it into a double-shoebox-sized monster. The Enix we're looking at today boasts the highest overclock on an Intel Core i7-2600K we've yet seen and pairs it with not one but two EVGA GeForce GTX 580s.
The red trim and black shell do a lot of favors for SilverStone's FT03 enclosure, but we're really interested in how well the Enix sings. Our last visit with DigitalStorm was a mixed one: the BlackOps Assassin we reviewed was a performance demon to be sure, but we were a bit turned off by some of the component choices coupled with the price tag. When we received the press release for the Enix, it was just too good to resist, and DigitalStorm was game to send us one. So how much power is crammed into this little box?
DigitalStorm Enix Specifications
Chassis SilverStone FT03 (custom paint)
Processor Intel Core i7-2600K @ 4.7GHz
(spec: 4x3.4GHz, 32nm, 8MB L3, 95W)
Motherboard ASUS P8P67-M Pro Motherboard with P67 chipset
Memory 2x4GB Corsair Dominator DHX DDR3-1600 (expandable to 16GB)
Graphics 2x EVGA GeForce GTX 580 1.5GB GDDR5
(512 CUDA Cores, 772/1544/1002MHz Core/Shaders/RAM, 384-bit memory bus)
Hard Drive(s) Corsair Performance 3 128GB SATA 6Gbps SSD
Western Digital Caviar Black 1TB SATA 6Gbps HDD
Optical Drive(s) Optiarc BD-ROM/DVD+-RW Slimline Combo Drive
Networking Realtek PCIe Gigabit Ethernet
Audio Realtek ALC892 HD Audio
Speaker, mic, line-in, and surround jacks for 7.1 sound
Front Side Optical drive
Top 2x USB 3.0
6x USB 2.0
Speaker, mic, line-in, and surround jacks for 7.1 sound
Back Side -
Operating System Windows 7 Home Premium 64-bit
Dimensions 11.18" x 9.25" x 19.17"
Weight 14.77 lbs (case only)
Extras SilverStone Strider Gold 1000W PSU 80 Plus Gold Certified
Corsair H70 Liquid Cooler
Warranty 3-year limited warranty with lifetime customer care
Pricing Enix starts at $1,149
As configured $3,612
We start out with both the DigitalStorm Enix's curse and its saving grace: a heavily souped-up Intel Core i7-2600K water-cooled using Corsair's H70 kit (a testament to both the kit's performance and the FT03's surprisingly roomy interior). DigitalStorm has overclocked the i7-2600K to a screaming 4.7GHz, making it not only the fastest processor we've ever tested in a boutique system but also among the most power-hungry, as you'll see later.
As if to reassure everyone that splitting the i7-2600K's sixteen PCI-Express 2.0 lanes between two cards isn't really a big deal, DigitalStorm has packed the Enix with two EVGA GeForce GTX 580s running at stock speeds in SLI. If every single frame matters to you, then the P67 chipset and inherent limitations of using the processor's PCIe lanes may put you off, but between the variability in performance of running a multi-GPU setup and the absurdly high performance of two GTX 580s in SLI paired with an overclocked i7-2600K, it's hard for anyone to reasonably take issue.
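For context on the x8/x8 concern, a back-of-the-envelope sketch of PCIe 2.0 bandwidth helps. It assumes the standard figures of 5 GT/s per lane with 8b/10b encoding, i.e. roughly 500 MB/s of usable bandwidth per lane per direction:

```python
# Back-of-the-envelope PCIe 2.0 bandwidth (assumed standard figures:
# 5 GT/s per lane, 8b/10b encoding, so 80% of raw bits carry data).
GT_PER_SEC = 5.0
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line coding
BITS_PER_BYTE = 8

def lane_bandwidth_mb_s():
    # 5 GT/s * 0.8 / 8 bits = 0.5 GB/s = 500 MB/s per lane, per direction
    return GT_PER_SEC * ENCODING_EFFICIENCY / BITS_PER_BYTE * 1000

def link_bandwidth_mb_s(lanes):
    return lanes * lane_bandwidth_mb_s()

print(link_bandwidth_mb_s(16))  # a full x16 link: 8000 MB/s
print(link_bandwidth_mb_s(8))   # each card in an x8/x8 SLI split: 4000 MB/s
```

Each GTX 580 in the x8/x8 split still gets 4 GB/s per direction, which is why the practical performance penalty versus a full x16 link is small for most games.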
Based on our last experience with DigitalStorm, they've also opted to use a higher-end name-brand memory kit and power supply. This was a source of some contention in the comments of that review, where some readers argued that if the memory works, it works, and there's no need to ding the vendor for using cheaper stuff. That's true, but at the same time, if I'm paying over $3,000 for a desktop I'm going to want parts from vendors that have a history of reliability, and there's something miserly about putting discount memory in a premium gaming machine.
To round out the system, DigitalStorm bumped the slot-loading optical drive up to a Blu-ray reader/DVD-writer, added the requisite 1TB Western Digital Caviar Black, and then chose to employ the new Corsair Performance 3 SSD.
All told, the Enix looks to be, at least on paper, the fastest system we've ever tested (a dubious honor when a new contender is always just around the corner). Ready to break some of our system benchmark records?
David Einhorn, whose Greenlight Capital owns 0.11 percent of Microsoft's stock, wants CEO Steve Ballmer to step down. Einhorn called Ballmer "stuck in the past" and wants to "give someone else a chance." Under Ballmer, Microsoft's stock dipped 58 percent while IBM, Apple and Google exploded. But an analyst said Ballmer has made strong moves.
Should he stay or should he go? That's the question surrounding Microsoft CEO Steve Ballmer.
David Einhorn, a hedge fund tycoon perhaps best known for his connections to Lehman Brothers, kicked off the debate. He accused Ballmer of being "stuck in the past" and suggested he step down, according to Reuters. Indeed, Einhorn had strong words about Ballmer, comparing his management style to Charlie Brown from the Peanuts comic strip.
"His continued presence is the biggest overhang on Microsoft's stock," Einhorn, president of Greenlight Capital, said at the Ira Sohn Investment Research Conference in New York on Wednesday.
Microsoft's Valuation Slide
Although some observers expect a battle that Ballmer won't back down from, other news reports suggest the Microsoft chief has something that will guarantee him success: Support from the board of directors. CNBC cited an unnamed board member who said Ballmer has the board's support.
Greenlight Capital is a Microsoft investor and owns about nine million shares of Microsoft stock. That equals 0.11 percent of the company's outstanding shares, according to Thomson Reuters data. Although he feels Microsoft stock is undervalued, Einhorn nevertheless wants Ballmer to "give someone else a chance."
Microsoft has lost ground under Ballmer's watch. Last May, Apple officially overtook Microsoft as the most valuable technology company in the world, thanks to its mobile devices. And just last week, IBM surpassed Microsoft in value.
Then there's Google. Despite Microsoft's Bing search alliance with Yahoo, Google continues to dominate the search market.
Looking at sheer numbers, Microsoft's stock has dipped 58 percent since the early part of the century while IBM, Apple and Google stocks have exploded.
Don't Fix What Ain't Broken
Michael Disabato, managing vice president of network and telecom for Gartner, isn't sure the notion of a Ballmer exit is a worthy debate. As he sees it, Microsoft has made some strong moves under Ballmer's watch.
"Microsoft brought out SharePoint -- arguably one of their best products -- and they brought out Windows 7, which is probably the best operating system they've developed," Disabato said. "Windows Phone has a better shot at becoming popular than Windows Mobile ever did. Wall Street shouldn't be running these companies."
Wall Street valuations are not a reason to get rid of CEOs, Disabato said. If Ballmer started to perform incompetently and consistently launched poor products, he added, that would be a reason to call for his resignation.
"Ballmer has fixed Windows -- and I can't even believe I'm saying that," Disabato said. "Wall Street can complain about the stock price. Let's see what the technicians have to say about the products."
Wednesday, May 25, 2011
We are at Google IO 2011 and the focus today is on the Chrome browser and new Chromebooks running the Chrome OS. Google's core focus has been the creation of a seamless web experience, and to that end they have their cloud network.
Sundar Pichai, Senior VP of Chrome, mentioned there were 70 million active users of Chrome in 2010 and that more than doubled in 2011 with 160 million. With the success of their browser, the Chrome OS seems like a logical follow up. Here are the highlights of today's presentations.
Google is now adopting a six week release cycle in their software updates for Chrome, with the goal of providing users even better performance with HTML5 and WebGL support. Previous web animations that relied upon drawing on the web canvas in software are predicted to run almost 100x faster by using WebGL. Google is also focused on GPU acceleration within Chrome, and a demo clearly showed how easily 1,000 animated objects were rendered. How much of this will carry over to Chrome OS isn't clear, but it would make sense to keep the browser and OS versions more or less in sync.
Gallery: Google IO 2011 Chrome OS
Introducing the Chromebook
As if the world needed yet another name for a mobile laptop-like device, we now have Google's Chromebook to contend with. The core of a Chromebook is a standard laptop/netbook design, with the primary difference being the OS and applications. It's possible to go the DIY route and give Chromium OS a shot, but Google is partnering with Samsung and Acer initially to provide a more integrated and painless experience. We had a few moments to talk with Sundar and some of his key points were the design decisions associated with the architecture behind Chrome OS. Sundar said, "We wanted to create fundamentally the most out of the box experience with minimal user input to get started."
The initial Chromebook offerings will come in two flavors: WiFi only, or WiFi + 3G. These Chromebooks are not like a typical notebook computer, in that all of a user's photos, music, games, apps, and documents reside within Google's cloud. The default install includes Gmail, Google Docs, and Google Calendar, with other applications available via Google Apps. Chromebooks should be able to boot almost instantly, taking just eight seconds from power on to log in. They are always connected and have a battery that should last most of the day (Samsung is quoting 8 hours for their model while Acer targets a lower 6.5 hours of run-time), providing access to the web anywhere you need it. With regular updates, it has the potential to get better over time, and it's built with security in mind.
Samsung and Acer will be the initial two notebook providers, and Verizon will be the wireless provider within the US. The program stems from the original CR-48 pilot, and now Google has taken all that rich user feedback and ramped things up for the retail product.
The idea of a computer getting better over time is almost a foreign concept in our modern computer world. We tend to see performance degradation as apps are installed, drivers get updated, the OS adds new features and bloat, and a slew of other problematic hurdles. The goal is for Chromebooks to take care of all this behind the scenes, automatically delivering the most recent version directly to your laptop. How that ends up playing out and how long the initial hardware will continue to receive updates isn't clear yet, but we would expect something similar to the current state of Android smartphones as the bare minimum (without the need to have your carrier push out OS updates).
Availability and Pricing for Chromebooks
The release date for Chromebooks is June 15 in the US, UK, Germany, Netherlands, Spain, and Italy. We hope to see more countries added to the list in the latter part of 2011. Amazon and Best Buy are on board as resellers for the Samsung and Acer models. Samsung's Series 5 Chromebooks are slated to start at $429 for the WiFi only model and $499 for WiFi + 3G, and they'll be available in black or white at launch. The Acer models will start at $349 for the base model, with 3G versions also available. The current models use Intel's Atom Pine Trail platform, with dual-core N570 processors, and 16GB of mSATA flash storage. Samsung is going with a 1280x800 LCD while Acer will have 1366x768 panels. You can see additional images and specification details on the Amazon Chromebook page.
Google is also making a heavy push towards the corporate world by targeting a cost effective model. Businesses are targeted with a competitive $28/user monthly subscription rate, and educational institutions and government clients start at $20/user monthly.
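A quick annualization of those quoted rates puts the subscription model in perspective (simple arithmetic, no assumptions beyond the per-user monthly prices above):

```python
# Annualized per-user cost of the two Chromebook subscription tiers.
BUSINESS_PER_USER_MONTHLY = 28  # quoted business rate, $/user/month
EDU_GOV_PER_USER_MONTHLY = 20   # quoted education/government rate

def annual_cost(per_user_monthly, users=1):
    """Total yearly subscription cost for a fleet of the given size."""
    return per_user_monthly * 12 * users

print(annual_cost(BUSINESS_PER_USER_MONTHLY))  # $336 per user per year
print(annual_cost(EDU_GOV_PER_USER_MONTHLY))   # $240 per user per year
```

At $336/user/year, the business tier costs less over a year than the $429 retail Samsung Series 5, with hardware, support, and replacements bundled in.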
Chromebooks for business will include a web console, support, warranty and replacements, and hardware auto-updates. IT admins will get a robust configuration panel that allows adding users and apps, along with granular control over policies and other access control lists. If all of this sounds like something you might find useful, you can read more at Google's Chromebook site.
Chrome Web Store
One final item to mention is in relation to the Chrome web store. Google announced a new 5% transaction fee for web store applications, notably low compared to the 30% cut typical of mobile app stores, which could be a big boon to developers. There are no fixed, monthly, signup, or licensing fees. Developers are encouraged to deploy more applications and Google will help cater to their needs by expanding the Chrome web store and branching out to 41 languages.
Google demonstrated the popular game Angry Birds running within Chrome's browser, and it includes a special Chrome level for your enjoyment. It is available starting today. Whether we'll see more interesting gaming content (and just how far they can push the anemic GMA 3150 GPU in Pine Trail) remains to be seen.