Tuesday, March 29, 2011

Miss March (2009) Dvdrip 300MB MKV

Subtitles : English
Format : MKV
File size : 407 MiB
Duration : 1h 33mn
Width : 976 pixels
Height : 528 pixels
Frame rate : 23.976 fps
Audio Format : AAC
Bit rate : 128 Kbps



Slipstream (2007) Dvdrip 300MB MP4

Subtitles : English
Format : MP4
File size : 340 MiB
Duration : 1h 36mn
Width : 608 pixels
Height : 256 pixels
Frame rate : 23.976 fps
Audio Format : AAC
Bit rate : 128 Kbps



She's the Man (2006) Dvdrip 300MB rmvb

Subtitles : English
Format : rmvb
File size : 351 MiB
Duration : 1h 45mn
Width : 512 pixels
Height : 272 pixels
Frame rate : 23.000 fps
Audio Format : Cooker
Bit rate : 128 Kbps



The Intel SSD 510 Review

The X25-M was a tremendous first attempt by Intel to get into the SSD market. In our review of the SSD I wrote that Intel just Conroe’d the SSD market, and if it weren’t for the pesky 80MB/s sequential write speed limitation the X25-M would’ve been given the title: World’s Fastest Drive.

Its successor, the X25-M G2, was a mild update that brought prices down through the use of 34nm NAND. Remember that Intel is also 49% owner of the IMFT joint venture and as a result can be quite competitive on NAND pricing (and quite early to adopt new NAND technologies).

Intel’s goal all along was to drive down the cost of SSDs. Looking at the history of MSRPs with the X25-M (not to mention the M, which stood for Mainstream in the product name) this shouldn’t come as a surprise:
Intel X25-M Pricing History
Capacity   2008     2009
40GB       -        $125
80GB       $595     $225
160GB      $1000+   $440

The third generation X25-M was to drive down costs even further, this time thanks to Intel’s 25nm NAND. You’d be able to get twice the capacity at the same price point as the X25-M G2. The value drive would be an 80GB offering, the mainstream drive would be 160GB and the high end drive would be 320GB.

The drive would offer higher performance. The controller was to be completely redesigned, with the “oversight” that limited sequential write speed to only 100MB/s corrected entirely. In addition, the third generation Intel SSD would add full disk encryption - making it even better suited for enterprise customers. Going after the enterprise market was Intel’s plan to really make money on SSDs in the long run. Instead of just selling corporations a CPU, chipset and wireless controller in a notebook, there would be an SSD on top of all of that. Perhaps eventually even have some security software courtesy of McAfee.

The third generation X25-M was originally due out in the middle of 2010. As is usually the case with schedules, the “G3” slipped. The middle of the year became the end of the year and the end of the year became Q1 2011.

To make matters worse, the specifications Intel was talking about for its third generation drive/controller weren’t all that competitive. We published the details last year knowing that the competition would do better. Intel’s redesigned controller was late and underperforming. Internally, Intel knew it had a problem.

Intel aimed for the majority of the market with the X25-M; it had set its sights on lowering cost, but it left the high-performance enthusiast market entirely uncared for - a void that SandForce filled quite nicely with its unique brand of controllers.

With a hole in the roadmap and an unwillingness to cede complete control of the high end market to SandForce, Intel did the unthinkable: developed a new SSD based on a competing controller technology.


Correction: OCZ Vertex 3 Random Read Performance Data

The 4KB random read numbers for the Vertex 3 were suspiciously high. The reality? They were incorrect.

I was just alerted to the error and quickly powered up the SSD testbed to recreate the test. It looks like the original numbers were either run at a queue depth of 32 or accidentally copied from one of the runs of 4KB random write tests. Either way the number was incorrect and has been fixed in all affected articles.

The updated numbers don't change our conclusions. The Vertex 3 is still the fastest next-generation SSD we've tested thus far and it still maintains a random read performance advantage over the Intel SSD 510.

The integrity of our test data is something we all take very seriously here. Errors like these do you a disservice and hurt the reputation I've worked so hard over the past 14 years to build. I do hope this oversight hasn't negatively impacted your opinion of AnandTech - we aren't perfect, but we strive to be. I apologize to all of you for the error, and I will be restructuring how I run and record my Iometer tests to keep this particular issue from cropping up again.

My sincere thanks goes out to Andrei and the XS folks who helped track down the error and inform us of its existence.


Amazon Launches Locker To Store Streaming Music

A cloud-based streaming-music locker has been launched by Amazon.com, and Apple may be forced to respond. Amazon launched a trio of services -- Amazon Cloud Drive, Amazon Cloud Player for Web, and Amazon Cloud Player for Android. An analyst said Amazon's move and possibly a service from Google could hurt Apple's iPhone.

Amazon.com on Tuesday did something not even Apple and Google have done yet -- launched a cloud-based streaming-music locker. Although rumors have swirled around an Apple cloud-based streaming service since the iPod maker acquired Lala in 2009, a streamable iTunes service has yet to appear. And although news reports signal Google's music service is in the testing phase and could launch soon, Google is still not competing in the market.

Amazon launched Amazon Cloud Drive, Amazon Cloud Player for Web, and Amazon Cloud Player for Android (an Apple version is notably not part of the launch). Amazon says this trio of services sets the stage for customers to securely store music in the cloud and play it on any Android phone, Android tablet, Mac or PC. Uploading music to the Amazon Cloud Drive is free.

"Our customers have told us they don't want to download music to their work computers or phones because they find it hard to move music around to different devices," said Bill Carr, vice president of movies and music at Amazon. The launch of Amazon's new products, he added, eliminates the need for constant software updates as well as the use of thumb drives and cables to move and manage music.

Getting To Know Cloud Drive

Users get 5GB of Cloud Drive storage to upload existing digital music libraries. If a user purchases an Amazon MP3 album, the storage is upgraded to 20GB. New Amazon MP3 purchases saved to Cloud Drive are stored free.

Users can store files in AAC or MP3 formats and the digital music will be uploaded in the original bit rate. Users can pick particular songs, artists, albums or playlists to upload or upload an entire music library.

PC users with Internet Explorer, Firefox, Safari for Mac, or Chrome browsers can listen to their music. Amazon MP3 customers can still use iTunes and Windows Media Player to add music to an iPod or MP3 player.

Cloud Player for Android is bundled into the new version of the Amazon MP3 App. It includes the full Amazon MP3 Store and the mobile version of Cloud Player. Users can search and browse by artist, album or song; create playlists; and download music from Cloud Drive.

Forcing Apple's Hand

"The most significant conclusion I come away with is that this will probably force Apple to move forward with its yet-unannounced plans to do the same sort of thing," said Phil Leigh, a senior analyst at Inside Digital Media. "Apple can't let this move go unanswered because it will impact the demand for iPhones."

If the rumors about a Google music service are true, it would amplify Leigh's view. As he sees it, Amazon's Cloud Drive is a good idea that consumers are going to appreciate. If a cloud-based streaming-music service is only available on Android devices, it's eventually going to have a negative impact on Apple's iPhone, iPod and iPad sales.

"This also underscores the need for continued progress in wireless broadband," Leigh said. "It will put even more wood behind the arrow for the FCC's movements to get spectrum allocated toward wireless Internet as opposed to television and other legacy wireless licenses."


Monday, March 28, 2011

NEC PA301w: The Baddest 30-inch Display Around

It's been a busy and short two weeks since Anand and I thoroughly covered smartphones, tablets, and SoCs at MWC. I got back to an ever-growing pile of monitors that need reviewing; he got back to the Xoom and new SSDs.

In the monitor space, it's been an interesting couple of months because everyone has been updating their 30-inch displays. We reviewed the HP ZR30w back when that happened and came away impressed but wanting more in a couple of areas - more input options, an OSD, and a better scaler. Then came the Dell U3011, which brought equally decent performance, a wealth of input options, and that OSD we wanted.

Today we're looking at NEC's latest and greatest, the MultiSync PA301w. I've been playing with a pre-production unit which is identical to what will be shipping, and have put it through the usual paces of our monitor testing suite. First up is the specification table:
NEC MultiSync PA301w
Video Inputs : 2xDVI-D w/HDCP, 2xDisplayPort 1.1a
Panel Type : P-IPS 10-bit with CCFL Backlight - 0Z100223UW
Pixel Pitch : 0.251 mm
Colors : 1.07 billion (30 BPP color w/appropriate GPU)
Brightness : 350 nits (typical)
Contrast Ratio : 1,000:1 (typical)
Response Time : 7 ms (GTG), 12 ms (max)
Viewable Size : 29.8" (75.6 cm)
Resolution : 2560 x 1600 at 60 Hz
Viewing Angle : 178 degrees horizontal and vertical
Power Consumption (operation) : 165 watts (typical), 90 watts (eco)
Power Consumption (standby) : 1.7 watts (standby), 0.2 watts (off)
Screen Treatment : Matte/Anti-Glare
Height-Adjustable : Yes: 5.9 inches (landscape), 1.2 inches (portrait)
Tilt : Yes: +30 degrees, -10 degrees
Pivot : Yes: Landscape and Portrait
Swivel : Yes: +/- 30 degrees
VESA Wall Mounting : Yes: 100 mm x 100 mm
Dimensions w/ Base (WxHxD) : 27.1" (688.0 mm) x 18.4-24.3" (466.4-646.4 mm) x 11.9" (301.6 mm)
Weight : 41.5 lbs (18.8 kg) with stand
Additional Features : Integrated USB 2.0 Switch (2 upstream, 3 downstream, 500mA), self calibration support with i1D2, 10-bit color, quick warmup
Limited Warranty : 4 years
Accessories : Power, DVI-D, DisplayPort, and USB cables, 4x VESA screws
Price : PA301w: $2299, PA301w-BK-SV: $2549

Right away you can tell the PA301w is in a different class of professional display than the other two monitors I've already mentioned. It's priced accordingly, and has professional-oriented inputs and features. There's no HDMI or component in, nor is there audio pass-through. The PA301w is aimed at the professional who demands uncompromising performance and has the budget to satisfy that need. The PA301w is built around a 10-bit P-IPS (Professional-IPS) panel likely sourced from LG. Like the current crop of 30-inch monitors, that means you can drive 30-bit color (10 bits per channel) content over DisplayPort if you have a capable GPU and the right software. I'm still searching for software that actually uses 30-bit color (if you know of any, I'd honestly love to hear about it), but when that time comes, it'll be supported.

The PA301w also has a few features that I haven't seen in a monitor before. Chief among them is onboard support for some basic calibration with an X-Rite i1D2 colorimeter. Plug it into a USB port on the side of the monitor, go into the OSD, and the display will automatically calibrate white point, brightness, and the color tristimulus values. It doesn't displace software calibration, but it gets the monitor to a very workable starting position you can tweak from, which (if you've ever calibrated a display before) can be a huge timesaver. The other interesting hardware feature is something of a built-in KVM USB switch: the two USB upstream ports can be associated with particular display inputs, switching peripherals accordingly. Lastly, there are a number of green/power-saving features that both show power use and offer power savings by doing some automatic brightness adjustment when applicable.

On to the hardware itself - the PA301w is seriously a beast. It's the biggest, baddest monitor around in terms of just sheer size. I don't usually start off talking about boxes, but there's just no other way to really demonstrate the magnitude of the PA301w's size without doing so.

The box is easily two times the size of the HP ZR30w box, and almost three times the size I remember the Dell U3011 box being. It doesn't fit underneath any table or surface in my living room. It fills a good chunk of that room just sitting around, empty, even right now. I've never seen something like it on a monitor box (maybe a refrigerator?), but there's even a wince-inducing, typo-ridden warning message encouraging that the monitor be remobed[sic] by two people.

Luckily, I have superhuman strength (yeah right...) and managed to get the thing out intact just by myself. It's a heavy monitor, at just over 41 lbs (18.8 kg) including stand. The PA301w comes fully assembled and in the upright position. There was more than adequate packing to keep the whole thing safe during transit.

That's a nice segue into the hardware features of the PA301w. At the bottom of the display arm is a locking switch for holding the monitor at its lowest position.

The display ships with it locked in position so you can lift the thing without the height adjustment arm fully extending. It's seriously surprising how many displays lately lack locking height adjustment arms, which makes transportation a pain, so it's nice to see it here.

Also on the back of the display is a cable routing guide, which NEC calls a "cable cover." You route the cables underneath on each side, then slide the plastic cover down and hide everything. In practice, it's really only useful if you don't rotate the monitor 90 degrees, since doing so will always create cable flex and demand more slack than you've got back there. It works if you keep things strictly landscape, however.

The backside of the PA301w is on the whole very spartan. There's an NEC logo up top, the two hand grips for transport if you're lucky enough to have a friend handy, and as we'll get to in a second the standard-fare Kensington lock and ports on the underside. There's a total of 5.9 inches of travel in the vertical direction when the monitor is in landscape mode, which is a huge amount, but necessary for portrait mode.

The PA301w has a generous amount of tilt in the upward direction (30 degrees), and only a slight 5-10 degree tilt in the downward direction. There's a lot of monitor to move around, but in spite of that, the springs on the PA301w are nicely preloaded. Tilt can still be somewhat challenging to adjust though, and definitely requires use of both arms.

There's also +/- 30 degrees of swivel support on the base. You can see the circular section at the back which the display arm rotates around. The base is weighted and there's no chatter as the huge thing rotates around, which is awesome. Thankfully, swivel is easily accomplished with one arm.

You'll notice so far that there are really no superficial aesthetic extras tacked onto the PA301w; this is a serious professional display. There's no metallic bezel running around the whole thing, no fancy shiny chrome parts, no gigantic self-aggrandizing logos on the back, no stickers plastered everywhere selling you what you've already ostensibly purchased and taken out of the box. It's just one huge monolithic slab of serious business.

The front of the PA301w is also understated. The OSD control buttons are at the bottom right, and they're actual real clicky buttons, nothing capacitive. At the far left is the ambient light sensor, followed by power, followed by the power/status LED which glows blue when on, amber when in standby. The rest of the buttons are self explanatory and as we'll show in the OSD section get their own LCD driven labels when you actually jump into the OSD.

I've already shown it, but on the right side of the PA301w is one of the three downstream USB 2.0 ports. This port is rather special, however, since it's the one you can plug the X-Rite i1D2 into for onboard calibration. Of course, the port also works like normal when you're not in that special OSD section. The opposing side is spartan. I should also mention that the PA301w's actual display panel is super thick, around 5" (12.7 cm). It's not a big deal, but it's just absolutely huge in comparison with other displays, so be prepared. The remainder of the ports are along the underside of the bottom lip. From left to right: the two remaining downstream USB 2.0 ports, two USB 2.0 upstream ports, 2x DisplayPort 1.1a, 2x DVI-D, and power.

What really sets the PA301w apart in my mind, however, is pivot support. That's right, you can use it in portrait orientation without using your own VESA mount. I should mention that although I didn't do it, you can get to the VESA mounting holes by pushing on the metal quick release lever right at the bottom of the mount.

Getting to portrait orientation is a bit of a challenge, however, as you can't simply rotate the display 90 degrees - doing so crashes one side down into the base. The display also only rotates a significant amount in the clockwise direction; counterclockwise rotation is limited to just a few degrees when facing the display from the front.

First, the display has to be tilted all the way out, then it can be rotated 90 degrees, and tilted back into perpendicular position. Rotate orientation in the display driver, and you're good to go. Vertical travel is limited to 1.2 inches in portrait, but hey, it's something, and it totally works.

Build quality of the PA301w leaves nothing wanting. Though the exterior is entirely plastic, the entire display feels beefy and doesn't vibrate or chatter around when being adjusted. If you vibrate your desk while typing, the whole thing doesn't shudder either. Again, the spring preloads are just perfect for buttery smooth adjustment and damping on essentially every degree of freedom.


Lucid's Virtu Enables Simultaneous Integrated/Discrete GPU on Sandy Bridge Platforms

We first met LucidLogix (now just Lucid) 2.5 years ago at IDF. The promise was vendor-agnostic multi-GPU setups with perfect performance scaling. The technology was announced at a very important time. Intel and NVIDIA were battling out support for SLI on Nehalem motherboards.

NVIDIA didn't want SLI enabled on any non-NVIDIA chipsets, and Intel wasn't about to let NVIDIA build any chipsets for Nehalem. Lucid's Hydra technology seemed to be exactly what we needed to get around the legal holdup that kept Nehalem users from enjoying SLI.

Three things made Lucid's technology less interesting as time went on: Hydra took two years to come to market, NVIDIA enabled SLI on Intel platforms, and single-GPU performance got really, really good.

What made Lucid's Hydra tech possible was a software layer that intercepted OpenGL and DirectX calls from the CPU and directed them to a GPU of Lucid's choosing. While Hydra saw limited success, parts of the technology had another application.
Sandy Bridge's Platform Issues

Although we came away impressed by Intel's Sandy Bridge CPU and GPU, it was the platform that really let us down. SATA controller errata aside, Intel's 6-series chipset lineup had a huge problem. At launch the P67 was the only chipset that supported CPU overclocking, however P67 doesn't support SNB's on-die GPU. Enter the H67 chipset, which does support processor graphics but it doesn't support overclocking. It gets worse.

One of the biggest features Sandy Bridge has to offer is the support for hardware assisted video transcoding (Quick Sync). In our review we found Intel's Quick Sync to be the absolute best way to transcode video for use on portable devices. There's just one issue: Quick Sync only works when the on-die GPU is active.

If you pair Sandy Bridge with a discrete GPU on the desktop, you lose the ability to use one of the CPU's biggest features.

Intel will address the overclocking/processor graphics exclusion through the upcoming Z68 chipset, however that doesn't solve the problem of not being able to use Quick Sync if you have a discrete GPU installed. Intel originally suggested using multiple monitors with one hooked up to the motherboard's video out and the other hooked up to your discrete GPU to maintain Quick Sync support, however that's hardly elegant. At CES this year we were shown a better alternative from none other than Lucid.

Remember the basis of how Hydra worked: intercept API calls and dynamically load balance them across multiple GPUs. In the case of Sandy Bridge, we don't need load balancing - we just need to send games to a discrete GPU and video decoding/encoding to the processor's GPU. This is what Lucid's latest technology, Virtu, does.

The name Virtu is short for GPU Virtualization and the setup is pretty simple at a high level.

Start with a platform that supports Sandy Bridge's processor graphics (H6x or Z68) and connect your display to the motherboard's video out. Add in a supported discrete GPU, supply power but don't connect your monitor to it.

Virtu behaves a lot like Hydra. It intercepts API calls and passes them along to a GPU of its choosing. Unlike Hydra however, the goal here isn't to spread the load across multiple GPUs. Instead, Virtu aims to match each task with the GPU best suited to it.

Video output is handled by SNB's GPU; data is simply copied from the dGPU's frame buffer to the iGPU's frame buffer for output. There should be some overhead in this process, however Lucid claims it's minimal.

What we end up with is a system that should run all 3D games on your discrete GPU, and run all video decoding and encoding on SNB's GPU. Since this isn't switchable graphics but rather a form of GPU virtualization you can actually run iGPU and dGPU applications at the same time (e.g. you can watch a movie in one window on the iGPU and play a game in another on the dGPU).

Virtu relies on profiles and hard coded GPU support. Currently there are around 100 games/benchmarks that are supported by Virtu. Eventually you'll be able to manually add your own titles but for now we have to rely on what Lucid has validated and enabled. GPU support is broad but limited to anything from the AMD 4xxx, 5xxx and 6xxx series as well as the NVIDIA 2xx, 4xx and 5xx series. Lucid pledges to always ensure the top games are tested/supported as well as the previous two generations of AMD and NVIDIA GPUs.
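The profile-driven routing described above can be sketched in a few lines. This is purely an illustration, not Lucid's actual implementation; the application names and the profile table are hypothetical.

```python
# Illustrative sketch of profile-based GPU routing, in the spirit of Virtu:
# look the calling application up in a validated profile list and dispatch
# its API calls to the discrete or integrated GPU accordingly.

# Hypothetical profile table: application -> GPU best suited to its workload.
PROFILES = {
    "metro2033.exe": "dGPU",       # validated game -> discrete GPU
    "mediaconverter.exe": "iGPU",  # transcoding -> Quick Sync on the iGPU
}

def route_call(app_name: str) -> str:
    """Return which GPU should service API calls from this application."""
    # Unrecognized applications fall back to the integrated GPU, which
    # also owns the display output in a Virtu-style setup.
    return PROFILES.get(app_name, "iGPU")

print(route_call("metro2033.exe"))       # dGPU
print(route_call("mediaconverter.exe"))  # iGPU
print(route_call("notepad.exe"))         # iGPU (no profile -> fallback)
```

Because the table is a whitelist, anything Lucid hasn't validated simply runs on the integrated GPU - which matches the behavior described above until user-defined profiles arrive.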

The Virtu software will be bundled with motherboards. The business arrangements will take place between the motherboard manufacturers and Lucid itself; the end user shouldn't have to worry about licensing the software.

Lucid gave us a copy of the software it shared with motherboard manufacturers: a Virtu release candidate. The software is still not mass-production ready and there are some limits (e.g. we can't define our own game profiles, and there's a Virtu logo plastered randomly on the screen when you're gaming), but it's enough to give us a brief look at the technology.

Installing Virtu was very simple. Just go through the installer application, reboot and you're good to go. The only requirements are that you're using a compatible video card and that your display is connected to the SNB video out and not the discrete GPU.

Once loaded the first thing I noticed was AMD's Catalyst Control Center and NVIDIA's control panel refused to load. As far as they were concerned, I was running an Intel HD 3000 GPU and they weren't needed. The appropriate AMD and NVIDIA drivers did load however.

Other than the irate control panels, the rest of the experience was completely seamless. I ran games, browsed the web and even transcoded a video - each application behaved as if the only GPU available was the one best suited for the task. Quick Sync even came up as an option under Arcsoft's Media Converter 7.

I measured performance with Virtu and natively off of the dGPU itself in four games to see how much overhead the frame buffer copying and Virtu interception posed:
AMD Lucid Virtu Performance Impact - 1920 x 1200, 4X AA, High Quality
                                 Civilization V   DiRT 2     Metro 2033   World of Warcraft
AMD Radeon HD 6970               39.6 fps         76.4 fps   34.7 fps     111.5 fps
AMD Radeon HD 6970 (Virtu)       36.5 fps         74.4 fps   32.3 fps     102.8 fps

NVIDIA Lucid Virtu Performance Impact - 1920 x 1200, 4X AA, High Quality
                                 Civilization V   DiRT 2     Metro 2033   World of Warcraft
NVIDIA GeForce GTX 460           38.8 fps         69.4 fps   18.7 fps     85.4 fps
NVIDIA GeForce GTX 460 (Virtu)   35.8 fps         48.0 fps   18.0 fps     79.7 fps

I generally saw a 2 - 8% drop in performance compared to a standalone discrete GPU without Virtu. The only exception was a big 30% drop on the GeForce GTX 460 running the DiRT 2 benchmark. Given the relatively consistent performance everywhere else, I'm guessing this is an early software artifact rather than a normal occurrence.
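The overhead percentages above can be recomputed directly from the GTX 460 table:

```python
# Recompute the Virtu overhead (percentage FPS drop vs. native dGPU output)
# from the GeForce GTX 460 numbers in the table above.
native = {"Civilization V": 38.8, "DiRT 2": 69.4,
          "Metro 2033": 18.7, "World of Warcraft": 85.4}
virtu = {"Civilization V": 35.8, "DiRT 2": 48.0,
         "Metro 2033": 18.0, "World of Warcraft": 79.7}

overhead = {game: round((1 - virtu[game] / native[game]) * 100, 1)
            for game in native}
for game, pct in overhead.items():
    print(f"{game}: {pct}% slower under Virtu")
# Everything lands in the single digits except DiRT 2's ~31% outlier.
```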

I also ran a Quick Sync test both with and without a discrete GPU attached - performance remained unchanged:
Lucid Virtu Performance Impact
                                                      Quick Sync: Nikon D7000 (1080p24) to iPhone 4
AMD Radeon HD 6970 + Intel HD Graphics 3000 (Virtu)   199.3 fps
Intel HD Graphics 3000                                199.3 fps

Finally I decided to run a Quick Sync test while I ran our Metro 2033 benchmark to see how running two tasks, each on an independent GPU, impacted each other:
Lucid Virtu Performance Impact (Metro 2033 + Quick Sync)
                                                      Quick Sync: Nikon D7000 (1080p24) to iPhone 4   Metro 2033 Benchmark
Peak Theoretical Performance                          199.3 fps                                       36.5 fps
AMD Radeon HD 6970 + Intel HD Graphics 3000 (Virtu)   72.0 fps                                        32.1 fps

While Metro didn't lose much performance, the Quick Sync task ran considerably slower. Remember that the Quick Sync engine shares resources with the Sandy Bridge CPU cores (mainly the ring bus and L3 cache). Having the CPU working on feeding the dGPU vertex data definitely impacts Quick Sync performance.

Finally I measured power consumption:
Lucid Virtu Power Consumption
                                 Idle     Load (Metro 2033)
Intel HD Graphics 3000           34.7W    N/A
AMD Radeon HD 6970 (Virtu)       126W     265W
NVIDIA GeForce GTX 460 (Virtu)   52.0W    191W

Here we see that there are still some kinks that need to be worked out. With the Radeon HD 6970 idle power is still quite high, even with the dGPU idle. The GeForce GTX 460 paints a different picture as Lucid manages to mostly power down the NVIDIA GPU when it's not in use. Note that even in this case there's a power penalty over a purely integrated setup - the dGPU is still active to a certain extent.
Final Words

Intel is slowly correcting the Sandy Bridge platform's issues. The first B3 stepping 6-series chipsets are now in the hands of OEMs and motherboard manufacturers, and Z68 boards are coming in the next quarter. Lucid's Virtu is a key part of the strategy, though, at least on the desktop. In mobile it's a non-issue as everyone supports some form of switchable graphics there, but for desktops we need a universal solution. While the Virtu release candidate still needs some work, it's far more polished than I expected it to be.

Once set up there's no user intervention necessary - the software just works. Fire up a game and it'll run on your discrete GPU. Visit YouTube or transcode a video and your discrete GPU powers down, leaving Sandy Bridge's on-die graphics to handle the workload.

There is definite overhead to Virtu - I measured 2 - 8% on average, however I did see a 30% figure pop up in DiRT 2 on NVIDIA hardware. I'd expect the performance hit to be less than 10% in most cases.

Board makers and OEMs should have their hands on the RC of Virtu now, meaning we should see it show up in motherboard boxes in the not too distant future. Of course this still doesn't take care of those users who wish to overclock their CPU, pair it with a discrete GPU and use Quick Sync as well. We'll have to wait until Z68 for that to happen. Even then, Lucid's Virtu will still likely play a role in those systems.


New Intel SSDs Boost Performance at Lower Prices

New solid-state drives from Intel use 25-nanometer technology to deliver performance for less. The Intel SSD 320 Series, with SATA II interfaces, comes in sizes up to 600GB. Intel said users of the new SSD 320 Series can expect a boost in system responsiveness of up to 66 percent. The Intel SSD 320 Series devices also can recover from a power outage.

Intel has refreshed its solid-state drive offerings based on the chipmaker's advanced 25-nanometer semiconductor technology. The goal is to lower the cost of employing SSD technology by up to 30 percent in comparison with the 80GB and 160GB models of the company's existing X25-M SATA SSD product line, Intel said Monday.

The new Intel SSD 320 Series devices are aimed at mainstream consumers and corporate IT buyers who want a substantial performance boost over the conventional hard disk drives deployed in most desktop and notebook PCs as well as corporate servers in data centers. Available in 40GB, 80GB, 120GB, 160GB and new higher-capacity 300GB and 600GB versions, the new 1.8-inch and 2.5-inch multilevel-cell NAND flash devices also integrate 3Gb/s SATA II interfaces.

"We see the Intel SSD 320 as a solid advancement to our SSD road map, and will continue to upgrade and refresh our SSD product line as we add more enterprise Relevant Products/Services options for our business Relevant Products/Services customers throughout the year," said Tom Rampone, vice president and general manager at Intel NVM Solutions Group.

Performance Enhancements

PC users upgrading to a new Intel SSD can expect a performance boost of up to 66 percent in overall system responsiveness, according to Intel. This is because SSD technology is able to speed up critical PC processes, such as bootup times as well as the opening of applications and large user files.

What's more, the new drives have been designed to store the user's ongoing data processes in temporary buffers for a very short period of time. In the event of an unexpected power outage, small capacitors aboard the device can deliver enough power to enable the SSD to make a recoverable copy of the buffer content.

Users deploying Intel's new SSD 320 Series should also benefit from enhanced multitasking capabilities. The chipmaker has more than doubled sequential write speeds to 220 MB/s, more than sufficient for users working on a document to also download a video or play background music without any perceivable slowdown.

Available for free download, Intel's SSD Toolbox utility eases the complexity of replacing any existing HDD on a desktop or laptop PC. Among other things, the software gives users the ability to clone the entire content of any existing storage drive to any Intel SSD. Also on tap is a powerful set of management, information and diagnostic tools for optimizing the performance and health of the new drive.

Data Security

To help protect personal data in the event of theft or loss, Intel SSD 320 Series models offer 128-bit AES encryption. And as a second layer of protection, the new drives require the user to enter a password each time the devices are powered on. "Protecting user data today has never been more critical -- it's about protecting the things you store on your computer's drive," said Intel Technical Marketing Engineer Charles Foster.

According to Intel, the new drives are equipped with a unique AES encryption key, which means the device is ready to roll out of the box. However, security-conscious users also can generate their own unique AES encryption key by using Intel's SSD Toolbox utility to perform a secure erase of the device.
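The idea behind key regeneration is worth spelling out: because everything on the drive is stored as ciphertext under one 128-bit key, discarding that key and generating a fresh one instantly renders the old data unrecoverable. A minimal stdlib-only sketch of the concept (illustrative only, not Intel's firmware; the function name is hypothetical):

```python
# Sketch of the secure-erase-by-rekeying idea: data on the drive lives
# encrypted under a 128-bit AES key. A secure erase discards that key and
# generates a new random one; without the old key, the old ciphertext is
# effectively wiped even though the flash cells were never rewritten.
import os

def generate_drive_key() -> bytes:
    """Generate a fresh random 128-bit (16-byte) drive encryption key."""
    return os.urandom(16)

old_key = generate_drive_key()
new_key = generate_drive_key()  # the "secure erase": old_key is forgotten

assert len(old_key) * 8 == 128  # 128-bit key, as on the SSD 320
assert old_key != new_key       # old ciphertext can no longer be decrypted
```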

Intel SSD 320 Series devices are slated to become available from retailers such as Best Buy, Fry's Electronics, Amazon.com and Newegg. Amazon's retail prices range from about $127 for the 40GB model and $192 for the 80GB device to $1,041 for Intel's top-of-the-line 600GB drive.
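
Those launch prices reward buying big; a quick cost-per-gigabyte calculation makes the spread obvious:

```python
# Amazon street prices quoted above: capacity in GB -> USD.
prices = {40: 127, 80: 192, 600: 1041}

per_gb = {gb: usd / gb for gb, usd in prices.items()}
for gb, cost in sorted(per_gb.items()):
    print(f"{gb:>4} GB: ${cost:.2f}/GB")
```

The 40GB model lands around $3.2/GB, while the 600GB flagship drops to roughly $1.7/GB.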


Sunday, March 27, 2011

Screamers: The Hunting (2009) Dvdrip 300MB MKV

Subtitles : English
Format : MKV
File size : 254 MiB
Duration : 1h 34mn
Width : 624 pixels
Height : 352 pixels
Frame rate : 23.000 fps
Audio Format : Cooker
Bit rate : 96 Kbps



Role Models (2008) Dvdrip 300MB MKV

Subtitles : English
Format : MKV
File size : 319 MiB
Duration : 1h 41mn
Width : 624 pixels
Height : 328 pixels
Frame rate : 23.976 fps
Audio Format : AAC
Bit Rate : 128 Kbps



Righteous Kill (2008) Dvdrip 300MB MKV

Subtitles : English
Format : MKV
File size : 348 MiB
Duration : 1h 40mn
Width : 640 pixels
Height : 272 pixels
Frame rate : 23.976 fps
Audio Format : AAC
Bit Rate : 128 Kbps



NVIDIA Announces CUDA 4.0

NVIDIA recently launched their lineup of Fermi-powered Tesla products, and used the occasion to announce version 3.2 of their CUDA GPGPU toolchain.

And though when we’re discussing the fast pace of the GPU industry we’re normally referring to NVIDIA’s massive consumer GPU products arm, the Tesla and Quadro businesses are not to be underestimated. An aggressive 6 month refresh schedule is not just good for consumer products it seems, but it’s good for the professional side too.

Even against the backdrop of a 6 month refresh schedule, quite a bit has changed in the intervening period. NVIDIA’s Parallel Nsight – which we only first discussed in depth back in September – has gone free, with NVIDIA realizing that charging for the software wasn’t going to sell as many GPUs and that no one likes doing software licensing. Meanwhile the first (and thus far only) Mac Fermi card was launched in the form of a Quadro card, helping NVIDIA go after the all-important niche of Mac desktop *nix programmers. Even the financial side of things is showing some change, with NVIDIA having just closed out Fiscal Year 2011 with nearly $100 million in Tesla sales, which at around 2.8% of NVIDIA’s revenue is the highest Tesla revenue has ever been. In fact, surprisingly enough, the only thing we haven’t seen is a Tesla refresh – we had GF110 pegged as an obvious upgrade for the Tesla line, which under GF100 continues to ship with only 448 SPs enabled to help meet the necessary 225W power envelope.

Meanwhile the CUDA team has been hard at work developing the next version of CUDA after CUDA 3.2, which brings us to today’s announcement. Today NVIDIA is announcing CUDA 4.0, the next full version of the toolchain. As is customary for CUDA development given its long QA cycle, NVIDIA is making their formal announcement well before the final version will be shipping. The first release candidate will be available to registered developers March 4th, and we’d expect the final version to be available a couple of months later based on NVIDIA’s previous CUDA releases.

CUDA 4.0 ends up being an interesting release as it breaks with NVIDIA’s previous release schedules somewhat. Previous CUDA releases were timed with the launch of hardware: CUDA 1.0 was released to go with G80/G9x (albeit nearly a year after they launched), CUDA 2.0 was released for GT200 in 2008, and CUDA 3.0 was released for Fermi in 2010. In the case of CUDA 4.0 there’s no new hardware to talk about at the moment, so it’s the first independent software-only major CUDA release. I’d expect that NVIDIA will still be on CUDA 4.x by the time Kepler launches, but that’s still several months out.

So what’s new in CUDA 4.0? As an independent software release NVIDIA’s biggest focus is on multi-GPU GPGPU performance of existing Fermi products. This is the next logical step for the company, as previous CUDA releases have continuously drilled down, starting with the basic CUDA framework which was suitable for embarrassingly parallel tasks that didn’t require inter-GPU communication, to CUDA 3.x which introduced GPUDirect, thereby giving 3rd party devices direct access to CUDA memory. CUDA 4.0 in turn is the next step on that long path, and will enable multiple GPUs within the same system/node to work together more closely by making it easier for GPUs to access each other’s memory.

Specifically NVIDIA is doing a few things here. On the software side NVIDIA is introducing a new unified virtual address space mode (aptly named Unified Virtual Addressing), which puts all CUDA execution – CPU and GPU – in the same address space. Prior to this each GPU and the CPU used their own virtual address space, which required a number of additional steps and careful tracking on behalf of CUDA software to copy data structures between address spaces. This would seem to be riskier on the driver side in order to keep GPUs and CPUs from stomping on each other (and hence the long QA cycle), but for CUDA developers the benefit is going to be very straightforward due to the easier memory management.
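
To see why a single address space simplifies that bookkeeping, consider this toy Python model (our own illustration of the mechanism, not NVIDIA's implementation): when every device's allocations occupy a disjoint slice of one shared virtual range, the runtime can infer the copy direction from the pointers alone, which is the idea behind CUDA 4.0's generic cudaMemcpyDefault direction.

```python
# Toy model of what Unified Virtual Addressing buys the programmer.
# Each device's allocations are carved out of one shared virtual range,
# so the runtime can tell where a pointer lives instead of requiring an
# explicit host-to-device / device-to-device direction flag.
RANGES = {
    "cpu":  range(0, 2**32),
    "gpu0": range(2**32, 2**33),
    "gpu1": range(2**33, 3 * 2**32),
}

def owner(ptr: int) -> str:
    """Map a virtual address back to the device that owns it."""
    for device, span in RANGES.items():
        if ptr in span:
            return device
    raise ValueError("unmapped address")

def memcpy(dst: int, src: int) -> str:
    """With UVA the copy direction falls out of the addresses themselves."""
    return f"{owner(src)} -> {owner(dst)}"

assert memcpy(dst=2**32 + 16, src=64) == "cpu -> gpu0"
assert memcpy(dst=2**33 + 8, src=2**32) == "gpu0 -> gpu1"
```

Before UVA, the CUDA program itself had to track which space each pointer belonged to and pick the right copy flag; here that burden moves into the runtime.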

Meanwhile on the hardware side NVIDIA is introducing GPUDirect 2.0. While GPUDirect 1.0 gave 3rd party devices direct memory access, it was primarily for network/infiniband communication purposes; GPUs within a node were still isolated in most cases, requiring data structures to be copied to system RAM first before any additional GPUs could access the data. GPUDirect 2.0 resolves this issue, introducing the ability for GPUs within a node to directly access each other’s memory without requiring a system memory copy first. And while system memory is by no means slow, this is still much faster; for fully fed PCIe x16 slots this gives each GPU 8GB/sec of low latency full duplex bandwidth to use between the CPU and other GPUs. From our impressions we’d categorize GPUDirect 2.0 as being very NUMA-like (Non-Uniform Memory Access), however there’s still an important distinction between local and remote memory as PCIe bandwidth is still a fraction of the speed of local memory – 8GB/sec versus 148GB/sec for a Tesla card, for example.

The addition of UVA on the software side and GPUDirect 2.0 on the hardware side are NVIDIA’s primary tactics to improving intra-GPU performance. PCIe’s limited bandwidth means that intra-GPU communication speeds will not be approaching intra-CPU communication speeds in the near future, so SMP-like operation is still some time off, but it should be fast enough to allow developers to work on new classes of problems that were too slow without UVA/GPUDirect.

Along with multi-GPU performance, NVIDIA is of course giving considerable focus to single/overall GPU performance. CUDA 4.0 follows up on CUDA 3.2’s additional libraries with yet another set of performance-optimized libraries. Thrust – an open source CUDA template library that mimics the C++ Standard Template Library (STL) – is being integrated into CUDA proper. Thrust has been available for a couple of years now as an external library that NVIDIA developed as a research project, and is now being promoted to a member of the CUDA family. C++ programmers used to the STL stand the most to gain, as Thrust is nearly identical and can automatically handle assigning work to GPUs or CPUs as necessary.

CUDA C++ is also getting some further improvements by introducing some C++ features that were absent under CUDA 3.x. Virtual functions are now supported, along with the New and Delete functions for dynamic memory. NVIDIA noted that with CUDA 4.0 they’re shifting to working on developer requests, with both of these features being highly requested. We had also asked NVIDIA about what C++ adoption by developers had been like – C++ being an important part of the Fermi hardware – but unfortunately NVIDIA doesn’t have the means to precisely track which languages developers are actually using. However it sounds like adding C++ was an appropriate choice for the company.

Finally, the last set of improvements NVIDIA is focusing on is on the developer tools themselves. Coming back again to the Mac/*nix market, NVIDIA had added CUDA debugging support to Mac OS X; *nix CUDA developers doing their development on Macs will now be able to debug their code right on their machines. Meanwhile NVIDIA’s Visual Profiler performance profiling tool is getting an upgrade of its own: previously it could identify bottlenecks in code, now it can offer hints on how to improve performance at those bottlenecks. Finally, the CUDA toolkit will now include a binary disassembler, for use in analyzing the resulting output of the CUDA compiler.

Wrapping things up, as we mentioned before the first release candidate of CUDA 4.0 will be available to registered developers on March 4th. NVIDIA doesn’t have a commitment date for the release version, but expect it to be available a couple of months later based on NVIDIA’s previous CUDA releases.


AVADirect's Clevo P170HM with GeForce GTX 485M: High-End You've Been Waiting For

When we reviewed the Clevo W880CU and, by extension, NVIDIA's GeForce GTX 480M, we were perplexed. Certainly NVIDIA had reclaimed the mobile graphics crown and no one could dispute that, but at what cost? The 480M was a cut-down mobile version of the already dire desktop GeForce GTX 465M. We even begged the question:

"Wouldn't the prudent thing to do have been to let ATI have their cake for the time being and try and push GF104 into laptops?" Today we have a better answer. AVADirect has been kind enough to send us a Clevo P170HM notebook outfitted with NVIDIA's latest and greatest, the GF104-based GeForce GTX 485M.

We're not going to lie; this is the mobile part we've been waiting for from NVIDIA ever since the GeForce GTX 460 was released. GF104 has been recently succeeded by the GF114 powering the GeForce GTX 560 Ti, but in a strange twist of fate, only notebooks will ever see a full-blown GF104. While the desktop version was trimmed down to 336 shaders, the mobile one powering the 485M features all 384.

AVADirect was gracious enough to provide us a Sandy Bridge notebook during a period when Sandy Bridge has been incredibly scarce from any angle, and the Clevo P170HM is the successor to the previously-reviewed W880CU. This is the configuration they shipped us:
Clevo P170HM Review Unit Specifications
Processor Intel Core i7-2720QM
(4x2.2GHz, 32nm, 6MB L3, Turbo to 3.3GHz, 45W)
Chipset Intel HM67
Memory 2x4GB DDR3-1066 (Max 4x4GB)
Graphics NVIDIA GeForce GTX 485M 2GB GDDR5
(384 CUDA cores, 575MHz/1150MHz/3GHz Core/Shader/RAM Clocks)
Display 17.3" LED Glossy 16:9 1920x1080
(LG LP173WF1-TLC1 Panel)
Hard Drive(s) Crucial RealSSD 300 128GB SATA 6Gbps SSD
Seagate Momentus XT 500GB 7200RPM SATA 3Gbps Hybrid HDD/SSD
Optical Drive BD-ROM/DVD+-RW Combo Drive
Networking JMicron PCIe Gigabit Ethernet
Realtek RTL8188CE 802.11b/g/n
Bluetooth 2.1+EDR
Audio Realtek ALC892 HD Audio
Five speakers and subwoofer
5.1 audio jacks with S/PDIF
Battery 8-Cell, 10.8V, 77Wh battery
Front Side Infrared port
Wireless switch
Indicator lights
Left Side CATV jack
Ethernet jack
2x USB 3.0
USB 2.0
4-pin FireWire port
MMC/SD/MS reader
Right Side Optical drive
Headphone jack
Microphone jack
Line-out/digital out jack
Line-in jack
USB 2.0
Back Side Kensington lock
Exhaust vent
AC adaptor jack
Exhaust vent
Operating System Windows 7 Home Premium 64-bit
Dimensions 16.22" x 10.87" x 1.65"-1.79" (WxDxH)
Weight 8.60 lbs
Extras 2MP Webcam
Flash reader (MMC, SD/Mini SD, MS/Duo/Pro/Pro Duo)
Blu-ray drive
USB 3.0
5.1 integrated sound
103 key keyboard with 10-key
Warranty 1-year limited warranty
Pricing Available starting at $1611
Price as configured: $2614

The two banner items in our review unit are going to be the Sandy Bridge-based Intel Core i7-2720QM and the shiny new NVIDIA GeForce GTX 485M. The Cougar Point chipset bug took a major bite out of the industry, so we appreciate AVADirect being willing to send us a review unit despite that. The Core i7-2720QM is one of those moronically fast new Sandy Bridge processors, built on a 32nm process and equipped with four Hyper-Threaded cores and 6MB of L3 cache. Turbo speeds are impressive to say the least: the i7-2720QM has a nominal stock clock of 2.2GHz, but is able to turbo up to 3GHz on all four cores, 3.2GHz on two cores, and 3.3GHz on just one core. A 3GHz mobile quad core, especially with the kind of performance Sandy Bridge yields, would've been unheard of not too long ago.

Of course, the really big news item here is NVIDIA's GeForce GTX 485M. Since the GF104 was released (and later refreshed by the GF114), I felt like that was the core that really should've gone mobile. The GF100-based GeForce GTX 480M was too horribly trimmed, essentially a lower-clocked version of the disappointment that was the desktop GeForce GTX 465, and as a result it wasn't the homerun AMD Mobility Radeon HD 5870 killer we wanted. That changes with the 485M.

NVIDIA has kept the stratospheric 100W TDP of its predecessor and used that thermal budget more efficiently. The 485M is a full-fledged GF104, something we never saw on the desktop. It sports 384 CUDA cores, with reasonable clocks of 575MHz on the core and 1150MHz on the shaders. The anemic 2.4GHz clock of the 480M's GDDR5 has been supplanted by a more reasonable 3GHz clock (though still a healthy distance from the 3.6GHz AMD is able to achieve on the Radeon HD 6970M), and that GDDR5 enjoys a healthy 256-bit memory bus. The net result is a chip that sports substantially higher clocks and a higher shader count than its predecessor. Note that the price to upgrade from an already expensive GTX 460M is $504 at AVADirect, so the GTX 485M does not come cheap!

Rounding out our review configuration of the Clevo P170HM is a Crucial RealSSD C300 128GB solid state drive, currently one of the only SSDs on the market supporting SATA 6Gbps, along with a Seagate Momentus XT 500GB hybrid drive pulling mass storage duties. The SATA 6Gbps functionality of the HM67 chipset is certainly in full effect, too, and looks to be supported in both drive bays. AVADirect also included a Blu-ray reader and the Clevo has a healthy amount of connectivity: USB 3.0, eSATA, FireWire, and gigabit and wireless networking are all accounted for. In fact the only curious omission is the lack of an ExpressCard slot, though given how everything else—even digital 7.1 audio—is accounted for, it's hard to imagine what you'd need to add.

The base price starts at $1611 from AVADirect, but once you start adding extras you'll get well above $2000 quite fast. With the GTX 485M, the bare minimum you'll pay is $2115. The i7-2720QM is actually a fairly cheap upgrade from the base i7-2630QM at only $45, though performance isn't all that different. The 128GB SSD tacks on another $175 compared to the typical Seagate 7200.4 500GB, and you'll want at least 2x2GB RAM ($18), but with a system this potent why not go for 2x4GB ($70). Outside of the hugely expensive GPU upgrade, all of the prices at AVADirect are very competitive with what you'd pay on Newegg.com, so we can't fault them here. Whether it's NVIDIA raking in the bucks on the 485M, or vendors taking a nice cut—or very likely some of each!—the fact is that you're getting a very capable gaming notebook. Whether you're willing to spend over two large on such a system is a different matter.


With Firefox 4 and Opera, It's Raining Browsers

Firefox 4 was recently unleashed, boasting many new features including the promise of dramatic speed and performance advancements across the board. In addition, Opera Mini 6 (for J2ME, Android, BlackBerry and Symbian/S60 phones) and Opera Mobile 11 (for Android, Symbian, Windows 7 desktop, MeeGo and Maemo) also came out for handheld devices.

A slew of new (free, of course) Web browsers have come out recently, highlighted by [the] launch of Firefox 4 for Windows, Mac OS X and Linux. The developers at Mozilla included many new features, but the biggest draw for users will likely be the promise of faster performance:

"With dramatic speed and performance advancements across the board, Firefox is between two and six times faster than previous releases. Major enhancements to the JavaScript engine make everything from startup time to page load speed to graphics and JavaScript performance screaming fast in Firefox."

Go get it now: www.mozilla.com/en-US/firefox/new/

In addition, Opera Mini 6 and Opera Mobile 11 also came out Tuesday for handheld devices.

Opera Mini 6 browser is available for J2ME, Android, BlackBerry and Symbian/S60 phones.

Meanwhile, Opera Mobile 11 is for Android, Symbian, Windows 7 desktop Relevant Products/Services (labs release), MeeGo (labs release) and Maemo (labs release) platforms.

Just point the browser on your phone to m.opera.com to get the new hotness.

And last but not least, Microsoft Relevant Products/Services about a week ago pushed out Internet Explorer 9. However, that browser is only available for Windows Vista and 7. If you're still rocking an XP machine, no soup for you.


Saturday, March 26, 2011

Synology DS211+ SMB NAS Review

Synology is one of the rapidly rising players in the SMB (Small to Medium Businesses) / SOHO (Small Office & Home Office) NAS market. This market is a highly competitive one with many players like QNAP, Thecus, Netgear, Drobo, LaCie, Seagate and Western Digital. Consumers with a necessity to store and backup their home media collection are also amongst the customers in this market.

Synology has a sensible model number nomenclature in which the last two digits refer to the year through which the model is intended for sale. The first set of digits refer to the maximum number of bays supported. Some models have a + at the end, signifying higher performance. Today, we have the DS211+ for review. The DS refers to the product category, Disk Station. 2 indicates a 2 bay model, and the 11 indicates a 2011 model. It is supposed to have a higher performance compared to the DS211 which was released in November 2010.

The purpose of any NAS is to serve as a centralized repository for data while also having some sort of redundancy built in. The redundancy helps in data recovery, in case of media failure of any other unforeseen circumstances. Along with the standard RAID levels, some companies also offer custom redundancy solutions. The OS on the NAS also varies across vendors.

In addition to manual support for the standard RAID configurations, Synology also provides the SHR (Synology Hybrid Raid) option. The OS on the DS211+ is the Disk Station Manager 3.0 (DSM), a Linux variant. Most of its features for day-to-day operations can be accessed over a web browser.

The last SMB NAS that we reviewed was the LaCie 5big Storage Server, a 5 bay model running Windows Storage Server 2008. We introduced our new NAS benchmarking methodology in that review. In addition to repeating the methodology on the DS211+, we also checked up a little bit on the Linux performance. Before we get to that, however, let us devote a couple of sections to the hardware and software that make up the DS211+.


Netgear 3DHD Wireless Home Theatre Networking Kit

When I first saw the NETGEAR 3DHD product on the showroom floor of CES, there was the device sitting on the table and a video playing on a screen. I was confused as to the nature of the device; was it a wireless HDMI solution?
The product model is 3DHD, with 3D, HD 1080p, video, etc. plastered all over the packaging, and the product name is 3DHD Wireless Home Theatre Networking Kit. As we spoke with a Netgear representative, it became clear that this was not a wireless HDMI implementation but an 802.11 networking solution. To put it in clear terms, despite the 3D buzzwords plastered all over the box, to the technical user; this product is a network bridge device.

NETGEAR isn't wrong to focus some of their 802.11 products directly at multimedia applications, as moving video wirelessly and reliably to high definition TV sets is a feature that many people are looking for. For cost, ease of use, and superior reliability, it is always recommended to simply run a cable. Sometimes however, a wireless solution is the only answer. Maybe you rent and your landlord doesn't want any holes punched in the walls, or maybe there is more complexity and cost in getting the wiring in the wall or across the house than by simply using a wireless solution.

NETGEAR's whitepaper documentation identifies bandwidth and interference as the two major challenges to getting reliable, bandwidth intensive video applications to work properly over wireless. To get the required amount of bandwidth, the 3DHD utilizes 4x4 MIMO antenna technology. MIMO systems offer significant increases in data rates, range, and reliability by exploiting the spatial dimension associated with the multiple antennas. The 4x4 MIMO configuration provides two extra transmit antennas for beamforming, which allows significant focusing of the energy in two directions. This is done to improve reliability as well as to reduce interference with existing wireless systems, steering the energy directly in the required direction. We are eager to see if the technical features built into this product provide any advantage over other 5GHz networking devices.


MS Offers Tools To Build and Manage Private Clouds

A new beta of System Center 2012 from Microsoft offers tools to build private cloud services. Microsoft also introduced Windows Intune to manage PCs. The new System Center uses Concero to let department-level managers handle applications while keeping IT in control. An analyst called System Center a "great tool" for Microsoft shops.

At the Management Summit in Las Vegas this week, Microsoft Relevant Products/Services introduced beta 2 of its System Center 2012, offering tools for IT Relevant Products/Services managers to deliver private cloud services, and Windows Intune, providing PC management through the cloud. Intune is generally available in 35 countries, and System Center, available now as a beta, is expected to be released later this year.

Corporate Vice President Brad Anderson said "virtualization Relevant Products/Services and server consolidation are important steps toward cloud computing," but management tools such as those provided by the newest System Center must offer intelligence about apps' performance, in addition to "management of virtual Relevant Products/Services machine black boxes." He added that Intune "is the perfect example of how customers and partners of all sizes can take advantage of cloud computing for easy-to-use PC management and security Relevant Products/Services."

Concero for Department-Level Managers

With the newest System Center, IT managers can build private clouds using their current infrastructure Relevant Products/Services as well as other vendors' platforms and virtualization technologies. In his keynote address at the summit, Anderson demonstrated System Center 2012's capability, code-named Concero, to allow department-level managers to deploy and manage their applications on private and public clouds.

System Center 2012 is designed to enhance the existing Microsoft Hyper-V cloud programs for private cloud computing, as well as provide employees with applications running on a wider range of devices.

Microsoft said a recent study it commissioned from IDC showed that Intune could result in a total savings of $702 per PC per year, including IT labor reduction, user productivity Relevant Products/Services savings, and cost recovery Relevant Products/Services from not having to use other tools. The study also said Intune represents an opportunity for Microsoft's partners to reach more customers and start or expand managed-services businesses.

IDC said partners can support more customers using the Intune cloud-based system Relevant Products/Services than with on-site management -- without additional staff.

'A Clear Signal To Customers'

Laura DiDio, an analyst with industry research firm Information Technology Intelligence Corp, said Microsoft's emphasis on private cloud management in System Center 2012 "is something we've been seeing a lot of vendors do." She noted that last year public clouds were the flavor, but IT departments "had a lot of questions about public clouds," not the least of which was security.

She said Security Center 2012 keeps the IT department in control while "maintaining an overarching view of application performance." From a strategic point of view, DiDio said, System Center 2012 will provide the tools needed to "standardize your infrastructure, which is the key to simplification, so you can get your machine spread under control."

For Microsoft shops, she said, System Center 2012 will be "a great tool to have," and it is "sending a clear signal to customers that Microsoft has a cloud strategy" for management.


Friday, March 25, 2011

MacBook Pro 2011 Refresh: Specs and Details

As expected, Apple today unveiled a range of speed and functionality improvements for its MacBook Pro lineup. The update was unusually quiet for Apple. There was no scheduled press event and nothing more than a press release announcing the specs and availability. Apple retail stores received stock prior to today and began selling product immediately. The Apple online store also has immediate availability.

No mere speed bump, these new MacBooks bring Intel’s new Sandy Bridge processors chipsets to the entire line, replacing the previous Arrandale processors and finally retiring the aging Core 2 Duo from service in the 13-inch model.

Contrary to earlier reports, there are no default SSD configurations although the solid state offerings are still optional. The big new feature (outside of Sandy Bridge) is support for the first incarnation of Intel’s Light Peak interface technology, now called Thunderbolt.
The Facts

2011 MacBook Pro Lineup
13-inch (low end) 13-inch (high end) 15-inch (low end) 15-inch (high end) 17-inch
Dimensions 0.95 H x 12.78 W x 8.94 D 0.95 H x 12.78 W x 8.94 D 0.95 H x 14.35 W x 9.82 D 0.95 H x 14.35 W x 9.82 D 0.98 H x 15.47 W x 10.51 D
Weight 4.5 lbs (2.04 kg) 4.5 lbs (2.04 kg) 5.6 lbs (2.54 kg) 5.6 lbs (2.54 kg) 6.6 lbs (2.99 kg)
CPU 2.3 GHz dual-core Core i5 2.7 GHz dual-core Core i7 2.0 GHz quad-core Core i7 2.2 GHz quad-core Core i7 2.2 GHz quad-core Core i7
GPU Intel HD 3000 Graphics Intel HD 3000 Graphics Intel HD 3000 + AMD Radeon HD 6490M (256MB) Intel HD 3000 + AMD Radeon HD 6750M (1GB) Intel HD 3000 + AMD Radeon HD 6750M (1GB)
RAM 4GB 1333MHz DDR3 (8GB max) 4GB 1333MHz DDR3 (8GB max) 4GB 1333MHz DDR3 (8GB max) 4GB 1333MHz DDR3 (8GB max) 4GB 1333MHz DDR3 (8GB max)
HDD 320GB 5400 RPM 500GB 5400 RPM 500GB 5400 RPM 750GB 5400 RPM 750GB 5400 RPM
Display Resolution 1280x800 1280x800 1440x900 (1680x1050 optional) 1440x900 (1680x1050 optional) 1920x1200
Ports Gigabit LAN, Firewire 800, Thunderbolt, 2x USB 2.0, SDHC slot, combined audio in/out jack Gigabit LAN, Firewire 800, Thunderbolt, 2x USB 2.0, SDHC slot, combined audio in/out jack Gigabit LAN, Firewire 800, Thunderbolt, 2x USB 2.0, SDHC slot, separate audio in/out jacks Gigabit LAN, Firewire 800, Thunderbolt, 2x USB 2.0, SDHC slot, separate audio in/out jacks Gigabit LAN, Firewire 800, Thunderbolt, 3x USB 2.0, separate audio in/out jacks, ExpressCard 34 slot
Price $1,199 $1,499 $1,799 $2,199 $2,499

When Apple moved its MacBook Pro lineup to Arrandale, the poor 13-inch model lost out - it remained with an older Core 2 Duo CPU. The move to Sandy Bridge is different - all models got an upgrade.

Sandy Bridge is used across the board and interestingly enough only the 13-inch model uses a dual-core CPU. Both the 15-inch and 17-inch MacBook Pros now feature quad-core CPUs. This makes these two MacBook Pros ripe for a desktop replacement usage model, particularly if paired with an SSD.

Sandy Bridge obviously integrates Intel’s HD 3000 graphics on die, which is used by all of the new MBPs by default. The 15-inch model and 17-inch model add switchable dedicated graphics from AMD, ousting the NVIDIA chips that powered the previous lineup. I wouldn’t read too much into this – Apple is always going back and forth between NVIDIA and AMD graphics, usually based on whoever happens to be offering the best or most efficient chip at the time of refresh.

Per usual, this refresh sees Apple offering customers more computer for the same money, rather than giving out any substantial price cuts. This is nothing specific to Apple but rather a benefit of buying in an industry driven by Moore's Law.

One number on this spec sheet sticks out like a sore thumb from the rest, and that is Apple's decision to offer 5400RPM SATA hard drives as the default storage option across the line. The price differential between 5400 RPM drives and 7200 RPM drives is negligible these days, and for these prices, the company could certainly afford to address this performance bottleneck. I would hope that Apple would at least consider Seagate’s hybrid drive as an alternative until we get Intel enabled SSD caching.

Upgrades to 128GB, 256GB, and 512GB solid state drives available but predictably costly ($250, $650, and a whopping $1,250, respectively). It is worth noting that at $250 for a 128GB SSD, Apple’s upgrade pricing isn’t too far off what the market value is for the lowest end SSD. The 256GB pricing is a bit insane.

Apple has finally standardized on 4GB of memory across the board, although I would’ve liked to have seen 8GB offered on the higher end configurations.

Also new is what Apple calls a "FaceTime HD camera," which looks to be a high definition version of Apple's standard webcam - not much more that's noteworthy about this, except that the iSight moniker is continuing its slow disappearance from Apple's spec sheet one model at a time.

It is disappointing that Apple makes no mention of QuickSync in its announcement. The hardware video transcoding engine is a key part of Sandy Bridge, however it looks like OS X support for the technology may not be ready quite yet.

It’s worth noting that Apple’s new laptops were apparently not delayed much by the SATA bug discovered in the 6-series chipsets last month – this likely means that Apple is shipping the affected B2 stepping parts but only using the 6Gbps ports.

There’s no change in chassis size or weight with the new MacBook Pros, this is an internal upgrade. Well, mostly...


Intel's Codename Light Peak Launches as Thunderbolt

Back at IDF 2010, we wrote about Intel Light Peak nearing its eventual launch in 2011. Back then, the story was a 10 Gbps or faster physical link tunneling virtually every protocol under the sun over optical fiber. Though an optical physical layer provided the speed, in reality the connector and physical layer itself wasn’t as important as the tunneling and signaling going on beneath it.

The dream was to provide a unified interface with enough bandwidth to satisfy virtually everything desktop users need at the same time - DVI, HDMI, DisplayPort, USB, FireWire, SATA, you name it. Daisy chain devices together, and connect everything with one unified connector and port. At IDF, we saw it moving data around between an Avid HD I/O box, a Western Digital external RAID array, and simultaneously outputting audio and video over HDMI. Intel also had another live demo working at over 6.5 Gbps.

That dream lives on today, but sans optical fiber and under a different name. Intel’s codename “Light Peak” is now named Thunderbolt. In addition, instead of optical fiber, ordinary copper does an adequate enough job until suitably cheap optical components are available. It’s a bit tough to swallow that optical fiber for the desktop still isn’t quite ready for mainstream consumption - issues like bend radius and the proper connectors were already mitigated - but copper is good enough in the meantime. Thunderbolt launched with the 2011 MacBook Pro, and though the interface isn’t Apple exclusive, will likely not see adoption in the PC space until 2012.

Although Thunderbolt in its launch instantiation is electrical, future versions will move to and support optical connections. When the transition to optical takes place, legacy electrical connector devices will work through cables with an electro-optical transceiver on the cable ends so there won’t be any need to use two separate kinds of cables. The optical version of Thunderbolt is allegedly coming later this year.

Thunderbolt shares the same connectors and cabling with mini DisplayPort, however Thunderbolt cables have different, tighter design requirements to fully support Thunderbolt signaling. DisplayPort is an interesting choice since it’s already one of the fastest (if not the fastest) desktop interfaces, topping out at 17.28 Gbps in DisplayPort 1.2 at lengths of under 3 meters. At longer distances, physics rears its ugly head, and throughput drops off over electrical links. Of course, the eventual advantage of moving to photons instead of electrons is greater distance without picking up much latency.

Thunderbolt is dual-channel, with each channel supporting 10 Gbps of bidirectional bandwidth. That’s a potential 20 Gbps of upstream and 20 Gbps of downstream bandwidth. The connection supports a daisy chain topology, and Thunderbolt also supports power over the cable, 10W to be precise. We aren't sure at this time what the breakdown on voltage/amperage is though.
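The arithmetic behind those figures is straightforward. A quick sketch (channel counts and rates from the paragraph above; the file size and variable names are hypothetical, and protocol overhead is ignored):

```python
GBPS = 1e9  # bits per second

channels = 2
per_channel_gbps = 10  # each channel is 10 Gbps, bidirectional

# Two bidirectional 10 Gbps channels -> 20 Gbps up and 20 Gbps down
upstream_gbps = channels * per_channel_gbps
downstream_gbps = channels * per_channel_gbps

# Time to move a hypothetical 25 GB (decimal gigabyte) file at full line rate
file_bits = 25 * 8 * GBPS
seconds = file_bits / (downstream_gbps * GBPS)

print(f"{upstream_gbps} Gbps up / {downstream_gbps} Gbps down")
print(f"25 GB file in ~{seconds:.0f} s at line rate")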

Back when it was Light Peak, the goal was to tunnel every protocol under the sun over a common fast link. Multiplex everything together over one protocol-agnostic link, and then you could drop relevant data for each peripheral at each device in the daisy chain. Up to 2 high-resolution DisplayPort 1.1a displays and 7 total devices can be daisy chained. Thunderbolt instead carries just two protocols - DisplayPort and PCI Express. Tunnel a PCIe lane over the link, and you can dump it out on a peripheral and use a local SATA, FireWire, USB, or Gigabit ethernet controller to do the heavy lifting. Essentially any PCI Express controller can be combined with the Thunderbolt controller to act like an adapter. If you want video from the GPU, a separate dedicated DisplayPort link will work as well. Looking at the topology, a 4x PCI Express link is required in addition to a direct DisplayPort connection from the GPU.
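The topology rules above are easy to model. A minimal sketch, assuming the limits stated in the text (at most 7 daisy-chained devices, at most 2 DisplayPort displays, and only PCIe and DisplayPort tunneled); the `validate_chain` function and protocol strings are hypothetical names, not any real API:

```python
MAX_DEVICES = 7   # total devices in a daisy chain
MAX_DISPLAYS = 2  # high-resolution DisplayPort 1.1a displays

def validate_chain(devices):
    """devices: list of protocol strings, e.g. 'pcie' or 'displayport'."""
    if len(devices) > MAX_DEVICES:
        return False, "too many devices in chain"
    if any(d not in ("pcie", "displayport") for d in devices):
        return False, "Thunderbolt tunnels only PCIe and DisplayPort"
    displays = sum(1 for d in devices if d == "displayport")
    if displays > MAX_DISPLAYS:
        return False, "too many DisplayPort displays"
    return True, "ok"

# A RAID box (PCIe) and a display daisy-chained together: valid
print(validate_chain(["pcie", "displayport"]))
```

Anything else - a FireWire drive, a Gigabit NIC - shows up on the chain as a PCIe device, since the peripheral's local controller does the heavy lifting.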

Apple learned its lesson after FireWire licensing slowed adoption - the Thunderbolt port and controller specification are entirely Intel’s. Similarly, there’s no per-port licensing fee or royalty for peripheral manufacturers to use the port or the Thunderbolt controller. iFixit beat Anand and me to tearing down the 2011 MacBook Pro (though I did have one open, and was hastily cramming my OptiBay+SSD and HDD combo inside) and already got a shot of Intel's Thunderbolt controller, which itself is large enough to be unmistakable:

Thunderbolt Controller IC on 15" 2011 MacBook Pro - Courtesy iFixit

In addition, you can still plug normal mini DisplayPort devices into Thunderbolt ports and just drive video if you so choose.

Though there aren’t any Thunderbolt compatible peripherals on the market right now, Western Digital, LaCie, and Promise have announced storage solutions with Thunderbolt support. Further, a number of media creation vendors have announced or already demonstrated support, like the Avid HD I/O box we saw at IDF.

Thunderbolt already faces competition from 4.8 Gbps USB 3.0, which has already seen a lot of adoption on the PC side. The parallels between USB 2.0 / FireWire and USB 3.0 / Thunderbolt are difficult to ignore, and ultimately peripheral availability and noticeable speed differences will sell one over the other in the long run. Moving forward, it’ll be interesting to see Thunderbolt finally realize the “light” part of Light Peak’s codename, and exactly how that transition works out for the fledgling interface.


Parature Spring '11 Boosts Social CRM Competition

The "Wild West" of social CRM has become more competitive with Parature Spring 'll, designed to leave a better impression on customers and engage with target audiences. Parature's Duke Chung said Parature differentiates by integrating social channels that customers prefer to use. Client Vovici said Parature helps resolves issues quickly.

Cloud-based customer engagement just got more competitive. That's because Parature has unveiled its latest social CRM software.

Parature Spring '11 is designed to help customer support teams leave a better impression on customers and engage with the company's target audiences through vehicles like Twitter, Lithium communities, Facebook, chat, phone, e-mail and, of course, the web. The new version lets companies monitor, manage and resolve service issues across those channels.

Even as Duke Chung, chief strategy officer for Parature, launches Spring '11 and prepares to roll out Parature for Facebook as a stand-alone product, he sees plenty of competition on the horizon.

The Wild West of Social Software

"The social-software marketplace is still undefined to a large part -- it's like the Wild West out there," Chung said. "There are hundreds of companies that position themselves as social-software providers. Companies looking to make a purchase have to educate themselves so thoroughly and then make the decision that best fits their strategies and organization."

As Chung sees it, this process can be exhausting and confusing, and can lead to folks wanting to wait and see how the game pans out and who emerges as the leader. He is betting that by integrating social channels into the experience, Parature will stand out from the social-applications crowd.

With Spring '11, Parature expands its direct social-engagement model to Twitter and Lithium. The new version also offers auto-suggest and improved search features. Customers can automatically receive auto-suggested answers while submitting tickets. Parature also beefed up the analytics; developed the ability to view, report and automatically remove profane content from a Facebook page for brand security; and added rich media support and unified management of multiple Facebook pages.

"Since last year, Parature has been focused on integrating social networks and channels into its overarching cloud-based support solutions," Chung said. "The reason being that as the web evolves to be more social and collaborative, consumers and customers are looking to their preferred channel to find the answers they need. Facebook has over 500 million members, and members spend hours at a time on the network. Customers don't want to leave their preferred channel to head elsewhere to get the answers they are looking for."

Resolving Issues More Quickly

Vovici, a survey-software and enterprise-feedback-solutions company, is tapping into Parature to meet the increasing demand it sees for direct communication with its clients through social channels.

"Vovici is all about customers; it's not only our business Relevant Products/Services but our internal philosophy as well," said Nancy Porte, Vovici's vice president of customer experience. "Delivering service with Parature's customer-support software enables our customers to quickly and independently resolve issues or directly engage with our support team via chat or a social network, which helps us differentiate ourselves, retain existing customers, and attract new ones."


Thursday, March 24, 2011

Poison Ivy: The Secret Society (TV 2008) Dvdrip 300MB MP4

Subtitles : English
Format : MP4
File size : 346 MiB
Duration : 1h 31mn
Width : 528 pixels
Height : 296 pixels
Frame rate : 23.976 fps
Audio Format : AAC
Bit rate : 128 Kbps



Poison Ivy: The New Seduction (Video 1997) Dvdrip 300MB rmvb

Subtitles : English
Format : rmvb
File size : 238 MiB
Duration : 1h 34mn
Width : 640 pixels
Height : 352 pixels
Frame rate : 23.000 fps
Audio Format : Cooker
Bit rate : 96 Kbps



Poison Ivy II (1996) Dvdrip 300MB rmvb

Subtitles : English
Format : rmvb
File size : 272 MiB
Duration : 1h 47mn
Width : 560 pixels
Height : 304 pixels
Frame rate : 23.000 fps
Audio Format : Cooker
Bit rate : 96 Kbps



Poison Ivy (1992) Dvdrip 300MB rmvb

Subtitles : English
Format : rmvb
File size : 236 MiB
Duration : 1h 32mn
Width : 592 pixels
Height : 320 pixels
Frame rate : 23.000 fps
Audio Format : Cooker
Bit rate : 96 Kbps



OCZ Vertex 3 Preview: Faster and Cheaper than the Vertex 3 Pro

Last week OCZ pulled the trigger and introduced the world’s first SF-2000 based SSD: the Vertex 3 Pro. Not only was it the world’s first drive to use SandForce’s 2nd generation SSD controller, the Vertex 3 Pro was also the first SATA drive we’ve tested to be able to break 500MB/s on both reads and writes. Granted, that’s with highly compressible data, but the figures are impressive nonetheless. What wasn’t impressive, however, was the price. The Vertex 3 Pro is an enterprise class drive, complete with features that aren’t exactly in high demand on a desktop. As a result the V3P commands a premium - the drive starts at $525 for a 100GB capacity.

Just as we saw last round however, if there’s a Vertex 3 Pro, there’s bound to be a more reasonably priced non-Pro version without some of the enterprisey features. Indeed there is. Contained within this nondescript housing is the first beta of OCZ’s Vertex 3 based on a SandForce SF-2200 series controller. The price point? Less than half of that of the V3P:
Pricing Comparison
                     128GB class      256GB class      512GB class
OCZ Vertex 3 Pro     $525 (100GB)     $775 (200GB)     $1350 (400GB)
OCZ Vertex 3         $249.99          $499.99          N/A

At an estimated $250 for a 120GB drive the Vertex 3 is more expensive than today’s Vertex 2, but not by too much nor do I expect that price premium to last for long. The Vertex 2 is on its way out and will ultimately be replaced by the V3. And SSD prices will continue to fall.

What sets a Vertex 3 apart from a Vertex 3 Pro? Not all that much, but SandForce has grown a lot in the past year and instead of just a couple of SKUs this time around there are no less than seven members of the SF-2000 family.

You should first know that SandForce only produces a single die; the differences between all of the members of the SF-2000 family are strictly packaging, firmware and testing.

The main categories here are SF-2100, SF-2200, SF-2500 and SF-2600. The 2500/2600 parts are focused on the enterprise. They’re put through more aggressive testing, their firmware supports enterprise-specific features and they support the use of a supercap to minimize data loss in the event of a power failure. The difference between the SF-2582 and the SF-2682 boils down to one feature: support for non-512B sectors. Whether or not you need support for this really depends on the type of system it’s going into. Some SANs demand non-512B sectors, in which case the SF-2682 is the right choice.

You may remember that our Vertex 3 Pro sample used a SF-2682 controller. That’s because initially all SandForce made were SF-2682s. Final versions of the V3P will likely use the cheaper SF-2582.

The SF-2200/2100 series are more interesting because of their lower target price points. You lose support for the supercap but that’s not as big of a deal on the desktop since you’re not working with mission critical data. The big difference between the 2200 and 2100 is support for 6Gbps SATA, the former supports it while the latter doesn’t. This is a pretty big difference because as we’ve seen, when paired with a 3Gbps controller the SF-2000 isn’t too much better than what we had with the SF-1000.

The other big difference is the number of byte lanes supported by the controller. The SF-2181 and above all support 8 NAND flash channels; however, only the SF-2282 supports 16 byte lanes. Each NAND device interface is one byte (8 bits) wide, so supporting 16 byte lanes means that each channel can be populated by two NAND devices. This lets a single SF-2282 controller talk to twice as many NAND devices as an SF-2281.

There’s no performance difference between the 8- and 16-byte-lane versions of the chip; it’s just a matter of pure capacity. Thankfully, with 25nm NAND you can get 8GB of MLC NAND on a single die, so both the 2281 and 2282 should be able to hit 512GB capacities (the 2281 simply needs higher density NAND packages).
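The capacity math works out the same either way. A quick sketch using the figures above (8 channels, 8GB per 25nm die; the function name and package configurations are hypothetical):

```python
DIE_GB = 8    # 25nm MLC NAND: 8GB per die
CHANNELS = 8  # both controllers expose 8 flash channels

def max_capacity_gb(byte_lanes, dies_per_package):
    # 16 byte lanes let each channel host two NAND devices;
    # 8 byte lanes allow only one device per channel.
    devices_per_channel = byte_lanes // 8
    return CHANNELS * devices_per_channel * dies_per_package * DIE_GB

# SF-2282 (16 lanes) with 4-die packages vs SF-2281 (8 lanes),
# which needs denser 8-die packages to reach the same point:
print(max_capacity_gb(16, 4))  # 512
print(max_capacity_gb(8, 8))   # 512
```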

The Vertex 3 sample we have here today uses the SF-2281. Our sample came configured with sixteen 16GB Micron 25nm ONFI 2.0 NAND devices. Remember that while both Intel and Micron own the 25nm fabs, the two companies are providing different specs/yields on 25nm NAND. The 25nm Micron stuff is rated at around 3,000 p/e cycles from what I’ve heard, while the Intel 25nm is rated at 5,000. The main difference here is that the Micron is available in great quantities today while the Intel 25nm isn’t.
RAISE: Optional

One other difference between the SF-2500/2600 and the SF-2100/2200 is the optional nature of RAISE. You'll remember that in order to allow for lower quality NAND, SandForce stripes a small amount of redundant data across the array of NAND in a SF-1000/2000 drive. SandForce never stores your actual data, but rather a smaller hash/representation of your data. When your data is compressed/deduped for storage, SandForce's controller also generates parity data equal to the size of a single NAND die in the array. This process is known as RAISE (Redundant Array of Independent Silicon Elements) and it allows you to lose as much as a full NAND die's worth of data and still never see a bit of data loss from the user's standpoint. At 25nm however a single die can be as large as 8GB, which on a lower capacity drive can be a significant percentage of the total drive capacity.
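To see why that overhead matters more at 25nm, consider parity equal to one 8GB die as a fraction of a few hypothetical drive capacities (the function name and capacity points are illustrative, not SandForce's):

```python
def raise_overhead_pct(drive_gb, die_gb):
    # RAISE reserves parity equal to one NAND die out of the whole array
    return 100.0 * die_gb / drive_gb

for drive in (60, 120, 240):
    print(f"{drive}GB drive, 8GB die: {raise_overhead_pct(drive, 8):.1f}% parity")
```

On a 240GB drive the cost is modest, but on a 60GB drive more than an eighth of the NAND goes to parity, which is why disabling RAISE becomes attractive on small-capacity drives.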

With the SF-2100/2200, SandForce allows the manufacturer to disable RAISE entirely. At that point you're left with the new 55-bit BCH ECC engine to do any error correcting. According to SandForce the new BCH ECC engine is sufficient for dealing with errors you'd see on 25nm NAND and RAISE isn't necessary for desktop workloads. Drive makers are currently contemplating what to do with RAISE but as of now the Vertex 3 is set to ship with it enabled. The drive we have here today has 256GB of NAND; it'll be advertised as a 240GB drive and appear as a 223.5GB drive in Windows.
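The 240GB-advertised versus 223.5GB-in-Windows discrepancy is just decimal versus binary gigabytes: drive makers count 10^9 bytes per GB, while Windows reports capacity in 2^30-byte units. A quick check:

```python
advertised_gb = 240
bytes_total = advertised_gb * 10**9  # drive makers use decimal GB

# Windows divides by 2**30 (binary "GB", i.e. GiB)
windows_gb = bytes_total / 2**30
print(f"{windows_gb:.1f}")  # 223.5
```

The remaining 16GB of raw NAND (256GB minus 240GB) is set aside as spare area for the controller.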
Here We Go Again: 4KB Random Write IOP Caps

With the SF-1200 SandForce capped the peak 4KB random write speed of certain drives while negotiating exclusive special firmware deals with other companies to enable higher performance. It was all very confusing, as SandForce shipped initial firmware revisions with higher performance and then attempted to take that performance away through subsequent firmware updates.

If you pay attention to the table above you’ll notice that there are two specs for 4KB random write IOPS: burst and sustained. The burst value is for around 15 seconds of operation, the sustained is what happens when the firmware initiated performance cap kicks into action. By default the SF-2100/2200 drives have a cap of 20,000 IOPS for 4KB random writes. After a period of about 15 seconds, the max performance on these drives will drop to 20K. The SF-2500/2600 controllers are uncapped, max performance can remain at up to 60K IOPS.
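The burst-then-cap behavior described above can be modeled in a few lines (the function, its parameters, and the workload values are hypothetical; the 15-second window and 20K/60K figures come from the text):

```python
BURST_SECONDS = 15   # roughly how long burst performance lasts
CAP_IOPS = 20_000    # SF-2100/2200 sustained 4KB random write cap

def effective_iops(uncapped_iops, elapsed_seconds, capped=True):
    # Uncapped parts (SF-2500/2600) run at full speed indefinitely;
    # capped parts fall to CAP_IOPS once the burst window expires.
    if not capped or elapsed_seconds <= BURST_SECONDS:
        return uncapped_iops
    return min(uncapped_iops, CAP_IOPS)

print(effective_iops(60_000, 5))           # full speed during the burst
print(effective_iops(60_000, 180))         # capped once the window expires
print(effective_iops(60_000, 180, False))  # uncapped SF-2500/2600
```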

The beta Vertex 3 review sample I have here today manages around 45K IOPS in our 4KB random write test. That test runs for 3 minutes straight so obviously the cap should’ve kicked in. However it didn’t.

I asked SandForce why this was. SandForce told me that the initial pre-release firmwares on the SF-2200 drives don’t have the cap enabled, but the final release will put the cap in place. I also asked SandForce if it was possible for one of its partners to ship with a special firmware build that didn’t have the cap in place. SandForce replied that anything was possible.

I asked OCZ if this meant the drive I was testing wasn’t representative of final, shipping performance. OCZ stated very clearly that performance will not change between the drive I have today and the drive that goes on sale in the next 2 months. To me this sounds like SF and OCZ have struck another exclusive firmware deal to ensure slightly higher performance on the Vertex 3 compared to a standard SF-2200 based drive.

SandForce wouldn’t comment on any existing agreements and OCZ said it couldn’t get SandForce to confirm that the V3’s performance wouldn’t change between now and its eventual release. Based on what we saw last time I expect SandForce to offer the 60K IOPS firmware to all partners that meet certain order size commitments. Order enough controllers and you get a special firmware, otherwise you’re stuck with the stock SF-2200 firmware.

Of course this makes things very confusing for those of you looking to shop around when buying a SF-2200 drive. I do wish SandForce would just stick to a single spec and not play these sorts of games but that’s just how business works unfortunately.

The good news is that for most desktop workloads you don’t really benefit from being able to execute more than 20K IOPS, at least in today’s usage models.