Saturday, September 25, 2010


Do you want the gadgets in your pocket to help you make decisions or monitor aspects of your daily life? Intel believes context-aware computing will take hardware to the next level of intelligence, but there are privacy issues to consider, too.

If you have a fairly current smart phone, it has some sensors built in. It likely has a digital camera, a motion sensor, a GPS radio, and possibly even a tiny gyroscope. Right now, though, your phone is just a collection of hardware with various bits of software running on it. Need to get somewhere? Fire up the GPS app and get directions. Or, use the GPS locator to automatically check you in on Facebook Places or Foursquare.

But what if your smart phone could be really...well, smart? What if your phone always had software running in the background keeping track of what you do? We’re not talking about giving up privacy. Maybe the data on what you’re doing is kept locally, or in a personal cloud, rather than a big aggregator like Google or Facebook.

So, over time, for example, you may have shown a preference for low-cost Chinese restaurants. If you travel somewhere new, your phone will pop up recommendations for cheap Chinese food. Oh, and you’ve always shown a preference for spicy food, so you get a list of cheap Szechwan or Hunan restaurants. You won’t be locked into those choices, either. If you feel like pizza, you can change the preferences.
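
To make that concrete, here is a minimal sketch of how preference-weighted filtering like this might work. The restaurant data, preference weights, and scoring function are all invented for illustration; nothing here comes from Intel's actual software.

```python
# Illustrative sketch: rank nearby restaurants by how well they match
# preferences inferred from past behavior. Names and weights are hypothetical.

def score(restaurant, prefs):
    """Weight a restaurant by how closely it matches learned preferences."""
    s = 0.0
    s += prefs.get("cuisine", {}).get(restaurant["cuisine"], 0.0)
    s += prefs.get("price", {}).get(restaurant["price"], 0.0)
    s += prefs.get("spice", {}).get(restaurant["spice"], 0.0)
    return s

learned_prefs = {  # built up over time from check-ins and past orders
    "cuisine": {"chinese": 0.9, "pizza": 0.3},
    "price":   {"cheap": 0.8, "expensive": 0.1},
    "spice":   {"spicy": 0.7, "mild": 0.2},
}

nearby = [
    {"name": "Szechwan Garden", "cuisine": "chinese", "price": "cheap", "spice": "spicy"},
    {"name": "Trattoria Roma",  "cuisine": "pizza", "price": "expensive", "spice": "mild"},
]

# Feel like pizza tonight? Override the learned preference temporarily:
# learned_prefs["cuisine"]["pizza"] = 1.0

for r in sorted(nearby, key=lambda r: score(r, learned_prefs), reverse=True):
    print(r["name"], round(score(r, learned_prefs), 2))
```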

Context-aware computing, then, combines sensors that monitor what you do with databases that collect information on what you like, and it can even post the results to your blog or to Facebook, if you choose.

Now, you’re probably thinking that the opportunities for privacy abuse are legion. Already, electronic kiosks in Japan will tailor advertising to you personally as you walk by their locations. Is that intrusive? Perhaps.

Let’s take a somewhat more benign application: monitoring your elderly parent. As sensors become more compact, they can be woven into clothes or built into shoes or slippers.

How Does This All Work?

Justin Rattner, who runs Intel’s research arm, and Lama Nachman, a senior researcher in Intel’s Interaction and Experience Research group, dove into the details of context-aware computing and how it works under the hood at this fall's IDF.

The keys to making context-aware computing work are low-power, low-cost, flexible sensors: accelerometers, GPS locators, cameras, and so on. Note that these sensors don’t have to be built into a smart device. They can have radios (WiFi, for example) that communicate with a personal area network or local area network.

Imagine a sensor built into a small device that mounts on the foot or shoe.


The sensor measures the strike time, stride time, and other data. It would have to collect data over a fairly extended period. Once it has that baseline, the system can detect if the user’s gait starts to stutter or change drastically, and issue an alert that the user might fall. Alternatively, that alert could be communicated over the network to a care provider, who can intervene as needed.
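
As a rough illustration of the idea, here is a toy Python sketch that compares a recent window of stride times against a long-running baseline and flags a drastic change. The threshold, field names, and statistics are assumptions for the example, not Intel's actual algorithm.

```python
# Hypothetical sketch: flag a drastic change in gait from stride-time data.
from statistics import mean, stdev

def gait_alert(baseline_strides, recent_strides, z_threshold=3.0):
    """Return True if recent stride times deviate sharply from the baseline.

    baseline_strides: stride durations (seconds) collected over an extended period
    recent_strides:   the latest window of stride durations
    """
    mu, sigma = mean(baseline_strides), stdev(baseline_strides)
    if sigma == 0:
        return False
    z = abs(mean(recent_strides) - mu) / sigma
    return z > z_threshold

# Example: a stable baseline vs. a suddenly slower, more irregular gait.
baseline = [1.02, 0.98, 1.01, 1.00, 0.99, 1.03, 1.00, 0.97]
recent = [1.35, 1.41, 1.38]
if gait_alert(baseline, recent):
    print("Gait change detected -- notify the user or a care provider")
```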

Another example mentioned by Nachman is a TV remote control augmented with a sensor, which monitors what buttons are pushed and also picks up characteristics about how the remote is used. It could tell who the user is, because everyone moves or handles the remote just a little differently. Then it could make recommendations for shows to watch, based on what you’ve watched before.
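
One simple way to picture that identification step is a nearest-centroid match on handling features. The features, user profiles, and numbers below are invented purely to show the shape of the idea.

```python
# Illustrative sketch: guess who is holding the remote from handling
# characteristics (average tilt, grip pressure, button presses per second).
import math

profiles = {  # per-user centroids learned from earlier viewing sessions
    "alice": (0.30, 0.70, 1.2),
    "bob":   (0.55, 0.40, 0.6),
}

def identify(sample):
    """Return the stored profile closest to the observed handling features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(profiles, key=lambda user: dist(profiles[user], sample))

user = identify((0.52, 0.45, 0.7))
print(f"Remote probably held by {user}; load {user}'s viewing recommendations")
```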

Having small, low-power sensors with radios is one thing, but you need software that’s smart enough to do something with that data. This is where the inference pipeline comes in.

The chart shows different sensors extracting different types of data, how those data might be classified, and what might be inferred from them. Note that it’s not just about the sensors. How the sensors are used and what the user is doing with other applications (“soft sensing”) also become part of the user’s data stream, and these are fused together by something Nachman called the “Activity Fusion Algorithm.”
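
In spirit, the fusion step combines per-sensor guesses into a single inferred activity. The toy sketch below does this with a confidence-weighted vote; the labels, weights, and voting scheme are assumptions for illustration, not Nachman's actual Activity Fusion Algorithm.

```python
# Toy sketch of activity fusion: combine "hard" sensor classifications
# (accelerometer, GPS) with "soft" signals (calendar, open apps) into
# one inferred activity.

def fuse(hard_readings, soft_signals):
    """Vote across per-source guesses, weighted by each source's confidence."""
    votes = {}
    for activity, confidence in hard_readings + soft_signals:
        votes[activity] = votes.get(activity, 0.0) + confidence
    return max(votes, key=votes.get)

hard = [("walking", 0.6), ("commuting", 0.3)]        # accelerometer, GPS
soft = [("commuting", 0.5), ("reading_news", 0.4)]   # calendar, open apps

print(fuse(hard, soft))  # -> "commuting"
```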


If you want the systems in your life to really be smarter about what you do, and make recommendations or give you useful information, then you’ll want all of the sensors in all of your systems (portable or not) to aggregate that data. The inference engine can then be much smarter about what predictions it makes about what you’ll do or like. If that sounds creepy, it could be. But if it’s done right, it could be a huge productivity enhancer.


Now comes the tricky part. All of this data collection and aggregation requires tremendous compute and storage capability. It also implies that some of this data is either stored in the cloud, on distant servers, or passed through systems on the Internet. So, users need granular control over what data is collected and where it’s stored.
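
What "granular control" could look like is an open question; one hypothetical form is a per-sensor policy the user sets before any data leaves the device. Everything in this sketch is an assumption, not a description of Intel's design.

```python
# Hypothetical per-sensor privacy policy: the user decides what is collected
# and where it may be stored.

policy = {
    "gps":           {"collect": True,  "storage": "local"},           # never leaves the phone
    "accelerometer": {"collect": True,  "storage": "personal_cloud"},  # user-controlled server
    "camera":        {"collect": False, "storage": None},              # opted out entirely
}

def allowed(sensor, destination):
    """Check whether a reading from `sensor` may be sent to `destination`."""
    rule = policy.get(sensor, {"collect": False})
    return rule["collect"] and rule.get("storage") == destination

print(allowed("gps", "personal_cloud"))            # False: GPS data stays on the device
print(allowed("accelerometer", "personal_cloud"))  # True
```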

All of this is still in very early stages of development. For example, if you want to have sensors up and running 24x7, the power draw needs to be extremely low, and you need the ability to quickly and easily recharge. Anyone who’s used a GPS radio on a smartphone can understand how challenging this might be for certain classes of sensors.

Of course, the biggest question is social: will people really want it? Different users might have different feelings about being monitored constantly. Just look at the controversy surrounding the recent launch of Facebook Places.

In the end, context-aware computing will likely become prevalent a decade or so from now. However, it’s really difficult to predict what form it will take and how it will actually be implemented and used. That’s the nature of research: what shows up in the lab today may affect the products we see a decade later, but predicting what those will be is a much harder nut to crack.

