Intel Chief Technology Officer Justin Rattner closed IDF 2010 with a keynote address that delved into context-aware computing. In layman's parlance Rattner described computers that sense and react to their immediate environment - clever machines, if you will.
Rattner pointed out that many of the devices we have today have incredible power and a mass of applications - Apple's iPhone springs immediately to mind - but accessing those apps still requires user intervention, insofar as one needs to load them up manually. In the future, context-aware computing ought to provide devices with a basic sense of intelligence, he said.
This means that future devices will collect information about you at all times, building up data on who you're with and your likes and desires, and then anticipating your needs by providing recommendations based on the collected data. Rattner showed a demonstration of such technology by way of a 'learning' personal vacation assistant program installed on a MID that was taken around San Francisco.
We already have many of the hardware sensors needed for context-aware computing - GPS, geo-tagging, accelerometers, fast Internet connections and so on - so it's a question of engineering the software to run on the hardware.
Context-aware - it's all personal
But context-aware computing is more than just a real-time aid for your holidays, Rattner said: it can be applied practically anywhere. Intel's CTO demonstrated a TV remote control equipped with a special sensor that used something termed unsupervised learning to personalise your TV-viewing experience. The control's sensors identified the user and then liaised with the TV to load up personal program lists. We've heard that the PC is personal again, but this kind of context approach takes it to another level.
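To make the remote-control demo concrete, here is a minimal sketch of what 'unsupervised learning' could mean in that setting - not Intel's actual implementation, and every name and sensor value below is invented. The idea is that the remote clusters raw grip-sensor readings into groups without ever being told who is who, then treats each group as one household member:

```python
# Hypothetical sketch: two-cluster k-means over one-dimensional
# grip-sensor readings, so the remote can tell users apart unsupervised.

def cluster_readings(readings, iters=10):
    """Learn two cluster centres from unlabelled sensor values."""
    centroids = [min(readings), max(readings)]  # simple initialisation
    for _ in range(iters):
        groups = [[], []]
        for r in readings:
            # assign each reading to its nearest centroid
            groups[0 if abs(r - centroids[0]) <= abs(r - centroids[1]) else 1].append(r)
        # move each centroid to the mean of its group
        centroids = [sum(g) / len(g) if g else c for g, c in zip(groups, centroids)]
    return centroids

def identify_user(reading, centroids):
    """Map a fresh reading to the nearest learned cluster (user)."""
    return 0 if abs(reading - centroids[0]) <= abs(reading - centroids[1]) else 1

# Readings accumulated as two different hands pick up the remote
history = [0.9, 1.1, 1.0, 4.8, 5.2, 5.0]
centroids = cluster_readings(history)

profiles = {0: "Alice's programme list", 1: "Bob's programme list"}
print(profiles[identify_user(1.05, centroids)])  # nearest cluster is Alice's
```

The remote never needs labelled training data: once the clusters stabilise, each one simply gets bound to whichever programme list that user builds up.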
So, drilling down, how does it work as a broad concept? The answer is surprisingly simple. Hardware is equipped with sensors that collect what are termed hard and soft data: hard data comes from physical sensing, while soft data is accumulated passively from sources such as your Twitter or Facebook postings. The device's software then aggregates the two to learn more about the user - increasing the context data - and provides recommendations based on it.
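The hard-data-plus-soft-data pipeline can be sketched in a few lines. This is purely illustrative - the GPS fix, the mock postings and the venue list are all made up - but it shows the shape of the aggregation-then-recommendation loop described above:

```python
# Hypothetical sketch of the pipeline: "hard" data is a physical sensor
# reading (a GPS fix); "soft" data is passively gathered text (mock posts).
from collections import Counter

def aggregate_context(gps_fix, posts):
    """Merge hard and soft data into a simple context profile."""
    interests = Counter()
    for post in posts:
        for word in post.lower().split():
            interests[word] += 1
    return {"location": gps_fix, "interests": interests}

def recommend(context, venues):
    """Rank venues by overlap with the user's inferred interests."""
    scored = []
    for venue in venues:
        score = sum(context["interests"][tag] for tag in venue["tags"])
        scored.append((score, venue["name"]))
    return [name for score, name in sorted(scored, reverse=True)]

context = aggregate_context(
    gps_fix=(37.77, -122.42),  # San Francisco
    posts=["Great coffee this morning", "coffee again, then a museum"],
)
venues = [
    {"name": "Blue Bottle", "tags": ["coffee"]},
    {"name": "SFMOMA", "tags": ["museum", "art"]},
    {"name": "Pier 39", "tags": ["tourist"]},
]
print(recommend(context, venues))  # "coffee" is mentioned most, so it ranks first
```

A real system would obviously use far richer models than keyword counting, but the division of labour is the same: sensors feed the profile, the profile drives the recommendations.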
I'm watching you
Learning software and multifarious sensors make it all sound a bit Big Brother-ish to us. We don't want people to know where we are, how we're feeling or what we're doing at all times, and we certainly don't want all our data uploaded to servers. Heck, what happened to just using some common sense? We're sure that companies designing context-aware systems will build in some level of security, yet, sometimes, too much information is a bad thing.
Whipping out an 'iPhone 15' in a few years' time may be a scary experience. It might just know more about you than you do: what you like to eat, where you go, how often you drive after going to bars, just where you are at any moment, and predicting what you might be doing later on in the afternoon.
Device-equipped sensors are all well and good, we think, but it gets rather more surreal than that. The ultimate interpretation of context-aware computing would be to tap directly into the human brain, predicting your thoughts. It may be possible sooner than you think.