‘Context awareness’ has been a key area of research at Fjord recently. Some of our recent work has helped us reassess our definitions of what ‘context’ really means, and how it’s becoming increasingly important to digital design.
One area of particular interest has been the ‘containers’ or levels within which context-sensitive experiences take place. What follows is a summary of how we’re looking at different levels of context-sensitive experiences, illustrating the current situation and where things might be going.
Context-sensitive apps are here
While we may not have reached the utopian vision of a magical world where everything is exactly the way a user wants it to be, it is worth highlighting that on a certain level we already have pretty good context-sensitive experiences. That level is currently centred on apps.
At the level of individual apps, there is already a lot of ‘context sensing’ going on, whether it is as basic as using the accelerometer to determine device orientation, or using GPS to know where you are.
But what I think we are heading towards is a new kind of app that can adapt to context to a far greater degree. These apps will adapt depending on where I am, who I am with, what time of day it is, and so on.
For example, if I open a transport app while I happen to be sitting on a bus already, surely the app should tailor my experience accordingly? Apps in general should start to change depending on context: how might a news app be different on a weeknight when I am sitting on the couch, compared to its behaviour during a lunch break around the coffee table in the office?
Whether that kind of behaviour creates a better experience, we have yet to find out. There are plenty of ways to get it wrong, such as taking away options or alienating users through incorrect assumptions.
However, I believe the bigger question is whether the concept of ‘apps’ itself needs to change.
The concept of apps should gradually change into something more fluid, local and temporary. Rather than installing and then hoarding them, I can imagine apps being streamed locally depending on the location I am in. To use the example above, rather than my opening a transport app, the bus would make an app available when I board and make it disappear again when I leave.
Concepts in this direction are currently gaining momentum and there are other people who think and talk about similar or related ideas.
The Device Level
The next level up from the app level when it comes to context-sensitivity is the Device Level. In this area we find far less context-dependent service design, although there are projects like Nokia Situations or Nokia Bots.
And of course there is the vision that a device would fully understand how you would like it to behave in different contexts, like automatically switching to silent mode when you don’t want to be bothered. But we are quite a long way from that.
Besides the Nokia Labs projects mentioned above, there is also an interesting app for Android phones called Tasker. Similar to ifttt.com, it lets you set up behaviours triggered by events or state changes. It may be tedious to set up at the moment, but when I tried it, I loved being able to teach my phone simple things: turning on silent mode when I place it face down, starting the music player when I plug in headphones, or changing the ringtone and volume when I’m in a certain location.
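The trigger–action pattern behind tools like Tasker and ifttt.com can be sketched as a tiny rule engine. The event names, state fields, and actions below are illustrative assumptions of mine, not Tasker’s actual API:

```python
# A minimal sketch of Tasker-style trigger/action rules.
# Event names, state fields, and actions are hypothetical examples.

rules = [
    ("face_down", lambda phone: phone.update(profile="silent")),
    ("headphones_plugged", lambda phone: phone.update(app="music_player")),
    ("entered_location:office", lambda phone: phone.update(ringtone="discreet", volume=2)),
]

def handle_event(phone_state, event):
    """Apply every rule whose trigger matches the incoming event."""
    for trigger, action in rules:
        if trigger == event:
            action(phone_state)
    return phone_state

phone = {"profile": "normal", "app": None}
handle_event(phone, "face_down")
# phone["profile"] is now "silent"
```

The appeal of this model is that each rule stays small and independent; the tedium comes from having to author every rule by hand, which is exactly what a more context-aware device would take over.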
The next leap might be devices that really understand wider contexts and how to use them for a better experience. For example, when is the best moment to grab a user’s attention to convey a piece of information? It could be a crucial update about the traffic or transport situation or the weather forecast: wouldn’t it be great if the phone automatically knew the best moment during your morning routine to alert you?
The way humans interact involves constantly picking the correct moment to convey the right piece of information. For example, a study on phone usage while driving showed that ‘passenger conversations differ from cell phone conversations because the surrounding traffic becomes a topic of the conversation, helping driver and passenger to share situation awareness, and mitigating the potential effects of conversation on driving.’
To me this makes perfect sense because the shared context allows the passenger to understand when it is best to pause, because of a tricky traffic situation or when to cut an answer short to minimise distraction to the driver. This is the kind of ability our devices need to acquire.
The System Level
What comes after that? The system level. Rather than being limited to a single device, this means context-driven experiences that span different touchpoints and configurations.
At the moment, when I’m at my desk, I am alerted about new emails or upcoming meetings on three devices at roughly the same time: phone, tablet and laptop ‘ping’ shortly after each other. The obvious improvement would be for these devices to understand the configuration they form while I am at my desk, and use this to provide a better experience.
To me personally this would mean such alerts and reminders only popping up on my laptop, since this is what my attention is focused on most of the time and where I would react to the alerts. If I then go into a meeting, only taking the tablet with me, the configuration is changed and the tablet should now be the main output device.
The challenge here is that everybody has slightly different preferences for behaviours like that.
Going back to the idea of more fluid and dynamic app experiences mentioned above, I could also imagine how such a system level might alter the experience on different devices dynamically. If I am using a certain app and then switch on the TV, could this affect the experience of the app, changing it into something that is more appropriate to the lowered level of attention I’m now giving it?
Again, it’s a fascinating topic with some inspiring thinking emerging, and I’m particularly excited to see how digital design will cope with these challenges. Context-sensitive experiences across the levels of app, device, and system are set to dominate how we interact with objects and services in the future.
It will take a while before we get there, though. After all, these visions have been around for a while, and there have been failed attempts to make them happen (I’m thinking of Bluetooth). I just hope we will get there sooner rather than later.