Contextual design seems to be a bit of a buzz term these days; I hear all sorts of people mentioning it. But in general it feels like it's only being described at a fraction of its full scope, often focussing on what might be happening on a particular interactive screen, or the screen before, and it feels we're failing to harness its full potential.
I asked a number of people who work in the industry what they believed "contextual design" to mean, and the common understanding was that it referred to interactive systems that consider a user's more immediate real-world motivations when offering them a call-to-action or possible user journeys. In most cases it amounted to changing the label on a call-to-action to reinforce what was already happening.
Some opinions were about systems reacting to tangible variables, mostly location (environment or surroundings) or time of day, along with desires or needs: what someone at a particular location might want at a particular time of day.
I agree with these ruminations, but I also think contextual design should be informed by wider behavioural patterns, similar to what Machine Learning is promising us today.
I do a fair bit of eBaying. But before I put an item up for sale, I'll take a look at examples already up there. I'll "watch" a couple of bids and see how much they eventually sell for. eBay sees this "watching" as a sign of interest, and even after I've sold my item, will continue pushing at me the very product I've just sold. I wonder how many other people do this.
Levels of connection and sub-context, while important, are often logistically difficult in the real world. I'm a rather deeply embedded Google user: Android phone, Gmail, Google Docs, Maps… you name it. So when I go abroad and my Chrome browser (running on my own tablet) gives me the Spanish version of Google, I'm inclined to clench a little. Especially as Google Now is already reminding me of my return flight home. Ah right, so you know I haven't suddenly turned Spanish then?! As it turns out, someone told me that Google Now is run by a different division. Acceptable excuse?
These are examples of the digital forensics I often talk about when discussing contextual interactive solutions: tiny pieces of cross-referenced information that triangulate intent through deduction. The user is at X and has Y in their calendar, therefore Z.
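That "X and Y, therefore Z" deduction can be sketched as a simple rule over cross-referenced signals. This is a minimal illustration only; the signal names, locations and thresholds are all hypothetical, not any real service's API.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical context signals we might cross-reference.
@dataclass
class Signal:
    location: str          # where the device currently is (X)
    calendar_event: str    # next event in the user's calendar (Y)
    event_start: datetime  # when that event begins

def infer_intent(sig: Signal, now: datetime) -> str:
    """Deduce a likely intent (Z) from location (X) and calendar (Y)."""
    minutes_until = (sig.event_start - now).total_seconds() / 60
    if sig.location == "airport" and "flight" in sig.calendar_event.lower():
        return "travelling: surface boarding pass and gate info"
    if sig.location == "office" and 0 < minutes_until <= 15:
        return f"about to attend '{sig.calendar_event}': surface meeting room"
    return "no confident inference: fall back to defaults"
```

Each rule on its own is weak; it's the triangulation of several cheap signals that makes the deduction worth acting on.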
Intent can also be gauged in single slices of activity, simply using pace, timing and deliberation. My tablet, which I often leave at home, takes ages to reboot, so rather than turn it off (to save power) when I get to work, I'll put it in flight mode (it literally goes to sleep and loses no battery). To do so, I…
- pull down from the top of the screen
- hit the extended menu button
- then hit airplane mode
- and inevitably click through that bloody warning message about how my device will be pretty much useless without the interwebs. "OKAYYYYY!!!!!"
That is a multi-stage sequence with a mix of swipes and taps. If I manage to do it in under a second and a half, then either: A) I have deliberately done something (that I've done a billion times before, around the same time of day, and don't need to see the message), or B) a small child has pinched my tablet, in which case the internet should be turned off immediately.
Contextual intelligence isn't just about tracking who you are and what you're doing; it should also speculate on the real-life cost of error. What's the worst that could happen?… per function.
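Combining the two ideas above — deliberation measured by gesture pace, and a per-function cost of error — might look something like this sketch. The 1.5-second threshold comes from the flight-mode anecdote; the risk table and function names are made up for illustration.

```python
# Illustrative risk levels per function: how bad is an accidental trigger?
RISK = {"airplane_mode": "low", "factory_reset": "high"}

class GestureSequence:
    """Track timestamps of a multi-step gesture and judge deliberateness."""
    def __init__(self, expected_steps: int):
        self.expected_steps = expected_steps
        self.timestamps: list[float] = []

    def record_step(self, t: float) -> None:
        self.timestamps.append(t)

    def looks_deliberate(self) -> bool:
        # A practised hand completes every step in under 1.5 seconds.
        if len(self.timestamps) < self.expected_steps:
            return False
        return (self.timestamps[-1] - self.timestamps[0]) < 1.5

def should_confirm(function: str, seq: GestureSequence) -> bool:
    """Skip the warning only for low-risk functions done deliberately."""
    if RISK.get(function, "high") == "high":
        return True  # always confirm when the cost of error is high
    return not seq.looks_deliberate()
```

The point of the split is that speed alone never suppresses a warning: the function's worst-case cost gets a veto first.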
Formulating these behaviours and algorithms takes quite a bit of effort. Like all good UX design, the costs and benefits should be measured; we don't necessarily want to design for every interaction and circumstance. We can formulate a measurement of intent for a user, or use Machine Learning (ML) to track likelihoods based on previous actions (personal or public).
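Tracking likelihoods from previous actions doesn't have to start with heavy ML; a toy frequency model conveys the idea. Everything here is a stand-in — a real system would use a proper model rather than raw counts.

```python
from collections import Counter

class IntentModel:
    """Toy model: likelihood of an action given the hour of day,
    estimated from counts of the user's previous actions."""
    def __init__(self) -> None:
        self.counts: Counter = Counter()       # (hour, action) -> occurrences
        self.hour_totals: Counter = Counter()  # hour -> total actions seen

    def observe(self, hour: int, action: str) -> None:
        self.counts[(hour, action)] += 1
        self.hour_totals[hour] += 1

    def likelihood(self, hour: int, action: str) -> float:
        total = self.hour_totals[hour]
        return self.counts[(hour, action)] / total if total else 0.0
```

A system like this is what would let the tablet learn that airplane mode around 9am is routine, and quietly drop the warning.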
It would likely mean creating a syntax for intent, much like our information taxonomies. But that's something for another day.