Everything is a fight for the user's attention, not least from the users themselves. As everything becomes increasingly mobile, our ways of interacting with technology are being shaped by new behavioural patterns. People use their phones and tablets on the toilet, in waiting rooms, on public transit, and increasingly while doing other things that were previously regarded as entertainment in their own right, e.g. playing games on the phone while watching TV, or tweeting from the talks at a conference.
According to Chris Atherton, a UI designer with a PhD in neuroscience, the user's attention is finite. Every context switch, e.g. from the TV to the phone, has a cost, and every time users focus their attention on one thing, they immediately become less aware of everything else.
As software designers we should take these behavioural patterns seriously. The expectations and habits being formed by the users of these mobile devices follow them into their interactions with supermarket checkout counters, work tools and even other people!
An easy, practical takeaway that works for most kinds of software design is to assume your user is just tuning in, and then ask: what is the most important thing in their view right now? How much information can they reasonably take in? It is important not to oversimplify, though; find out what the user needs to be able to do, then support exactly that, splitting it up if you have to.
Other easy attention-grabbers are colour changes and moving objects; they capture attention quickly and sometimes subtly. However, a lot of things are competing for our users' attention, including the users themselves.
Technology as an extension of ourselves
Atherton also briefly touched upon how much people outsource: users are becoming used to relying on technology to ease their cognitive load. People extend parts of their memory and bodily functions into technology, and they expect their technology to help them.
I would have liked her to touch upon the dualism of technology that she presents here. Sometimes technology is a voluntary distraction, and other times it is an extension of the person, of a person's technical abilities. Distraction technology should be easily accessible and easily noticeable; extension technology should be there when we need it. Mobile technology is an excellent source of at-hand distraction technology, and the increasing amount of interaction with these distraction technologies affects how we interact with extension technologies. We should design for these new interaction patterns even when designing serious and critical software – perhaps especially when we're designing critical software.
Less briefly: I think the reach of our 'at hand' digital tools is no longer even vaguely comprehensible. Most digital tools these days are a kind of 'hyperobject' – that is, they're *massive* both in terms of scale and time. Plus, it's not always obvious what is 'input' and what is 'output' (and, for this reason, what is 'cause' and what is 'effect'). Therefore, I think we should try to emphasise consent and, at the very least, make things 'interrogable'. I can't think of many instances where making things frictionless 'as a default' is in the *long-term* interests of users.
However, in the short term and the medium term …
Not everyone can be (or should be) a programmer:
Sorry, no conclusion. Just an infinite deferral – which probably implies we need to design things with care, think about them on a case-by-case basis, and follow basic UX principles for frictionlessness. I've stolen these from Jakob Nielsen's '10 Usability Heuristics'.
“Visibility of system status”
What data is this frictionless interaction grabbing and sharing?
“Match between system and the real world”
‘Openness’ (a la Facebook) isn’t necessarily a default.
“User control and freedom”
Support undo on every interaction, even the frictionless.
“Consistency and standards”
We probably need some standard metrics for data sharing and system / social boundaries.
Prevent oversharing and actions which ‘cascade’ across system / social boundaries.
“Help users recognize, diagnose, and recover from errors”
Yes: frictionless, but help users add the friction back in if they want to.
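The "user control and freedom" point above – support undo on every interaction, even the frictionless ones – is essentially the classic command pattern. Here is a minimal sketch in Python; all the names (`Action`, `UndoStack`, the "share" example) are illustrative, not from any particular framework:

```python
# A minimal sketch of "support undo on every interaction" via the
# command pattern: every action carries enough state to reverse itself.

class Action:
    """One user interaction, bundled with its inverse."""
    def __init__(self, do, undo):
        self._do = do
        self._undo = undo

    def do(self):
        self._do()

    def undo(self):
        self._undo()


class UndoStack:
    """Records every performed action so the latest can be taken back."""
    def __init__(self):
        self._done = []

    def perform(self, action):
        action.do()
        self._done.append(action)

    def undo_last(self):
        if self._done:
            self._done.pop().undo()


# Usage: a hypothetical frictionless "share" that stays reversible.
shared = []
stack = UndoStack()
stack.perform(Action(do=lambda: shared.append("photo.jpg"),
                     undo=lambda: shared.pop()))
stack.undo_last()
# shared is empty again: the user took the frictionless action back
```

The design choice that matters here is that undo is defined *per action*, so even an interaction that required no confirmation up front remains interrogable and reversible after the fact.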
I think it's an interesting distinction between what users do and what users want. On the one hand, we see an increasing number of users constantly "multi"tasking in ways that would be deemed stressors to their systems. On the other hand, there is a rising movement of people seeking contemplation and mindfulness: a sense of being present in the moment.
These two forms of interaction are fluid; the same user might prefer different interactions at different times. So how do we, as designers, help manage those seemingly contradictory demands in our software? The discussion quickly becomes a normative one about how our users ought to live. On the one hand, we have the power to design technology with certain features and behaviours; on the other, users will quickly appropriate and (ab)use those designs any way they like. That's not to say that we as designers don't have a responsibility to create products that make people's lives better. But what actually makes people's lives better is sometimes a big question…sometimes not. Most often the answer is really just: it depends…
I’m afraid my answer became just a jumble of thoughts as well 😉