
Where The Action Is

The Foundations of Embodied Interaction

by Paul Dourish

Last tended to February 25, 2021

Why I'm Reading This
Book info on LibraryThing

I first came across Dourish's work through his writings, and through his research on Ubiquitous Computing in Divining a Digital Future with Genevieve Bell. As I began reading more of the foundational literature around HCI this year, I was surprised to see him pop up again. This book appears to be one of the modern classics of the field. It was clearly a key piece of literature to read – highly relevant to my academic interests and an insightful exploration of how we should design computers that align with our embodied knowledge of the world.

Where The Action Is takes us on a historical and philosophical exploration of how we interact with machines, and how our evolving understanding of embodied cognition is changing how we think about designing digital interfaces.


A History of Interaction Design

Our notion of what a computer is, what it does, and how it works hasn't changed for decades.

We're still living with the legacy of a trade-off made fifty years ago: computer processing time used to be enormously expensive, so it was worth making humans transform their data and instructions into formal, rigid input languages that optimised for the machine's experience rather than the human's. At the time, most computers were used for military or business calculations, and no one minded too much.

We now have the odd contradiction that our machines have more power than we're able to leverage – 95% of the time they're doing basic tasks at low computational capacity while we perch in front of them, slowly deciding what we want to do next.

We're stuck in the historical paradigm of 'desktop computing' – the idea that computers are static workstations we plop in the corner of the room and go to in order to execute specific tasks that occupy the whole of our attention.

The dream of Ubiquitous Computing tries to subvert this notion. Ubicomp is a paradigm of computing where our machines are embedded in everything around us. The point is to get us up and moving around in the world, bringing the computational power with us.

This was what we were promised with the 'internet of things,' which has so far turned out to be the internet of impractical, invasive surveillance objects.

Research institutes like Dynamicland are reportedly exploring the idea of 'the room as a computer', but it all still seems like a hypothetical prototype.


Dourish proposes the concept of Embodied Interaction:

"Embodied Interaction is interaction with computer systems that occupy our world, a world of physical and social reality, and that exploit this fact in how they interact with us." (3)

Traditionally, computational systems are thought of as procedures: step-by-step models of sequential behaviour. The last two decades have seen us turn towards interaction, paying attention instead to the interplay of different components – an ecosystem of interlinked elements instead of a rote sequence of tasks.

Such systems are built from many diverse elements with specific roles rather than generalised, monolithic processes.
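To make the contrast concrete, here's a rough sketch in TypeScript (my own illustration, not from the book – the event names and components are invented): the procedural model runs one fixed sequence, while the interactional model lets components with specific roles react to one another's events.

```typescript
// Procedural model: a rote sequence of steps executed in order.
function procedural(): void {
  const input = "raw data";            // step 1: acquire
  const result = input.toUpperCase();  // step 2: transform
  console.log(result);                 // step 3: output
}

// Interactional model: components with specific roles react to each
// other's events; overall behaviour emerges from their interplay.
type Handler = (payload: string) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  on(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  emit(event: string, payload: string): void {
    (this.handlers.get(event) ?? []).forEach((h) => h(payload));
  }
}

const bus = new EventBus();
bus.on("sensor:moved", (pos) => bus.emit("display:update", pos));
bus.on("display:update", (pos) => console.log(`display shows ${pos}`));

procedural();                        // one fixed sequence
bus.emit("sensor:moved", "x=3,y=7"); // an event ripples through the ecosystem
```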


The Four Phases of HCI History

Dourish presents four historical phases of HCI: electrical, symbolic, textual, and graphical.

Electrical

  • Early computers had their logic physically wired into the circuits. You couldn't change the program without resoldering connections.
  • "the critical development in digital computing was that of the stored program computer... a machine whose operation is not directly encoded in its circuits, but rather is determined by a sequence of instructions held in its memory".
  • Hardware and software were inherently and obviously tied together – wires, plugboards, and patch cables were visible. Programming required an understanding of electrical design. There was very little distinction between the two "worlds" of hardware and software.

Symbolic

  • As computing matured, the way we communicated with machines moved from numeric machine language to higher-level symbolic languages. This was the beginning of the programming language.
  • First we developed assembly languages, which were one step removed from machine language but still not very portable between systems. We then developed FORTRAN and Lisp, a further level of abstraction up. Interaction with computers became primarily symbolic at this stage – higher-level programming languages are easier to write and debug.
  • “Symbolic interaction is a much more natural and intuitive form of interaction for us than the electronic form that had previously been necessary” (9)

Textual

  • Computer interaction moved into a primarily text-based medium. It created the dynamic where a user sits at a terminal entering commands and receiving responses – a Feedback Loop (sketched after this list).
  • “Although the notion of interaction with computers had important predecessors before this period (such as Ivan Sutherland's hugely influential work on Sketchpad), it was arguably from the paradigm of text-based dialogue that people drew the idea of interacting with the machine.” (11)
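As a toy illustration of that command-and-response feedback loop (my own sketch, not from the book – the echo behaviour is invented), here's a minimal textual dialogue in TypeScript for Node.js:

```typescript
import * as readline from "node:readline";

// The textual paradigm: prompt, command, response, repeat.
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

function prompt(): void {
  rl.question("> ", (command) => {
    if (command === "exit") {
      rl.close();
      return;
    }
    // The system's half of the dialogue: a response to each command.
    console.log(`you said: ${command}`);
    prompt(); // loop back and wait for the next command
  });
}

prompt();
```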

Graphical

  • The move to graphical interfaces began with Ivan Sutherland's Sketchpad in 1963, and was expanded by Alan Kay's work on the Dynabook at Xerox PARC.
  • “The move from textual to graphical interaction did not simply replace words with icons, but instead opened up whole new dimensions for interaction” (11)
  • Graphical interfaces took us from a one-dimensional stream of characters into two-dimensional space. Rather than dealing with a linear stream of words flowing up the screen, interfaces now involved managing space as well as information (arranging windows, focusing on multiple areas at once).
    • This allows us to arrange information in hierarchies of importance, and put content in our peripheral vision. “By placing them in the periphery, the application exploits my ability to focus on one area while passively attending to other activity in the edge of my visual field.” (12)
  • Graphical interfaces also allow us to use spatial reasoning and Spatial Memory – our ability to recognise patterns, and Gestalt Principles, help us organise information.
  • Graphical interfaces allow us to create new Visual Metaphors, such as the Desktop Metaphor: “Information management tasks are based around a metaphorical model incorporating filing cabinets and trashcans” (see Digital Metaphors).
    • “General Magic's Magic Cap interface, used a metaphorical depiction of an office featuring a desk (along with various desktop tools), a telephone, and a door open to a world outside; notetaking applications often feature graphical depictions of notebooks or index cards;” (13)
  • These kinds of visual metaphors enable direct manipulation of data, turning data into discrete entities we can select, drag, drop, and delete (a minimal sketch follows this list). “From these separate elements, the designer builds an inhabited world in which users act.” (13)
  • “In 1981 Xerox's Star was the first personal computer to ship with the features of a graphical user interface as we recognize them today – windows, menus, and a mouse – and the Macintosh, three years later, was the first to ship in volume at an affordable price” (14)
    • From that point on, mice and graphical interfaces were considered the obvious way we would interact with computers; thirty years later, this is still the industry standard.
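As a rough sketch of direct manipulation (my own illustration, not from the book – the element IDs and the drop-to-delete rule are invented), here's a minimal TypeScript drag handler for the browser, where a document icon is a discrete entity the user grabs, moves, and drops onto a trashcan:

```typescript
// Direct manipulation: the user acts on a visible object itself,
// not on a textual reference to it.
const icon = document.getElementById("document-icon") as HTMLElement;
icon.style.position = "absolute"; // so the icon can follow the pointer
let dragging = false;

icon.addEventListener("pointerdown", () => {
  dragging = true; // the user "picks up" the object
});

window.addEventListener("pointermove", (e: PointerEvent) => {
  if (!dragging) return;
  // The object follows the hand: its position tracks the pointer.
  icon.style.left = `${e.clientX}px`;
  icon.style.top = `${e.clientY}px`;
});

window.addEventListener("pointerup", (e: PointerEvent) => {
  dragging = false;
  const trash = document.getElementById("trashcan")!.getBoundingClientRect();
  // Dropping the object on the trashcan deletes the underlying data.
  if (
    e.clientX >= trash.left && e.clientX <= trash.right &&
    e.clientY >= trash.top && e.clientY <= trash.bottom
  ) {
    icon.remove();
  }
});
```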

Tangible and Social Computing

Tangible Computing is when we “distribute computation across a variety of devices, which are spread throughout the physical environment and are sensitive to their location and their proximity to other devices.” (15)

This is the same dream as Ubiquitous Computing and the Internet of Things – baking computational logic into everyday objects.

Another way to think about this is creating environments where the physical objects in the room act as the interfaces, rather than graphical interfaces and mice. This is the Dynamicland, Bret Victor version of the dream.

“Mice provide only simple information about movement in two dimensions, while in the everyday world we can manipulate many objects at once, using both hands and three dimensions to arrange the environment for our purposes and the activities at hand.” (16)

Social Computing is focused on “incorporating social understandings into the design of interaction itself.” (16) It focuses on interfaces as conversations, and draws on social science and anthropological theory – trying to recreate social relations and social meaning in the computer interface.

Both Tangible Computing and Social Computing draw on our familiarity with the everyday world – they're "more than simply the metaphorical approach used in traditional User Interface Design."

Rather than focusing on imitating the physical world of objects in computers, they focus on bringing computing into our social, embodied experience of the world. “They share an understanding that you cannot separate the individual from the world in which that individual lives and acts.” (17-18)

In many ways the Human-Computer Interaction community is stuck in the world of Logical Positivism and Cartesian Dualism.

It's a view that “makes a strong separation between, on the one hand, the mind as the seat of consciousness and rational decision making, with an abstract model of the world that can be operated upon to form plans of action; and, on the other, the objective, external world as a largely stable collection of objects and events to be observed and manipulated according to the internal mental states of the individual” (18)

Dourish argues Embodiment is central to these approaches. "Interaction is intimately connected with the settings in which it occurs" – in recent years, interaction designers have realised the value of anthropological Ethnography for understanding the environment and context of interactions.

Early work in the field tried to create abstract models of the people they were designing for – hypothetical users – rather than exploring interaction design with real people in real contexts.


Human-Computer Interaction is not the only field recently captivated by Embodiment. Across disciplines, more consideration and attention is being paid to Phenomenology.

  • Phenomenology is the study of how we perceive, experience, and act in the world around us. It's more concerned with our direct experiences than with constructing abstract models of them.
  • Phenomenology argues that the divide between mind and body that began with Cartesian Dualism has no grounding in reality – "thinking does not occur separately from being and acting." (21)

A History of Touch and Tangibility

Dourish argues that over the last three decades, the way we interact with computers has barely changed at all – we use the same physical inputs of mice, keyboards, and screens, and the same digital patterns of dialogue boxes, windows, and files. We still need to go to a desk and input with both hands.

This is what Alan Kay means when he says the computer revolution hasn't happened yet – our dominant paradigm of how computers should be used and what role they play in our lives is stuck in the 1990s.

Mark Weiser's dream of Ubiquitous Computing in the 1990s tried to draw focus away from the computing device itself, and spread computational logic out into our existing environments.

Weiser wanted "computationally enhanced walls, floors, pens, and desks, in which the power of computation could be seamlessly integrated into the objects and activities of everyday life" (29). The goal was to make computers invisible - so pervasive they disappeared into the wallpaper.

Xerox PARC developed a strategy known as computation by the inch, foot, and yard.

  • Inch-level computation would be tiny electronic tags (like RFID chips) – "computational post-it notes". We would wear them as "activity badges" and have the computational environment around us respond to our location in space – routing phone calls to us, displaying relevant information. Objects like books would be locatable in space – Internet of Things narratives have been calling these "smart" objects.
  • Foot-level computation focused on "computationally enhanced pads of paper".
  • Yard-level computation looked at wall-sized devices. They developed the "LiveBoard" – a whiteboard that supported multiple pens. One user would have tens of inch-sized screen devices dotted around a room, plus a few foot-sized ones, and one or two wall-scale ones.

How is this different to the multiple LCD screens we already have in our microwaves, washing machines, Nintendo Switches, smartphones, and smartwatches? Ubiquitous computing has already happened to some degree. Smartphones have brought computing into the world.

The key difference between this dream and where we are now is that they imagined information would be able to move freely between devices. Movement is the difference.

Making devices at a wide variety of sizes was not the point of Ubiquitous Computing. It was figuring out how they would operate as part of a holistic system, and fit into the everyday world of activities and interactions. Interoperability was key.


While Xerox PARC was developing Ubicomp, EuroPARC (a satellite research institute) in Cambridge was exploring how to combine the affordances of physical paper and digital documents into one medium. Moving between the two meant you lost a bit of each in the translation process.

Pierre Wellner developed a "Digital Desk" where a camera positioned above a physical desktop recorded what was on it – it was able to read documents and make calculations based on them. It could also project digital documents down onto the surface and track the user's hand movements.

The two "killer design features" of the Digital Desk were support for manipulation, and the way its electronic and physical worlds were integrated. Interactions with objects were direct interactions with real world objects, not the imitation of them we do in current

Rather than being limited to the inputs of a keyboard and one mouse, you had access to two hands and ten fingers, which allowed for more complex inputs.

A document could exist both physically and digitally at the same time. Printers and cameras allowed documents to move between the two worlds.


Virtual Reality vs. Enhanced Reality

Mark Weiser wanted a computationally augmented reality, as opposed to the Virtual Reality crowd who dream about replacing reality.

Virtual Reality gained popularity in the 1990s as the technological capacity for data gloves and head trackers arrived around the same time as the cultural dream of Cyberspace. Howard Rheingold wrote a comprehensive history of it.

Ubiquitous Computing and Virtual Reality have fundamentally different approaches to the relationship between people, computers, and the world - it’s the difference between making the world invisible and making the computer invisible. VR is all computer. Ubicomp is all world.

Ubiquitous Computing is “a technology of context; where traditional interactive systems focus on what the user does, ubiquitous computing technologies allow the system to explore who the user is, when and where they are acting, and so on.” (39)

Ubicomp prototypes focus on being reactive – automatically switching modes based on the location of people and things.
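A toy sketch of that reactivity in TypeScript (my own illustration – the badge IDs, rooms, and rules are invented): the system responds to who the user is and where they are, rather than to explicit commands.

```typescript
// Context-aware reactivity: behaviour is driven by who and where the
// user is, not by explicit commands.
type Zone = "office" | "meeting-room" | "hallway";

interface ContextEvent {
  badgeId: string; // the wearer's activity badge
  location: Zone;
}

function reactToContext(event: ContextEvent): string {
  switch (event.location) {
    case "office":
      return `route calls for ${event.badgeId} to the office handset`;
    case "meeting-room":
      return `silence notifications for ${event.badgeId}; show the agenda on the wall display`;
    case "hallway":
      return `hold messages for ${event.badgeId} until they settle somewhere`;
  }
}

// Simulated badge sightings stand in for a real sensor network.
const sightings: ContextEvent[] = [
  { badgeId: "badge-17", location: "office" },
  { badgeId: "badge-17", location: "meeting-room" },
];
sightings.forEach((s) => console.log(reactToContext(s)));
```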

Hiroshi Ishii's group at MIT's Media Lab has been exploring "Tangible Bits".

So far, the Cultural Narratives we tell about The Computer Revolution focus on turning the physical world into virtual representations. Cash into Bitcoin. Paperless offices. Books into eBooks. We're still caught in the dream of Cyberspace where the laws of physics do not apply.

The Tangible Bits research challenges the assumption that turning atoms into bits is a universal good. While "digital and physical media might be informationally equivalent, they are not interactionally equivalent" (44)

The goal of the Tangible Bits research is to put physicality back into digital experiences, supporting natural interaction in the real world.

Thinking in 'inputs' and 'outputs' is unhelpful for Tangible Computing. In our everyday environment, these are coupled: they're interconnected and coordinated. Movement in a space affects a display on a wall; moving an object changes information. It's a Feedback Loop.
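A small sketch of that coupling in TypeScript (my own illustration – the object and wall display are invented): moving the object is the input, and the display update is the coordinated output.

```typescript
// Coupled input and output: moving the object *is* the interaction,
// and the display updates in coordination with the movement.
interface TangibleObject {
  name: string;
  x: number;
  y: number;
}

class WallDisplay {
  render(obj: TangibleObject): void {
    console.log(`wall shows "${obj.name}" at (${obj.x}, ${obj.y})`);
  }
}

const display = new WallDisplay();
const puck: TangibleObject = { name: "city map", x: 0, y: 0 };

// There is no separate "input device": the object's movement feeds
// straight back into what is shown – a feedback loop.
function move(obj: TangibleObject, dx: number, dy: number): void {
  obj.x += dx;
  obj.y += dy;
  display.render(obj);
}

move(puck, 3, 1);
move(puck, -1, 4);
```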

Tangible Computing differs from Ubiquitous Computing – it doesn't hold that computing should disappear into the world, but that it should be present and deeply integrated into artefacts. “Tangible Bits provides some balance to the idea that a transition from atoms to bits is inevitable and uniformly positive” (44)

Traditional interfaces put only one element in focus at a time: one cursor, one window, one task. In the embodied real world, we achieve things by coordinating multiple limbs and senses together. Think about the number of physical touch points when someone plays a piano: feet, fingers, arms, eyes, ears.
“Not only is there not a single point of interaction, there is not even a single device that is the object of interaction.” (51)


Social Computing

Social Computing is the application of Cultural Anthropology and Sociology to designing interactive systems. Computers are obviously integrated into the larger social fabric of our lives and civic structures - social sciences help us explore those relationships.

Anthropologists believe we need to do more than simply describe what the members of a culture do. Through Thick Description and Deep Hanging Out, we need to find out what they experience while doing it, why they do it, and how it fits into the fabric of their daily lives.
