hci: the human as appendage

Virtual Realities - Real Concussions


When I was about 12 years old I went to a trade fair near my town. Years later, the only memory I keep of it is the following anecdote.

While walking around with my grandfather I saw that the local bank had a very flashy and futuristic booth where you could experience a virtual reality environment. A sign in it read "VIRTUAL (n) that which is real, but doesn't exist". Three people wearing head-mounted displays were looking around in slightly crouched body poses, as if a swirling flock of birds were attacking them. I had seen these VR systems in science magazines and I knew the lore surrounding them, but I had never before experienced one.

I waited in line for what seemed an eternity and finally got to the VR station. Every set had a stewardess assisting the public. Mine handed over the HMD and gave me a few explanations, but I couldn't really listen to her; I was too overexcited by the presence of the Silicon Graphics stations, glistening under the theatrical effect of the spotlights.

After the stewardess reset the trip, the HMD showed what seemed like a church, with stained glass high up on some walls and very ample spaces. I couldn't see my arms or any other part of my body, as I had seen in movies. The graphics were slow: I could see the frames changing and the viewpoint catching up to the position of my head after a considerable delay. I moved my head around fast, trying to explore the whole virtual space in the little time I had. I was not aware of what was going on outside of the HMD; the outside world was entirely shut out.

As I moved my head around I had no perception of the location of the stewardess. While I queued I had seen her helping people avoid getting tangled in the cables that connected the HMD, but I had no idea where she was with respect to me, or even which way I was facing in the real world. At some point she must have gotten pretty close to me, because I felt a dry, strong bang, followed by a muffled shriek. The next thing I remember was my grandfather talking to me from behind, saying that I had to come out, that I had been there too long and had hurt the stewardess. I was very confused.

When I took the HMD off, the young woman was sitting on a chair just behind the computer setup, being assisted by a colleague, her hand clasped to her forehead. My grandfather approached her to apologize in my stead. Apparently I had banged her head pretty badly with one of the hard edges of the HMD. I can't say that I was very worried about her; the excitement of the experience and the intensity of my disappointment in such crude technology were stronger than my compassion for the stewardess at the time.

I remember the whole experience as intensely disappointing.

In retrospect, and with the knowledge I have gained since, that incident has contributed considerably to shaping the way I understand the discipline of HCI (Human-Computer Interaction). With the advent of every new human-computer interaction technology there is always a human having to make an adaptive leap, and a trail of people with some kind of physical side effect that results from maladaptation. Every new development in HCI is marketed as "easier to use" or "more natural". From the mouse to the Kinect, every controller has claimed to revolutionize the way we interact with the machine, and every single one of them has left a trail of injured humans along the way.

New, newer, newest


Humans seem quite ready to adopt new forms of interaction despite our ignorance of the impact these technologies have on their perception of their own bodies.

Wireless networking, for example, is a technology that has quickly spread to every corner of the planet, and it was not until it was ubiquitous that studies were conducted to understand its impact on the human. This is, in fact, an ongoing experiment.

This general attitude of the human in the face of new technologies opens a realm of almost infinite possibility for the machine. The field of HCI parasitizes this special status that humans concede to technology. Humans seem to have no trouble subjecting the body to untested technologies for the sake of novelty.


Psycho-physical modalities


“One of the things I always liked about the Moviola is that you stand up to work, holding the Moviola in a kind of embrace [...] Editing is a kind of surgery -- and have you ever seen a surgeon sitting to perform an operation? Editing is also like cooking -- and no one sits down at the stove to cook. But most of all, editing is a kind of dance -- the finished film is a kind of crystallized dance -- and when have you ever seen a dancer sitting down to dance?”

— Walter Murch

A particular form of dysfunction comes from forms of interaction that lock the human body into a single modality of use for extended periods of time. Modalities are psycho-physical in the sense that the psychological state and the physical use together constitute a modality. A person can be said to be listening, speaking, or constructing, and irrespective of their concrete activity we can make assumptions about the state they find themselves in.

A healthy human subject in a wakeful state of full awareness is multimodal in state and potential. Not only is the subject fully engaged psychologically and physically, but the subject is free to change these states effortlessly, in a natural flow from one modality to the next. In this sense there are no interruptions, simply because they do not exist. When something else calls for attention, a person in a wakeful state can shift modalities without ever fully abandoning their activity; only the modality and the subject of engagement change, but the person never abandons a state of full engagement. This is a natural modality.

Multimodality is a word often used in HCI parlance, but I argue that its meaning is perverted when discussing human forms of engagement, as it is derived from a machine-centric view. An interface is said to be multimodal when it provides several distinct means for the input and output of data. This is held to be a good thing, as it supposedly increases the usability of a system. However, this apparently beneficial effect only takes into account the total data throughput between human and machine, and assumes that redundancy and synergy are beneficial to the human. This definition of multimodality rests on how much attention the machine can get from the human.

An example of this is the head-mounted display (HMD): by providing visual and aural feedback as well as tactile means of navigation, it sustains a high data throughput between the human and the machine. But the human loses awareness of the world around them. A person trying to engage somebody who is wearing an HMD is sure to be interrupting. What one gains in immersion in a multimodal interface one loses in awareness. In this sense even so-called multimodal HCIs are in fact locking the human into one modality of use; this is why I resist the notion of multimodality in HCI and would rather call these interfaces multi-channel instead. These types of interfaces make full conscious awareness practically impossible. The human becomes absorbed in a single modality of use, the one established by the interface.

Technobondage


I call this locking of the human into a modality a relationship of Technobondage. This kind of relationship applies to the chainsaw as well as to the computer. All technologies bring with them implicit propositions for bondage of the human. The machine is needy and the human forgiving.

Technobondage is a purely intellectual exercise, a kind of fetishism, that relies on the psycho-physical subjugation of the other for a momentary sensation of pleasure derived from a sense of efficiency. It is a mechanised form of sadism in which the catharsis is eternally delayed.

It is no wonder that humans derive pleasure from the destruction of machines. Catharsis is only possible when the dependency is broken; only then can consciousness return. At the same time this catharsis is necessary for the death of the machine: evolution can only exist where death discards the obsolete and the inadequate. It is by the destruction of technological forebears that new technologies get made.

The least worthy way for a machine to die is to become a museum piece, a mere display item, a historical sample, for it is then that the logic of technobondage gets turned on its head: the machine can no longer subjugate the human.

HCI is obsolete


HCI as a discipline is based on a principle that no longer holds true: that human and machine ought to be distinct, one performing computations while the other performs interactions. This is a dualistic view, consistent with the Cartesian machine and oppressive of the body.

Most efforts in HCI have in one way or another produced gadgets that require the learning of a somatic grammar. By somatic grammar I mean a set of bodily movements that, when combined, convey the intention of the user to the machine. Whether this somatic grammar involves dragging a piece of plastic over a rough mat and clicking on buttons, or calibrating thoughts with a device that picks up brainwaves, is beside the point; in either case there is a grammar of use. This grammar is composed of an ever-growing set of somatic verbs: drag, drop, click, swipe, pinch, tap, rotate, shake, bump, think up, think down. None of these is part of a human’s natural use; they have to be learned, and the gadgets often need to be trained or calibrated. The phrase “move to the last photograph, select it and zoom in to see a detail” could be translated into the somatic grammar of some smartphones as: “swipe left, swipe left, swipe left, double tap, pinch, swipe”. This succession of verbs forms a sentence that expresses an intention to the machine.
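The translation above can be sketched as a lookup into a learned phrasebook. This is a minimal sketch, not any real toolkit's API: the verb list and the example sentence come from the text, while all names and structure are purely illustrative.

```python
# A minimal sketch of a somatic grammar: intentions are expressed to the
# machine as sentences built from a fixed vocabulary of learned gesture
# verbs. The verbs and the example translation come from the text above;
# everything else is illustrative.

SOMATIC_VERBS = {
    "drag", "drop", "click", "swipe", "pinch", "tap",
    "rotate", "shake", "bump", "think up", "think down",
}

# The "phrasebook" a user internalizes for one hypothetical smartphone.
PHRASEBOOK = {
    "move to the last photograph, select it and zoom in to see a detail":
        ["swipe left", "swipe left", "swipe left", "double tap", "pinch", "swipe"],
}

def translate(intention: str) -> list[str]:
    """Translate a human intention into a gesture sentence.

    Every step must be built from a verb in the learned set; there is
    no way to express an intention outside the grammar.
    """
    sentence = PHRASEBOOK[intention]
    assert all(any(verb in step for verb in SOMATIC_VERBS) for step in sentence)
    return sentence
```

The point the sketch makes is structural: the human carries the phrasebook, and anything not in it simply cannot be said to the machine.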

There is a reason why they have to be learned; let the latest gadget serve as an example of why this is so.

At the time of this writing, a new gadget called MYO is being advertised that uses electromyography as a gestural interface to a computer. Electromyography is a technique that picks up the electrical pulses sent by the motor-control nervous system to individual muscles. MYO can read these signals and translate them into a model of tensional patterns in the arm and fingers, allowing for the recognition of very detailed and very specific gestures. It is on MYO's website, in the FAQ section, that one finds a succinct expression of why MYO is more of the same: “We use a unique gesture that is unlikely to occur normally to enable and disable control using the MYO.” It is this “unlikely to occur normally” that pervades all the somatic verbs that enable interaction between human and machine. It is this attempt to distinguish a gesture that wants to communicate an intention to the machine from a gesture that would occur naturally that directly opposes the possibility of integration.
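The enable/disable logic described in that FAQ quote can be sketched as a toggle keyed to one distinctive gesture. The gesture and command names below are assumptions for illustration only, not MYO's actual vocabulary or API:

```python
# Control is toggled only by one "unlikely to occur normally" gesture;
# every other movement is discarded while control is disabled. All
# names here are hypothetical.

class GestureController:
    # Assumed name for the deliberately unnatural enabling gesture.
    ENABLE_GESTURE = "double_tap_fingers"

    # An assumed mapping from recognized gestures to machine commands.
    COMMANDS = {"fist": "pause", "wave_left": "previous", "wave_right": "next"}

    def __init__(self) -> None:
        self.enabled = False

    def feed(self, gesture: str):
        """Feed one recognized gesture; return the command it triggers,
        or None when the gesture only toggles control or control is off."""
        if gesture == self.ENABLE_GESTURE:
            self.enabled = not self.enabled  # the only gesture that always acts
            return None
        if not self.enabled:
            return None  # natural movement is deliberately ignored
        return self.COMMANDS.get(gesture)
```

The sketch makes the essay's point concrete: the whole design hinges on a gate gesture chosen precisely because no natural movement would ever produce it.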


fig 4. Moviola film editing station.