Thursday, June 22, 2006

MILESTONE in ubiquitous-computing

All watched over by machines of loving grace: Some ethical guidelines for user experience in ubiquitous-computing settings [1]

“If ubicomp applications are rushed to market and allowed to appear as have so many technological artifacts in the last thirty years, then they will present those users with a truly unprecedented level of badness.”

Note: It can be difficult, initially, to consider ubiquitous-computing environments as a special case for user experience work. Before they are knit together, the elements constituting ubiquitous systems may appear to be merely conventional technological devices, with relatively well-documented interfaces and affordances. It is only as they join and fuse that the emergent properties we think of as “ubicomp” come to the fore. It can be equally imprecise to speak of “users,” in a context where a human being encountering ubiquitous information-processing technology may more accurately be considered as a subject. Nevertheless, I have used the term throughout, as it is established and widely understood.

Ubiquitous computing is coming. It is coming because there are too many too powerful institutions vested in its coming; it is coming because it is a “technically sweet” challenge; it is coming because it represents the eventual convergence of devices, tools and services that became inevitable the moment they each began to be expressed in ones and zeroes.

It is a future structurally latent in the new schema for Internet Protocol addressing, IPv6, which, with its 128-bit address space, provides some 6.5 × 10²³ addresses for every square meter on the surface of our planet, and therefore quite abundantly enough for every pen and stamp and book and door in the world to talk to each other. And of course it is a future economically latent in the need of manufacturers and marketers for continuous growth, and the identification of vast new markets beyond the desktop, laptop, personal audio player and mobile phone.
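That density figure is easy to check with back-of-envelope arithmetic. A minimal sketch (the Earth's surface area of roughly 5.1 × 10¹⁴ square meters is an assumption supplied here, not a figure from the text):

```python
# Back-of-envelope check of the IPv6 address-density claim above.
# Assumption: Earth's total surface area is about 5.1e14 square meters.

IPV6_ADDRESSES = 2 ** 128        # the 128-bit IPv6 address space
EARTH_SURFACE_M2 = 5.1e14        # approximate, land and sea combined

addresses_per_m2 = IPV6_ADDRESSES / EARTH_SURFACE_M2
print(f"{addresses_per_m2:.2e} addresses per square meter")
```

The result comes out near 6.7 × 10²³, in the same neighborhood as the figure cited; the exact value depends on which surface-area estimate one plugs in.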

The slow fusion of our mobile phone and wireless broadband networks, the accelerating miniaturization and vastly reduced cost of RFID chips, the increasing ease with which circuits can be printed or embedded in wearable, even disposable items, improved techniques in ambient information display, the aging of society and corresponding necessity for outboard memory augmentation, even factors like public fear and the ostensible prerogatives of security in the post-September 11th era (“reduce the public sphere, restrict access, and limit unmonitored activity” [2]) all imply that ubicomp will play an increasingly prominent role in our lives, technically, socially and psychologically.

Despite our best efforts—which is to say, the best efforts of a great many sensitive and intelligent people working in good faith, over the course of a decade and in every country where access to the Internet is commonplace—even ordinary operations in such a comparatively simple regime as the World Wide Web still all too often present users with unacceptable difficulty, confusion and uncertainty. Moments of perplexity and doubt remain, strewn through even the most quotidian tasks like landmines among the fields. The Web works, but rarely as effortlessly or in a manner as free from undue complication as we might wish.

By comparison with the World Wide Web, ubiquitous computing is vastly more insinuative. By intention and design, it asserts itself in every moment and through every aperture contemporary life affords it [3]. It is everyware.

The prospect of such moments of disjunction and dismay being allowed to persist in the enormously more powerful, pervasive and intimate milieu of ubicomp, percolating through even to realms of existence not previously considered as subject to operations through an “interface”—and especially in contexts where users’ governing mental models are likely to be social and interpersonal [4] in nature rather than technical—is still more unacceptable. (This is most especially so in the absence of compelling and clearly articulated value propositions for ubiquitous systems from the user’s point of view.)

Social engineering/society by engineers

It should be clear that ubicomp represents a substantial raising of stakes over the Web case, the PDA case, the mobile-phone case, or other scenarios we’re accustomed to; that its field of operation is by definition total; and that its potential for harm if poorly implemented is such that the user experience is too important to leave to chance, or the discretion of developers.

My sense is that the challenge of ubiquitous computing for user-experience professionals resides fundamentally in two places: in the regrettable quality of interaction typically manifested by complex digital products and services designed without some degree of qualified UX intervention, and in the ease with which ubiquitous systems can overwhelm or render meaningless the prerogatives of privacy, self-determination and choice that have traditionally informed our understanding of civil liberty.

We can all readily encompass the danger of the first situation. With all due respect, we have seen that products designed by engineers, or whose design is permitted to default to the tastes, preferences and predilections of engineers, almost always fail end users (unless those end users are themselves engineers).

This is not an indictment of engineers. They are given a narrow technical brief, and within the envelope available to them they return solutions. It is not in their mandate to consider the social and environmental impact of their work. From our vantage point as user-experience professionals, however, it is clear that there have always been emergent properties of systems that are designed with a given end in mind – and that sometimes, those properties and effects are of much greater consequence than the intended result.

If ubicomp applications are rushed to market and allowed to appear as have so many technological artifacts in the last thirty years—i.e., without compassionate attention to the needs and abilities of all sorts of human users, without many painstaking rounds of iterative testing and improvement in realistic settings—then they will present those users with a truly unprecedented level of badness.

Imagine the feeling of being stuck in voice-mail limbo, or fighting unwanted auto-formatting in a word processing program, or trying to quickly silence an unexpectedly ringing phone by touch, amid the hissing of fellow moviegoers—except all the time, and everywhere, and in the most intimate circumstances of our lives. Levels of discomfort we accept as routine (even, despite everything we know, inevitable!) in the reasonably delimited scenarios presented by our other artifacts will have redoubled impact in a ubicomp world.

Even if for this reason alone, we must ensure that this class of products and services is designed better, with more sensitivity and compassion, than others in the past.

It is, however, the impact of ubicomp on civil liberty that I am most concerned with. While the quality of ubiquitous interaction is more squarely within the typical ambit of our professional concerns, it is the civic sphere where our input and perspective is most critical and can be leveraged to secure the most enduring and important gains.

Ubiquitous systems lend themselves easily to—indeed, redefine—surveillance. However discrete they may be at their design and inception, their interface with each other implies a domain of action that extends from the very contours of the human body outward to whatever arbitrarily large civic space can be equipped with the necessary sensors and effectors. In short, there is no current technology with greater potential to support authoritarian and totalitarian social engineering, and the limitation otherwise of choice.

This will not always be a matter of imposition: it should be pointed out that some of us, perhaps even a majority, will want and strongly prefer such systems when they become available. As I have noted previously, critics tend to react negatively to the prospect of panoptical surveillance, “but what those who do so generally fail to understand is that many, many people like the idea that they’re always being watched, because they equate that watching with always being cared for…if the most accepted model for pervasive devices to date has been the Assistant, we should never forget that a competing model—one that holds strong appeal for a great many people—is the Superintendent.”

In the contemplated introduction of any system with so much inherent potential for oppression, it is clearly incumbent upon its designers to provide reasonable assurances for the maintenance or extension of human freedom, agency and autonomy.

Why us, why now?

With our orientation toward, and intense dedication to improving, the quality of interaction experienced by users of the World Wide Web (and technical systems of all sorts), we in the user-experience community are uniquely positioned to affect the emergence of this technological milieu for the better.

With our advocacy on behalf of a party otherwise under- or unrepresented in the development process—the human being(s) using the product or service at hand—we bring a certain grounding clarity to the proceedings. With our insights concerning the optimal order, sequence, priority, and rate of information presented to the user, we have frequently allowed a successful business case to be asserted where there was none before. We may even, if we are particularly lucky, be able to bring aesthetic sense and discretion to the projects on which we are engaged.

It is my sense that the time is apt for us to begin articulating some baseline standards for the ethical and responsible development of user-facing provisions in ubicomp applications, before our lives are blanketed with the poorly-imagined interfaces, infuriating loops of illogic, and insults to our autonomy that have characterized entirely too much human-machine interaction to date.

None of the following should be understood to arrogate to ourselves the role of sole guardian of the user’s interests, or to overlook the foundational work already done in the human-computer interaction community. These guidelines are intended for working information architects, usability specialists and user-experience designers, and address situations from their perspective.


The intent of this section is to enunciate some general principles for us to observe, as designers and developers for ubiquitous systems, whereby the ethical and social prerogatives of our “users” can be preserved.

The most essential and the hardest to express with any rigor, which we might call principle 0, is, of course, first, do no harm: if we could all be relied upon to take this simple idea to heart, thoughtfully and with compassion, there would be very little need to enunciate any of the following.

Given the difficulties of deriving practically useful guidance from such bywords, however, let us enunciate a further five guidelines that should go some way toward illuminating the challenges we face in designing useful, humane instantiations of ubicomp:

  • Principle 1. Default to harmlessness. Ubiquitous systems must default to a mode that ensures their users’ (physical, psychic and financial) safety.

    We are familiar with the notion of “graceful degradation,” the ideal that if a system fails, if at all possible it should fail gently in preference to catastrophically, with functionality being lost progressively rather than all at once.

    Given the assumption of responsibility for users and their environments implied by the ubicomp rubric, such systems must take measures that go well beyond mere graceful degradation.

    Slaved passenger vehicles, dosage settings for pharmaceutical-delivery systems, and controls for sealed or denied environments are examples of situations where redundant interlocks must be provided to ensure user safety.
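One way to make the principle concrete in code: treat the harmless state as the answer to every doubt, and layer a second, redundant check even over well-formed requests. The sketch below is illustrative only; the device, names and limits are all invented for this example.

```python
# Sketch of Principle 1: a hypothetical drug-infusion controller that
# defaults to a harmless state whenever anything is uncertain.
# All names and limit values here are invented for illustration.

SAFE_RATE_ML_PER_HR = 0.0        # "off" is the harmless default
MAX_RATE_ML_PER_HR = 10.0        # hard ceiling: a redundant interlock

def select_rate(requested, sensor_ok, operator_confirmed):
    """Return an infusion rate, defaulting to harmlessness on any doubt."""
    if not sensor_ok or not operator_confirmed:
        return SAFE_RATE_ML_PER_HR      # fail safe, not merely gracefully
    if requested < 0 or requested > MAX_RATE_ML_PER_HR:
        return SAFE_RATE_ML_PER_HR      # range check, even on valid input paths
    return requested
```

Note that the range check fires even when sensors and operator agree: the interlocks are redundant by design, which is precisely what distinguishes this posture from mere graceful degradation.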

  • Principle 2. Be self-disclosing. Ubiquitous systems must contain provisions for immediate and transparent querying of their ownership, use, capabilities, etc., such that human beings encountering them are empowered to make informed decisions regarding exposure to same.

    Some analogue of broadcast station identification conventions, or perhaps of the Identification Friend or Foe (IFF) standards by which military systems identify themselves to each other, would be necessary.

    “Seamlessness” must be an optional mode of presentation, not a mandatory or inescapable one: both the interfaces through which information is passed between adjacent systems, and the actual data that is so communicated, must be equally capable of self-revelation.

    Ubiquitous systems, by definition, cannot help but gather information constantly, including arbitrarily granular location of users in four-dimensional spacetime. It would be unreasonable and unrealistic to assert a Web-derived model for user consent to such ongoing information-garnering activities in the ubicomp context: the scenario would be one of constant, exasperating interruption to task flow, as the user was asked to give explicit consent to the transmission of each momentary state. Given this, some provision for at least determining who owns a given system, and what will be done with information so revealed, is necessary.
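What such a self-disclosure might look like as data: a standard, machine-readable record any passerby's device could query, loosely analogous to broadcast station identification. Every field name and value below is an assumption invented for illustration, not a proposed standard.

```python
# Sketch of Principle 2: a "who are you, and what do you collect?"
# record a ubiquitous system could publish on demand. The schema and
# all example values are hypothetical.

import json

def disclosure_record(owner, purpose, data_collected, retention_days, contact):
    """Build a machine-readable self-disclosure a passerby could query."""
    return json.dumps({
        "owner": owner,
        "purpose": purpose,
        "data_collected": data_collected,    # e.g. ["anonymized location"]
        "retention_days": retention_days,
        "contact": contact,
    })

beacon = disclosure_record(
    owner="Example Transit Authority",       # hypothetical operator
    purpose="platform crowd management",
    data_collected=["anonymized location"],
    retention_days=30,
    contact="privacy@example.org",
)
```

The point is not the particular fields but that the answer exists at all, and can be had without interrupting the user's task flow: disclosure on query, rather than consent on every transmission.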

  • Principle 3. Be conservative of face. Ubiquitous systems are always already social systems, and must contain provisions such that wherever possible they not unnecessarily embarrass, humiliate, or shame their users.

    Consider this brief vignette, from Thomas Disch’s legendary 1974 novel 334:

    “Arnold Chapel,” a voice over the PA said. “Please return along ‘K’ corridor to ‘K’ elevator bank. Arnold Chapel, please return along ‘K’ corridor to ‘K’ elevator bank.”

    Obediently he reversed the cart and returned to ‘K’ elevator bank. His identification badge had cued the traffic control system. It had been years since the computer had had to correct him out loud.

    While Disch undoubtedly deserves credit for having so vividly imagined ubicomp avant la lettre, some twenty years ahead even of Mark Weiser, is there any reason why the system’s correction need be perceptible to anyone but Chapel himself? Why humiliate, when adjustment is all that is mandated?

    This goes beyond formal information-privacy concerns, toward the instinctual recognition that no human society can survive the total evaporation of its protective hypocrisy. Some degree of “plausible deniability,” including above all imprecision of location, is probably necessary to the psychic health of a given community, such that even (natural or machine-assisted) inferences about intention and conduct may be forestalled at the subject’s will. Still worse than the prospect of being nakedly accountable to an unseen, omnipresent “network” is being nakedly accountable to each other, at all times and places.

    At the absolute minimum, and in accordance with Principle 2, ubiquitous systems with surveillant capacity must announce themselves as such, in such a way that their field of operation may be effectively evaded.

  • Principle 4. Be conservative of time. Ubiquitous systems must not introduce undue complications into ordinary operations.

    If they impact such operations, they must be at least as transparent to users as the pre-existing equivalent: that is, one should be able to sit in a chair, place a book upon a shelf, boil a kettle of water without being asked if one “really” wants to do so, or having fine-grained control wrested away. In the absence of other information, the default assumption must be that an adult, competent user knows and understands what they want to achieve and has accurately expressed that desire in their commands to the system [5].

    By the same token, a universal undo convention similar to the keyboard sequence “Ctrl Z” should be afforded; “save states” or the equivalent must be rolling, continuous and persistently accessible in a graceful and intuitive manner. If a user wants to undo, or return to an earlier stage in an articulated process, they should be able to specify, e.g., how many steps or minutes’ progress they would like to efface. (“Make it like it was two or three minutes ago!”)

  • Principle 5. Be deniable. Ubiquitous systems must offer users the ability to opt out, always and at any point.

    As an absolute ethical imperative, users must be afforded the ability to make their own meaningful decisions regarding their exposure to ubiquitous perception, the types and channels of information such exposure will necessarily convey, and the agencies receiving and capable of acting on such conveyance.

    Critical to this is the ability to simply say “no,” with no penalty other than the inability to make use of whatever benefits the ubiquitous system offers its users. (The “safe word” concept may find a novel and unforeseen application here.)
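The structural shape of that guarantee is simple enough to state in code: consent is checked before anything else happens, and the only branch an opt-out takes is the one where the system does nothing at all. Function names here are invented for illustration.

```python
# Sketch of Principle 5: an opt-out check consulted before any sensing
# or service. Saying "no" costs the user nothing except the system's
# benefit; no other penalty attaches. Illustrative names only.

def handle_person(opted_out, provide_service, ignore):
    """Dispatch on consent: opted-out people are simply not perceived."""
    if opted_out:
        return ignore()          # no data gathered, no penalty applied
    return provide_service()

result = handle_person(
    opted_out=True,
    provide_service=lambda: "personalized wayfinding",
    ignore=lambda: None,
)
```

The essential property is that the opt-out path is checked first and carries no side effects; the benefit foregone is the entire cost of refusal.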


These recommendations, clearly, are not comprehensive [6]:

They are certainly capable of being gamed, of being exploited by individuals determined to gain unfair advantage. They depend vitally for their effectiveness on voluntary compliance. They will necessarily involve compromises, conflicts, tensions and trade-offs. When were things ever otherwise?

But if thoughtfully and consistently implemented, it is my strong belief that they will go a long way toward improving the baseline experience for the human users and subjects of ubiquitous systems, and therefore rendering such systems acceptable for widespread implementation.

This, above all, is not a place for “service packs”; if ever there were a situation to compel the devotion of our full professional attention and compassionate effort to the individual human subject of technological intervention when it could still make a difference, this is it. [7]
