November 13, 2005

Morality and User Interface Design

Morality and User Interface Design are two topics that do not seem closely related at first glance. After all, interface design is about “nice” and usable interfaces, and we as Usability Engineers and User Interface Designers don’t have to make decisions with any serious moral impact, right? – Well, how you design a user interface for a mobile phone may not be something Immanuel Kant would have bothered himself with, were he alive today. But what about such things as, e.g., user interfaces for weapon control?

M.L. Cummings at MIT wrote an interesting article on “Creating Moral Buffers in Weapon Control Interface Design” [abstract] in which she looks at military and medical settings and describes the moral implications that decisions in those areas of interface design inevitably have.
The basic argument she makes is that a user interface can create a “gap” between a person’s actions and their consequences, which results in psychological/emotional (and in some cases also physical) distancing from those consequences and therefore in a diminished sense of accountability and responsibility: the moral buffer.
In addition, users have a tendency to anthropomorphise computers. (Those of you who have ever yelled at your computer when it didn’t do what it was supposed to do will know what she is talking about.) This, together with the cognitive limitations a stressful situation can produce and the moral buffer described above, can even lead to users assigning moral authority to computers in certain situations. This may seem rather theoretical – as long as you are not a patient in a hospital where staff relies on a system like APACHE, which determines “at what stage of a terminal illness treatment would be futile”.

Usability Engineers and User Interface Designers should be aware of this issue, which basically affects every area where an interface has to be designed for a system that influences the well-being of humans – or the lack thereof, as with weapon control…

Two thoughts come to mind:
  • Can interface design also have the opposite effect, creating a deeper sense of moral involvement by the user?
  • Are there other moral pitfalls in a Usability Engineer’s or User Interface Designer’s work – even when not concerned with life-critical systems?

For the first question, I think it is possible. Ironically, an area that Cummings names as one encouraging emotional detachment could provide an example: video gaming. It is true that war seems like a video game at times, and this can definitely alter the perception of the things that happen. But fortunately there are other types of games besides shooters. Take “The Sims” as an example. People spend hours caring for those computer-generated characters, providing them with a nice home and helping them advance in “life”. Nothing they do has any significant impact on “real life” (except for the lack of time users may experience for other activities), and yet players care very much for the well-being of their “friends”. So this special kind of interface seems to cater to the tendency to anthropomorphise the computer by giving interface elements human form. (For “The Sims” players, it may even be weird to refer to the characters as “interface elements”. But that’s what they are: you click on them, get context menus – everything is there…)
So does that mean that every interface should look like a game or provide little people for the user to see? Probably not. But maybe the standard questions “What constitutes the task?” and “What information is needed to fulfil the task?” should be supplemented with “What does the user need to realize the implications of his actions?” If it’s Sims walking around, so be it. For other user types it may be numbers and statistics. The point is that “traditional” interface design may often take the easy way out by putting a narrow focus on the task and not caring for anything else – such as consequences.

This also answers the second question: Usability Engineers and User Interface Designers may ignore the moral impact of their work whenever their focus becomes too narrow. And the classical focus on users and their tasks may be exactly that: too narrow! We want to help users do their work more efficiently and comfortably. But designing systems in a way that allows that may also make it possible to do the work with fewer people. So in this case it’s not the consequences of the users’ tasks that the Usability Engineer should consider, but rather the consequences of his own work, which improves the efficiency of users working with a system.

The basic lesson is – and hopefully that does not come as a surprise to you – that Usability Engineers’ work is conducted in a context that may be larger than the one analysed in Contextual Analyses. This may not always be as obvious as it is with weapon control interfaces. Sometimes one could think that the system one is dealing with is a very clear-cut entity that is used to fulfil a clearly defined task and has no other (moral) connection to the “real world”. As seen above, one is well advised to think again…

6 comments:

Stefan Wobben said...

Donald Norman recently made the case for Activity Centered Design. The problem with ACD is that the focus on the task is too narrow. A more holistic view of User Centered Design is necessary. Do you have any ideas what a holistic UCD method should look like?

Aapo Laitinen said...

Your post reminds me of something I noticed while my brother was playing the game Star Wars: Knights of the Old Republic (or KOTOR). He would happily go around slaughtering people in the Grand Theft Auto series (or GTA), but was having terrible difficulty turning to the Dark Side in KOTOR. The difference seems to be that in KOTOR you typically have with you one to three computer-controlled characters, who comment on your actions. Doing something morally questionable would result in "Why did you do that?", "Was that really necessary? He needed his money more than we do." or something to that effect, and usually you had to answer something. In GTA, your victims typically die with a faint moan, and praise is the only thing you'll hear. Also, KOTOR prods you into considering the player character to be your alter ego, whereas in GTA you're pushed into acting out a role, instead of deciding for yourself. And finally, in KOTOR you can succeed and get rewarded by being good, but in GTA you'd always fail your missions if you tried to follow any moral code.

Gabriel White said...

Great to hear people talking about this topic more. It's something I've been giving some thought lately, and one of the things I've thought about is the extent to which a UI is an "integrative" experience – does the UI help increase the user's sense of an integrated self, or does it disrupt and fragment it?

Mark said...

@Stefan
I think I’ll write a post for my blog on this Activity Centred Design issue. (Not that the world needs yet another comment, but anyway…)
Regardless of whether you do “UCD” (as defined by Norman) with its focus on users and individual tasks, or Norman’s ACD with its focus on activities, both views may be too narrow for the reasons I stated.
Coming to your question: I think that if you work in a commercial context, the word “holistic” may get all your client’s alarm bells ringing. (Budget warning!) This means that in such a context you may be restricted by constraints set by the client you are working for, which may by default narrow your focus. (Which is the client’s right if he buys your service to address a specific issue.)
In an ideal world (where you as a usability engineer / user interface designer are free to decide without any restrictions), you could ask two basic questions (I’m giving simplified “draft thoughts” here):

1. What are the implications of the task…err…sorry…activity in question?
The answer to this question depends on the dimensions you consider relevant. These could be, e.g., loss/gain of life, loss/gain of profit, fun/ennui, stress/relaxation, etc. This means extending the focus beyond merely understanding the activity in question. (The comfort of the “traditional” approach is of course that it saves you a lot of time and does not burden you with a very personal kind of involvement.) The answer to such questions may take the simple form of deciding whether to do the job or not.
It would be interesting to see what such a “catalogue” of questions could look like and which decisions could arise on that basis.

2. What are the implications of your work / of the goal of your work?
To answer this you would have to understand how the activity you investigate is embedded in organizational processes and/or people’s lives. Then you could estimate what potential influences it could have when you make people do a task more effectively, more efficiently, or more pleasantly. And it’s here where things start to become really complex, because you won’t be able to anticipate each and every consequence. And even if you could, you could only make reasonable guesses, never be sure. Again, avoiding this question makes life easy, but once you are aware of this issue, it may be hard for you to ignore the question.
(This has an equivalent in the discussion about the responsibilities of scientists. Are they required to bother themselves with the potential consequences of their research? See E = mc²…) We are touching the realm of “work ethics” here.

In a commercial context you may be restricted by default in asking and answering those questions. But I think it’s worthwhile assuming the “ideal world” for a moment and thinking about the dimensions that could be relevant to answering the questions, and about the possible consequences of the answers. For question 1, the inclusion of hedonistic aspects in usability evaluations may be one piece of the puzzle; for question 2, something like the UPA Code of Conduct may be a step on the way.

Later on, one should try to find out which of these thoughts and ideas could be transferred to the “real world” to extend our focus, if only a bit. For this we may also benefit from the different perspectives and backgrounds of the people who are active in the usability profession (computer scientists, psychologists, etc.).

So, the extension of focus in User Centred Design is triggered by discussions…like this one.

Mark said...

@Aapo
You made quite a few good points there. I agree that having a choice, being aware of that fact, and identification all play an important role.
(Disclaimer: I have played neither GTA nor KOTOR and comment on the basis of your descriptions.)
In GTA it seems that you don’t have a choice as to being a good or bad boy if you want to succeed in the game. You realize that the choice has been taken away from you by design. So the only choice you can make is to play the game or not. If you know that the game has no influence on “real life”, you might as well go for being the bad guy.
With KOTOR you realize that you have a real choice, and the comments from the computer characters (and probably the whole design of the game, as well as your experience with the movies) contribute to you identifying with the character you play. Even though this game, too, has no influence on real life, the decision to go for the Dark Side is harder than in GTA for these reasons. (Your realization that “it’s only a game” may still lead you to try out the Dark Side.)
So for systems that affect real events and people, questions for interface design could include: “How can it be conveyed to the user that he has a choice at all times?” and “How can the interface support his identification with what is happening?”
Maybe giving the user the choice is even better than forcing what the designer thinks is the “right” decision upon him. That would be worth some thought.

Mark said...

@gabriel
Those are some interesting – and complex – thoughts. I think that the examples given by Aapo, namely games, are good illustrations of such “dissociations”. But they also exist with non-game applications, as the article on the moral buffer shows.
So the question as to whether an interface results in you experiencing yourself as a whole or rather leads to a fragmentation (or “externalisation” of certain parts of your self) is less esoteric than it may seem at first glance. It’s a valid perspective that lies outside of what is usually looked at in “traditional” usability engineering.