December 07, 2006

Silent and incompetent users?

In this New York Times article on armrests that “chew up pants” (free registration required), two passages are especially interesting from a usability engineer’s point of view because they hint at phenomena that are also known in the area of user interface design.*
Rate of Complaints
“An hour spent interviewing Long Island Rail Road riders waiting for trains at Pennsylvania Station last week turned up 13 people, nearly all men, who said they had torn 22 articles of clothing, mostly pants. Only one of them had submitted a claim. Some men’s faces lighted up when asked if they had ever torn their clothing on the armrests, as if they had been waiting to tell someone.”

Sometimes one encounters the argument that an interface or a piece of software must be good because no one complains about it. The reasons why users do not complain are manifold; for example, they
- don’t care enough to complain
- develop workarounds for their problems
- do not know where to turn with their complaint
- simply abandon using the software and get themselves a better product.
One should thus not assume that users who do not complain are satisfied users. In the sample quoted above, only one of the 13 affected riders had submitted a claim; a purely complaint-based metric would thus have missed more than 90% of the problem. If the judgment of interface usability is based solely on the rate of users who call the hotline to complain, one could be in for big trouble… It can be worthwhile to actively elicit information on satisfaction instead of passively waiting for it.
Self-Assessment of Competence
“You feel more like an idiot than anything ... But then you realize, they could have designed it better.”

The first part of that statement hints at another reason why users may not complain about badly designed interfaces: they simply do not perceive a problem as being caused by bad design. Instead, they attribute the error to their own assumed incompetence. This behaviour can also be witnessed during some empirical usability tests. (“Oh, I seem to have made a mistake there.”) It is one of the reasons why obviously flawed interfaces often receive rather good scores in questionnaires and rating scales that are filled in during the post-test phase: users think that the problem lies within themselves and that they cannot handle the intrinsically “good” interface.
Therefore, data from usability testing must be carefully analysed, and not every utterance by users should be turned directly into a design guideline. It is an essential part of the usability engineer’s work to integrate data from different sources (observations, interviews, questionnaires) across different users to get the whole picture of the user experience and the potential for improvement.

Bottom line: Silent users, or users who express that they may be incompetent in certain areas, should not be regarded as a “seal of quality” for the usability of an interface. Reaching into the “usability toolbox” can produce valuable insights and a more valid impression of an interface’s usability and the options for improvement.

Given users’ tendency to hesitate in expressing their (justified) dissatisfaction, it’s nice to hear statements like the second part of the one quoted above, which illustrate a change in attitude: users are starting to expect quality, and they do not want to put up with badly designed products.
Ultimately, designing usable products helps to put things in perspective for users and allows them to judge their own competence properly. They realize where the sources of problems really lie, so they can provide information on how “they could have designed it better”.

*The whole article is interesting as an example of including user-centred measures (too) late in a design process and the resulting costs.

2 comments:

Stefan Wobben said...

Good post, Mark (what’s your last name?)

The Usability Stockholm Syndrome also offers a nice perspective on this matter.

"people are coming to our labs, as a guest of Microsoft. There's a little piece of human nature which says you don't go to someone's house and then insult them. They come to our place, we're giving them free stuff--no wonder they subconsciously want to please us a little."

The fact is that people will never verbalize everything we want to know.

The question is: what can we do about it?
Give better instructions? Make more use of ethnographic research? Any thoughts?

Mark said...

@Stefan
The article by Jensen Harris is indeed a nice addition to the point of view I am describing.
One could think that – in contrast to what Harris describes – when testing for an external client (i.e. when you’re not the in-house usability guy), you would not run into that kind of problem, because you can remind participants that you have no stake in the interface and that it is your sole task to evaluate it.
But participants unfamiliar with usability testing might not make that distinction between the “developer” and the “tester” of an interface and might instead think of them as one party – one party that will be offended if it hears too many negative comments about the interface. The fact that participants also receive some form of compensation when you are testing as an external contractor might contribute to that attitude.

Of course, one should attempt not to bias participants when giving instructions and should try to explain what the test is about (and what it is not about), but insisting too strongly that “we are testing the software and not you” may achieve exactly the opposite: it makes participants feel uncomfortable and renders them tight-lipped. It’s a bit of a Catch-22 situation: if you insist on stating the obvious, people may become suspicious. If you leave things open, that may well have the same effect.
It’s important to keep in mind that a usability test – unlike a physics experiment – is a social situation that cannot be conducted by going through a checklist, because you have to react flexibly to differences between people. Some people may consider this a flaw of empirical usability tests, but it really is just a constitutive element: it’s all about people, so an appropriate approach should be used, and it should not be assumed that each new user has the same characteristics as the last one who participated in the test.

As mentioned in my post, I think that asking users for their opinion alone can never give you the full picture. It’s really one of the interesting parts of usability engineering work to piece the information together. People might say one thing but act in a way that seems contradictory (and that’s not only applicable to usability testing…).
So you really have to combine your data sources: questionnaires, interviews, user utterances during interaction, observations, and whatever else seems relevant. Ethnographic research may be helpful if you have collected that type of data for the product in question (or for similar products, similar users, and similar working contexts). It may give you the “background” against which to interpret the data from the usability test. Not least, it’s your experience in conducting and analysing tests (and in interacting with the people participating in such a test) that has a significant impact on the quality of the data gathered and the resulting analysis. One could say that good usability testing is a combination of “craft and creativity”. It would be interesting to read more on that topic… or maybe to write more if I get around to it.