September 02, 2009

What You Always Wanted to Know About Eye Tracking - Part 1: Fixation Detection

Since the topic of eye tracking keeps drawing people’s attention – it’s a silver bullet for some, whereas others see it as a complete waste of resources – I thought a series of posts focusing on the methodological aspects of eye tracking research would be in order. The posts will not cover particular (usability) studies that have been conducted with eye tracking, but rather shed some light on the implicit and explicit assumptions, caveats, etc. of the method itself. Ideally, readers will find enough information to make up their own minds about whether eye tracking is an appropriate method for the questions they want to examine.

Before exploring more advanced conceptual aspects of the eye tracking method, it is worth noting that there are basic technical aspects that must be kept in mind when conducting or interpreting eye tracking research.


Variability of Fixation Detection Algorithms

Eye tracking is basically the measurement of gaze direction at a certain sampling frequency. After a calibration procedure, gaze direction can be mapped onto coordinates, e.g. on a computer screen, in order to determine how the user’s gaze moves over the display.
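As a rough illustration, the sketch below shows what such a calibrated data stream amounts to: a sequence of screen coordinates arriving at the tracker’s sampling rate. The mapping function, the coordinate convention and the screen size are invented for the example; real systems estimate a more complex (often polynomial) mapping during calibration, and every vendor’s software exposes the data differently.

# Toy illustration of calibrated gaze samples, not any vendor's actual API.
# A real tracker estimates the gaze-to-screen mapping during calibration;
# here it is reduced to a trivial scaling of normalized coordinates.

SCREEN_W, SCREEN_H = 1920, 1080   # assumed display resolution

def gaze_to_screen(gx, gy):
    """Map a normalized gaze direction (0..1 per axis) to screen pixels."""
    return gx * SCREEN_W, gy * SCREEN_H

# A 60 Hz tracker delivers one such sample roughly every 16.7 ms,
# a 1000 Hz tracker roughly one per millisecond.
print(gaze_to_screen(0.52, 0.31))   # -> (998.4, 334.8)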

Eye tracking hardware varies in the frequency with which data is gathered, starting at roughly 60 measurements per second and going up to 1000 Hz or even more. Usually, these individual measurement points are not the focus of the research – instead, fixations and saccades are the measures of interest. These have to be derived from the raw data (i.e. the individual measurement points). Salvucci & Goldberg (2000) note that “though widely employed, fixation identification and its various implementations have often been given short shrift in the eye-movement literature” (p. 71).

Fixation identification is a two-step process. The first step consists of cleaning up the raw data: when the subject blinks, for example, the raw data gets contaminated and the affected measurement points have to be excluded before further processing. In the second step, the cleaned-up data is subjected to an algorithm that aggregates the raw measurements into fixations. As a user of eye tracking (or as a reader of the respective studies), one should be aware that there is more than one algorithm to do so. After examining the practical implications of applying diverse algorithms to the same sets of raw eye tracking data, Salvucci & Goldberg conclude that “the choice of identification algorithms can dramatically affect the resulting identified fixations. By describing their chosen algorithm … researchers can better communicate their analysis techniques to others and thus provide more standardized and understandable results“ (p. 78).

What is clear from this conclusion is that there is no absolute “truth” concerning fixations that is somehow revealed by applying the eye tracking method. In the worst case, there are as many “truths” as there are algorithms for fixation identification. That is not to say that this renders eye tracking useless – one should, however, be aware that eye tracking is not as straightforward as measuring temperature: the fixation data obtained for a subject has already undergone heavy processing and is influenced not only by the subject’s behaviour, but also by the algorithm implemented in the eye tracking hard- and software. This does not necessarily imply that conclusions drawn from fixation data produced by different algorithms (from the same raw data) will also diverge, but the question would be worth systematic exploration in a practical context.
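To make the two steps concrete, here is a minimal sketch of a dispersion-threshold (I-DT) fixation identification algorithm, one of the algorithm families classified by Salvucci & Goldberg. The sample format, the blink handling and the threshold values are illustrative assumptions, not settings prescribed by the paper or by any particular eye tracking package.

# Minimal sketch of dispersion-threshold (I-DT) fixation identification.
# Samples are assumed to be (timestamp_ms, x, y) tuples; lost samples
# (e.g. during blinks) are assumed to carry None coordinates. Thresholds
# are illustrative, not recommended values.

def clean_samples(samples):
    """Step 1: drop invalid samples, e.g. those lost during blinks."""
    return [(t, x, y) for (t, x, y) in samples
            if x is not None and y is not None]

def dispersion(window):
    """Spread of a window of samples: (max x - min x) + (max y - min y)."""
    xs = [x for (_, x, _) in window]
    ys = [y for (_, _, y) in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def identify_fixations(samples, max_dispersion=25.0, min_duration_ms=100.0):
    """Step 2: aggregate cleaned samples into fixations."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow an initial window until it spans the minimum fixation duration.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration_ms:
            j += 1
        if j >= n:
            break
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # Extend the window while the dispersion stays below the threshold.
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            window = samples[i:j + 1]
            xs = [x for (_, x, _) in window]
            ys = [y for (_, _, y) in window]
            fixations.append({
                "x": sum(xs) / len(xs),          # fixation centroid
                "y": sum(ys) / len(ys),
                "start_ms": window[0][0],
                "duration_ms": window[-1][0] - window[0][0],
            })
            i = j + 1
        else:
            # No fixation starting here: drop the first sample and retry.
            i += 1
    return fixations

Even within this one sketch, changing max_dispersion or min_duration_ms produces different fixation counts and durations from the same raw data – which is exactly the kind of variability the quoted conclusion warns about.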

The next part of the series will discuss another technical aspect of eye tracking: influences on measurement precision.

References
Salvucci, D. D., & Goldberg, J. H. (2000). Identifying fixations and saccades in eye-tracking protocols. In A. T. Duchowski (Ed.), Proceedings of the 2000 Symposium on Eye Tracking Research & Applications (pp. 71-78). New York: ACM Press.

4 comments:

Guy Redwood said...

There is a facility in the latest versions of Tobii Studio that allows you to check the accuracy of eye tracking with a Tobii system after calibration. It displays a number of dots on the screen and a marker showing you where your fixation is. You look at a dot and the tracker confirms that it knows where you are looking. It's a nice simple demonstration of the high accuracy of modern eye tracking systems.

Jon Ward said...

Another Tobii functionality that you should be aware of is the fact that there are 2 filter settings within the software, and also a raw data option so that the pure gaze points are plotted or analysed. This is ideal for large reading studies, fine text or rapid movement on screen. There are also guidelines on the settings for the fixation filters, based on academic research, that optimise the filters for stimuli that are mostly text based, image based, moving, or a combination. As all this can be changed or tried post recording, there are no problems with 'choosing the wrong settings' before you go into testing, and indeed good practitioners may use multiple filters for different parts of the same stimuli.

Harry Brignull said...

Interesting stuff. All these details get neatly swept under the carpet when a heatmap is exported and stuck in a shiny PowerPoint presentation.

I often wonder, if calibration were run both before and after a test, what the "offset" might be in terms of accuracy. Surely the hardware becomes decalibrated during use, to some extent?

Paul Olyslager said...

I was looking up some heatmap applications lately to experiment with them and found your post along the way. I've put your link in one of my posts because I thought it would be interesting to show my readers a bit about actual eye tracking, instead of click tracking. Looking forward to your second part.