ACM Distinguished Speakers Program:  talks by and with technology leaders and innovators

Measuring Users' Experiences: Retrospect and Prospect

Speakers: Effie Lai-Chong Law
Topic(s): Human-Computer Interaction, Measurement & Evaluation


In 2011 I was exploring the issue of the measurability of users' experiences, inspired by my collaboration with a group of HCI researchers to analyse several fundamental issues - definitional, theoretical, methodological, and practical - pertaining to the field of User Experience (UX).  Nonetheless, the inquiry into whether users' experiences can be measured in a meaningful, valid and reliable way received mixed responses from the HCI community.  Some found the inquiry worthwhile to pursue, given the goal of identifying standardized UX metrics, which were (and still are) lacking.  Others dismissed the inquiry as irrelevant, interestingly on two opposite lines of argument. One claim is that experiences, affects and emotions have been measured for decades in psychological research, and therefore the measurability of users' experiences should not be a concern. The other claim is that quantifying users' experiences or affective responses, or assigning them numeric values, is essentially meaningless, because the implications one can draw from such values, especially for product design, are limited.  These controversies boil down to a basic question: what do we talk about when we talk about "User Experience", or its acronym "UX"?  Pedantically speaking, different linguistic expressions of the two words can connote different things - (user experience) vs. (users' experiences) vs. (user-experience) - but dissecting their nuances is another level of debate.  Nevertheless, in resonance with the views of some other UX researchers, I refer to UX as a field whose core is users' emotional responses in connection with interactive technologies.

In this talk I will revisit the definitional problem of UX that was surveyed in 2009.  Since then, several studies have looked into this problem; it is intriguing to review the new insights thus gained to shed light on the issue of measurability. Apart from this conceptual perspective, I will address the issue from a practical perspective, namely how psychophysiological measurements have increasingly been deployed as UX metrics, though sceptics remain.  Clearly, such measures alone are not enough to infer emotional responses, which need to be subjectively interpreted by the users who are experiencing or have experienced them.  Questionnaires are arguably one of the most cost-effective means of capturing such subjective data, but there is a plethora of options. Which one is optimal for evaluating a specific application is disputable; the number of items (length) seems a key criterion in industrial settings, whereas the number of constructs (content) seems more relevant in academic settings. In addition, various qualitative methods such as interviews, ethnographic observations, and narrative-based analysis yield data that can be converted into numeric measures, say, with data mining techniques. Such processes can lead to deeper insights into users' experiences of the interactive system in question. While this depth is valued by academic researchers, the time required can be a barrier in real-life practice.  Despite, or given, these recurrent research-practice gaps, will standardized UX metrics, comparable to the three prototypical usability metrics (effectiveness, efficiency, and satisfaction), eventually be developed? Is this endeavour inherently unfeasible, or does it hinge on something critical yet to be discovered?
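As one minimal illustration of turning qualitative data into numeric measures, the sketch below scores short interview excerpts against a hand-crafted affect lexicon. The lexicon, excerpts, and scoring rule are hypothetical examples chosen for this sketch, not material from the talk; real UX studies would use validated instruments or more sophisticated text-mining pipelines.

```python
# Illustrative sketch: lexicon-based scoring of interview excerpts.
# The word lists and excerpts are hypothetical, not data from the talk.

POSITIVE = {"enjoyable", "intuitive", "delightful", "smooth", "fun"}
NEGATIVE = {"frustrating", "confusing", "slow", "annoying", "clunky"}

def affect_score(excerpt: str) -> float:
    """Return a score in [-1, 1]: +1 all positive, -1 all negative,
    0 when the excerpt contains no lexicon words."""
    words = [w.strip(".,!?").lower() for w in excerpt.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

excerpts = [
    "The new menu was intuitive and fun to explore.",
    "Checkout felt slow and frustrating.",
    "I just used it to book a ticket.",
]
print([affect_score(e) for e in excerpts])  # prints [1.0, -1.0, 0.0]
```

Even this toy version shows the trade-off discussed above: the numeric summary is cheap to compute and compare across participants, but the nuance of the original narrative is lost unless the qualitative material is revisited.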

With a systematic retrospect, I hope to contribute to the prospect of making this endeavour realizable.


About this Lecture

Number of Slides: 45
Duration: 60 minutes
Languages Available: English
Last Updated: 12-07-2015