Distributed usability evaluation: enabling large-scale usability evaluation with user-controlled instrumentation
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Distributed usability evaluation: enabling large-scale usability evaluation with user-controlled instrumentation. / Christensen, Lars; Frøkjær, Erik.
Proceedings of the 6th Nordic Conference on Human-Computer Interaction: extending boundaries. Association for Computing Machinery, 2010. p. 118-127.
RIS
TY - GEN
T1 - Distributed usability evaluation
T2 - 6th Nordic Conference on Human-Computer Interaction
AU - Christensen, Lars
AU - Frøkjær, Erik
N1 - Conference code: 6
PY - 2010
Y1 - 2010
AB - We present DUE (Distributed Usability Evaluation), a technique for collecting and evaluating usability data. The DUE infrastructure involves a client-server network. A client-based tool resides on the workstation of each user, providing a screen video recording, microphone input of voice commentary, and a window for a severity rating. The idea is for the user to work naturalistically, clicking a button when a usability problem or point of uncertainty is encountered, to describe it verbally along with illustrating it on screen, and to rate its severity. These incidents are accumulated on a server, providing access to an evaluator (usability expert) and to product developers or managers who want to review the incidents and analyse them. DUE supports evaluation in the development stages from running prototypes onwards. A case study of the use of DUE in a corporate environment is presented. The study indicates that the DUE technique is effective in terms of low bias, high efficiency, and clear communication of usability issues among users, evaluators and developers. Further, DUE supports long-term evaluations, making empirical studies of learnability possible.
KW - Faculty of Science
KW - automation
KW - beta test
KW - case study
KW - distributed
KW - evaluator effect
KW - instrumentation
KW - learnability
KW - remote
KW - screen video
KW - software industry
KW - think-aloud
KW - usability evaluation
KW - voice commentary
DO - 10.1145/1868914.1868932
M3 - Article in proceedings
SN - 978-1-60558-934-3
SP - 118
EP - 127
BT - Proceedings of the 6th Nordic Conference on Human-Computer Interaction
PB - Association for Computing Machinery
Y2 - 16 October 2010 through 20 October 2010
ER -
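
The abstract describes DUE's core flow: when a user hits a usability problem, the client tool captures a voice comment and a screen-video clip, the user rates the incident's severity, and the incident is uploaded to a server where an evaluator can review it. As a rough illustration of what such a client-side incident record and its serialization for upload might look like, here is a minimal Python sketch; all names, fields, and the JSON format are assumptions for illustration, not taken from the paper's implementation:

# Hypothetical sketch of a DUE-style client-side incident report.
# Field names and the payload format are illustrative assumptions.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class UsabilityIncident:
    user_id: str        # which participant reported the incident
    severity: int       # user-assigned rating, e.g. 1 (minor) to 5 (blocking)
    comment_audio: str  # reference to the recorded voice commentary
    screen_clip: str    # reference to the screen-video segment
    timestamp: float    # when the user clicked the report button

def serialize_incident(incident: UsabilityIncident) -> bytes:
    """Serialize an incident for upload to the evaluation server."""
    # A real client would POST this payload to the server that
    # accumulates incidents for evaluators and developers to review.
    return json.dumps(asdict(incident)).encode("utf-8")

if __name__ == "__main__":
    incident = UsabilityIncident(
        user_id="user-42",
        severity=3,
        comment_audio="clips/user-42/0017.wav",
        screen_clip="clips/user-42/0017.mp4",
        timestamp=time.time(),
    )
    print(serialize_incident(incident))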