A Usable Evaluation Tool for Designers

DS 69: Proceedings of E&PDE 2011, the 13th International Conference on Engineering and Product Design Education, London, UK, 08.-09.09.2011

Year: 2011
Editor: Kovacevic, Ahmed; Ion, William; McMahon, Chris; Buck, Lyndon; Hogarth, Peter
Author: Woodcock, Andree; Fielden, Simon; Bartlett, Richard
Series: E&PDE
Section: Design Teaching Environment 3
Page(s): 690-695


Evaluation is an essential yet largely overlooked component in design education. Although a more user-centred, inclusive approach to design is now advocated, practising designers may not have been trained in the most appropriate ways to evaluate their designs. Reasons for this may include the lack of resources and time available in the curriculum, lecturers' lack of experience of evaluation methods, and a curriculum that emphasised design at the expense of evaluation. Without such evaluation, iterative design may only be informed by internal critical review. With a wider understanding of diversity, the need to design for an increasingly wide range of users (such as those with Specific, Critical, Additional Needs (Scott, 2010) and the older elderly) and to design technology-based products, there is a greater need to understand user requirements and to evaluate products with representative end users. Ergonomists and design organisations have tried to understand the problems of designers and to create tools and methods that remove the barriers to more robust evaluations (Woodcock, 2001). However, a survey of SME developers of assistive technology products showed that developers needed support in selecting the most appropriate evaluation methods: they may not have had much previous experience of evaluation, relied on a limited set of evaluation methods (such as focus groups), and were dependent on third parties gathering information for them. Based on previous experience of developing paper- and computer-based design support tools and of teaching research methods courses to designers, a decision support system has been developed to guide the designers of assistive technology products in the selection of the most appropriate evaluation methods. Through answering a series of questions related to the usage context, the users of the product, the stage of the design process and resource availability, a profile of the required evaluation is built up.
This profile is used to rank 45 commonly used evaluation methods. Initial testing with previously evaluated products has shown that the system generates similar (valid) methods to those actually used, and that through answering the questions the SMEs learn more about the nature of the evaluation they are seeking to conduct. With little time overhead and an online help system describing the methods, it is hoped that the toolset will increase the validity and rigour of product evaluation, at whatever stage it is conducted. As a teaching and learning tool, the toolset can be easily integrated into design-related activities to help new designers understand the issues that need to be considered. The use of the toolset could therefore replace current approaches to the teaching of research and evaluation methods.
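The profile-and-rank idea described above can be sketched in code. The method names, attributes and scoring weights below are illustrative assumptions, not the paper's actual questionnaire or its database of 45 methods: answers to context questions form a requirements profile, and each candidate evaluation method is scored by how well its attributes match that profile.

```python
# Hypothetical sketch of a decision support ranking. Each evaluation
# method is described by the contexts it suits (attribute names are
# assumed, not taken from the paper).
METHODS = {
    "focus group":      {"stage": {"concept"},            "users_present": True,  "cost": "low"},
    "usability test":   {"stage": {"prototype", "final"}, "users_present": True,  "cost": "high"},
    "heuristic review": {"stage": {"prototype"},          "users_present": False, "cost": "low"},
    "field trial":      {"stage": {"final"},              "users_present": True,  "cost": "high"},
}

def rank_methods(profile, methods=METHODS):
    """Score each method against the answers in `profile` and return
    method names sorted from best to worst match."""
    def score(attrs):
        s = 0
        if profile["stage"] in attrs["stage"]:
            s += 2  # matching the design stage is weighted most heavily
        if profile["users_available"] == attrs["users_present"]:
            s += 1  # whether representative users can take part
        if profile["budget"] == attrs["cost"] or attrs["cost"] == "low":
            s += 1  # low-cost methods remain feasible on any budget
        return s
    return sorted(methods, key=lambda m: score(methods[m]), reverse=True)

# Example profile: a prototype-stage evaluation with no end users
# available and a low budget favours an expert-based method.
ranking = rank_methods({"stage": "prototype", "users_available": False, "budget": "low"})
```

In the real toolset the profile is built from a longer series of questions and the ranking covers 45 methods, but the principle is the same: the questions make the evaluation requirements explicit, and the scoring makes the method recommendation transparent.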

Keywords: Assistive technology, usability testing, evaluation support, ergonomics

