DS 110: Proceedings of the 23rd International Conference on Engineering and Product Design Education (E&PDE 2021), VIA Design, VIA University in Herning, Denmark. 9th-10th September 2021

Year: 2021
Editor: Grierson, Hilary; Bohemia, Erik; Buck, Lyndon
Author: Dewit, Ivo; Corradi, David; Goossens, Maarten
Series: E&PDE
Institution: Faculty of Design Sciences, University of Antwerp, Belgium
Section: Assessment Methods
DOI number: 10.35199/EPDE.2021.34
ISBN: 978-1-912254-14-9


One issue in current product development literature, for both students and professionals, is the reliable scaling and objective assessment of products. This article describes how Master's students at a university, within a product development case, can use comparative judgement to assess each other, the product development process, and the final product, while achieving the learning goals described at master's level. A growing body of literature supports the notion that comparative judgement can help learners and assessors in different learning and working situations. Comparative judgement asks an assessor to compare two products and rank one over the other. These products can be small-scale, such as a short presentation, a paper or a drawing, or large-scale, such as a master's thesis or a full-fledged solution to a real-life problem. Studies indicate that such comparative judgement can be used during a learning process: learners are taught to think about why product A is better than product B, and learn to articulate why. This thinking process is often referred to as metacognition, whereby the learner attempts to understand all the intricate aspects of why one product is better than another. Compared to rubrics, comparative assessment is usually holistic, whereby the whole is more than the sum of its parts, which suits designs of increased complexity, e.g., product-service systems (PSS). Rubrics divide the product into small aspects: a presentation is subdivided into criteria on layout, on persuasive message, and so on, with each criterion receiving a separate score. Comparative judgement, on the other hand, focuses on the entire product and compares it as a whole.
Comparing multiple products multiple times yields a ranking of products together with a reliability score and an ability score. The reliability score indicates the chance that a different group of assessors would produce a similar ranking; a score above .7 (on a scale from 0 to 1) is considered decent reliability. An ability score refers to the number of times a product has been judged the better (or worse) option compared to a range of other products, and also influences the final ranking. The current study evaluates how first-year Master students (n=72) apply comparative judgement using an online tool, Comproved, during a semester-long group project on PSS design, and compares this to how assessors use it in a product development cycle. Based on post hoc questionnaire details and interviews, we list the advantages and disadvantages of comparative judgement as stated by both students and assessors. Finally, we list the potential of comparative judgement during product development cycles and its impact on product quality, reliability of judgement, and metacognitive strategies for learning.
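The abstract does not specify how Comproved converts pairwise judgements into a ranking with ability scores, but a common approach for this kind of data is a Bradley-Terry-style model, in which each item's ability is estimated from how often it wins its comparisons and against whom. The following is a minimal illustrative sketch (the function name, data format, and iteration scheme are assumptions for illustration, not the tool's actual implementation):

```python
from collections import defaultdict

def bradley_terry(comparisons, iterations=100):
    """Estimate relative ability scores from pairwise judgements.

    comparisons: list of (winner, loser) pairs, e.g. [("A", "B"), ...].
    Returns a dict mapping each item to an ability score (normalised to sum to 1).
    This is a simple minorisation-maximisation iteration, shown for illustration only.
    """
    items = {i for pair in comparisons for i in pair}
    wins = defaultdict(int)          # total wins per item
    pair_counts = defaultdict(int)   # how often each unordered pair was compared
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1

    p = {i: 1.0 for i in items}      # start with equal abilities
    for _ in range(iterations):
        new_p = {}
        for i in items:
            # Sum, over every opponent j that i met, of n_ij / (p_i + p_j)
            denom = sum(
                n / (p[i] + p[j])
                for pair, n in pair_counts.items()
                if i in pair
                for j in pair - {i}
            )
            new_p[i] = wins[i] / denom if denom else p[i]
        total = sum(new_p.values())
        p = {i: v / total for i, v in new_p.items()}  # normalise each round
    return p

# Hypothetical judgements: A beats B twice, A beats C once, B beats C once.
scores = bradley_terry([("A", "B"), ("A", "B"), ("A", "C"), ("B", "C")])
# The resulting ranking (descending ability) is A > B > C.
```

Repeatedly comparing each product against several others, as described above, is what lets the estimated abilities stabilise into a reliable ranking.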

Keywords: design education, product-service systems, assessment reliability, comparative judgement, metacognition strategies

