Hilary Wilce, journalist and writer specialising in all aspects of education

The Quandary - 05 Feb 2009

Multiple-choice questions can be harder for pupils who read the question carefully or have a deeper understanding of the subject. Surely this is wrong?

Hilary's Advice

Checking boxes is never a great way of testing anything but the most rudimentary understanding of a subject. And you don’t have to look hard to find examples of multiple-choice questions that are ambiguous or confusing.

In a recent ICT exam, for example, pupils were asked to tick the one disadvantage of using a software package, rather than pencil and paper, to work out a budget. They were given four possibilities – the formulae could be wrong, the wrong prices could be entered, a virus could corrupt the information, and multiple printouts could be produced – and the right answer, according to the examiners, was the virus one. But any pupil who stopped to think hard about it could have argued a strong case for two of the items, and even made some sort of case for the other two.

Recent science and maths papers have thrown up similar problems, and although exam question-setters work hard to eradicate ambiguity, it is almost impossible to reduce a complex world to single right answers except in a certain narrow – or shallow – band of knowledge.

These questions are very blunt instruments indeed, and it is no wonder that pupils who are tuned in to complexity and nuance have problems with them. Like so much of our exam and testing industry, the questions have far more to do with helping adults rank and sort children for their own ends than with deep learning.