Mark Dawe, Chief Executive of the OCR exam board, has recently claimed that students should be allowed to use Google during GCSE and A-level exams. Predictably, this intervention met with opposition from educational traditionalists, such as Chris McGovern, Chairman of the Campaign for Real Education, who responded that this is “nonsense […] a dumbing down […] Exams should be about knowledge and understanding […] Therefore we do have to test what children are carrying in their heads.”
We doubt many would dispute that exams are about testing knowledge and understanding. But the real issue is the moral that McGovern seeks to extract from this point: Should exams be only concerned with what children “carry in their heads”? This is admittedly a natural inference to make, but as cutting-edge research—such as the interdisciplinary ‘Extended Knowledge’ project at the University of Edinburgh—indicates, this conclusion may go against a lot of contemporary work within cognitive science.
McGovern wants to focus on what lies within the students’ heads, but should students at least be allowed to use extra-cranial, though still biological, resources, such as moving their hands during the exam? Gesturing, after all, is known to increase when we reason about a problem rather than when we merely describe it, which suggests that gesturing may not only serve communicative purposes but may also play a role in the actual process of thinking itself.
To investigate the impact of gesturing during problem-solving, psychologist Susan Goldin-Meadow (2003) asked two groups of children to memorize a list and then solve a mathematical problem before trying to recall the list. One group was allowed to gesture during the problem-solving task, whereas the other group was asked not to. The outcome was that prohibiting gesturing during problem-solving had a detrimental effect on the subsequent recall task. According to Goldin-Meadow, the best explanation is that gesturing shifts or lightens aspects of the neural processing, thus freeing up resources for the memory task. In other words, even though gesturing does not take place within our heads, it does seem to be an embodied cognitive resource, one we regularly employ when we solve problems.
Of course, no one would suggest that gesturing should be banned during exams. But as philosopher Andy Clark has noted (2007), the twist in the story is that gesturing bears many similarities to using other, non-biological resources for boosting and structuring our ongoing thought processes. For example, when we try to solve a long multiplication problem such as 789 times 987, we normally use pen and paper to externalize the problem in symbols before proceeding to its solution by performing simpler multiplications and noting the results for use in later stages. The overall process involves eye-hand motor coordination, and it is not simply performed within the head of the person reciting the times tables. It involves intricate, continuous interactions between brain, hand, pen and paper.
Long multiplication, in other words, is a complicated cognitive task that most of us would be unable to perform in the absence of extra-organismic props and devices. According to Clark, however, if we are willing to accept that gesturing is an embodied cognitive process that we can bring to bear when problem-solving, then we may also be willing to accept that performing a complicated mathematical task by interacting with pen and paper is a cognitive process that extends beyond our brains to include the relevant external artifacts.
Within contemporary philosophy of mind and cognitive science, this has come to be known as the hypothesis of extended cognition (Clark & Chalmers 1998), with the main idea being that cognitive processing can, under appropriate conditions, literally extend to the devices we interact with. This is undoubtedly a provocative claim, but it is also one that has given rise to a burgeoning research programme in contemporary cognitive science.
To turn back to education, then, the question to ask is: what are the implications of this hypothesis for how we structure exams? Pen and paper seem to be intricately involved in human cognition, and this seems to chime with our intuitions about education. Just as with gesturing, no one would ever ask students to hand in their pen and paper before an exam, and the same may occasionally hold for calculators too. Intuitions, however, are not so clear when it comes to laptops, search engines and online encyclopedias.
So what are the types of extra-organismic processes that can reliably extend cognition and thereby knowledge? And can we really support the claim that knowledge may be occasionally extended, in the sense that the cognitive abilities generating and sustaining it can be extended, along the lines suggested by the hypothesis of extended cognition?
Recent work at the intersection of epistemology with philosophy of cognitive science indicates that this may be a viable possibility indeed (Pritchard 2010; Palermos 2014). Specifically, knowledge seems to necessitate the presence of two general conditions that can be met by both intracranially and extracranially stored information. Firstly, the relevant source of information must be objectively reliable, independently of what the relevant individual thinks about it. Secondly, the individual must be sensitive to telltale signs of malfunctioning, or unreliability, with respect to the source of information.
We might get a better grip on the pedagogic question we are here dealing with by breaking down these two conditions a little further. Starting with the second one first, the idea is that the relevant (intra- or extra-organismic) resource must be integrated into the individual’s intellectual character, such that he or she can be sensitive to any relevant shortcomings. If the resource operates in an abnormal way compared to what the individual is used to taking as its normal functioning, then he or she must be able to spot this and respond appropriately. This is important because, in normal circumstances, when nothing appears to be wrong, it allows the individual to take himself or herself to know the retrieved information by default—even if he or she has never considered whether the relevant resource is in fact reliable or where its reliability comes from.
For example, just as with our visual systems, so in the case of Wikipedia we withhold judgment when something does not appear right; when, for example, the webpage looks like a hoax that is trying to phish our accounts or when the relevant entry appears to have been written offhand. In normal circumstances, however, when nothing seems wrong, we take ourselves to know the delivered information even if we know nothing about the underlying mechanisms that support the reliability of the relevant resource.
Of course, as the first condition requires, the relevant resource must also be objectively reliable, independently of what the individual believes, and this may in fact raise a problem for extended knowledge. For, while we may be in a position to assume that natural selection can ensure the objective reliability of our natural cognitive resources, such reassurance won’t do in the case of extra-organismic tools, such as Internet search engines and online encyclopedias. In such cases, there can be several possibilities of error due to a variety of non-natural reasons ranging from blatant incompetence and gender and ethnic biases to ulterior, commercial motives for accentuating certain pieces of information while downplaying others.
Interestingly, in the case of Wikipedia, this problem has been significantly mitigated by the ‘power of the many eyes’: Wikipedia operates on the basis of an entirely free and open editability policy, which allows anyone to contribute. This, in turn, has the positive effect that individual mistakes and personal preferences cancel each other out. Of course, this is not to deny that lots of work remains to be done in this direction, even for Wikipedia itself, but open authorship and massive collaboration in monitoring the results of search engines and online encyclopedias has so far been very successful, and it definitely seems to be the way to go.
So what does this all mean with respect to the use of search engines during exams? Is this something we want to allow in the future? The response must probably be a cautiously positive one. Knowledge, in principle, does not seem to pick out information that is solely located within our heads. Accordingly, in theory, there is nothing wrong with allowing for such online resources during exams. But in order for any information thus retrieved to count as knowledge, the two conditions that we have set must also be met.
Therefore, we need to do at least two things before we can introduce search engines and similar web-processes during exams. First, we need to teach students how to Google efficiently, in a way that parallels the retrieval of information from our biological resources. Secondly, we must also make sure that online search engines and encyclopedias are at least as reliable as our brain-bound processes. Both of these tasks require careful planning and perhaps a significant reconceptualization of the current curriculum.
But should we in any way be concerned that this is a recipe for making students dumber, as Chris McGovern may fear? From the extended knowledge point of view, it is not at all obvious why being smart should be associated with holding lots of information within the head. What is the point of internalizing a large body of facts that you do not know how to process? And why keep fixating on what is ‘in the head’? Just as Goldin-Meadow’s experiment suggests that gesturing can make us more efficient in performing cognitive tasks, so perhaps a less biased way of thinking about what being smart means is in terms of the ability to use whatever resources (intra-cranial or extra-cranial) are readily available in order to solve any given problem more efficiently.
If that’s true, however, then barring Google during future exams could be just as detrimental and backward looking as asking students to solve a mathematical problem while sitting on their hands. Is this a policy we would like to implement at a time when students learn how to use a tablet or a laptop before they even know how to write? Unlike most of us, today’s students do not simply learn how to use the web. They grow up in it.
About the authors
Dr S. Orestis Palermos is a postdoctoral fellow working for the AHRC-funded ‘Extended Knowledge’ project at the Eidyn Research Centre, University of Edinburgh.
Duncan Pritchard (FRSE) is Professor of Philosophy at the University of Edinburgh and Director of the Eidyn Research Centre, which is based in the School of Philosophy, Psychology & Language Sciences.
References
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.
Clark, A. (2007). Curing cognitive hiccups: A defence of the extended mind. Journal of Philosophy, 104, 163–192.
Goldin-Meadow, S. (2003). Hearing Gesture: How Our Hands Help Us Think. Cambridge, MA: Harvard University Press.
Palermos, S. O. (2014). Knowledge and cognitive integration. Synthese, 191(8), 1931–1951.
Pritchard, D. H. (2010). Cognitive ability and the extended cognition thesis. Synthese, 175(1), 133–151.