By Dr Wayne Holmes, Lecturer in Learning Sciences and Innovation, Institute of Educational Technology, The Open University
Artificial Intelligence (AI) seems to be rarely out of the news. In fact, AI is fast becoming an integral and inescapable part of our daily lives: from Siri to self-driving cars, from forecasting stock movements to predicting crime, from face recognition to medical diagnoses. At the same time, but with less fanfare, AI has also quietly entered the classroom.
Whether students, teachers, parents and policy makers like it or not, ‘intelligent’, ‘adaptive’ and ‘personalised’ learning systems are increasingly being deployed in schools and universities around the world. Meanwhile, the tech giants Amazon, Google and Facebook are investing millions of dollars in developing Artificial Intelligence in Education (AIED) products, joining well-established, multi-million-dollar AIED companies such as Knewton. In fact, by 2020, AIED is predicted to become a market worth almost €1 billion.
The ethics of AIED – an unknown quantity
However, all of this AIED research, development and deployment is taking place in a moral vacuum. Around the world, virtually no research has been undertaken, no guidelines have been provided, no policies have been developed, and no regulations have been enacted to address the specific ethical issues raised by AIED. And, while the use of artificial intelligence techniques such as neural networks and machine learning in education might not be as newsworthy as ‘killer robots’, the impact on students and the consequent implications for future society are profound.
In fact, AIED techniques raise numerous self-evident but as yet unanswered ethical questions (only some of which are even partially covered by GDPR). To begin with, concerns exist about the large volumes of data collected to support AIED, such as the recording of student competencies, emotions, strategies, and misconceptions. Who owns and who is able to access this data? What are the privacy concerns? And who should be held responsible if something goes wrong?
It’s not just about data
However, while data raises major ethical concerns, AIED ethics cannot be reduced to questions about data. Other issues include the potential for bias (conscious or unconscious) incorporated into AIED algorithms and impacting negatively on individual students. But these particular AIED ethical concerns, centred on data and bias, are the ‘known unknowns’. What about the ‘unknown unknowns’, the ethical issues raised by the field of AIED that have yet to be even identified?
Where AIED interventions target behavioural change (such as by ‘nudging’ individuals towards a particular course of action), the entire sequence of AIED-driven activities also needs to be ethically grounded. Other AIED ethical questions include:
- What are the criteria for ethically acceptable AIED?
- What are the AIED ethical obligations of private organisations (developers of AIED products) and public authorities (schools and universities involved in AIED research)?
- How might schools, students and teachers opt out from, or challenge, how they are represented in large datasets?
- What are the ethical implications of not being able to easily interrogate how AIED decisions (made using multi-layer neural networks) are reached?
It is also important to recognise another perspective on AIED ethics: the ethical cost of inaction and failure to innovate must be balanced against the potential for AIED innovation to result in real benefits for learners, educators and educational institutions.
An opportunity to address the challenges
As this brief summary illustrates, there are multiple issues with profound ethical implications. Yet, despite the 40-plus year history of AIED, there has been virtually no engagement with the ethics of AIED. This is why the Open University research group openAIED is holding a workshop, Ethics in AIED: Who Cares?, at AIED 2018 – the 19th International Conference on Artificial Intelligence in Education, taking place in London this month.
The workshop is an opportunity for researchers exploring ethical issues critical to AIED to share their research, identify the key ethical issues, and map out how to address the multiple challenges facing us all. The aim is to establish a basis for the meaningful ethical reflection necessary for innovation in the field. The workshop will also include a discussion led by Professor Beverly Woolf from the University of Massachusetts Amherst – one of the world’s most accomplished AIED researchers.
We hope our workshop will drive an important conversation about AIED ethics, moving the research community towards meaningful ethical reflection needed for vital innovation in the AIED field.