Meet the lecturer: Nigel Crook

Each month we chat to a university lecturer about their passion for their subject – and for teaching

A researcher in artificial intelligence (AI) and robotics, Professor Nigel Crook is associate dean for research and knowledge exchange at Oxford Brookes University, where he also founded and directs the Institute for Ethical AI. His research interests include machine learning, embodied conversational agents, social robotics and autonomous moral machines.

Was there a moment when you realised that this subject would be your lifelong passion?

I don’t recall a specific moment, but I know that it was my combined interests in philosophy and computing which propelled me in the direction of artificial intelligence – even though, for most of my undergraduate studies, I had no idea what that term meant. I took an opportunity to do a PhD in the subject, which is the point at which my passion for it began to form.

How has that passion developed over time? Have you gone down different byways – or maintained the same core interests?

I have been something of a nomad within the subject. After my PhD, I began teaching a course on neural networks with a colleague who got me interested in the whole area of machine learning. Later I branched into computational neuroscience and the mathematics of chaotic systems. Then, late in my academic career, I took voluntary severance and accepted a position as a postdoc research assistant in computational linguistics at Oxford University.

This was such a formative time for me. Until then, my interests in AI were mainly focused on developing new algorithms. At Oxford I discovered a passion for AI systems that had a ‘social’ dimension.

Social robotics is the study of robots that can interact with humans in human environments. Photo © Possessed Photography via Unsplash


When I returned to Oxford Brookes in 2011, I created a new Cognitive Robotics Laboratory to work specifically on social robotics. My most recent development was the result of realising that my amateur interest in moral development in a theological context could be combined with my academic interest in social robotics. I then began to work on a subject that really grabbed my passion: autonomous moral machines.

At the same time, ethical AI and the so-called ‘alignment problem’ became of significant public interest. My passion for this area led to the creation of the Institute for Ethical AI.

“For most of my undergraduate studies I had no idea what artificial intelligence meant”

Are you still learning about the subject yourself?

Yes, very much so. My current work involves the use of reinforcement learning to enable a robot to learn how to abide by moral and social norms. I am learning a good deal about hierarchical reinforcement learning, which I think will prove to be important in this area. This need to continually adapt and learn new things is one of the things I love about academia.

Are there ways in which you’d like to see computing and AI taught differently?

Definitely. When I was head of department, I found that the classical one- or two-hour lecture followed by a practical class each week just wasn’t effective. Students struggled to connect the material that they were given in the lecture with the hands-on practice of writing a program.

We got rid of the lectures, and instead taught students in computer rooms using a three-step cycle: mini-teach, demonstration, application. That short three-step cycle transformed our students’ capacity to learn programming.

Where do your most rewarding moments come as a teacher?

I have always loved teaching advanced programming, in particular object-oriented programming. In the early days, I taught a programming language called Eiffel, which is a beautifully designed language and a joy to program in and to teach.

Later, I enjoyed teaching Java for different reasons – including its many quirks and nuances that you had to understand. I very much enjoyed setting the students little problems that revealed some of these quirks, seeing their bemusement when Java didn’t behave as they expected, and then helping them to understand what was really going on. Great fun!
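One classic example of the kind of quirk he describes (our illustration, not necessarily one of Professor Crook’s actual exercises) is Java’s autoboxing cache: small `Integer` values are drawn from a shared pool, so reference comparison with `==` gives different answers depending on the value being boxed.

```java
// Java's integer autoboxing quirk: values in [-128, 127] are cached,
// so == (which compares references) behaves differently for small
// and large boxed values. equals() always compares the actual values.
public class AutoboxQuirk {
    public static void main(String[] args) {
        Integer a = 127, b = 127;   // both refer to the same cached object
        Integer c = 128, d = 128;   // outside the cache: two distinct objects
        System.out.println(a == b);      // true  – same cached reference
        System.out.println(c == d);      // false (typically) – different references
        System.out.println(c.equals(d)); // true  – value comparison
    }
}
```

Students who expect `==` to compare numeric values are reliably bemused by the second line – exactly the sort of “why did Java do that?” moment the problems were designed to provoke.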


Follow Nigel on Twitter @NigelCrook
Further reading: www.ethical-ai.ac.uk

