The Great AI debate: A force for good?
Posted on 28th November 2024 by Elena Oncevska Ager

I took part in The Great AI debate: A force for good? as part of the ELTons Festival of Innovation 2024, the day before it was announced that Noticing had won the ELTons award for Innovation in the Use of Technology.
The debate was chaired by the British Council’s EdTech Lead Neenaz Ichaporia, and the panel included Senior Researcher and Data Scientist at British Council Mariano Felice, Founder and CEO of Plingo Guy Zaslavsky, and Principal of CIDER International School Neeti Tripathi.
Noa can use Deepgram's transcription API, which allowed me to simply extract a diarized transcript of the debate's audio, which can be found here. I have cleaned up my own responses to the three questions which were asked of me and included them below. Also, as time at the end was limited, I was unable to share all my thoughts in the closing statement, so for that part I have expanded upon the transcript.
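For readers curious about the mechanics, a minimal sketch of pulling a diarized transcript from Deepgram's pre-recorded endpoint might look like the following. This is an illustrative assumption, not Noa's actual implementation; the file name, API-key placeholder and helper names are invented for the example.

```python
import requests

DEEPGRAM_URL = "https://api.deepgram.com/v1/listen"

def fetch_diarized(audio_bytes: bytes, api_key: str, mimetype: str = "audio/mpeg") -> dict:
    """Send audio to Deepgram's pre-recorded endpoint with diarization enabled."""
    resp = requests.post(
        DEEPGRAM_URL,
        params={"diarize": "true", "punctuate": "true", "utterances": "true"},
        headers={"Authorization": f"Token {api_key}", "Content-Type": mimetype},
        data=audio_bytes,
    )
    resp.raise_for_status()
    return resp.json()

def format_transcript(response: dict) -> str:
    """Turn Deepgram's utterance list into 'Speaker N: text' lines."""
    utterances = response["results"]["utterances"]
    return "\n".join(f"Speaker {u['speaker']}: {u['transcript']}" for u in utterances)

if __name__ == "__main__":
    with open("debate_audio.mp3", "rb") as f:  # hypothetical local recording
        result = fetch_diarized(f.read(), api_key="your-deepgram-api-key")
    print(format_transcript(result))
```

With `diarize` and `utterances` switched on, each utterance in the response carries a numeric speaker label, which is what makes a panel discussion like this one readable as a script.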
Q1. What aspects of language education (teaching and learning) can AI help with and how?
When it comes to supporting learning, be it in the context of learning languages or learning teaching, AI can offer up a vast range of personalised, immersive and safe environments so learners can choose their own learning paths and their preferred ways of learning. We know from the literature that these are prerequisites for involved, enjoyable and lasting learning. We learn through our active involvement, through powerful experiences, by exercising our agency in building knowledge. This kind of responsibility is central to learning.
By generating content for us, AI can narrow these knowledge-building processes, positioning us as spectators of knowledge building rather than active agents. So, instead of having AI generate content for us, AI can be more usefully employed to help us generate better, richer content of our own. This is exactly what we do at Noticing Network: helping teachers reflect, plan lessons and investigate their classrooms through dialogue with AI models designed to prompt rich thinking. Non-judgemental dialogue is key to learning. Speaking has the power to reveal confusion in our understanding, to throw us into doubt and to help us revise our thinking, and teachers/mentors cannot always support these knowledge-building processes on an individual basis.
Like in any good pedagogy, the learner needs to remain centre-stage, with the teacher/mentor/AI model facilitating the knowledge building processes, not doing the knowledge building for them. The result is better preparedness to use the language in new ways (for students) or design more engaging learning experiences (for teachers).
Q2. What evidence is there for the efficacy of AI-powered learning solutions? And what evidence should we be looking for?
The evidence I look for whenever I review any use of AI is whether it supports agency. Does it rely on the learner’s active involvement? Do learners feel better prepared as a result? There is increasing evidence of AI successfully supporting learners in developing their productive skills, such as speaking and writing. AI can act as a conversation partner to help with pronunciation, accuracy and/or fluency, while alleviating speaking anxiety.
AI can also support the development of writing by checking learners’ grammar and vocabulary, as well as providing feedback on their writing. Our experience suggests that AI models may not be as ‘honest’ as learners might want them to be: they are trained to please, and this urge might take precedence. As for teachers, AI for lesson planning appears to be rather popular, though what we mean by this can range from ‘fill in a quick form and get an 80% finished lesson plan in minutes’ to what we at Noticing Network are doing, which is to simulate a pre-lesson mentoring discussion. This dialogue is meant to enrich teachers’ thinking by getting them to revise their plans, notice any incompleteness in their thinking or find those important moments of confusion which trigger new thinking, helping them feel better prepared for the challenges of the classroom. My pre-service teachers have particularly appreciated the ‘patience’, the privacy and the validation they get when using Noticing.
There is also some evidence for the benefits of AI-supported collaborative learning; indeed, having an AI as an additional discussant (like in Zoom) can enrich conversations, while role-modelling productive group engagement. AI now makes it possible for teachers to design adaptive gamified environments, which can lead to yet more immersive learning.
Q6. Do teachers and learners need to worry about the ethics of AI? How is this relevant to them?
Oh, absolutely we should be concerned about how ethical our engagements in society are, and our engagements with AI are no exception. To provide context for my discussion of ethics, I would like to share where I’m coming from values-wise, paraphrasing Dr Jan McArthur of Lancaster University: ideally, education would prepare students to engage meaningfully, agentically (proactively) and compassionately with various aspects of their social life. Learners would make positive social contributions which support their own and others’ wellbeing. Ideally, teachers would provide role-modelling for such engagement with society.
So, yes, I am worried about students passing off AI-generated essays as their own, because it means that they have missed out on opportunities to actively engage in planning, drafting and discussing their ideas, depriving themselves, in the process, of that joy of learning, of that satisfaction of a job well done, which is central to wellbeing. Equally, I’d be worried about teachers being OK with 80% of their lesson planning being done for them, especially as I consider lesson planning to be central to the teaching vocation. Carefully designing engaging learning tasks for your own specific, unique group of students is a privilege, and an opportunity to demonstrate care. I suspect that few teachers openly talk to their students about co-authoring lesson plans with AI, perhaps because the morality of it is unclear. A good test for integrity would be to ask ourselves the question: Would it be acceptable for another human being to do this task for me?
So, integrity is a key ethical consideration. Considering whether a specific use of AI prepares us to be productive and compassionate members of society is another. There are yet more ethics-related concerns, which I’ll just gloss over now in the interest of time: Is data stored safely and privately? Does everyone have equal access? Are some cultures (linguistic or otherwise) favoured? Is AI-generated content used responsibly, to make a positive impact in society? Are people engaging with AI treated compassionately and with respect, i.e. as whole persons and not just numbers? Does the use of AI facilitate meaningful human connection? What is the environmental impact?
Closing statement
Instead of being tempted by ‘super quick, easy and convenient’ ways to ‘save time’, and in the process inadvertently replacing ourselves with AI, we are better off considering ways in which we can use this powerful tool agentically, making a positive contribution to society which improves our own and others’ wellbeing. There is a symmetry between the rich experience of a learner engaging in a carefully thought-out, AI-accompanied learning experience and that of a teacher collaborating with AI to prepare that experience for the learner. Both the learner and the teacher are likely to emerge from such collaborative processes with AI better prepared and more confident to respond to similar challenges in the ‘real’ world because of their personal investment in the process. They own their knowledge, unlike the student submitting an AI-generated essay or the teacher using an AI-generated lesson plan.
Such active collaboration with AI is advocated in the literature. In their manifesto for ‘thoughtful integration of AI in education’, Urmeneta and Romero (2024) argue for “collaboration between human intelligence and AI”; Akata et al. refer to this as “hybrid intelligence” (HI). So, instead of a content generator, AI can act as an interlocutor, an intellectual dialogic partner, a sounding board to help extend our ideas while supporting our agency and authenticity. We need to copy and paste with care.
We hope Noticing is one step towards this ideal, and just one of many yet-to-be-conceived examples of using generative AI in ethical, non-generative, non-replacing ways, with humans and AI playing to each other’s strengths, like true collaborators (Wu et al., 2021). So, is AI a force for good? Absolutely, it can be, provided we use it ethically: to support learner-centred pedagogies, to help develop agency, and to create safe, non-judgemental spaces for meaning-making, hopefully in collaboration with AI (HI-style) going forward. And provided people are empowered to see through the fog of shiny, attractive AI offerings that promise but don’t deliver on these ideals. Only then can we emerge from our collaborations with AI feeling better prepared and more human.
Written by Elena Oncevska Ager
Elena Oncevska Ager is Full Professor in Applied Linguistics at Ss Cyril and Methodius University in Skopje, North Macedonia. Her work involves teaching English for Academic Purposes (EAP) and supporting the development of English language teachers, in face-to-face and online contexts. Her research interests revolve around EAP and language teacher education, with a focus on mentoring, group dynamics, motivation, learner/teacher autonomy and wellbeing.

Elena is particularly interested in facilitating reflective practice in its many forms, including through the arts and through AI. Her investigations are designed to inform her practice of supporting learning and teaching.