I have recently completed my PhD and secured a position as a senior learning adviser at Edith Cowan University, focusing specifically on AI literacy. Artificial intelligence has been my passion for the past 6.5 years—well before generative AI became a fixture in our educational landscape. During this time, the ‘technological terrain’ has shifted dramatically.
Since starting in this new role, I’ve had the opportunity to focus exclusively on AI in education. While my PhD research centred on AI, I was previously juggling multiple roles that didn’t afford me the same depth of reflection I’m experiencing at ECU. Here, I am also surrounded by curious minds who challenge my thinking with questions and resources, some of which I am bringing to you in this post.
Last week, a colleague shared an article by Jason Lodge drawing an analogy between e-bikes and generative AI. I initially found the comparison brilliant until I read Joanna Kai’s thoughtful complement to the article. These contrasting perspectives prompted me to examine my own struggle to maintain a positive outlook on AI as a beneficial, or at least purely beneficial, educational tool.
AI is Not Just Another Technology
I’ve never viewed AI as a saviour, but my perspective was further challenged when listening to a podcast I’ve followed for years. AI in Education is a podcast that features Dan Bowen (a Microsoft employee) and Ray Fleming (more connected to industry than education). Ray suggested that educators’ reluctance to incorporate AI mirrors historical resistance to other technologies. He claimed we’ve seen this pattern before with Google—when teachers feared it would do students’ homework—and with calculators.
Not the calculator analogy again! We must stop using these simplistic comparisons that minimise AI’s capabilities and invalidate educators’ legitimate concerns. Google could never complete a student’s homework in its entirety, but generative AI can. It can complete assessments and even create (or complete) entire courses, as demonstrated by tools like OpenAI’s Operator, as Leon Furze shows us. Could we compare AI more to a knife than a calculator? I will explain why. Bear with me.
While Jason and Joanna offer insightful analogies comparing generative AI to “e-bikes for the brain,” equating this technology with calculators and search engines is fundamentally misleading. AI is here to stay—we all acknowledge this reality. However, from the beginning, this technology has been imposed upon teachers with minimal consultation. Now, educators are tasked with evaluating whether these tools enhance educational experiences (Brazil et al., 2025) without adequate preparation for making such assessments. The pervasive notion that AI is “just like a calculator” is not only incorrect but harmful.
AI literacy: The Real Impact on Educators and Students
My role involves helping staff and students use AI responsibly in their academic pursuits. Daily, I encounter individuals experiencing anxiety about the future. They fear their limited understanding, their inability to keep pace with rapid technological changes amid numerous other responsibilities, and the potential risks to themselves and their students. Though I strive to maintain optimism about AI, I recognise that it represents something far more complex than a calculator.
I am encouraged by the ways other institutions are supporting their communities in developing AI literacy. While the adoption of calculators in education followed a relatively uniform path, the integration of AI presents a much broader range of applications and challenges. This diversity means that there is no single, universal approach to establishing best practices for AI use; instead, institutions must tailor their strategies to fit specific needs and contexts.
For instance, Messri and Crockett provide practical, step-by-step guidance for implementing AI tools in educational settings. Their recommendations closely align with those of Jonathan Brazil and others: they emphasise the importance of clear policies and ongoing support for effective AI integration, offering a clear path to follow.
These studies illustrate how institutions can draw on established frameworks while adapting to the unique opportunities and challenges AI presents. Hillary Wheaton, in her podcast for AARE Technology SIG, highlights RMIT’s impressive initiatives to support staff and students—approaches I’ve considered adapting.
We Need Better Metaphors and More Caution
While some contribute meaningful analogies and share effective practices, we should abandon comparisons between generative AI and car engines “that you don’t need to understand to operate,” calculators, or search engines. Generative AI is unprecedented in our educational history.
I’m concerned about the aggressive advocacy urging educators to “play” with these tools and “give them a try” without proper understanding. Would we encourage someone to drive a car before obtaining a license, understanding traffic rules, or demonstrating competence? Why, then, are we expected to implement tools without comprehending them? Why is there such hostility toward cautious educators and students? How can we ask users to evaluate these tools when they lack the literacy to do so?
AI literacy: Taking a Measured Approach
I believe we need to proceed cautiously. E-bikes can be dangerously fast, especially when modified. While they offer valuable benefits for certain users (allowing less physically able individuals to access previously unreachable places), each case warrants careful consideration. Similarly, AI offers tremendous potential benefits (such as supporting children with special needs). But it can also generate the anxiety and harm I’ve described.
I wish I could offer a more unconditionally positive perspective. My analogies rarely cast AI as an unambiguous hero. Instead, I see it more like a knife—a tool that can spread butter on bread or harm others, depending on how it’s wielded.
Educators’ concerns shouldn’t be minimised, and our analogies should accurately reflect the uncertainty that characterises AI’s current role in education.

Juliana Peloche is a senior learning adviser at Edith Cowan University specialising in AI literacy. With a doctorate in AI in education from the University of Wollongong and over 20 years of teaching experience across Brazil, Chile, and Australia, she bridges emerging technologies with educational practice. Her research on stakeholder perceptions of AI uniquely positions her to guide institutional adaptation to technological change. At ECU, she leads initiatives to enhance AI literacy among faculty, staff, and administrators, drawing on her extensive experience in institutional leadership and policy development.
This article was originally published on EduResearch Matters. Read the original article.