The Pros & Cons of AI in the Classroom
“Useful Assistant or Answer Machine?” by Sean W. Malone | AI Generated
Few technologies in modern history have entered classrooms as quickly (or as controversially!) as artificial intelligence. In a matter of years, AI tools have gone from research curiosities to everyday utilities capable of generating essays, solving math problems, summarizing books, and answering complex questions on demand. For educators, parents, and institutions, this has created understandable anxiety alongside genuine excitement.
At Lexandria, we take a measured view. AI presents real opportunities to improve educational tools and access, but it also poses serious risks if used carelessly — particularly for students who are still developing foundational intellectual skills. Any serious discussion of AI in education must grapple honestly with both sides of that ledger.
In much of the public conversation, however, that balance is missing.
The Risks of AI in Student Learning
The most significant danger of AI in the classroom is not that students will occasionally misuse it, but that they will abdicate responsibility for learning altogether.
A student who relies on AI to answer math problems does not learn how to do math. A student who submits AI-generated essays without reading the source material or generating original thoughts does not learn how to write — or how to think. In practical terms, this is no different from plagiarism or cheating in any other context. The difference is that AI makes the temptation easier and the misconduct harder to detect.
This concern is not theoretical. A 2023 survey by the Pew Research Center found that roughly one in five U.S. teenagers who had heard of ChatGPT reported using it for schoolwork, with usage increasing among older students. Meanwhile, Turnitin reported in 2024 that millions of student submissions already showed detectable AI-generated content, raising widespread concerns among educators about assessment integrity and the erosion of academic trust.
The underlying problem is not technological. It is developmental.
Education exists to cultivate habits of mind: disciplined reading, structured reasoning, sustained attention, and the ability to articulate ideas clearly. These skills are not acquired by exposure alone. They are built through effort — through grappling with difficult texts, making mistakes, revising arguments, and learning to sit with uncertainty. When AI replaces that struggle rather than supporting it, learning collapses into mere task completion.
This is especially damaging in subjects that depend on cumulative mastery. Mathematics, writing, logic, and critical reasoning cannot be learned passively. Students who bypass the learning process by leaning on AI may appear productive in the short term, but they are quietly hollowing out their own competence. Over time, this produces a widening gap between apparent performance and actual ability — a gap that becomes painfully obvious in higher education, the workplace, or civic life.
Some observers compare this moment to earlier technological disruptions, such as calculators or spell-checkers. But the comparison only goes so far. A calculator assists computation after mathematical principles are understood. AI can replace the entire intellectual process from prompt to final submission. That difference is not trivial. It goes to the heart of what education is meant to accomplish.
Where AI Can Add Legitimate Value
None of this implies that AI has no place in education. When used thoughtfully, it can be a powerful tool — particularly at the instructional and institutional level.
At Lexandria, we use AI extensively, but deliberately. Our guiding principle is simple: AI should increase efficiency and scale in areas where human expertise already exists. It should not replace the thinking that education is designed to cultivate.
In practice, this means using AI to help clarify, clean, and format large volumes of classical and historical text so they can be consistently presented within our platform. It means leveraging AI-assisted coding tools to accelerate improvements to user experience that would otherwise be out of reach for a nonprofit organization. It means generating supplemental imagery that helps maintain student engagement — imagery we would never be able to afford or produce manually at scale.
AI also assists in drafting outlines and preliminary prose for new educational resources, as well as generating first-pass quizzes and assessments aligned to course material. In all such cases, the output is reviewed, revised, and verified by human experts. AI accelerates the process, but it does not determine the final content.
This distinction matters, and it is frequently lost in broader debates.
Much of the confusion around AI in education stems from a failure to distinguish between assistance and substitution. The most important questions are not whether AI is used, but how and by whom.
First, is AI being used to improve efficiency at tasks the user already understands and has mastered, or is it being used to evade the work required to gain that mastery in the first place? A teacher using AI to generate quiz questions from familiar material is fundamentally different from a student using AI to write an essay on a book they’ve never read.
Second, is AI functioning like a research assistant — helping locate sources, organize information, or surface relevant passages — or is it being asked to deliver final answers to complex questions the user does not yet understand? In the former case, AI can support learning. In the latter, it replaces it.
Third, are AI-generated outputs being checked against primary sources, reviewed by knowledgeable humans, and subjected to skepticism, or are they being accepted uncritically?
AI systems are known to fabricate citations, introduce factual errors, and present falsehoods with confidence. Treating them as authoritative is not education. It is a dangerous outsourcing of essential judgment.
These distinctions are often flattened in more didactic discussions of AI in schools, but they are essential. Without them, conversations devolve into either blanket bans or uncritical enthusiasm, neither of which serves students well.
Institutional Responsibility Is Paramount
One common objection deserves direct attention: is it hypocritical for educators to use AI while banning its use by students?
The answer is no, as long as the distinction between assistance and substitution is clearly maintained.
When a teacher uses AI to grade multiple-choice quizzes, this is no different in principle from using a Scantron machine. When a teacher uses AI to brainstorm lesson activities, organize course material, or streamline administrative tasks, the intellectual labor of teaching remains intact. AI is reducing friction, not replacing thought.
By contrast, when a student uses AI to complete assigned work that is explicitly designed to develop their own skills, the AI is doing the thinking for them. These are not parallel uses. They serve entirely different ends.
Education has always involved asymmetry. Teachers have access to tools, experience, and knowledge that students do not yet possess. That asymmetry exists precisely to guide students toward mastery — not to eliminate the need for growth altogether.
For institutions like Lexandria, the responsibility is therefore twofold. First, we must use AI in ways that enhance access, quality, and sustainability without compromising intellectual rigor. Second, we must be explicit about where we draw boundaries, and why.
AI allows us to offer high-quality educational resources at no cost, to iterate quickly, and to reach audiences we otherwise could not. But those benefits are only meaningful if the underlying content remains accurate, demanding, and human-guided.
Every AI-assisted resource we publish is reviewed. Every primary text remains central. Every tool we deploy is evaluated against a simple standard: does this help learners engage more deeply with the material, or does it encourage disengagement?
AI is neither the savior nor the destroyer of education. It is a tool: powerful, flexible, and morally neutral.
Its impact depends entirely on how it is used.
Used poorly, AI will accelerate intellectual atrophy, reduce learning to output generation, and erode trust in academic work. Used well, it can expand access, reduce administrative burden, and allow educators to devote more time to teaching rather than logistics.
The question that matters most is not whether AI is present in the classroom, but whether it is serving as an assistant — or doing the thinking for us.
At Lexandria, we are committed to the former.