Imagine walking into a classroom that should be buzzing with conversation, but instead the only sound is the soft, nervous clicking of keyboards. Students hunch over laptops, their eyes flicking between their screens and the doorway, tense and alert. Some are typing up original work, putting their best effort into drafting a paragraph and sharpening a thesis. Others have ChatGPT open, hidden among their many tabs, generating everything from outlines to entire essays in seconds. Not everyone is using AI, though nearly half of the room is. Their silence isn’t focus. It’s pressure: the immense pressure to keep up, to produce new ideas and unique work in a world where a machine can write faster than any human ever could.
Across the Kingswood Oxford community, that tension has become part of the daily atmosphere, especially in humanities classrooms. The rise of AI isn’t just changing students’ homework habits; it’s reshaping how students learn to think and write. For teachers whose work centers on developing those skills, the shift in AI use over the past couple of years has been dramatic.
In conversations, three KO humanities teachers—two English teachers and one history teacher—each described a landscape that has evolved faster than almost any previous educational trend. What began as curiosity a couple of years ago has become a quiet but constant force, influencing how many students complete (or don’t complete) their work. As AI becomes harder to ignore, teachers are asking new questions: How do you teach writing and analysis when a machine can instantly do both? How do you discourage shortcuts without overwhelming students? How do you embrace AI’s potential as a resource without losing the skills education is supposed to build?
All three teachers agreed that AI appears most often in the moments when students feel the greatest pressure, most commonly on longer writing assignments, where many students turn to AI at home to complete major components of the work.
English teacher Cameron Biondi was among the teachers who identified this pattern of AI usage, noting that it was obvious from the beginning: longer take-home assignments were the perfect opening for AI misuse.
Through his experience in the classroom, Mr. Biondi realized that students weren’t just using AI to write entire pieces. Many used it to outline, revise, or simply shape ideas they hadn’t fully formed themselves. In response, Mr. Biondi and other teachers decided this year to shift toward more spontaneous in-class work. “We’re doing more in the classroom…multiple in-class writings,” he said. When students have to think on the spot, it becomes much harder to outsource the process.
Another strategy used by multiple teachers, including Mr. Biondi, is having students explain their choices out loud, making their reasoning part of the assessment. “You write it, and you have to defend it out loud,” Mr. Biondi explained. The approach lets students write and think at home, away from in-class pressure, while ensuring they understand what they wrote and can support it.
One teacher who has witnessed similar patterns in student AI usage is History Department Chair David Baker. He agreed that he most commonly sees AI used on essays and other writing pieces, acknowledging that the pressure to perform well often pushes students toward AI-generated assistance. Even so, he refuses to rely on AI detectors, which he believes can be inaccurate and unfair. Instead, he has redesigned assignments so that AI can’t easily produce a response that meets the requirements. “If I require you guys to use certain sources from certain databases…most AIs don’t have access to those,” he explained. This forces students to interpret information themselves rather than asking bots like ChatGPT to do it for them.
Like Mr. Biondi, Mr. Baker has also incorporated oral components into assignments. “The best way to know if kids actually know what they’re writing about is to have them talk about it,” he argued. If a student can explain their own thinking clearly, he’ll know they’ve done the work.
Despite the lengths teachers go to in counteracting AI misuse, none of them believes in banning AI from the classroom. In fact, all three shared an interest in experimenting with how AI might be used responsibly. Some even use AI to help create assignments. “I actually asked AI for help in trying to AI-proof my assignments—which is hysterical to think about,” Mr. Baker said, noting the irony.
But his goal isn’t total prohibition. He plans to design an assignment for this spring in which students use AI as a brainstorming partner rather than a ghostwriter. “A writing assignment where AI is a tool,” he explained, “where you guys will actually get to use AI as a thought partner.”
English teacher Megan Hilliard, like Mr. Baker, incorporates AI fairly frequently, though mostly in her planning. “I definitely use AI as a thought buddy,” she said, especially while developing assignment ideas for new courses. Ms. Hilliard believes AI can be particularly helpful for teachers in generating ideas, organizing structure, and refining unit content.
She has also incorporated AI directly into a student assignment. Last year, her junior class tested AI’s creative ability by having it write raps in the style of their favorite artists. The project was a hit, and many students enjoyed the unique format. She plans to keep involving AI in the classroom; next semester, she hopes to design an assignment in which her senior class works with ChatGPT in a Socratic-style dialogue, aiming to develop ideas without being handed answers.
All three teachers have had extensive experience with AI in their classrooms, both fighting student misuse and trying to incorporate it thoughtfully into assignments. They know some students will always be tempted to use AI, and they are working to address that temptation. They have also drawn similar conclusions about why students become so reliant on AI: not because students are lazy, but because of the pressure and insecurity they feel, especially among younger students. “Younger students might feel they don’t have as many of the skills yet… so they might be more tempted to rely on it,” Mr. Biondi explained.
“I see it more with sophomores,” Mr. Baker said, agreeing with the trend Mr. Biondi noticed. Though older students use AI too, he said, younger students often feel more overwhelmed. Heavy workloads, extracurriculars, and tight deadlines can make AI feel like a lifeline.
Ms. Hilliard believes the normalization of AI may also stem from a misunderstanding of the plagiarism involved in AI use. “Because you’re importing prompts into a machine…it doesn’t feel quite the same,” she explained. The moral barrier feels lower; because students aren’t copying from a classmate, it doesn’t register as plagiarism. That misunderstanding only adds to the growing struggle to regulate AI, as students don’t grasp the full weight of their choices.
Still, these teachers see real dangers that students overlook. “We’re not going to know anything,” Mr. Baker said. “We’d be like the people on the spaceship in WALL-E.” He stressed the threat of students losing the ability to think for themselves and form their own opinions.
Mr. Baker also points out flaws in AI that many overlook. “It’s been developed by someone who has an opinion,” he explained. “Naturally it is going to have an opinion.” Sharing Mr. Baker’s concern, Ms. Hilliard raised another requirement for working with AI. “Making sure it isn’t making stuff up or being biased,” she said. “Those are the two greatest dangers of AI.” She wishes students had learned ethical AI use earlier in their education so that teachers could introduce more advanced applications sooner. Without that foundation, misuse simply becomes too easy.
She also worries that students overlook AI’s larger societal risks, including its environmental impact, its spread of misinformation, and its reinforcement of stereotypes and systemic biases. To her, teaching about AI matters precisely because she knows what lies beneath the surface of these tools, and the dangers that come with using them. Even so, none of the teachers wants to remove AI completely. They see a future where it becomes a tool students use thoughtfully, not a shortcut they depend on. Each hopes students will learn to collaborate with AI rather than surrender their work to it.
In the end, the future of AI at KO can’t fully be shaped by the teachers or the technology itself. It will be shaped by the choices students make: whether they use AI to expand their thinking or to avoid it, to brainstorm ideas or to evade effort, to stay in control or to let the machine take over.
Teachers are adapting quickly, experimenting with new approaches and working hard to protect genuine learning. But ultimately, students will decide the role AI plays in their education—whether they use it to think, or let it think for them.