When an Apocalypse Isn’t the End of the World: Generative AI in Schools
In his much-shared article “The Homework Apocalypse,” Ethan Mollick opens with a warning: “Not enough educators (and parents) are preparing for the Homework Apocalypse that is coming this Fall.” Given the conversations I see among our members and the larger independent school community, I think he’s probably right, though I’m impressed by much of that conversation.
Mollick advises educators to think beyond trying to define or stop cheating on assignments; statistics have long told us that cheating is rampant among high school and college students. He pushes educators to probe more deeply into the “why” of assigning students to write essays, read complex texts, and solve problem sets. Academic Leaders need to ask faculty questions and lead conversations that better define the purpose of these assignments. Educators need to reach a place where they see AI as a useful partner in supporting students toward learning goals rather than as an enemy to be thwarted in (probably) futile efforts at “catching” AI use.
It’s a great piece, and every educator should read all of it, not just my take, because reading it provides the motivation to tackle what has to be a central question: what does generative AI mean for me, in my role, at my school?
Academic Leaders have to help their colleagues in schools move from curiosity to understanding faster than any of us may really want to. One of the memes I keep seeing and thinking about these days says something like: “You won’t lose your job to AI. You’ll lose your job to someone who knows how to use AI.” Just as we all had to learn more about hosting video calls than we wanted to in 2020, there’s an urgency to learning about generative AI that is being thrust upon us, willing or not. Anyone not retiring in the near future needs to engage in a deep evaluation of classroom work and assignments. In other words, it is time for every independent school educator to become “someone who knows how to use AI.”
How can we become educators who know how to use AI? It’s a three-step process.
First, educators need to understand what generative AI is, how it is similar to and different from the robots portrayed in science fiction, and recognize the ethical dilemmas that widespread generative AI development and use bring to the landscape. Why and how does generative AI work? What harm has this technology already brought with it? What risks do we face?
Next, educators need to become confident in their capacity to assess AI’s capabilities and limitations. This will require diving deeper into what’s possible than many educators have done so far. Just typing a few prompts into ChatGPT isn’t enough. Educators need to explore several tools with colleagues, talk about the results, get past the initial “wow” response, and ask real questions. What will this kind of AI mean in my discipline or grade level? In students’ future careers? How can AI help me complete my professional work more effectively and efficiently? Check out our suggestions here for a curated list of possible sites to use in this kind of exploration.
Finally, educators have to decide what needs to change immediately, what can wait until next month, and what needs to evolve over time. Not adapting to generative AI’s existence is not an option. During our Academic Leaders Forum, small groups discussed a hypothetical but not-far-off scenario that incorporated multiple viewpoints on using AI to create highly personalized learning experiences for students. We’ve updated that scenario for school leaders to use or adapt to encourage conversations about how generative AI might impact multiple aspects of their schools.
Academic Leaders are uniquely suited to tackling this process because they’re experts at diving into pedagogy and curriculum. Educators will need to learn “pedagogical content knowledge specific to AI,” says Daniela Ganelin, co-author with Glenn Kleiman of Understanding AI Technology: An Introduction for Educators. Academic Leaders should pull the school’s mission and guiding documents, such as a portrait of the graduate, into the conversation immediately. Referring back to Mollick: if the goal of reading a complex text is to develop lifelong critical thinking and analytical skills, what does that mean for those who teach these skills? What questions need to be asked? What curricular and pedagogical changes have to be made? At which grade levels do those changes need to happen so that our students develop these skills?
These questions and conversations can’t be resolved in a single one-hour meeting, or even during opening faculty days. This is a challenge to tackle over time, in collaboration, and with iteration. Faculty will benefit from having dedicated collaborative time with teammates, departmental peers, and students (don’t leave out the student voice!) throughout the year to approach this work. Every Academic Leader will need to work with their colleagues to ensure that teachers have this time and that it’s protected and generative.
So that “Homework Apocalypse” I mentioned at the beginning of this essay? The word apocalypse derives from the Greek apokalypsis, which means “something uncovered” or “revealed.” In other words, AI doesn’t have to be the end of the world; it can show us a new path forward.
During the forum, a few of our members shared their AI policies and commented on each other’s drafts. They’ve generously posted those in our member portal, and we invite others to review them and share their own.
Academic Leaders know that talking through tricky situations in the abstract helps teams handle challenges when they actually appear. Our AI scenario is designed to help association members have essential conversations about new challenges, before they become conflicts or crises. Use our scenario to develop your team’s readiness to respond.