Insider Spotlight: Genna Sarnak
Welcome to the Insider Spotlight section, where we feature real questions from our team and answers from educators who are making a difference teaching news literacy. This month, our featured educator is Genna Sarnak from Northfield, Massachusetts, where she teaches digital media literacy to middle school students.
Q: When did you first decide that it was important to integrate AI topics into your teaching?
A: I teach my students how to think critically about the world, from understanding digital footprints and misinformation to reading laterally and finding credible sources. I hope that they leave my class empowered with a toolkit to navigate the often challenging digital landscape around them.
It quickly became apparent at the beginning of the school year that it was essential to integrate AI into the curriculum. Adults are using AI, including many teachers. Whether we like it or not, students are encountering and using AI as well. I started incorporating it in small ways at first with short quizzes, and students were receptive to them. From there, we moved on to full lessons and practicing using AI chatbots.
Q: How have students responded to learning about AI? Are they generally curious, skeptical, excited?
A: Overall, students are very interested in learning about AI! When we started the “AI or Not?” lesson, students completed the K-W-H-L chart and, not surprisingly, responses were all over the board. Some wrote that they heard “AI will take over the world,” that it’s “dangerous,” “will steal your information,” “replace jobs,” or that it’s just generally “bad.” Through the explicit lessons, we’ve been able to explore and break down how AI works (understanding training data and algorithmic bias), the potential pros and cons, the future of AI, and how students themselves can use algorithms to their advantage.
For example, we’ve practiced how to write effective prompts in generative AI tools (like ChatGPT), then reflected on the answers and improved them with a second prompt. Tasks like this illustrate appropriate uses of generative AI for students, while also explicitly teaching them how to use keywords effectively (for example, asking the tool to cite sources or to reply with a specific audience in mind).
Q: What are key takeaways students have from the “Introduction to Algorithms” lesson on Checkology?
A: Students came away empowered, understanding what an algorithm is, how algorithms work, and how to use them to their advantage. This included how to generate the most impartial and least biased search engine results, curate their newsfeeds, engage with and identify credible sources, and reflect on their own biases. The lesson helped students understand how algorithms can serve them. Students also came to understand how training data can be flawed and, as a result, biased. I’d say the overall result is that most students now have a better grasp of what AI is, how to determine whether an image or text is generated by AI, how to fact-check sources when they’re unsure (and what factors to look for), and how AI tools are changing the nature of evidence for claims.
Q: What is one memorable example of a student having an “aha moment” related to AI?
A: It was really interesting to hear students’ perspectives on the open-ended ethical questions that I posed, such as, “Is it okay for AI to make decisions about who to hire based on job-seekers’ resumes, or for doctors to use it when making a diagnosis?” Challenging students to think about the real-life implications of AI answered the “So what?” and “Why does this matter?” questions that I sometimes encounter from students.
Q: From your experience, do young people have an inherent understanding of technologies like AI, or are they still vulnerable to falsehoods online?
A: In my experience, students are especially vulnerable to falsehoods online, particularly around the media they consume, share, and create. Students certainly do not have an inherent understanding of technologies like AI, and now more than ever, teaching digital information literacy and critical thinking skills feels vital.
Q: As our infamous “bird quiz” reveals, identifying AI-generated content can feel like a guessing game! How have you built students’ confidence in their ability to tell what’s real and what’s not?
A: It isn’t always easy, so we talk about transparency and try to understand the main purpose of the content presented to us. I focus on the importance of questioning content before blindly believing it (or worse, reposting it!). Especially with social media and viral posts, I stress the importance of looking at who is sourcing and posting the information, breaking down the actual claim and the evidence cited to support it, and then fact-checking with credible sources. We practice this a lot!
I teach students some “clues.” For example, AI-generated images may struggle with drawing hands and fingers, have inconsistencies (people may suddenly disappear, for example), produce blurred text or backgrounds, or use watermarks and labels to denote they’ve been created with AI. If it quacks like a duck and looks like a duck, but walks like a dog, something should be setting off alarms in your head, I tell them.
I’ve also used resources like the “AI-generated news or not?” slides from The Sift to teach about critically analyzing human-created content (an author with a byline) vs. AI-generated content (which is labeled as such).
Ultimately, though, with the rise of more convincing AI visual technologies, hunting for visual clues alone isn’t sufficient. I teach them that the only real way to identify what’s real or not is to think critically about the context (or missing context) of what’s being posted and to find the original source. Students are encouraged to approach viral posts with skepticism, using their lateral reading and critical thinking skills to look up anything suspicious. They practice with tools like Google reverse image search and TinEye to look up images, and they use fact-checking sites, including RumorGuard, to find reliable sources for viral claims.
Q: What advice do you have for other educators who aren’t sure how to teach about AI in the classroom, or whether they should teach it at all?
A: Explicitly teaching students AI literacy (such as how it works, when and how to utilize it, and starting a dialogue around potential ethical issues) is essential, and I think it’s important for all schools to address. Ignoring AI or living in fear of teaching about it is a disservice to our students. There are tons of resources out there to help, and if you’re feeling overwhelmed, I’d suggest starting small. Try short daily prompts or teach part of a lesson — it doesn’t have to be a massive unit. I often use RumorGuard slides from The Sift newsletter as “mini-lessons” to begin class (as well as the weekly updated Daily Do Now slides and the AI version). Students understand and respond to these examples because they’re relevant, up-to-date, and — best of all, for me — super easy to implement with very little prep! This is a great starting place.
At a minimum, engaging in an honest conversation with students about AI can be enlightening and useful. In an era of misinformation and disinformation, it’s our responsibility as educators to help students comprehend the world and give them the skills they need to succeed. By supporting students’ AI literacy, we can prepare them for the rapidly changing information landscape and help them build the foundational skills they will inevitably need.
Start teaching about AI
Spend this summer exploring a short and sweet selection of free AI teaching resources from NLP. Dive in right away with summer school students, or explore on-demand professional development opportunities and make plans to jump in when the new school year rolls around.