Schools
A calm, practical AI safety and well-being workshop for students.
Built for today’s classrooms: AI proficiency, digital judgement, and healthier tech habits.
Students are already using AI for homework help, entertainment, and increasingly for companionship. This session teaches responsible AI use and the habits that protect mental well-being, privacy, and identity.

This school workshop is designed for principals, teachers, and parents who want their students to build real AI skills without sleepwalking into the risks. The tone is educational rather than fear-based, with clear examples students will recognize from real life.
Students will learn:
- AI proficiency that makes sense
  - what an AI model is (plain language)
  - what AI is good at vs. where it confidently gets things wrong
  - how to prompt for better results (and how to verify information)
- Mental health & well-being in the age of relationship chatbots
  - why AI “companions” can feel real – and why that matters
  - warning signs of unhealthy attachment, isolation, or dependency
  - practical habits that protect sleep, attention, and real-world relationships
  - how to get support if conversations cross a line
- Scams, fraud, and identity traps students actually face
  - phishing and “free” offers that steal logins
  - impersonation, fake accounts, and manipulated screenshots
  - sextortion and pressure tactics: what to do immediately
  - simple protections: privacy settings + 2FA basics
Junior Students
Focus: Foundations, safe curiosity, and simple rules students can remember.
Students learn:
- What AI is (and isn’t) with kid-friendly examples
- How to ask better questions (basic prompting)
- “Don’t share” rules: name, school, address, photos, passwords
- Early scam awareness: “free Robux,” prize links, fake giveaways
- What to do if something feels weird: pause → tell a trusted adult
Wellbeing angle: AI can sound friendly, but it isn’t a real friend or a trusted adult.
Middle (Intermediate)
Focus: Identity, social pressure, and smarter decision-making online.
Students learn:
- Using AI for learning without crossing integrity lines
- Quick verification habits when AI (or the internet) gets it wrong
- Group chat safety: screenshots, rumours, dogpiling, pressure tactics
- Relationship chatbots: why they feel personal, and when it becomes unhealthy
- Scams targeting teens: impersonation, fake accounts, account takeovers
Wellbeing angle: Clear red flags + a simple off-ramp (who to tell, what to save, what not to send).
Secondary
Focus: Real-world consequences for reputation, consent, money, and future opportunities.
Students learn:
- Prompting with judgement: checking sources and spotting bias and hallucinations
- Academic integrity: AI as tutor/editor/planner (without cheating)
- Relationship chatbots + boundaries: privacy, manipulation risk, dependency
- Scams and fraud: phishing, job scams, money mule traps, “investment” bait
- Sextortion response plan: what to do immediately, who to contact, how to protect yourself
Wellbeing angle: Calm, non-shaming guidance that reduces risk and supports mental health.