
Human Rights and Artificial Intelligence

Students explore how AI impacts fundamental human rights, privacy, safety, and equality through real-world cases and student-led action projects.

Age:

Approximate Total Time: 4–5 hours

Summary:
This interdisciplinary unit explores how artificial intelligence intersects with fundamental human rights, including non-discrimination, privacy, and safety. Through real-world case studies, interactive simulations, and collaborative projects, students learn how AI can both threaten and protect these rights. The unit culminates in a student-led Human Rights Action Summit, where teams design and pitch practical solutions to safeguard human rights in their school and community.

Lesson Flow:

Lesson 1 – Human Rights and Privileges (45 minutes)
Students distinguish between rights and privileges through a movement-based activity, then connect historical struggles for rights to emerging challenges created by AI. They are introduced to UNESCO’s three AI ethics principles—Respect People, Stay Safe, and Promote Equality—and begin their AI Rights Evidence Journal to collect examples throughout the unit.

Lesson 2 – Right to Non-Discrimination (55–60 minutes)
Students learn how algorithmic bias can produce unfair outcomes and violate human rights. After analyzing real-world cases (hiring systems, healthcare algorithms, and criminal justice tools), they connect their findings to the UNESCO principle Promote Equality and record examples in their Evidence Journal.

Lesson 3 – Right to Privacy (50–60 minutes)
Students investigate how AI uses personal data through a “Mystery Migrant” privacy activity, analyzing how digital footprints reveal sensitive information. They evaluate two data-use policies to determine which better reflects the UNESCO principle Respect People, emphasizing dignity, consent, and transparency in data collection.

Lesson 4 – Right to Safety (50–60 minutes)
Students become “AI Safety Inspectors,” using a traffic-light system (green/yellow/red) to judge the safety of different AI tools. They assess physical, emotional, and social risks such as misinformation and deepfakes, then create personal “I will…” safety checklists to practice responsible technology use.

Lesson 5 – Taking Action (50 minutes)
In the culminating experience, students select one right—Safety, Privacy, or Non-Discrimination—and design a one-week advocacy plan guided by UNESCO’s global principles. Working in teams, they prepare 60-second persuasive pitches for a classroom Human Rights Action Summit, where peers evaluate proposals based on feasibility, evidence, and measurable impact.

Materials
