
Course: AI LAB – Level 2
Understanding Ethics in AI and Robotics

  • Grades 6–12
  • Intermediate

Lesson Description:

In this lesson, students explore the ethical side of artificial intelligence (AI) and robotics using the NAO V6 humanoid robot. Through discussion, real-world examples, and a collaborative ethics activity, students consider privacy, bias, safety, and responsibility in robotics. They examine ethical dilemmas — such as when an AI makes mistakes, or when data collection crosses privacy boundaries — and reflect on how to design fair and responsible technology. By the end of the lesson, students will propose their own “Ethical AI Guidelines” for robots like NAO, showing how they believe robots should interact with people and society.


OBJECTIVES

  • Define ethics and identify ethical issues in AI and robotics.

  • Explain how bias, privacy, and safety influence robot design and use.

  • Discuss responsibility and accountability in AI decision-making.

  • Brainstorm ethical challenges for robots in public spaces.

  • Create a short report proposing guidelines for ethical AI.


EQUIPMENT & SUPPLIES

  • NAO V6 robot (charged, with Choregraphe or Python 2.7 environment).

  • Computer with Choregraphe Suite and stable network connection.

  • Projector or Smartboard for demonstrations or visuals.

  • Chart paper, markers, or shared digital whiteboards.


Lesson Modules


Teaching Tips:

Hook: Display Asimov’s laws on a slide or read them aloud. Ask: “Would following these laws make robots ethical?” Students often say yes — use that to segue into complexity.

Demonstration: Show NAO’s face detection or recognition ability. Then ask, “If NAO greets someone by name in public, is that polite — or creepy?” Connect to privacy and consent.
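To make the demonstration concrete, the consent question can be modeled in a few lines of plain Python. This is an illustrative sketch only, with made-up function and variable names; it runs without a robot and is not part of NAO's real API.

```python
# Illustrative only: models the "greet by name" privacy question
# without any robot hardware. Not NAO's actual face-recognition API.

def greet(name, consented):
    """Use a recognized person's name only if they opted in."""
    if name and consented:
        return "Hello, {}!".format(name)
    return "Hello there!"

print(greet("Ada", True))   # consent given -> personalized greeting
print(greet("Ada", False))  # no consent   -> anonymous greeting
```

Students can debate where the `consented` flag should come from: asked aloud by the robot, set by a guardian, or assumed by default — each choice is itself an ethical decision.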

Discussion Starter: Pose a thought experiment like the trolley problem and encourage debate. Emphasize there isn’t one right answer — ethics involves reasoning about consequences and fairness.

Real-World Link: Introduce the 2018 Uber self-driving car accident (AI failed to recognize a pedestrian). Ask: “Who’s responsible when an AI makes a mistake — the robot, the coder, or the company?”

What does it mean for a robot to make a “good” decision? Can a robot learn right from wrong?

To start, let’s look at Isaac Asimov’s Three Laws of Robotics — a famous idea from science fiction:

  1. A robot may not harm a human being or, through inaction, allow a human to come to harm.
  2. A robot must obey orders given by human beings, unless such orders conflict with the First Law.
  3. A robot must protect its own existence, as long as this does not conflict with the First or Second Law.

Sounds perfect, right? But what happens when a robot faces a moral dilemma — like the “trolley problem,” where it must choose between two harmful outcomes? These questions help us see that programming ethics into machines isn’t so simple.

Today, we’ll explore how AI decisions affect real people and how we can build technology that behaves responsibly.


Teaching Tips:

Guided Discussion: Have students brainstorm examples of each challenge. Ask: “Can AI be biased? How could a robot protect privacy?”

Connections: Use NAO’s vision or speech recognition as concrete examples:

  • Privacy: NAO’s cameras could record faces — ask permission first.
  • Bias: NAO might misunderstand certain voices or accents — an example of accessibility bias.
  • Safety: NAO stops moving if it falls — a built-in ethical safeguard.

Key Point: Emphasize that ethical AI means balancing benefits with fairness, safety, and respect for people.

AI and robots make decisions that affect people every day — in medicine, transportation, and even classrooms. But those decisions can raise ethical questions. Let’s look at four big areas:

  1. Privacy: AI systems collect data. Face recognition, cameras, and microphones can record people without their knowledge. Who owns that data?
  2. Bias: If AI is trained mostly on certain kinds of data, it may unfairly misjudge others. For example, facial recognition works less accurately on some skin tones.
  3. Safety: Robots must be designed to avoid harming people — physically or emotionally. That means careful testing and fail-safes.
  4. Responsibility: When AI fails, who’s accountable — the developer, the company, or the robot’s user?

Every one of these challenges affects how we design and use robots like NAO.
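The bias point above can be shown with a tiny worked example. The numbers below are invented purely for illustration: they show how a single "overall accuracy" score can hide much worse performance for one group.

```python
# Invented numbers, for illustration only: a system that looks
# accurate overall can still perform much worse for one group.
results = {
    "group_a": {"correct": 90, "total": 100},
    "group_b": {"correct": 70, "total": 100},
}

total_correct = sum(r["correct"] for r in results.values())
total_seen = sum(r["total"] for r in results.values())
overall = total_correct / float(total_seen)
print("overall accuracy: {:.0%}".format(overall))  # looks fine...

for group, r in sorted(results.items()):
    acc = r["correct"] / float(r["total"])
    print("{} accuracy: {:.0%}".format(group, acc))  # ...but groups differ
```

Ask students: if only the overall number were reported, would anyone notice the problem for group_b?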


Teaching Tips:

Setup: Divide students into groups (3–5 per group). Provide chart paper and markers or an online collaboration tool.

Prompt: Read the scenario aloud: “NAO is a public helper robot. What problems might arise?” Encourage creative and realistic ideas.

Circulate: Guide discussions with questions like “Who controls NAO’s data?” or “What if NAO gives wrong advice?”

Share Out: Have each group present 1–2 top ethical issues. List them under headings: Privacy, Bias, Safety, Social Impact.

It’s time to think like AI designers! In this group activity, imagine that NAO is placed in a public area — like a mall or airport — to help people. What ethical issues might come up?

Group Brainstorm

  • Privacy: Is NAO recording people? Should it ask for permission first?
  • Bias: Does NAO understand everyone equally — or only certain voices or languages?
  • Safety: Could NAO accidentally bump into someone, or give wrong information during an emergency?
  • Social Impact: Would people feel comfortable around NAO? Could it replace human jobs?

Write your ideas on chart paper or a shared digital whiteboard. Then choose one issue your group thinks is most important and explain why.


Teaching Tips:

Purpose: This mini-project serves as a summative reflection and allows students to apply the lesson’s ideas creatively.

Scaffolding: Provide sentence starters or a template (e.g., “Guideline: Robots should __ because __.”).

Differentiation:

  • Advanced students can research existing AI ethics frameworks (IEEE, EU AI Act) and compare them.
  • Support students by letting them focus on 2–3 strong guidelines if writing is challenging.

Assessment: Evaluate based on accuracy, clarity, justification, and creativity.

Now that you’ve discussed ethical issues, create your own rules for robots like NAO! These will be your “AI Ethics Guidelines.”

Mini-Project Instructions

  1. Write an introduction explaining why ethics matter in robotics.
  2. List 3–5 guidelines that you think every AI robot should follow.
  3. For each guideline, explain what it means and why it’s important.
  4. Include at least one real example from today’s discussion.

Example: “Guideline 1: Robots must always ask for permission before collecting data. This ensures privacy and trust.”
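A guideline like this can even be prototyped as a rule in code. The sketch below is a plain-Python stand-in for Guideline 1; the function and field names are made up for the activity and are not a real robot API.

```python
# Hypothetical rule check: Guideline 1 expressed as code. Data is
# stored only when the person has explicitly given permission.

def collect_face_data(person):
    """Return a data record only if the person consented; else None."""
    if not person.get("gave_permission"):
        return None  # Guideline 1: no permission, no data
    return {"name": person["name"], "stored": True}

print(collect_face_data({"name": "Sam", "gave_permission": True}))
print(collect_face_data({"name": "Riley", "gave_permission": False}))
```

Groups who finish early can try turning their other guidelines into similar yes/no rules — and discuss which ones resist being reduced to code at all.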

You can write your guidelines as a short essay or make a digital slide deck. Be creative — give your rules a name like “My Five Laws of Robot Ethics.”


Teaching Tips:

Discussion: Use this reflection to gauge understanding and promote deeper thinking.

Debrief: Revisit examples: Uber accident (safety), face recognition (bias), NAO’s camera (privacy).

Ethics Extension: Ask: “Should robots have rights?” or “What should humans always control?” Encourage open dialogue and critical reasoning.

Exit Ticket: “Name one rule you’d include in a Robot Bill of Rights.” Collect to assess understanding individually.

Let’s reflect on what we learned about ethical AI.

  1. What is one ethical issue robots like NAO might face?
  2. Who should be responsible if an AI causes harm?
  3. Why is bias in AI dangerous?
  4. How can robots be designed to respect privacy?

Mini Quiz

  1. What is bias in AI? (Unfair preference caused by data or design.)
  2. What is an example of AI bias? (Facial recognition failing for some skin tones.)
  3. True or False: Robots can be programmed to always make perfect moral choices. (False)
  4. Why do we need ethical guidelines for AI? (To ensure fairness, safety, and accountability.)