
Course: AI LAB - Level 2
NAO Vision System & Choregraphe (Python Basics)

  • Grades 6-12
  • Intermediate

Lesson Description:

In this lesson, students explore how robots “see” the world through their cameras and how we can program this behavior in both block-based and text-based (Python) formats. Using NAO’s vision system in Choregraphe, students learn to connect the robot’s camera, capture an image, and write a simple Python script that saves a photo and interprets what NAO sees. They also learn key Python concepts such as variables, conditionals, and loops applied directly to robotics. The lesson culminates in a guided mini-project where students make NAO detect a color or simple shape and respond accordingly. This lesson marks the transition from visual programming to Python coding, helping students see how real-world robot vision systems are programmed.


OBJECTIVES

  • Understand how a robot uses its vision system to process images.

  • Use Choregraphe’s vision tools to take and display photos.

  • Write a Python script to connect to NAO’s camera and capture an image.

  • Learn basic Python concepts (variables, loops, and conditionals).

  • Apply these concepts in a mini-project to detect a color or shape.

  • Compare block-based programming with text-based Python coding.


EQUIPMENT & SUPPLIES

  • NAO V6 humanoid robot (battery charged, with NAOqi 2.8).

  • Computer with Choregraphe Suite (Python 2.7 support).

  • Shared Wi-Fi or Ethernet connection between computer and NAO.

  • Objects for vision testing (e.g., red ball, colorful blocks, printed shapes).

  • Projector or classroom display (for teacher demo and NAO’s camera feed).


Lesson Modules


Teaching Tips:

Setup: Connect NAO to Choregraphe and open the Video Monitor (View → Video Monitor → ▶). Project the live feed. Have students take turns showing objects to the camera—this builds excitement and curiosity.

Prompt Discussion: Ask: “What do you notice about NAO’s view? Is it blurry? Zoomed in? How is it different from your eyes?” Explain that robots don’t see images like humans—they capture raw pixel data that needs to be processed into information.
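If you want to make "raw pixel data" concrete, a minimal teacher-demo sketch (assuming a standalone Python 2.7 session on the teacher's computer; NAO_IP is a placeholder for your robot's address) could print a few pixel values from the top camera:

from naoqi import ALProxy

# Hypothetical demo: inspect a few raw pixel values from NAO's top camera.
# "NAO_IP" is a placeholder; replace it with your robot's actual address.
video = ALProxy("ALVideoDevice", "NAO_IP", 9559)
handle = video.subscribeCamera("rawDemo", 0, 2, 11, 5)  # top camera, VGA, RGB, 5 fps
image = video.getImageRemote(handle)   # [0]=width, [1]=height, [6]=pixel bytes
video.unsubscribe(handle)
width, height, pixels = image[0], image[1], image[6]
print "Image size:", width, "x", height
print "First pixel (R, G, B):", ord(pixels[0]), ord(pixels[1]), ord(pixels[2])

Each pixel is just three numbers (red, green, blue); turning hundreds of thousands of them into "a red ball" is what vision processing does.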

Connect to Previous Lessons: Reference Lesson 2: “Last time, you taught NAO to recognize an object. This time, we’ll explore how that happens—by working directly with its camera.”

Class Management: If there’s only one NAO, rotate students in groups or display the feed for the whole class. If multiple NAOs are available, have groups compare what each robot sees from different angles.

Safety: Keep NAO stationary on a stable surface and avoid bright lights pointed at the lens. Turn off Autonomous Life to prevent sudden motion.

Extension: Mention real-world uses: self-driving cars use cameras for lanes, drones use vision to navigate, and warehouse robots use cameras to track packages. NAO uses similar vision tools to interact intelligently.

Have you ever wondered how robots can “see” the world? Today, we’ll explore NAO’s eyes—its cameras! Look closely at NAO’s head. You’ll find tiny lenses that act like eyes. These allow NAO to capture images of its surroundings, just like we do with our eyes and brain.

Your teacher will open Choregraphe’s Video Monitor so you can see what NAO sees in real time. Wave your hand or hold an object in front of NAO’s camera and watch the feed update. Notice how the robot’s view looks different from what you see!

Now think about this question: How could we make NAO understand what it’s seeing?

How does a robot “see”?


Teaching Tips:

Setup: Demonstrate connecting NAO and using the Take Picture box. Walk through its settings (camera, resolution, file name). Show how to retrieve the saved image through the File Explorer panel.

Discuss Behind the Scenes: Explain that the box calls the ALPhotoCapture module under the hood—this is what actually takes the picture. Soon, students will write code to call that module directly.

Concept Reinforcement: This step introduces the idea that every Choregraphe block is powered by Python code. Emphasize that by moving to code, they’ll gain more flexibility—adjusting parameters, adding logic, and combining actions.

Optional Demo: Show the Red Ball Tracker if available (uses ALRedBallDetection). NAO will follow a red ball—students will be amazed to see the robot reacting to color and movement.

Troubleshooting: If the photo doesn’t appear, verify the save path, robot connection, or camera activation. Remind students that NAO’s top camera is camera 0 by default.

Before we start coding, let’s explore what NAO’s vision tools can do in Choregraphe.

  1. Open the Vision box library in Choregraphe.
  2. Find the Take Picture box and drag it to the workspace.
  3. Connect it to the onStart trigger so it runs automatically.
  4. Set the camera to Top and resolution to 640×480 (VGA).
  5. Click ▶ to run. NAO will take a photo—listen for a shutter sound!

Now find your photo in /home/nao/recordings/cameras/ and open it. You just made NAO take a picture using a pre-built block!


Teaching Tips:

Preparation: Ensure students use Python 2.7 syntax (print is a statement, so parentheses are optional). Walk them through indentation: Python is sensitive to spaces!
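A quick board example of that syntax, as a sketch:

# Python 2.7: print is a statement, so no parentheses are needed.
robot_name = "NAO"
if robot_name == "NAO":
    print "Hello", robot_name    # the four leading spaces are required indentation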

Debugging Guidance: If code errors occur:

  • Check for typos or misaligned indentation.
  • Ensure NAO is connected and awake (green status in Choregraphe).
  • Use the console logs at the bottom of Choregraphe to locate errors.

Key Concepts to Emphasize:

  • Variables: IP, PORT, and photoCapture store data and connections.
  • Functions: onInput_onStart runs when the box starts; inside it, the code executes top to bottom, and each command tells NAO what to do in order.
  • Conditionals & Loops (Preview): Explain these control flow structures; they’ll use them next to make decisions.
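A short whiteboard sketch of all three ideas, using a made-up battery value:

battery = 87                      # variable: a name that stores a value
if battery > 50:                  # conditional: choose between two actions
    print "Battery is fine."
else:
    print "Please charge me!"
for i in range(3):                # loop: repeat an action three times
    print "Scanning pass", i + 1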

Teacher Note: This simple photo capture is a milestone. It transitions students from visual to text programming and introduces real robot APIs.

Now it’s time to open the box and look inside! You’ll write your own Python code to make NAO take a picture—just like the vision box did.

  1. In Choregraphe, create a new Python Script Box (Create Box → Empty → Python).
  2. Open the box and find the onInput_onStart function—this is where your code will go.
  3. Type this code carefully:
from naoqi import ALProxy

def onInput_onStart(self):
    IP = self.getRobotIp()                    # variable: the robot's IP address
    PORT = 9559                               # variable: NAOqi's default port
    photoCapture = ALProxy("ALPhotoCapture", IP, PORT)   # connect to the photo module
    photoCapture.setResolution(2)             # 2 = VGA (640x480)
    photoCapture.setPictureFormat("jpg")      # save as a .jpg file
    photoCapture.takePicture("/home/nao/recordings/cameras/", "vision_test")  # folder, file name
    self.logger.info("Picture taken and saved!")
    self.onStopped()                          # tell Choregraphe the box has finished

Click ▶ to run. If everything works, check the robot’s folder again—you’ll see a new picture! You’ve now done in code what the vision box did for you earlier.
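If you want to show the extra flexibility code gives over the block, one possible extension (a sketch, not part of the required activity) switches to the bottom camera and takes three photos in a loop; the file-name prefix loop_test_ is just an example:

from naoqi import ALProxy

def onInput_onStart(self):
    IP = self.getRobotIp()
    PORT = 9559
    photoCapture = ALProxy("ALPhotoCapture", IP, PORT)
    photoCapture.setCameraID(1)      # 0 = top camera (default), 1 = bottom camera
    photoCapture.setResolution(2)    # 2 = VGA (640x480)
    photoCapture.setPictureFormat("jpg")
    for i in range(3):               # loop: take three pictures in a row
        photoCapture.takePicture("/home/nao/recordings/cameras/", "loop_test_" + str(i))
    self.onStopped()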


Teaching Tips:

Scaffolding: This code uses conditionals (if/else) to let NAO make a decision based on what it sees. Emphasize that this is the same logic used in AI systems for decision-making!

For Beginners: Allow them to simulate detection—set a variable color_found = "red" and write if color_found == "red": so they practice syntax without using the full detector.

For Advanced Students: Encourage looping the detection process (e.g., checking multiple times or tracking changes). Some can experiment with ALColorBlobDetection for custom colors.
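One possible sketch of that looped detection, reusing the mem, ball, and tts proxies from the detection script below (the five checks and one-second pause are arbitrary choices):

found = False
ball.subscribe("detect")
for attempt in range(5):             # loop: check five times, one second apart
    time.sleep(1)
    result = mem.getData("redBallDetected")
    if result and isinstance(result, list) and len(result) >= 2 and result[1]:
        found = True                 # remember that we saw the ball
        break                        # stop checking early
ball.unsubscribe("detect")
if found:
    tts.say("I found the red ball!")
else:
    tts.say("I checked five times and never saw it.")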

Troubleshooting:

  • Ensure NAO is stationary and lighting is even.
  • Use a solid red object—striped or reflective objects may fail.
  • Encourage students to test and iterate—debugging is part of real robotics!

Ethics & Safety: Remind students that NAO’s camera is for scientific learning—always use it responsibly.

Let’s take it further! You’ll now help NAO see something specific—like a red ball. You’ll use Python conditionals to make NAO respond when it detects color.

  1. In a new Python Script Box, import the extra modules and type the detection code:
from naoqi import ALProxy
import time

def onInput_onStart(self):
    IP = self.getRobotIp()
    PORT = 9559
    mem = ALProxy("ALMemory", IP, PORT)                # memory where detection results are posted
    ball = ALProxy("ALRedBallDetection", IP, PORT)     # the red ball detector
    tts = ALProxy("ALTextToSpeech", IP, PORT)          # lets NAO speak
    ball.subscribe("detect")                           # start the detector
    time.sleep(2)                                      # give NAO two seconds to look
    result = mem.getData("redBallDetected")            # read the latest detection result
    ball.unsubscribe("detect")                         # stop the detector
    # result is a list (timestamp, ball info, ...) only when a ball was actually seen
    if result and isinstance(result, list) and len(result) >= 2 and result[1]:
        tts.say("I see the red ball!")
    else:
        tts.say("I do not see a red ball.")
    self.onStopped()

Hold a red ball about 40 cm in front of NAO’s camera, then run your script. What does NAO say?


Teaching Tips:

Expected Answers:

  • Python code gives us direct control over NAO’s sensors.
  • NAO uses modules like ALPhotoCapture and ALRedBallDetection to “see.”
  • Blocks are visual shortcuts; Python is flexible and precise.
  • Detecting a blue ball requires a different module (e.g., ALColorBlobDetection).

Assessment: Use student reflections or a short quiz to check comprehension (e.g., “What is the purpose of ALProxy?”). Assess both understanding and debugging skills.

Extensions:

  • Try looping the detection so NAO continuously checks its environment.
  • Program NAO’s LEDs to change color when it finds a target (see the sketch after this list).
  • Discuss real-world vision systems like self-driving cars or facial recognition.
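For the LED extension above, a minimal sketch using the ALLeds module (it assumes the IP and PORT variables from the earlier scripts, and found_target is a hypothetical flag set by your detection code):

leds = ALProxy("ALLeds", IP, PORT)                # controls NAO's LEDs
# fadeRGB(group, red, green, blue, duration): color values range from 0.0 to 1.0
if found_target:                                  # hypothetical flag from your detection logic
    leds.fadeRGB("FaceLeds", 0.0, 1.0, 0.0, 0.5)  # green = target found
else:
    leds.fadeRGB("FaceLeds", 1.0, 0.0, 0.0, 0.5)  # red = still searching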

Wrap-Up Message: “Today, you didn’t just code—you helped a robot see! This is the foundation of computer vision and artificial intelligence.”

Let’s wrap up what we learned today!

  1. What is one new thing you learned about Python programming?
  2. How does NAO use its vision system to interact with the world?
  3. What’s the difference between using a Choregraphe block and writing your own Python code?
  4. What might happen if you wanted NAO to detect a blue ball instead of a red one?

Discuss your answers with your team, then share one insight with the class!