
Course: Graphical and Python programming using NAO
7: Face Off

  • 9-12 grade
  • Intermediate

Lesson Description:

In this lesson, students will learn how to make NAO detect, recognize, and distinguish faces, and how to use queues and lists in Python.

Objective:
Use logic and sensors to make the NAO robot recognize a face using:
Graphical programming (beginners) and Python (advanced).
Standards Covered

CCSS.ELA-LITERACY.RST.11-12.10

By the end of grade 12, read and comprehend science/technical texts in the grades 11-CCR text complexity band independently and proficiently.

CCSS.ELA-LITERACY.RST.11-12.3

Follow precisely a complex multistep procedure when carrying out experiments, taking measurements, or performing technical tasks; analyze the specific results based on explanations in the text.

CCSS.ELA-LITERACY.RST.11-12.8

Evaluate the hypotheses, data, analysis, and conclusions in a science or technical text, verifying the data when possible and corroborating or challenging conclusions with other sources of information.

CCSS.ELA-LITERACY.RST.11-12.9

Synthesize information from a range of sources (e.g., texts, experiments, simulations) into a coherent understanding of a process, phenomenon, or concept, resolving conflicting information when possible.


Lesson Modules


Teaching Tips:

You can download the file for this module here

The step-by-step guide to the program is under CLASS VIEW.


Basic Task: Seeing Face to Face
 
In this module we’ll experiment with NAO’s ability to detect human faces. First, we will have the NAO speak when it sees a human face. 

Step 1:

Open Choregraphe and create a program that has the robot say "hello human" when it detects a face.

You can test the program by showing your face to the camera when you have access to the real robot.

Step 2: 

Add a sound tracker box that will start in parallel to the behavior you just created.
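For the Python track, the same logic can be sketched without a robot. On a real NAO, the FaceDetected event in ALMemory delivers an empty value when no face is visible and a non-empty list (a timestamp plus face information) when one or more faces are seen. The callback below is a simplified stand-in for that event handler; the event value format shown is an assumption for illustration, and on the robot you would register the callback with ALMemory and speak through ALTextToSpeech:

```python
# Simulated handler for NAO's FaceDetected event.
# On the real robot this callback would be registered with ALMemory
# and the greeting spoken with ALTextToSpeech; here we just return it.

def on_face_detected(event_value):
    """Return a greeting if the event value reports at least one face.

    The FaceDetected value is an empty list when no face is visible,
    and a non-empty list (timestamp + face info) otherwise.
    """
    if event_value:  # non-empty -> at least one face was seen
        return "hello human"
    return None  # no face, stay silent

# Example event values (simplified):
print(on_face_detected([]))                             # no face
print(on_face_detected([[123, 456], [["face-info"]]]))  # a face was seen
```

This mirrors what the Choregraphe face-detection box does internally: it only fires its output when the event value is non-empty.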


Teaching Tips:

You can download the file for this module here

The step-by-step guide to programming the behavior is under CLASS VIEW.

Intermediate Task: Recognizing Faces
 
In addition to detecting any human face, the NAO can recognize individual faces. However, it must be trained first.
 
1. Set up a chain of Choregraphe boxes to create a behavior that does the following when triggered by the bumpers:

The NAO robot asks you to show your face so that it can learn it and associate it with your name, or with the generic label "me".

Once you have shown your face, there are two possible outcomes:

Option one: the learning is successful, and NAO proceeds to recognize your face and say your name.

Option two: the learning has failed, and NAO asks you in a loop to show your face again until it succeeds in learning it.


2. Run the behavior. Press the foot bumpers while the NAO is looking at your face, and it should learn your face. If it does, the eyes will change color. Then let the NAO see you, and it will greet you.

You need the real robot for this behavior to run properly; it will raise an error without a physical robot.
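The retry loop in this task can be sketched as plain Python. In the sketch below, `try_learn_face` stands in for the robot's face-learning call (on a real NAO, `ALFaceDetection.learnFace(name)` returns a success flag) and `ask` stands in for text-to-speech; both are parameters so the loop itself can be run and tested without a robot:

```python
def learn_until_success(name, try_learn_face, ask):
    """Keep asking the user to show their face until learning succeeds.

    try_learn_face(name) -> bool stands in for the robot's learning call
    (ALFaceDetection.learnFace on a real NAO); ask(message) stands in
    for text-to-speech.  Returns the number of attempts made.
    """
    attempts = 0
    while True:
        attempts += 1
        ask("Please show me your face, %s." % name)
        if try_learn_face(name):
            ask("I learned your face, %s!" % name)
            return attempts

# Drive the loop with canned results: fail twice, then succeed.
results = iter([False, False, True])
attempts = learn_until_success("me", lambda name: next(results), print)
print(attempts)  # 3
```

This is exactly the structure of the Choregraphe behavior: the failure output of the learning box loops back to the "show your face" prompt, and the success output continues to recognition.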


Teaching Tips:

 You can download the file for this module here


Intermediate Task: Seeking Out Faces
 
The NAO can see a face that happens to appear in front of its camera. Now we will make it sweep its head from side to side to look for faces.
 

  1. Begin with the results of the first exercise, which detects faces.
     
  2. Add a new timeline box to do a head scan, as shown below.


     
  3. Add keyframes to the custom box to make the head move from side to side.
     
  4. Run the behavior and see if the robot can see faces. If not, you may need to slow down the head motion.
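If you are following the Python track, the keyframes correspond to a call to ALMotion's `angleInterpolation`, which takes joint names, target angles (in radians), and times. The helper below only builds those argument lists for a side-to-side HeadYaw sweep; the angle limit and period are assumptions you should tune on the robot (a longer period means a slower scan, which gives face detection more time to work):

```python
import math

def build_head_scan(sweeps=2, max_yaw=math.radians(60), period=3.0):
    """Build (angles, times) lists for a side-to-side HeadYaw sweep.

    Each sweep moves the head from one side to the other; longer
    periods give the camera more time to detect faces.  The yaw limit
    and timing here are illustrative values to tune on the robot.
    """
    angles, times = [], []
    for i in range(sweeps * 2 + 1):
        angles.append(max_yaw if i % 2 == 0 else -max_yaw)
        times.append((i + 1) * period / 2.0)
    return angles, times

angles, times = build_head_scan()
print(angles)  # alternating +/-60 degrees, expressed in radians
print(times)
# On the robot, these lists would be passed to:
#   motion.angleInterpolation("HeadYaw", angles, times, True)
```

Slowing the scan (step 4 above) then amounts to increasing `period` rather than editing every keyframe by hand.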
     

Teaching Tips:

You can download the code for this module here.

The step-by-step guide to the program is under CLASS VIEW.


Advanced Task: Remembering Faces
 
Begin from the basic task, where the NAO looks in the direction of a noise. We will change this behavior to make the robot remember the last two positions it has heard a noise in, and to cycle through these positions.
 
1. Begin with the basic task. Replace the Sound Tracker box with a box named Sound Loc. Examine the Sound Loc. box and understand its outputs.

Describe in your own words what the second output of the Sound Loc. box does.

2. Create a new box, and add an input named add_position which takes two Number parameters as shown below.




3. Now connect the output of the Sound Loc. box to your new box's input. The Sound Loc. box does not move the robot's head; it only outputs where a sound was heard.




4. Add code to the custom box to have the robot turn its head between the two directions in which a sound was detected.

5. Run the behavior. Verify that the robot looks toward sounds and oscillates between the two latest places it heard noises from.

6. (Optional) As it stands now, the robot may look at a new position, wait two seconds, and then look at that same position again immediately. Modify the code so that it does not look at the same position twice in a row.
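This "remember the last two positions" logic is where the lesson's queues and lists come in. One way to sketch it is with `collections.deque` and `maxlen=2`: `add_position` pushes a new (yaw, pitch) pair and automatically drops the oldest, while `next_look_target` cycles through the stored positions without repeating the one we just looked at (the optional step 6). The class and method names are illustrative, not part of the NAOqi API:

```python
from collections import deque

class SoundMemory:
    """Remember the last two head positions where a sound was heard."""

    def __init__(self):
        self.positions = deque(maxlen=2)  # oldest entry drops automatically
        self.last_target = None

    def add_position(self, yaw, pitch):
        """Store a new (yaw, pitch) direction, forgetting the oldest."""
        self.positions.append((yaw, pitch))

    def next_look_target(self):
        """Cycle through stored positions, never repeating the last one."""
        for pos in self.positions:
            if pos != self.last_target:
                self.last_target = pos
                return pos
        return None  # only one distinct position known: nothing new

mem = SoundMemory()
mem.add_position(0.5, 0.0)
mem.add_position(-0.3, 0.1)
first = mem.next_look_target()
second = mem.next_look_target()
mem.add_position(0.8, 0.0)   # oldest position (0.5, 0.0) is dropped
third = mem.next_look_target()
print(first, second, third)  # (0.5, 0.0) (-0.3, 0.1) (0.8, 0.0)
```

On the robot, the returned (yaw, pitch) pair would be fed to a head-turning command such as ALMotion's `setAngles` on the HeadYaw and HeadPitch joints.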



Teaching Tips:

Additional Exercises
 
  1. When the robot sees a face, in addition to giving a greeting, make it wave and flash its lights.
  2. Make the NAO recognize two different faces and greet the people differently.
  3. While the robot is scanning for humans, make it stop scanning if it sees a face and look at that person. To make it look in the correct direction, you may need to reduce the scanning speed further.

Teaching Tips:

Solutions
 
Basic:

  1. What does the face detection box output?
    The number of faces that were detected.

     
  2. Speculate as to why the robot does not always detect your face.
    Possible responses include: poor algorithms, changing lighting conditions, variations in angle and position.

 
Intermediate:

  1. What is the difference between face detection and recognition?
Detection is realizing that we see a face; recognition is knowing whose face we see.
  2. What does the face recognition box output?
    The name of the face that was detected.

 
Advanced:

  1. What areas will the search not see? How could you expand the robot’s search, using both the head and by walking?
    It won’t see anything behind the robot or above or below the search plane. The search could be expanded by also changing the HeadPitch angle, and/or by turning around.

 
 

Questions
 
Basic:

What does the face detection box output?

Speculate as to why the robot does not always detect your face.

 
Intermediate:

What is the difference between face detection and recognition?

What does the face recognition box output?

 
Advanced:

What areas will the search not see? How could you expand the robot’s search, using both the head and by walking?