Basic Task: Seeing Face to Face
In this module we’ll experiment with NAO’s ability to detect human faces. First, we will have the NAO speak when it sees a human face.
Step 1 :
Open Choregraphe and create a program that will have the robot say "hello human" when it detects a face.
You can test the program by showing your face to the camera once you have access to the real robot.
Add a Sound Tracker box that starts in parallel with the behavior you just created.
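In Choregraphe the Say box handles the speech itself, but if you later move this logic into a Python box, the face-detection event fires on every camera frame, so the robot would greet continuously. One way to handle that is a small cooldown gate; this is a hypothetical sketch (the class name, cooldown value, and injected clock are made up for illustration), not part of the standard boxes:

```python
import time

GREETING_COOLDOWN_S = 5.0  # hypothetical pause between greetings


class GreetingGate:
    """Decide whether enough time has passed to greet again.

    A Python box could call should_greet() each time a face-detected
    event arrives, and only trigger the Say box when it returns True.
    The clock is injected so the logic can be tested without a robot.
    """

    def __init__(self, cooldown_s=GREETING_COOLDOWN_S, clock=time.monotonic):
        self.cooldown_s = cooldown_s
        self.clock = clock
        self._last_greeting = None  # time of the last greeting, if any

    def should_greet(self):
        now = self.clock()
        if self._last_greeting is None or now - self._last_greeting >= self.cooldown_s:
            self._last_greeting = now
            return True
        return False
```

With a five-second cooldown, a face seen two seconds after a greeting is ignored, while one seen six seconds later triggers a new greeting.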
Intermediate Task: Recognizing Faces
In addition to detecting any human face, the NAO can recognize individual faces. However, it must be trained first.
1. Set up a chain of Choregraphe boxes to create a behavior that, when triggered by the foot bumpers, does the following:
The NAO robot asks you to show your face so that it can learn it and associate it with your name, or with the generic noun "me".
Once you have shown your face, there are two possible outcomes:
Option one: learning succeeds, and the NAO proceeds to recognize your face and say your name;
Option two: learning fails, and the NAO asks you in a loop to show your face again until it succeeds in learning it.
2. Run the behavior. Press the foot bumpers while the NAO is looking at your face, and it should learn your face; if it does, its eyes will change color. Then let the NAO see you again, and it should recognize and greet you.
You need the real robot for this behavior to run properly; without a physical robot it will raise an error.
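The learn-then-retry flow in step 1 can be sketched as a simple loop. Here `learn_face` and `prompt` are hypothetical stand-ins for the Choregraphe learning and Say boxes; they are injected as callables so the control flow can be tested without a robot:

```python
def learn_until_success(learn_face, prompt, max_attempts=10):
    """Retry face learning until it succeeds, prompting each time.

    learn_face: callable returning True on a successful learn (stand-in
                for the Choregraphe face-learning box's success output).
    prompt:     callable that speaks a phrase (stand-in for a Say box).
    Returns the number of attempts taken; raises if we never succeed.
    """
    for attempt in range(1, max_attempts + 1):
        prompt("Please show me your face")
        if learn_face():
            return attempt
    raise RuntimeError("face learning did not succeed")
```

The `max_attempts` cap is an addition for safety; the behavior described above loops indefinitely.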
Intermediate Task: Seeking Out Faces
The NAO can see a face that happens to be in front of its camera. Now we will make it move its head from side to side to scan for faces.
- Begin with the results of the first exercise, which detects faces.
- Add a new timeline box to do a head scan, as shown below.
- Add keyframes to the custom box to make the head move from side to side.
- Run the behavior and check whether the robot can find faces. If not, you may need to slow down the head motion.
Advanced Task: Remembering Faces
Begin from the basic task, where the NAO looks in the direction of a noise. We will change this behavior so that the robot remembers the last two positions from which it has heard a noise, and cycles between them.
1. Begin with the basic task. Replace the Sound Tracker box with a box named Sound Loc. Examine the Sound Loc. box and understand its outputs.
2. Create a new box, and add an input named add_position which takes two Number parameters as shown below.
3. Now connect the output of the Sound Loc. box to your new box's input. The Sound Loc. box doesn't move the robot's head; it only outputs where a sound was heard.
4. Add code to the custom box to make the robot turn its head between the two directions in which a sound was detected.
5. Run the behavior. Check that the robot looks toward sounds and oscillates between the two most recent places it heard noises from.
6. (Optional) As it stands, the robot may look at a new position, wait two seconds, and then immediately look at that same position again. Modify the code so that it does not look at the same position twice in a row.
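The core of steps 2 to 6 can be sketched as a small class. This is a hypothetical version of the custom box's logic (the class and method names are made up, though `add_position` mirrors the input name from step 2): it keeps the two most recent (yaw, pitch) positions and, when asked for the next look target, never repeats the previous one, which is exactly the optional refinement in step 6:

```python
class SoundPositionMemory:
    """Remember the last two (yaw, pitch) sound positions and cycle between them.

    add_position mirrors the box input of the same name; next_target is
    what a periodic timer (e.g. every two seconds) would call to decide
    where to point the head next.
    """

    def __init__(self):
        self.positions = []    # at most the two most recent positions
        self.last_target = None

    def add_position(self, yaw, pitch):
        self.positions.append((yaw, pitch))
        self.positions = self.positions[-2:]  # keep only the latest two

    def next_target(self):
        # Never look at the same position twice in a row (step 6).
        candidates = [p for p in self.positions if p != self.last_target]
        if not candidates:
            return None  # nothing new to look at yet
        self.last_target = candidates[-1]  # prefer the most recent
        return self.last_target
```

On the robot, the returned yaw/pitch pair would be fed to the head joints; here the class only models the bookkeeping, so it can be exercised without hardware.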
Bonus Tasks
- When the robot sees a face, make it wave and flash its lights in addition to giving a greeting.
- Make the NAO recognize two different faces and greet each person differently.
- While the robot is scanning for humans, make it stop scanning when it sees a face and look at that person. To make it look in the correct direction, you may need to reduce the scanning speed further.
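For greeting two people differently, the recognized name can be mapped to a phrase before it is passed to a Say box. The names and phrases below are made-up examples, not anything the robot ships with:

```python
# Sketch: map recognized names to personalised greetings.
GREETINGS = {
    "alice": "Hi Alice, nice to see you again!",
    "bob": "Hello Bob, welcome back!",
}


def greeting_for(name):
    """Return a personalised greeting, falling back to the generic one."""
    return GREETINGS.get(name.lower(), "hello human")
```

Falling back to "hello human" keeps the behavior consistent with the basic task when an unknown face is detected.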