Senses of the NAO
Humans have five senses - hearing, sight, touch, smell and taste. In addition, there are other lesser-known human senses, including the sense of balance, the sense of temperature, and the kinesthetic sense. The kinesthetic sense is the ability to know where the different parts of your body are without relying on the other senses. It is this kinesthetic sense that enables you to close your eyes and still touch parts of your body.
There are robots that are capable of performing similar functions to all the senses listed above. The physical devices in robots that sense the environment are called sensors.
Basic Task: Light Up
The NAO has foot bumpers which can be pressed, and various LEDs that light up. In this lesson, we will make the NAO’s eye LEDs change color when a foot bumper is pressed.
1. Open Choregraphe and create a program that makes the eye LEDs light up differently depending on whether the left bumper or the right bumper is pressed.
2. You won't be able to see the LEDs on the simulated robot, but the code should still work on a real NAO.
Finite State Machines
A finite state machine, or FSM, is an abstraction commonly used in computer science and robotics. A finite state machine includes states (a finite number of them) and transitions. We will use a running example to explain these concepts.
Suppose that an exam is coming up, and you are at home preparing for it over the weekend. Your behaviors (what you do during the weekend) can be described with a finite state machine. Your state can be described using two features: how prepared you are for the exam (prepared or unprepared), and how much energy you have (rested or tired). The goal is to ultimately be both prepared and rested at the end of the weekend.
An FSM state must completely describe the situation without any external information. So, one state would be (prepared and rested). Notice that we combined both features into a single state. There will be four states in total - (prepared and rested), (prepared and tired), (unprepared and rested) and (unprepared and tired). Each of the four states contains all the information about the situation. Within an FSM, only one state is active at a time.
So in our example, say that we start off in the (unprepared and tired) state, since a week of classes just ended (so we’re tired), and we haven’t had time to study for the exam yet. We want to end up in (prepared and rested). To go from state to state, we have to define the transitions. Transitions move us from one state to another, and can be triggered through actions or events. Intuitively, actions are things that are performed by choice, and events are things that occur. We will use actions in the example below, and discuss events at the end of this section.
Some actions we can take in our example are studying, playing, and sleeping. If we’re tired, then sleeping should make us rested. Thus, the transitions from (prepared and tired) to (prepared and rested), and from (unprepared and tired) to (unprepared and rested), are triggered by sleeping. Similarly, the action studying will make us prepared for the exam but will tire us out. However, we can only perform this action if we are rested. As such, there is a transition from (unprepared and rested) to (prepared and tired) that is triggered by studying. The last action is playing, which can be performed at any state. However, playing causes us to both become tired and forget what we’ve learned. So, there are transitions from all the other states to (unprepared and tired) that are triggered by playing.
An FSM can be illustrated with a diagram. Circles indicate states, and arrows indicate transitions. The initial state is marked by a straight arrow that points to the state we start out in.
Besides triggering transitions with actions, events can also be used. Events are things that happen, that are typically caused by something external to the FSM. For example, an event could be the teacher reducing the scope of the exam. This event might transition us from (unprepared and tired) to (prepared and tired), since we already know the areas covered in the revised exam even without studying.
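The weekend FSM above can be sketched in a few lines of Python. The state and trigger names below are our own labels for the states, actions, and event described in this section; the dictionary maps a (state, trigger) pair to the next state, and any trigger with no matching transition leaves the state unchanged.

```python
# The exam-weekend finite state machine as a transition table.
# Each state combines both features: preparedness and energy level.
TRANSITIONS = {
    ("unprepared+tired", "sleep"): "unprepared+rested",
    ("prepared+tired", "sleep"): "prepared+rested",
    ("unprepared+rested", "study"): "prepared+tired",
    # Playing always leads back to (unprepared and tired).
    ("unprepared+rested", "play"): "unprepared+tired",
    ("prepared+rested", "play"): "unprepared+tired",
    ("prepared+tired", "play"): "unprepared+tired",
    # An external event: the teacher reduces the scope of the exam.
    ("unprepared+tired", "scope_reduced"): "prepared+tired",
}

def step(state, trigger):
    """Return the next state, or stay put if the trigger does not apply."""
    return TRANSITIONS.get((state, trigger), state)

state = "unprepared+tired"  # initial state: a week of classes just ended
for trigger in ["sleep", "study", "sleep"]:
    state = step(state, trigger)
print(state)  # -> prepared+rested
```

Notice that only one state is ever active, and that a trigger like `study` only applies in states where it is allowed (we must be rested to study).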
Intermediate Task: Switching States
In this task, we will make the robot turn its ear LEDs on and off by pressing one foot bumper, and toggle its eye colors by pressing the other foot bumper. We will implement this using a finite state machine.
Similar to our finite state machine example in the section above, we have two features in the states. The features are: whether the ear LEDs are on (on or off), and what color the eye LEDs are set to (A or B). Thus, we have four states.
Each press of a foot bumper is an event that will trigger a transition to a different state.
1. First, we’ll create one of the states. Add a diagram box named “Ears on and eyes A”, and add two outputs, “Left” and “Right”, to the box.
2. Double-click on the custom box, and a new flow diagram will show up. Add a Bumpers box, an Eyes LED box, and an Ears LED box, and connect them.
3. Set the ears intensity to 100%, and set both the eye colors to a color of your choosing, which we will refer to as color A. The state (ears on and eyes as color A) is now defined with these three boxes. The two outputs, left and right, correspond to the events of the left foot bumper and right foot bumper being pressed.
4. Click on root to go back to the main flow diagram, then copy and paste these three boxes to create four states, as shown below. Rename each box to correspond to the state it represents.
5. Double-click each of the three new states, and edit their Ears LEDs and Eyes LEDs boxes so that they match the state each box represents.
6. You have now created the four states. The states the boxes represent, clockwise from the top-left, are: (ears on and eyes as color A), (ears on and eyes as color B), (ears off and eyes as color B) and (ears off and eyes as color A).
7. Now add the transitions between the states: connect each box's “Left” and “Right” outputs to the state that the corresponding bumper press should lead to.
8. Run the behavior. Press the bumpers to ensure that all the states work as intended. Congratulations! You have implemented a finite state machine.
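The logic of this four-state machine can be checked in plain Python before (or after) wiring up the Choregraphe boxes. The helper names below are hypothetical, purely for illustration; on the robot, the bumper events and LED changes are handled by the boxes themselves.

```python
# State of the LED machine: two features, giving 2 x 2 = 4 states.
state = {"ears_on": False, "eye_color": "A"}

def on_left_bumper(state):
    # The left bumper toggles the ear LEDs on or off.
    state["ears_on"] = not state["ears_on"]

def on_right_bumper(state):
    # The right bumper toggles the eye LEDs between color A and color B.
    state["eye_color"] = "B" if state["eye_color"] == "A" else "A"

on_left_bumper(state)   # ears on,  eyes A
on_right_bumper(state)  # ears on,  eyes B
print(state)            # -> {'ears_on': True, 'eye_color': 'B'}
```

Each bumper press is an event that moves the machine to exactly one neighboring state, just as in the box diagram.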
Intermediate Task: Reading Raw Data From the Sensors
In this lab, we will learn how to observe raw sensor values from the robot, both within Choregraphe and within the Monitor desktop program.
1. Within Choregraphe, click View > Memory Watcher on the pull-down menu. A window or extra tab titled “Memory Watcher” should appear in the bottom center of the main window. This is where we can observe some of the sensors and inputs stored in the NAO’s memory.
2. Click on the Memory Watcher tab. The window should be blank except for a short string that reads “<select memory keys to watch>”. Double-click on this value, and a dialog window will pop up.
3. In the dialog window, check “View Devices” at the bottom. Expand the sub-folders (Device, SubDeviceList, InertialSensor, ...) until you can check “Device/SubDeviceList/InertialSensor/AccX/Sensor/Value”, along with “.../AccY/Sensor/Value” and “.../AccZ/Sensor/Value”, as shown below. These are the readings of the accelerometer (a sensor that detects acceleration and tilt; see the sensor section) along the x, y and z axes. When you are done, click OK.
4. The three values will be shown in the Memory Watcher window. At the bottom, you can alter the update period. In addition, you can export this data to a .CSV file by clicking the “Start Recording” button:
5. We can also view and graph this data in the Monitor desktop program that comes packaged with the NAO. In the same folder as the Choregraphe application, open the Monitor application.
6. When it opens, choose the top “New configuration file” option.
7. A window will pop up displaying all the bits of data we can monitor. At the bottom of the window, click the checkbox titled “view devices”. Scroll down through the various selections and choose the same three devices as before: “Device/SubDeviceList/InertialSensor/AccX/Sensor/Value”, “.../AccY/Sensor/Value” and “.../AccZ/Sensor/Value”:
8. Save the selection as an .XML file, and the main viewing window will appear.
9. In the bottom left corner of the new window, check the boxes next to “Watch All” and “Graph All”. This will graph the variables we selected. Finally, change the Subscription Mode to “Every <nb> ms”. The default value in the dialog that pops up is fine. This is how often we refresh the sensor values in the graph.
10. Now the x, y, and z axis accelerometer readings will appear on the graph. Try turning the NAO in all directions, and determine which direction each of the three axes measures acceleration along.
11. (Optional) Examine some of the other sensor readings. Some good ones to try are any of the Device/SubDeviceList/JOINT/Position/Sensor/Value, which returns the measured joint angle, or Device/SubDeviceList/InertialSensor/(AngleX, AngleY, AngleZ, GyrX, or GyrY)/Sensor/Value. Try to figure out what these measure on your own.
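The same memory keys can also be read programmatically through ALMemory's getData call. The sketch below only assumes an object exposing getData; on a real robot that object would be ALProxy("ALMemory", robot_ip, 9559), while here a stand-in with canned values illustrates the call.

```python
# The accelerometer keys from the steps above.
ACC_KEYS = [
    "Device/SubDeviceList/InertialSensor/AccX/Sensor/Value",
    "Device/SubDeviceList/InertialSensor/AccY/Sensor/Value",
    "Device/SubDeviceList/InertialSensor/AccZ/Sensor/Value",
]

def read_accelerometer(memory):
    """Read the three accelerometer values from an ALMemory-like object."""
    return [memory.getData(key) for key in ACC_KEYS]

# On a real robot:  memory = ALProxy("ALMemory", robot_ip, 9559)
# For illustration, a stand-in with the same getData interface:
class FakeMemory:
    def __init__(self, values):
        self.values = values
    def getData(self, key):
        return self.values[key]

memory = FakeMemory(dict(zip(ACC_KEYS, [0.1, -0.2, -9.8])))
print(read_accelerometer(memory))  # -> [0.1, -0.2, -9.8]
```

Polling these keys in a loop gives you the same stream of numbers that the Memory Watcher and Monitor display.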
Sensors and Actuators
Besides the sensors described earlier, the NAO also has an internal gyroscope and accelerometer inside its torso. The internal gyroscope and accelerometer function like the inner ear, which provides a sense of balance for humans. The gyroscope measures angular velocity (how fast the robot is turning). The accelerometer measures acceleration (which way gravity is pointing). Together, the gyroscope and accelerometer can tell the NAO if it is upright, lying on its back, or lying on its front. Additionally, the two sensors let the NAO know if it is falling down, so it can brace itself for a fall with its arms.
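As a rough illustration of how the accelerometer reveals posture, the sketch below checks which body axis the gravity vector is mostly aligned with. The axis convention assumed here (x roughly forward, z roughly up along the torso) is for illustration only; use the readings from the previous task to confirm how your robot's axes are actually oriented.

```python
def dominant_axis(ax, ay, az):
    # Return the body axis that gravity (the largest reading, roughly
    # 9.8 m/s^2 when the robot is still) is mostly aligned with.
    mags = {"x": abs(ax), "y": abs(ay), "z": abs(az)}
    return max(mags, key=mags.get)

# Standing upright: gravity lies mostly along the torso's vertical axis.
print(dominant_axis(0.3, -0.1, -9.8))  # -> z
# Lying on its back or front: gravity lies along the forward axis.
print(dominant_axis(9.7, 0.2, 0.5))    # -> x
```

A falling robot shows up differently: during free fall the total measured acceleration drops well below 9.8 m/s^2, which is one cue the NAO can use to brace itself.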
A robot's actuators are the devices that produce motion; on the NAO, these are its joint motors. The NAO has 21 different motors that can be controlled separately. There are two motors on its head/neck, two for each shoulder, two for each elbow, five for the hips, one for each knee, and two for each ankle. In Module 4, we moved many of these motors to make the NAO dance.
Each motor on the NAO is coupled with a sensor, called an encoder. This sensor measures how far each motor has turned. This is known as the motor’s angle of rotation. For example, the sensor on the elbow joint is able to tell if the arm is straight or bent at an angle.
Using the angles of rotation of its joints, the NAO is aware of the pose of its entire body. For example, the NAO can calculate how far the hand is from the head. It does so using a kinematic chain, which essentially uses trigonometry to calculate relative positions of joints. In the figure below, we show a two-link arm with an elbow joint. Using the length of the arms and the angle of the elbow, we can calculate the position of the end-effector (the wrist) relative to the base (the shoulder).
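The two-link calculation in the figure can be written out directly. This is a minimal planar sketch, assuming link lengths in meters and angles in radians, with the shoulder angle measured from the x-axis and the elbow angle measured relative to the upper arm.

```python
import math

def end_effector(l1, l2, shoulder_angle, elbow_angle):
    """Planar forward kinematics for a two-link arm.

    Returns the (x, y) position of the end-effector (the wrist)
    relative to the base (the shoulder)."""
    x = l1 * math.cos(shoulder_angle) + l2 * math.cos(shoulder_angle + elbow_angle)
    y = l1 * math.sin(shoulder_angle) + l2 * math.sin(shoulder_angle + elbow_angle)
    return x, y

# A straight arm (elbow at 0) held along the x-axis reaches l1 + l2:
print(end_effector(0.10, 0.10, 0.0, 0.0))  # -> (0.2, 0.0)
# Bending the elbow 90 degrees folds the forearm upward:
print(end_effector(0.10, 0.10, 0.0, math.pi / 2))
```

Chaining this calculation joint by joint, from the torso out to each hand and foot, is exactly what a kinematic chain does for the full body.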
Advanced Task: A Bright Idea
In a previous module, we used the foot bumpers to toggle between four states. Here we’ll use the three buttons on NAO’s head to toggle between eight states. Things are becoming unwieldy using Choregraphe boxes, so we’ll switch to using Python. Each of the three head buttons will control one of the three color components: red, green and blue. By combining these colors, we can make others. For example, red and blue together will make purple, and red and green make yellow.
1. First, add a new Python box to blend the lights. Add three “bang” inputs, named “red”, “green”, and “blue”.
2. Add a Tactile Head box (in the Sensors category), and connect the boxes as shown below. The three outputs of the Tactile Head box trigger when the front, middle and rear buttons on the head are touched.
3. Double-click on your custom box to edit its source code. Here you need to write Python code that lights the eye LEDs based on which tactile head sensor triggered the input.
Hint: The colors are set using 24-bit RGB values. The lowest eight bits are the blue component, the next eight bits are the green component, and the top eight bits are the red component. So, written in hexadecimal, 0xFF0000 is red, 0x00FF00 is green, and 0x0000FF is blue.
4. Now run the behavior. Experiment with all eight different color combinations.
5. (Optional) Draw the finite state machine to describe the behavior we just created. How many states are there? What are the transitions between the states?
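The hint above can be checked with a little bit arithmetic. A sketch of packing the three 8-bit components into a single 0xRRGGBB value:

```python
def rgb(red, green, blue):
    # Pack three 8-bit components into one 24-bit 0xRRGGBB integer:
    # red in the top byte, green in the middle, blue in the bottom.
    return (red << 16) | (green << 8) | blue

print(hex(rgb(0xFF, 0x00, 0x00)))  # -> 0xff0000  (red)
print(hex(rgb(0xFF, 0x00, 0xFF)))  # -> 0xff00ff  (red + blue = purple)
print(hex(rgb(0xFF, 0xFF, 0x00)))  # -> 0xffff00  (red + green = yellow)
```

With each head button toggling one component between 0x00 and 0xFF, the three on/off choices combine into the eight colors of this task.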
Advanced Task: Mirror, Mirror on the Wall
Another set of sensors on the NAO, often overlooked, is its encoders. These sensors measure the angles of all of the NAO’s joints. In this task, we will disable stiffness on one of the NAO’s arms so that it can be moved freely. Then we will use the encoders to read the positions of that arm’s joints, and mirror those positions on the NAO’s other arm. In this way, moving one arm makes the other arm follow.
1. Create a Bumpers box, and connect it to a custom Mirror Joints (Python) box as shown below. Hitting the left bumper will start the behavior, and hitting the right bumper will stop it.
2. Write Python code that makes the right arm stiff and the left arm (the mirroring arm) unstiff. Your code needs to read the joint values of the right arm and set the left arm's joints to matching values, so that the left arm moves in a way that mirrors the right arm.
3. Run the behavior. Press the left foot bumper, and move the robot’s right arm around. Check that the other arm mirrors the same position.
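The heart of the mirroring code is the mapping from right-arm angles to left-arm angles: pitch joints copy over directly, while roll and yaw joints flip sign because the two arms are mirror images. The sketch below shows only that mapping as a pure function, with the joint list shortened to the first four arm joints for illustration; on the robot, the angles would come from ALMotion's getAngles("RArm", True) and be applied with setAngles.

```python
# Joint names follow NAO's naming convention (shortened list).
RIGHT_ARM = ["RShoulderPitch", "RShoulderRoll", "RElbowYaw", "RElbowRoll"]

def mirror_to_left(right_angles):
    """Map right-arm joint angles (radians) to mirrored left-arm angles.

    Pitch joints copy directly; roll and yaw joints flip sign."""
    mirrored = []
    for name, angle in zip(RIGHT_ARM, right_angles):
        if "Roll" in name or "Yaw" in name:
            mirrored.append(-angle)
        else:
            mirrored.append(angle)
    return mirrored

print(mirror_to_left([0.5, -0.3, 1.2, 0.8]))  # -> [0.5, 0.3, -1.2, -0.8]
```

Calling this in a loop while the left arm is unstiffened and the right arm is stiff gives the mirroring behavior described in step 2.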
- Implement the eye blending task (A Bright Idea) using a finite state machine composed of Choregraphe boxes. Compare the difficulty of adding additional states using boxes versus Python code.
- Building on the arm mirroring task, make it so tapping the head switches which arm is being mirrored.
Hint: this will require an additional input and calls to setStiffnesses. Also, the state of the behavior will now have two features: whether or not the behavior is active, and which arm is being mirrored.
- Start with the eye color blending task, and make it so you control the intensity of each color using the foot bumpers (so don’t always use 0xFF as the component). Make one bumper toggle between colors, and show the color component being modified on the feet LEDs. When the other foot bumper is pressed, increment the current color component by 0x10. Be sure to handle wrap-around properly: no color component can exceed 0xFF.
- Make the robot’s head track its own hand. Disable stiffness on the arms, and get the hand and head positions with the ALMotion method getPosition. Using setAngles, make the NAO’s head look in the direction of the vector from the head to the hand.
(Optional) Switch which hand the NAO looks at when the head tactile sensor is touched.
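Two of the challenge tasks above reduce to small pieces of math that can be sketched in isolation: incrementing a color component with wrap-around, and turning a head-to-hand vector into yaw and pitch angles. The axis convention assumed in look_at (x forward, y left, z up) and the pitch sign are assumptions to be checked against the frame that getPosition returns on your robot.

```python
import math

def bump_component(value, step=0x10):
    # Increment an 8-bit color component, wrapping around past 0xFF
    # so the result never exceeds 0xFF.
    return (value + step) % 0x100

def look_at(head_pos, hand_pos):
    """Yaw and pitch (radians) of the vector from head to hand.

    Assumes x forward, y left, z up; positive pitch looks down.
    The sign conventions may need flipping for your robot."""
    dx = hand_pos[0] - head_pos[0]
    dy = hand_pos[1] - head_pos[1]
    dz = hand_pos[2] - head_pos[2]
    yaw = math.atan2(dy, dx)
    pitch = -math.atan2(dz, math.hypot(dx, dy))
    return yaw, pitch

print(hex(bump_component(0xF0)))  # -> 0x0  (0xF0 + 0x10 wraps to 0x00)
print(look_at((0, 0, 0.5), (0.2, 0.2, 0.5)))  # yaw ~ pi/4, pitch ~ 0
```

Feeding the yaw and pitch from look_at into setAngles for HeadYaw and HeadPitch is the core of the head-tracking task.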