6.01 Fall 2015

Design Lab 13: A Noisy Noise Annoys an Oyster


The questions below are due on Thursday December 03, 2015; 09:55:00 PM.

Goals: To boldly go where no robot has gone before

1) Getting Started
Do this lab with your assigned partner. You and your partner will need a
laptop that reliably runs lib601 and soar. This lab's soar brain will not run
reliably on Windows, and it is iffy on Mac, so you are encouraged to use a lab
laptop or to access the software remotely (see the instructions on the
"Reference Material" page) for the portions of the lab involving simulation.
You will need a lab laptop to control the physical robot.
There is no new code for this lab, so you should use your code from
Software Lab 13.

2) Introduction
In Software Lab 13, we programmed the robot to navigate to a goal point in
an unknown environment, building a map of that environment along the way. In
that lab, though, we assumed that the robot's sonar and odometry were
perfect. We know that this is not the case with the real robot, and so we
would like to make our soar brain robust against erroneous sonar readings.
Let's see what happens when we introduce a little bit of noise. To do this, set
NOISE_ON to True, which adds some noise to the robot's sonar readings.
Note that this means that there is some small chance of observing a wall
where there is none, and some small chance of not observing a wall where
there is one.
To start with today, we will run the simulation in bigFrustrationWorld. Set
THE_WORLD = bigFrustrationWorld in mazeBrain.py, and make sure
you have set NOISE_ON = True. Try running the soar simulation with noise
turned on. How does your robot perform with noise?
Think about how you could model the effects of sensor noise from a
Bayesian perspective: rather than being definitively occupied or not,
each cell has some probability of being occupied. We can then treat one
sonar reading as giving us an observation for each of the cells along the ray
from the sonar sensor to the point where the reading terminates, and we can
use the same observation model for each of those cells.
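As a concrete illustration, here is a minimal sketch of such an update for a single cell, assuming each cell's belief is stored as one floating-point probability of being occupied; the hit and false-alarm probabilities below are illustrative assumptions, not values prescribed by the lab.

# Sketch only: Bayesian update of one cell's occupancy probability.
P_HIT_GIVEN_OCCUPIED = 0.9   # assumed Pr(sonar reports a wall | cell occupied)
P_HIT_GIVEN_CLEAR = 0.1      # assumed Pr(sonar reports a wall | cell clear)

def update_occupancy(p_occupied, saw_wall):
    # Likelihood of this observation under each hypothesis.
    if saw_wall:
        like_occ, like_clear = P_HIT_GIVEN_OCCUPIED, P_HIT_GIVEN_CLEAR
    else:
        like_occ, like_clear = 1.0 - P_HIT_GIVEN_OCCUPIED, 1.0 - P_HIT_GIVEN_CLEAR
    # Bayes' rule: posterior is proportional to likelihood times prior.
    numerator = like_occ * p_occupied
    return numerator / (numerator + like_clear * (1.0 - p_occupied))

# Example: starting from a uniform prior of 0.5, one "no wall seen" observation
# along the ray lowers the belief to 0.1; one "wall seen" observation raises it to 0.9.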

Check Yourself 1:
Assume that the robot gets a sonar reading of 0.7. What effect should
this have on our belief that there is a wall at 0.7? What effect should
this have on our belief that there is a wall at 0.3? What effect should
this have on our belief that there is a wall at 0.9?

Check Yourself 2:
Which of the following are valid ways to represent each cell's
probability of being occupied?
an instance of markov.StateEstimator
an instance of dist.DDist
a single floating-point number
a boolean
a single integer
a pair of integers
Which would you like to use to store your probabilities? How would
you do a Bayesian update with that representation?

Check Yourself 3:
What are the possible states in this domain? What are the
observations?
What is an appropriate observation model Pr(O_t | S_t)?
What is an appropriate transition model Pr(S_{t+1} | S_t)?
What is an appropriate initial belief Pr(S_0)?

Checkoff 1:
What happened when you turned the noise on? Discuss with a staff
member the changes you plan to make in order to get the robot to
solve the maze with noise.

Modify your controller so that, instead of being definitely occupied or not,
each cell has some probability of being occupied (or clear).
Be sure to update the probOccupied method in DynamicRobotMaze to
reflect the changes you made. This method should take a tuple (r, c) as
input and return the probability that the cell at (r, c) is occupied.
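If, for example, you store one floating-point probability per cell, probOccupied can reduce to a simple lookup. The sketch below assumes an illustrative attribute name (self.occProb) that is not part of the distributed code.

# Sketch only: assumes the maze keeps a grid of per-cell occupancy
# probabilities in self.occProb (illustrative name, not distributed code).
def probOccupied(self, indices):
    (r, c) = indices            # indices is a tuple (r, c)
    return self.occProb[r][c]   # current Pr(cell (r, c) is occupied)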
For debugging purposes, you can set the show_heatmap variable to True in
mazeBrain.py. This will cause a new window to be displayed containing a
visualization of each cell's probability of being occupied: if a cell is
highly likely to be clear, it will be white; if it is highly likely to be occupied, it
will be black. This display will slow things down, and so should only be used
for debugging.
Checkoff 2:
Demonstrate that the robot makes it to the goal in
bigFrustrationWorld with noise on. Discuss with a staff member
the changes you made. In particular, be prepared to discuss your
choices for initial belief, observation model, and transition model.

3) Go, Speed Racer!


Next, improve the controller to make it reach the goal point faster.
For the rest of the lab, we will be running our simulations in raceWorld.py.
To prepare to run in this environment, make the following changes:
Load raceWorld.py in the soar simulator.
Set THE_WORLD = raceWorld
Set NOISE_ON = True
Note that we are using a smaller grid size for this world (0.1 m instead of 0.2 m).

Check Yourself 4:
Run your code in raceWorld, with noise on. How long does it take
your robot to reach the goal?

4) Optimizations
Our goal for the rest of the lab is to make improvements to your controller to
allow it to more quickly navigate an unknown maze. How to do this is
completely up to you; try to think about places where you could make
changes (to the search, the mapmaking procedures, and/or the driver) to
speed up the controller. If you are having trouble thinking of improvements to
make, consider some of the following:
Whenever the robot needs to update its plan, the search procedure
tends to take a long while to run. Try to think of ways to update your
plan less often, and/or to speed up your search.
You may notice that the robot occasionally turns around and returns to
a previous point on its path. This happens because the robot keeps
moving while the search is being performed, even though soar's
graphics stop updating. By the time the search finishes, the robot has
moved, and must turn around to move back to where it was when it
started the search. There are many ways to solve this problem; some
may be easier or more effective than others.
The robot is currently using a proportional controller to move between
grid cells. Because the cells are so close to one another, this often
leads to a really jerky, stop-and-start motion. Changing the driver, or
doing some post-processing of the path, might help to solve this
problem.
The driver's top speed also tends not to get very high. It might be alright
to dial up the speed, but be careful; robots can be dangerous at high
speeds!
The robot spends a lot of its time turning in place. Think of ways you
could alter the driver, or the state space over which you are searching,
to reduce needless turning.


One slightly more sophisticated strategy would be to try to find a path
to the goal that has the least probability of containing a wall, rather than
making a binary decision about whether each cell contains a wall before
searching for a path. This strategy requires some care, since the
probability that a path is free of walls is the product of the individual
cells' probabilities of being clear, whereas the ucSearch function
computes a path's cost by adding step costs; one way to bridge this gap
is sketched below.
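Here is a minimal sketch of that idea, assuming probOccupied returns your current belief for a cell; the function name cell_cost and the clamping value eps are arbitrary illustrative choices.

import math

# Sketch only: turn per-cell clear probabilities into additive step costs.
# Minimizing the sum of -log Pr(clear) over a path is equivalent to
# maximizing the product of Pr(clear), i.e. the probability that the whole
# path is free of walls.
def cell_cost(maze, cell, eps=1e-6):
    p_clear = max(eps, 1.0 - maze.probOccupied(cell))   # clamp away from 0
    return -math.log(p_clear)                           # finite, nonnegative cost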
Two things are off limits: you may not change the noise model in the
simulator, and you may not change the check the robot uses to
determine whether it has reached the goal.

Check Yourself 5:
Run your updated controller in raceWorld, with noise on. How long
does it take your robot to reach the goal?

Paste the code from the soar brain (displayed on completion of the race)
below:

5) Really? Really.
Finally, we will run your code on the real robot. To prepare your code for
running on the real robot:
Set THE_WORLD = realRobotWorld
Disable noise (set NOISE_ON = False)
Change inp = io.SensorInput(cheat=True) to read inp =
io.SensorInput().
Replace the line about checkoff.getData in the brainStart
function with the following:
robot.startTime = time.time()
Replace the line about checkoff.generate_code in the brainStart
function with the following:
code = None
Test your robot in one of the small pens on the side of the room. Make sure
that one group member is always ready to pick the robot up, should it
try to run away!
Code that worked in the simulator might not work perfectly on the robot; be
sure to test and debug your code on the real robot before asking for a
checkoff.
Checkoff 3:
Demonstrate your working controller in one of the small pens on the
side of the room.

6) Search and Rescue


We can combine ideas from the different sections of 6.01 to recreate a
search-and-rescue operation. One scenario would be locating injured victims
after a natural disaster; imagine that we know that there is an injured person in
a particular room, and we want to send a robot to find that person.
We can approximate this scenario with an infrared light source (representing
the victim) in a maze (representing the home); we have two such mazes set
up on the sides of the room.
Our goal is to use the planning procedures from this lab to move the robot
toward a goal location until the robot's head detects a sufficiently bright light, at
which point it should switch over to the "pet robot" behavior and try to position
itself roughly 1 foot from the light source.
Grab a robot with a head attached. Find your light-tracking circuit from
Design Lab 8 [1], and make sure that it still works; hook it up to the robot's
head, and make sure that it still tracks the light.
Finally, implement the "search-and-rescue" behavior described above.
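As a rough illustration of the switching logic (not a complete brain), the fragment below assumes that the light-tracking circuit's output can be read from one of the robot's analog inputs through inp.analogInputs; the channel index, the threshold, and the navigate_step and rescue_step helpers are all hypothetical placeholders standing in for your own circuit and controllers.

# Sketch only: meant to live inside your soar brain, where io is already
# imported (as in mazeBrain.py).  The constants and the two helper
# functions are hypothetical placeholders.
LIGHT_CHANNEL = 0        # which analog input your circuit drives (assumption)
BRIGHT_ENOUGH = 5.0      # brightness threshold, in volts (assumption)
mode = 'navigate'        # start by planning toward the goal location

def step():
    global mode
    inp = io.SensorInput()
    if mode == 'navigate' and inp.analogInputs[LIGHT_CHANNEL] > BRIGHT_ENOUGH:
        mode = 'rescue'      # bright light detected: hand off to pet-robot behavior
    if mode == 'navigate':
        navigate_step(inp)   # your maze-navigation controller from this lab
    else:
        rescue_step(inp)     # your "pet robot" behavior, settling about 1 foot away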

Footnotes
[1] Or you can use a staff version of the circuit if you are unable to find yours.
