Levels of Motion Control

Controlling a mobile robot can be done at a number of levels
and ROS provides methods for most of them.
These levels represent different degrees of abstraction,
beginning with direct control of the motors
and proceeding upward to path planning
and SLAM (Simultaneous Localization and Mapping).

Motors, Wheels, and Encoders

Most differential drive robots running ROS use encoders
on the drive motors or wheels.
An encoder registers a certain number of ticks
(usually hundreds or even thousands) per revolution of the corresponding wheel.
Knowing the diameter of the wheels and the distance between them,
encoder ticks can be converted to the distance traveled in meters
or the angle rotated in radians.
To compute speed,
these values are simply divided by the time interval between measurements.
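The conversion described above is straightforward arithmetic. The sketch below assumes a hypothetical encoder resolution and wheel geometry (the constants are placeholders, not values for any particular robot):

```python
import math

TICKS_PER_REV = 1000        # encoder ticks per wheel revolution (assumed)
WHEEL_DIAMETER = 0.1        # wheel diameter in meters (assumed)
WHEEL_TRACK = 0.3           # distance between the wheels in meters (assumed)
METERS_PER_TICK = math.pi * WHEEL_DIAMETER / TICKS_PER_REV

def odometry_delta(left_ticks, right_ticks, dt):
    """Convert encoder ticks accumulated over dt seconds into
    distance traveled (m), angle rotated (rad), and the
    corresponding linear (m/s) and angular (rad/s) speeds."""
    d_left = left_ticks * METERS_PER_TICK
    d_right = right_ticks * METERS_PER_TICK
    distance = (d_left + d_right) / 2.0       # meters traveled by the center
    angle = (d_right - d_left) / WHEEL_TRACK  # radians rotated about the center
    return distance, angle, distance / dt, angle / dt
```

For example, equal tick counts on both wheels yield pure translation, while equal and opposite counts yield pure rotation in place.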

This internal motion data is collectively known as odometry
and ROS makes heavy use of it as we shall see.
It helps if your robot has accurate and reliable encoders
but wheel data can be augmented using other sources.
For example, the original TurtleBot uses a single-axis gyro
to provide an additional measure of the robot’s rotational motion since
the iRobot Create’s encoders are notably inaccurate during rotations.
It is important to keep in mind that no matter
how many sources of odometry data are used,
the actual position and speed of the robot in the world can
(and probably will)
differ from the values reported by the odometry.
The degree of discrepancy will vary depending on the environmental conditions
and the reliability of the odometry sources.

Motor Controllers and Drivers

At the lowest level of motion control we need a driver for the robot’s motor
controller that can turn the drive wheels at a desired speed,
usually using internal units such as encoder ticks per second
or a percentage of max speed.
With the exception of the Willow Garage PR2 and TurtleBot,
the core ROS packages do not include drivers for specific motor controllers.
However, a number of third-party ROS developers have published drivers
for some of the more popular controllers and/or robots
such as the ArbotiX, Serializer, Element, LEGO® NXT and Rovio.
(For a more complete list of supported platforms,
see Robots Using ROS.)

The ROS Base Controller

At the next level of abstraction,
the desired speed of the robot is specified in real-world units
such as meters and radians per second.
It is also common to employ some form of PID control.
PID stands for “Proportional Integral Derivative”
and is so-named because the control algorithm corrects the wheel velocities
based not only on the (proportional) difference between the actual
and desired velocity,
but also on the integral and derivative of that error over time.
You can learn more about PID control on Wikipedia.
For our purposes, we simply need to know that the controller
will do its best to move the robot in the way we have requested.
The driver and PID controller are usually combined inside a single ROS node
called the base controller.
The base controller must always run on a computer attached directly to
the motor controller
and is typically one of the first nodes launched when bringing up the robot.
A number of base controllers can also be simulated in Gazebo,
including the TurtleBot and Erratic.
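The PID correction described above can be sketched in a few lines of Python. This is purely illustrative, not the code inside any real base controller; the gains are placeholders, and a real controller would also clamp its output and guard against irregular timing:

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch only)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, target, actual, dt):
        """Return a correction given the desired and measured velocity."""
        error = target - actual
        self.integral += error * dt                    # accumulate the error
        if self.prev_error is None:
            derivative = 0.0                           # no history on first call
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The base controller runs a loop like this for each wheel, feeding the correction back into the motor command so the measured speed converges on the requested one.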

The base controller node typically publishes odometry data on the
/odom topic and listens for motion commands on the /cmd_vel topic.
At the same time, the controller node typically (but not always) publishes
a transform from the /odom frame to the base frame
either /base_link or /base_footprint.
We say “not always” because some robots, like the TurtleBot,
use the robot_pose_ekf
package to combine wheel odometry
and gyro data to get a more accurate estimate of the robot’s position
and orientation.
In this case, it is the robot_pose_ekf node that publishes
the transform from /odom to /base_footprint.
(The robot_pose_ekf package implements an Extended Kalman Filter as you can
read about on the Wiki page linked to above.)

Once we have a base controller for our robot,
ROS provides the tools we need to issue motion commands
either from the command line or by using other ROS nodes
to publish these commands based on a higher level plan.
At this level, it does not matter what hardware
we are using for our base controller:
our programming can focus purely on the desired linear and angular velocities
in real-world units and any code we write should work on
any base controller with a ROS interface.
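From the command line, for instance, a constant velocity command can be published with rostopic (this assumes a ROS 1 system with a running master and a base controller listening on /cmd_vel):

```shell
# Drive forward at 0.2 m/s while rotating at 0.5 rad/s,
# republishing the Twist message at 10 Hz:
rostopic pub -r 10 /cmd_vel geometry_msgs/Twist \
  '{linear: {x: 0.2, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.5}}'
```

The same Twist message can of course be published from a rospy or roscpp node instead of the command line.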

Frame-Based Motion using the move_base ROS Package

At the next level of abstraction,
ROS provides the move_base
package that allows us to specify a target position
and orientation of the robot with respect to some frame of reference;
move_base will then attempt to move the robot to the goal
while avoiding obstacles.
The move_base package is a very sophisticated path planner
and combines odometry data with both local and global cost maps
when selecting a path for the robot to follow.
It also controls the linear and angular velocities and accelerations
automatically based on the minimum and maximum values we set
in the configuration files.
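Sending a goal to move_base from a node is typically done through its actionlib interface. The sketch below is a minimal example, not production code; the action name 'move_base' and the 'map' frame are the conventional defaults, and the pure-math quaternion helper handles the planar orientation:

```python
import math

def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a rotation of yaw radians about the z axis."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def send_goal(x, y, yaw):
    """Ask move_base to drive to pose (x, y, yaw) in the map frame.
    Requires a running ROS 1 system, so the ROS imports live here."""
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    _, _, qz, qw = yaw_to_quaternion(yaw)
    goal.target_pose.pose.orientation.z = qz
    goal.target_pose.pose.orientation.w = qw

    client.send_goal(goal)       # move_base now plans and drives
    client.wait_for_result()
```

While the goal is active, move_base handles the path planning, obstacle avoidance, and velocity limits itself.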

SLAM using the gmapping and amcl ROS Packages

At an even higher level,
ROS enables our robot to create a map of its environment using
the SLAM gmapping package.
The mapping process works best using a laser scanner
but can also be done using a Kinect or Asus Xtion depth camera to provide
a simulated laser scan.
If you own a TurtleBot,
the TurtleBot stack includes all the tools you need to do SLAM.
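The exact launch files vary from robot to robot, but a bare-bones ROS 1 mapping session might look like the following, assuming a laser scan is already being published on the /scan topic:

```shell
# Run the gmapping SLAM node against the /scan topic:
rosrun gmapping slam_gmapping scan:=scan

# Drive the robot around the area, then save the finished map:
rosrun map_server map_saver -f my_map
```

The saved map (an image plus a YAML metadata file) is what amcl and move_base load for localization and planning.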

Once a map of the environment is available,
ROS provides the amcl package
(adaptive Monte Carlo localization) for automatically localizing the robot
based on its current scan and odometry data.
This allows the operator to point and click on any location
on a map and the robot will find its way there while avoiding obstacles.
(For a superb introduction to the mathematics underlying SLAM,
check out Sebastian Thrun’s online Artificial Intelligence
course on Udacity.)

Semantic Goals

Finally, at the highest level of abstraction,
motion goals are specified semantically such as
“go to the kitchen and bring me a beer”, or simply, “bring me a beer”.
In this case, the semantic goal must be parsed and translated
into a series of actions.
For actions requiring the robot to move to a particular location,
each location can be passed to the localization and path planning levels
for implementation.
While beyond the scope of this volume,
a number of ROS packages are available to help with this task
including smach, executive_teer, worldmodel, semantic_framer,
and knowrob.
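To make the parse-and-translate idea concrete, here is a toy sketch (nothing like the packages above, which are far more capable); the location names and coordinates are entirely hypothetical:

```python
# Hypothetical map: named places -> (x, y, yaw) poses for the path planner.
LOCATIONS = {
    'kitchen': (3.0, 1.5, 0.0),
    'living room': (0.0, 0.0, 3.14),
}

def plan_actions(goal_text):
    """Translate a semantic goal string into a list of (action, argument)
    steps that lower levels of the hierarchy could execute."""
    steps = []
    for place, pose in LOCATIONS.items():
        if place in goal_text:
            steps.append(('navigate_to', pose))   # handled by move_base/amcl
    if 'beer' in goal_text:
        steps.append(('grasp', 'beer'))           # handled by a manipulation stack
        steps.append(('navigate_to', 'operator')) # return to the requester
    return steps
```

Each 'navigate_to' step would be handed to the localization and path planning levels described earlier, while other steps go to whatever subsystems the robot has.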

In summary, our motion control hierarchy looks something like this:

-> amcl
-> path planner
-> move_base
-> /cmd_vel + /odom
-> base controller
-> motor speeds