collision detection, kinematic constraints) can be directly answered using kinematic simulation. Inclusion
of the dynamics becomes important when the engineer is interested in more detailed information such as
reaction forces, sizing motors/actuators, sensor selection, control design, estimating power consumption
and determining how contacting parts behave. We consider two approaches to simulation of the robot
dynamics. First is the canned approach that handles the computation of the dynamic equations of motion
based on a CAD model of the system. The second approach is based on using a time domain simulation
package such as Matlab's Simulink or MatrixX's SystemBuild®. The first approach is generally much easier
and provides sufficient information to answer fundamental questions. In addition, in many cases, it is
possible to integrate the dynamic simulation package with a control package such as Simulink or System-
Build. The second approach requires a deeper understanding of the system's dynamics but enables greater
flexibility in modeling the system (including control algorithms, sensory feedback, and arbitrary inputs).
21.3.4.1 Multibody Dynamic Packages
Two examples of multibody dynamic simulation packages are LMS International's DADS® (Dynamic
Analysis and Design System) and MSC Software's Adams. With both packages, you begin by either building
a parameterized model of the robot or importing the model from a CAD system, much like the simulation
packages described earlier. The software automatically formulates and solves the dynamic equations of
motion. Both MSC Adams and DADS have integration capabilities with control simulation software
such as Simulink and SystemBuild. The engineer can take the geometrically defined models and easily
incorporate them within block diagrams created with the control system software. It is then possible to
evaluate a wide class of mechanical and control designs and estimate the system's performance prior to
ever cutting metal.
SolidWorks provides an option for integrating the Adams simulation technology directly into the
SolidWorks environment. This interface, CosmosMotion®, is a physics-based motion simulation package
that is fully embedded in the SolidWorks environment. The part geometry and mass properties
are defined inside SolidWorks. CosmosMotion reads these characteristics, along with all of the defined
constraints. The engineer specifies joint motions or forces, and the software displays the resulting forces
and/or motion of the system based on fundamental physical principles. The advantage of this approach
is twofold. First, the design and simulation are embedded in the same interface. The user is not concerned
with file transfers or compatibility issues. Second, the same software package that the engineer uses to
design the robot is used to define the geometry and mass properties for the simulation. These advantages
come at a cost. A professional version of SolidWorks, at the time of this writing, is $4000. Likewise, the
CosmosMotion package is $6000.
Another option is to import the CAD model into a time domain simulation package such as Matlab
and Simulink. SimMechanics® provides a straightforward importation of SolidWorks CAD assemblies into
the Simulink time domain simulation environment. This approach makes it possible to simulate the
behavior of mechanical, control, and other dynamic systems. As an example, we constructed a simple
three-degrees-of-freedom manipulator in the SolidWorks environment. This model, shown in
Figure 21.9, consists of four parts: a base and three links. These four parts are combined in an assembly.
The assembly includes the ability to define constraints (what SolidWorks calls Mates) between
parts. In the case of the robot, which has three revolute joints, we require six constraints: three con-
straints forcing the joints on adjoining parts to be collinear and three constraints forcing each face on
adjoining parts to be as close as possible. Now, when moving a part, the whole assembly moves, il-
lustrating the constrained motion due to the robot's kinematics. It is also possible to include collision
detection and restrict motion when constraints are reached. At this point, SolidWorks is restricted to
constraint detection and ranges of motion. However, when defining parts, you can include not only
the geometry but the types of materials as well (specifically the material's density). SolidWorks, like
many CAD packages, includes the ability to compute, based upon the geometry and materials, the
resulting mass properties for each part (mass, location of center of mass, principal axes of inertia,
and moments of inertia).
21.4.1 Graphical Animation
There are a number of different ways to visually display robot motion. In this section, we will focus on
where to begin if we wish to develop a package similar to the systems covered in the previous section. Many
of these packages exploit the OpenGL interface. OpenGL was originally developed by Silicon Graphics, Inc.
(SGI) as a multi-purpose, platform-independent graphical API. OpenGL is a collection of approximately
150 distinct commands that you can use to specify objects and operations needed to produce interactive
three-dimensional applications. To provide a starting point, note that OpenGL applications are commonly
written in C++, an object-oriented language. To efficiently develop a robot simulation package, you develop
a set of classes that can construct simple objects such as spheres, cylinders, rectangles, and whatever basic
objects you need to define a robot. OpenGL uses a number of vector and matrix operations to define objects
in three-dimensional space. As a simple example, the following function would draw a cube.
void DrawCube(float x, float y, float z)
{
    glPushMatrix();
    glTranslatef(x, y, z);
    glBegin(GL_QUADS);   // four vertices per face; GL_POLYGON would fuse all faces into one polygon
    glVertex3f(0.0f, 0.0f, 0.0f);    // top face (y = 0)
    glVertex3f(0.0f, 0.0f, -1.0f);
    glVertex3f(-1.0f, 0.0f, -1.0f);
    glVertex3f(-1.0f, 0.0f, 0.0f);
    glVertex3f(0.0f, 0.0f, 0.0f);    // front face (z = 0)
    glVertex3f(-1.0f, 0.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f(0.0f, -1.0f, 0.0f);
    glVertex3f(0.0f, 0.0f, 0.0f);    // right face (x = 0)
    glVertex3f(0.0f, -1.0f, 0.0f);
    glVertex3f(0.0f, -1.0f, -1.0f);
    glVertex3f(0.0f, 0.0f, -1.0f);
    glVertex3f(-1.0f, 0.0f, 0.0f);   // left face (x = -1)
    glVertex3f(-1.0f, 0.0f, -1.0f);
    glVertex3f(-1.0f, -1.0f, -1.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f(0.0f, -1.0f, 0.0f);   // bottom face (y = -1)
    glVertex3f(0.0f, -1.0f, -1.0f);
    glVertex3f(-1.0f, -1.0f, -1.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f(0.0f, 0.0f, -1.0f);   // back face (z = -1)
    glVertex3f(-1.0f, 0.0f, -1.0f);
    glVertex3f(-1.0f, -1.0f, -1.0f);
    glVertex3f(0.0f, -1.0f, -1.0f);
    glEnd();
    glPopMatrix();
}
OpenGL uses what is called a matrix stack to construct complicated models from many simple objects. In
the case of the cube, the function first saves the current coordinate frame by pushing a copy of it onto the
stack, so that glPopMatrix() can restore it when the function finishes. Then the function glTranslate() performs
a translation on the coordinate frame. Likewise, there are functions such as glRotate() and glScale() that
provide the ability to rotate and scale the object. The glBegin() and glEnd() functions bound the definition
of the vertices, specified with the glVertex3f() function, of the object.
Your robot will consist of a number of these primitives, thus the motivation for object-oriented pro-
gramming. Animation of motion will require the coordination of these primitives. It is now clear that a
visual robot simulation program requires a combination of programming for animation (using utilities
such as OpenGL) and an understanding of robot kinematics. There are a number of references, includ-
ing this handbook, that provide excellent guidelines for writing the algorithms necessary for providing
FIGURE 21.10 OpenSim simulation of vehicle and arm (courtesy of Oak Ridge National Laboratory).
coordinated motion of a robot (Asada and Slotine, 1989; Craig, 1989; Spong and Vidyasagar, 1989; and
Yoshikawa, 1990). Basically, you will have some form of interface that provides a desired motion for your
robot. This can be from a script file, mouse motion, slider bars on a dialog box, or any basic form of input
into your application. The kinematic routines will transform the desired motion of the robot into the
required transformations for each primitive or link on your robot. The results of these transformations
are passed to functions such as glTranslate() and glRotate() to provide the animation of the coordinated
motion.
Clearly, the above description is not sufficient to enable you to immediately start writing software for
robot animation. However, it does provide a good starting point. There are many examples where research
groups have used OpenGL to develop their own robot simulation packages.7,8 In addition, there are a
number of good references on programming in OpenGL (Hawkins and Astle, 2001; Woo et al., 1999).
These references, combined with the above robotics references, provide guidance on exactly how to write
software that animates objects, for example, the robot shown in Figure 21.10.
21.4.2 Numerical Simulation
In Section 21.3, we briefly described the utility of programs such as Simulink and SystemBuild for simulating
the motion of a robotic system. However, this approach required the use of a secondary software package
for computing the dynamic equations of motion. In this final section, we will describe a simple procedure
for computing the equations of motion for a serial link manipulator and embedding the dynamics in a
Simulink simulation. The basic methodology requires the use of a symbolic software package and a deeper
understanding of dynamics. The approach is based on using homogeneous transforms to describe key
points on the robot: joint locations and centers of mass. Using the transformations, we construct linear
and angular velocity vectors about each center of mass. The displacement terms are used to construct a
potential energy term while the velocity terms are used in the computation of the kinetic energy. We use the
Lagrange approach to compute the resulting equations of motion. We then describe how to include joint
and tip forces as well as how to partition the equations so that they are suitable for any numerical integration
routine. While this approach is presently limited to serial link manipulators, it should give the reader a
better understanding of the mechanics associated with simulation of robot dynamics.

7 ORNL's simulation package.
8 Georgia Tech's simulation package.
Two primary elements in any time domain simulation consist of the state equations and an integration
routine. The state equations are derived using basic principles in dynamics. Our objective is to formulate
the dynamics into a general form, given in Equation (21.1), where x̄ represents a vector of states for the
system and ū is a vector of control inputs. In the case of a robot, the vector x̄ could represent the positions
and velocities of each of the joints. The vector ū would contain the joint forces due to the actuators driving
the robot. The equation is derived from the system's dynamic equations of motion.

$$
\dot{\bar{x}} = F(\bar{x}, \bar{u}, t) \qquad (21.1)
$$
During each time step of the simulation, we compute the forces driving the robot, possibly due to a
control law, and compute the derivative of the state using Equation (21.1). We then use an integration
routine to compute the state of the system, x̄, for the next time step. The basic elements are shown in
Figure 21.11.
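To see how little scaffolding the loop of Figure 21.11 actually requires, consider the following Matlab
sketch using a fixed-step forward Euler integrator. Here stateFcn and ctrlFcn are hypothetical placeholders
for Equation (21.1) and the control law; they are not functions from any particular package.

% Fixed-step time domain simulation loop (sketch). stateFcn implements
% Equation (21.1), xdot = F(x,u,t); ctrlFcn computes the actuator inputs.
Ts = 0.001;                      % integration time step (sec)
t  = 0:Ts:10;                    % simulation horizon
x  = zeros(4, numel(t));         % states, e.g., [q1; q2; q1_dot; q2_dot]
for k = 1:numel(t)-1
    u  = ctrlFcn(x(:,k), t(k));        % joint forces from the control law
    xd = stateFcn(x(:,k), u, t(k));    % derivative of the state
    x(:,k+1) = x(:,k) + Ts*xd;         % forward Euler integration step
end

In practice a higher-order routine (e.g., Runge-Kutta) or one of Simulink's built-in solvers would replace
the Euler step, but the structure is the same.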
Clearly, the computation of the state equations requires quite a bit of effort. The following section
provides a basic procedure for computing the dynamic equation of motion, in the state equation format
described above, for a general serial link manipulator. The approach is based on computing, symbolically,
the position and orientation of each joint and center of mass of the robot. While not necessary, we
use the Denavit-Hartenberg approach. First, as a review, the homogeneous transform is expressed using
the traditional Denavit-Hartenberg (D-H) representation found in most robotics texts, where the four
quantities θ_i (angle), α_i (twist), d_i (offset), and a_i (length) are the parameters of link and joint i.
$$
H_i = \begin{bmatrix}
c_{\theta_i} & -s_{\theta_i} c_{\alpha_i} & s_{\theta_i} s_{\alpha_i} & a_i c_{\theta_i} \\
s_{\theta_i} & c_{\theta_i} c_{\alpha_i} & -c_{\theta_i} s_{\alpha_i} & a_i s_{\theta_i} \\
0 & s_{\alpha_i} & c_{\alpha_i} & d_i \\
0 & 0 & 0 & 1
\end{bmatrix} \qquad (21.2)
$$
The conventional use of the homogeneous transform treats each subsequent transformation as a body
fixed rotation and translation.
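Equation (21.2) translates directly into a small Matlab function; this sketch simply packages the
H-matrix construction that is written out repeatedly in the Appendix A script.

function H = dh_transform(thi, alfai, di, ai)
% Homogeneous transform of Equation (21.2) from the D-H parameters
% thi (angle), alfai (twist), di (offset), and ai (length).
H = [cos(thi) -sin(thi)*cos(alfai)  sin(thi)*sin(alfai) ai*cos(thi);
     sin(thi)  cos(thi)*cos(alfai) -cos(thi)*sin(alfai) ai*sin(thi);
     0         sin(alfai)           cos(alfai)          di;
     0         0                    0                   1];
end

Chaining such transforms, e.g., H = dh_transform(0,pi/2,0,0)*dh_transform(th1,0,0,L1), gives the pose
of any frame of interest, exactly as done symbolically in Appendix A.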
We also assume that we can define an additional set of homogeneous transforms from each joint to
a point on each link where the associated mass properties (mass and inertia matrix) are known. So,
our basic methodology consists of using two sets of homogeneous transformations: one to describe the
pose of the arm and one to identify the displacements and velocities, both translational and rotational, of
the center of mass of each link and payload with respect to the manipulator's state (joint position and
velocity). We extract from the transforms the vertical displacement of the center of mass of each link for
an expression of the total potential energy of the system. Likewise, computation of the system's kinetic
energy is based on computing the linear and angular velocity of each link's center of gravity with
respect to the inertial frame. Once the kinetic and potential energy terms are derived, we simply use
the Matlab jacobian() command to symbolically calculate the mass matrix and nonlinear dynamic
terms following the Lagrange formulation. Symbolic packages such as Maple and Mathematica do all the
rest.

FIGURE 21.11 General simulation block diagram.
First, the position of the center of mass for each link, with respect to the system’s inertial coordinate
system, is computed by iteratively postmultiplying each of the homogeneous transforms, starting from
the base and working to the tip of the robot.
$$
H^{i}_{\mathrm{base}} = H^{i-1}_{\mathrm{base}} H^{i}_{i-1}
= \begin{bmatrix} R^{i-1}_{\mathrm{base}} & \bar{x}^{\,i-1}_{\mathrm{base}} \\ \bar{0} & 1 \end{bmatrix}
\begin{bmatrix} R^{i}_{i-1} & \bar{x}^{\,i}_{i-1} \\ \bar{0} & 1 \end{bmatrix}
= \begin{bmatrix} R^{i}_{\mathrm{base}} & \bar{x}^{\,i}_{\mathrm{base}} \\ \bar{0} & 1 \end{bmatrix},
\qquad
\bar{x}^{\,i}_{\mathrm{base}} = \begin{bmatrix} x^{i}_{\mathrm{base}} \\ y^{i}_{\mathrm{base}} \\ z^{i}_{\mathrm{base}} \end{bmatrix}
\qquad (21.3)
$$
The potential energy due to gravity for link i is the vertical component (z^i_base, in the direction of
gravity) of x̄^i_base times the mass of the link.

$$
V_i = m_i\, g\, z^{i}_{\mathrm{base}} \qquad (21.4)
$$
To compute the kinetic energy, we must first derive expressions for the linear and angular velocity of the
center of gravity of each link as a function of the states of the manipulator. We have an expression,
x̄^i_base in Equation (21.3), for the position of the center of gravity of link i with respect to the base
inertial frame. The velocity vector v̄_i is computed by multiplying the Jacobian of x̄^i_base with respect
to the combined states of the manipulator q̄^i_base, J(x̄^i_base, q̄), by the state velocity vector (the time
derivative of q̄). Note from Equation (21.5), based on the chain rule, that q̄^i_base is a vector of robot
joint displacements from the base of the robot (joint q_0) to the ith joint (q_i).

$$
\bar{v}_i = \frac{\partial \bar{x}_i}{\partial t}
= \sum_{j=1}^{i} \frac{\partial \bar{x}_i}{\partial q_j}\,\dot{q}_j
= J(\bar{x}_i, \bar{q})\,\dot{\bar{q}},
\qquad \text{where } \bar{q} = [\,q_0 \;\; q_1 \;\; \cdots \;\; q_i\,] \qquad (21.5)
$$
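In Matlab's Symbolic Math Toolbox this chain rule is a single line; for example, for the symbolic
center-of-mass position R2c and the state vectors q and qd defined in the Appendix A script:

% Velocity of a center of mass via Equation (21.5):
% V = dR/dt = (dR/dq) * (dq/dt), evaluated symbolically.
V2c = jacobian(R2c, q) * qd;   % q: joint position symbols, qd: joint rates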
The rotational velocity is a little more involved but can be simplified by again using the homogeneous
transform. Starting at the base of the robot, project the net rotational velocity vector forward to the center
of mass of each link using the rotation matrix R^i_base. Each subsequent joint consists of projecting the total
angular velocity vector of the previous joint into the current joint's coordinate system, using the rotational
component of that joint's homogeneous transform, and adding the joint angular velocity.
$$
\bar{\omega}_i = \dot{\bar{q}}_i + R^{i}_{i-1}\,\bar{\omega}_{i-1} \qquad (21.6)
$$
We now have expressions for the linear and angular velocity of the center of mass of each link. The
total kinetic energy of the system is

$$
T = \frac{1}{2} \sum_{i=1}^{N} \left( m_i\, \bar{v}_i^{\,T} \bar{v}_i + \bar{\omega}_i^{\,T} [I_i]\, \bar{\omega}_i \right) \qquad (21.7)
$$
where m_i is the mass of link i and I_i is the inertia matrix of link i about the center of gravity. As a final
step, we add external forces applied to the system. For now, we assume forces are applied only to the joints
and tip of the robot. We use the principle of virtual work to lump these terms together.
$$
\partial W = \sum_{i=1}^{N} \tau_i\, \partial q_i
+ \bar{F}_{\mathrm{tip}}\, \partial \bar{x}_{\mathrm{tip}}
+ \bar{M}_{\mathrm{tip}}\, \partial \theta_{\mathrm{tip}}
= \left( \bar{\tau} + J^{T}_{\mathrm{tip}}(\bar{q})
\begin{bmatrix} \bar{F}_{\mathrm{tip}} \\ \bar{M}_{\mathrm{tip}} \end{bmatrix} \right) \partial \bar{q}
= \bar{Q}\, \partial \bar{q} \qquad (21.8)
$$

$$
J_{\mathrm{tip}}(\bar{q}) = \frac{\partial\, \bar{x}^{\mathrm{tip}}_{\mathrm{base}}}{\partial \bar{q}}
$$
Equation (21.4) and Equation (21.7) provide expressions for the kinetic and potential energy of the
system. We start with the classic definition of the Lagrange equations of motion.
$$
\frac{\partial}{\partial t} \left( \frac{\partial T}{\partial \dot{\bar{q}}} \right)
- \frac{\partial T}{\partial \bar{q}}
+ \frac{\partial V}{\partial \bar{q}} = \bar{Q} \qquad (21.9)
$$
The first term in Equation (21.9) can be expanded using the chain rule.
$$
\frac{\partial}{\partial t}(\,\cdot\,)
= \frac{\partial (\,\cdot\,)}{\partial \bar{q}} \frac{\partial \bar{q}}{\partial t}
+ \frac{\partial (\,\cdot\,)}{\partial \dot{\bar{q}}} \frac{\partial \dot{\bar{q}}}{\partial t} \qquad (21.10)
$$
Substituting Equation (21.10) into Equation (21.9),
$$
\frac{\partial}{\partial \dot{\bar{q}}} \left( \frac{\partial T}{\partial \dot{\bar{q}}} \right) \ddot{\bar{q}}
+ \frac{\partial}{\partial \bar{q}} \left( \frac{\partial T}{\partial \dot{\bar{q}}} \right) \dot{\bar{q}}
- \frac{\partial T}{\partial \bar{q}}
+ \frac{\partial V}{\partial \bar{q}} = \bar{Q} \qquad (21.11)
$$
As with the velocity computation in Equation (21.5), we can exploit the jacobian() function in
Matlab for the evaluation of many of the terms in Equation (21.11). First, the term ∂T/∂q̇̄ is the differential
of the scalar kinetic energy term with respect to the full state velocity vector defined in Equation (21.5). This
results in the vector L_v in the script files shown below. The first term in Equation (21.11) represents the
mass matrix. This expression is easily computed by taking the Jacobian of L_v with respect to the full state
velocity vector. The remaining elements in Equation (21.11) represent the nonlinear dynamics (Coriolis,
centripetal, gravitational) of the system. Thus, it should be clear that once the kinetic and potential energy
terms are defined, it is straightforward to symbolically evaluate the dynamic equations. The Jacobian for
projecting external forces to the generalized coordinates can similarly be computed using the tip position
of the robot and the Matlab jacobian() function.
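These steps appear in full in Appendix A; condensed to the fixed-base case (the sea-state terms of the
appendix omitted), they amount to:

% Symbolic evaluation of the terms of Equation (21.11).
dT_qdot    = jacobian(T, qd);          % dT/dq_dot, the vector L_v above
MassMatrix = jacobian(dT_qdot, qd);    % first term: the mass matrix
NLT = jacobian(dT_qdot, q)*qd ...      % remaining nonlinear terms:
    - transpose(jacobian(T, q)) ...    %   coriolis/centripetal
    + transpose(jacobian(V, q));       %   and gravitational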
This basic methodology was developed to compute the dynamics of a class of robots operating on
the deck of a ship (a moving platform). We provide two examples to verify this methodology: a simple
one-degree-of-freedom (dof) system operating on a three-dof moving platform and a three-dof system
experiencing all six degrees of motion from the sea state. The first example is simple enough to verify
through hand calculations. The second example is more complex, yet practical. Figure 21.12 shows the
basic kinematic model of the one-degree-of-freedom system experiencing three sea states in the X-Y plane.
We assume a one-dof system with mass M and rotary inertia I_z located at the tip of a link of length L.
The system experiences only three of the six sea states: surge (x_s), heave (y_s), and pitch (θ_s). The only
external force applied to the system is a joint torque τ, applied at joint 1. Appendix A shows the listing of
Matlab code used to generate the dynamic equations of motion. The two output variables of interest are
the MassMatrix and NLT (nonlinear terms). The resulting output is listed in Equation (21.12) and can be
verified by hand.
FIGURE 21.13 Force controlled hydraulic manipulator (courtesy Oak Ridge National Laboratory).
and acceleration). This expression is in the same form as Equation (21.1).
$$
\ddot{\bar{q}}_r = M_{rr}^{-1} \left\{ \bar{Q}_r + J^{T}(\bar{q}_r)\, \bar{F}_{\mathrm{ext}}
- \mathrm{NLT}_r - M_{rs}\, \ddot{\bar{q}}_s \right\} \qquad (21.14)
$$
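Numerically, each simulation step then reduces to a linear solve. A sketch follows; the variable names
simply mirror the symbols of Equation (21.14) and are illustrative, not from a particular package.

% Forward dynamics of Equation (21.14): accelerations of the robot joints.
% Mrr, Mrs, NLTr come from evaluating the symbolic expressions; Qr holds
% the joint torques, Fext the external tip force, qsdd the sea-state
% accelerations, and J the tip Jacobian.
qrdd = Mrr \ (Qr + J'*Fext - NLTr - Mrs*qsdd);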
While the output of the single-degree-of-freedom case can be listed in Equation (21.12), the dynamic
equations of motion for the second system generate 84 pages of C code and would require
considerable effort to derive by hand. Fortunately, the code can be directly imported into Simulink through
the S-Function Builder. The motivation for computing the dynamic equations of motion is threefold.
First, by having the dynamics in symbolic form, it is possible to aid the design process, changing
parameters to optimize the system. Second, a model of the system dynamics can aid in increasing the
fidelity of simulation for control design and analysis. Finally, there is no doubt about how the simulation
derived the system's equations of motion.
The system modeled in this investigation is displayed in Figure 21.13. This system has a 500-lb payload
capacity and has three active degrees of freedom. The actuator models include nonlinear dynamic modeling
of the hydraulic system (servo-valve orifice equations, asymmetric cylinders, fluid compliance), controls,
and the dynamic equations of motion computed above.
The hydraulic actuator models generate force as a function of the servovalve current, actuator position,
and velocity. The Simulink model of the full system, including the sea state inputs, hydraulic models,
controls, and dynamic equations of motion, is shown in Figure 21.14.
FIGURE 21.14 HAT simulation model (courtesy Oak Ridge National Laboratory).
FIGURE 21.15 Details of HAT controller and manipulator model.
The nonlinear model of the hydraulic manipulator, shown as the LDRD arm block in Figure 21.14,
is expanded in Figure 21.15. The inputs to the model include the desired joint positions, the eighteen
elements of the sea state (displacement, velocity, and acceleration of roll, pitch, yaw, heave, surge, and
sway), and the external force applied to the robot. In addition, this model includes auxiliary inputs for
system identification.
There are two primary blocks to note in Figure 21.15. The first is the system dynamics block. This
contains the C code generated previously that represents the forward dynamics of the LDRD arm. The
second is the hydraulics block. From the expanded hydraulics block, each of the three joints has two
primary elements: the servo valve and the actuator. In the case of the second and third joints, there is
also a transmission associated with the coupling of the linear actuator with the joint (the two "theta to L"
blocks in the upper right). The servo valve models are based on the full nonlinear orifice equations and
regulate the fluid flow to the actuators as a function of the excitation signal (command to the moving coil
on the servo valve), the supply and return pressures, and the pressure on both sides of the actuator.
The actuator model generates a force based on the position, velocity, and bulk modulus (stiffness)
of the fluid. The position and velocity of the actuator come from the dynamic model of the manipulator.
The force from the actuator is the excitation to the dynamic model of the manipulator. It should
now be clear that the full nonlinear behavior of the manipulator is embodied in the simulation of the
manipulator.
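The chapter does not reproduce the valve equations themselves; for orientation, a generic turbulent-orifice
servo-valve flow model of the type described looks roughly like the following sketch. The coefficients and
names are illustrative assumptions, not values from the ORNL model.

% Illustrative servo-valve orifice equation: flow as a function of spool
% displacement xv and pressure drop dp across the metering orifice.
Cd = 0.61; w = 0.02; rho = 850;   % discharge coeff., area gradient, density
orificeFlow = @(xv, dp) Cd*w*xv .* sign(dp) .* sqrt(2*abs(dp)/rho);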
To complete our example, we now consider the impact sea state has on the performance of a force-
controlled system. Inputs to the system consist of a human command regulating about a single point and
a sea state with conditions commensurate with a destroyer moving at 20 knots in sea state 5. Figure 21.16
shows the force provided by the human while attempting to regulate the tip position. There is the expected
DC bias on the Z-direction force required to offset the gravitational load. The system is used for strength
amplification: the human senses only a fraction of the total payload weight. For this example, the payload
weight is 2224 N, which projects to a human force of 22.4 N when in static equilibrium with a 100:1
force amplification ratio. However, during the 120 sec simulation run, the maximum perturbation felt by
FIGURE 21.18 X tip direction: tip position (m) vs. time (sec).
The primary motivation for this exercise is to show the potential for robot simulation. In the case above,
studying the control of a robot on the deck of a moving ship could prove quite costly in terms of either
deploying hardware on a ship or designing and constructing a testbed capable of emulating ship motion.
Clearly, the final step in the process is validation of the simulation through experimentation. However,
a high fidelity simulation can provide insight into many problems at a fraction of the cost. In addition,
simulation reduces the hazards associated with testing immature control concepts. No one gets hurt if the
simulation fails.
FIGURE 21.19 Y tip direction: tip position (m) vs. time (sec).
FIGURE 21.20 Z tip direction: tip position (m) vs. time (sec).
21.5 Conclusion
This chapter focused on recent developments in robot simulation. The traditional role of robot simulation
in the automotive and aerospace industries still exists but has expanded in terms of its utility and flexi-
bility. Moreover, more advanced engineering tools are providing the ability for rapid prototyping, design
optimization, and hardware-in-the-loop simulation. With the expanding use of robotics in areas such as
medicine and the battlefield, as well as advancements in haptics and man-machine interfaces, it is likely
that there will be greater interaction between humans and simulation in the near future.
References
Asada, H. and Slotine, J. (1989). Robot Analysis and Control, Wiley Interscience, New York.
Bridges, M. and Diamond, D. (1999). The financial impact of teaching surgical residents in the operating
room, Am. J. Surg., Vol. 177, pp. 28–32.
Craig, J. (1989). Introduction to Robotics: Mechanics and Control, 2nd ed., Addison-Wesley, Reading, MA.
Cuschieri, A. (1995). Whither minimal access surgery: tribulations and expectations, Am. J. Surg., Vol. 69,
pp. 9–19.
Hawkins, K. and Astle, D. (2001). OpenGL Game Programming, Prima Publishing, Roseville, CA.
Spong, M. and Vidyasagar, M. (1989). Robot Dynamics and Control, John Wiley & Sons, New York.
Van Hoesen, D., Bzorgi, F., Kelsey, A., Wiles, C., and Beeson, K. (1995). Underground radioactive waste
tank remote inspection and sampling, DOE EM Fact Sheet, Gunite and Associated Tanks Operable
Unit at Waste Area Grouping 1 at Oak Ridge National Laboratory, The Department of Energy
Environmental Program.
Woo, M., Neider, J., Davis, T., and Shreiner, D. (1999). OpenGL Programming Guide: The Official Guide to
Learning OpenGL, 3rd ed., Addison-Wesley, Reading, MA.
Yong, Y. and Bonney, M. (1999). Offline programming, In: Handbook of Industrial Robotics, S. Nof, ed.,
John Wiley & Sons, New York, Chapter 19.
Yoshikawa, T. (1990). Foundations of Robotics: Analysis and Control, MIT Press, Cambridge.
Appendix A: Matlab Code for The Single DOF Example
syms ai alfai di thi mass g
syms q1 q1_d q1_dd
syms roll pitch yaw heave surge sway
syms roll_d pitch_d yaw_d heave_d surge_d sway_d
syms roll_dd pitch_dd yaw_dd heave_dd surge_dd sway_dd
syms th1 th1d th1dd
syms th2 th2d th2dd
syms th3 th3d th3dd
syms L1 I1x I1y I1z M
pi = sym('pi');
% Symbolically derive motion of base of robot on deck of ship experiencing
% 6 dof of sea motion (roll, pitch, yaw, heave, surge, sway).
R1s=[1 0 0;
0 cos(roll) sin(roll);
0 -sin(roll) cos(roll)];
R2s=[cos(pitch) 0 -sin(pitch);
0 1 0;
sin(pitch) 0 cos(pitch)];
R3s=[cos(yaw) sin(yaw) 0;
-sin(yaw) cos(yaw) 0;
0 0 1];
Hsea=[[simple(R1s*R2s*R3s) [surge;sway;heave;]];[0 0 0 1]];
% mask off 3 of 6 sea states (keep sea states in x-z plane)
Hsea=subs(Hsea,'roll',0);
Hsea=subs(Hsea,'yaw',0);
Hsea=subs(Hsea,'sway',0);
% compute kinematics of arm, start off by rotating z from vertical to horizontal
ai=0;alfai=pi/2;di=0;thi=0;
H1=[cos(thi) -sin(thi)*cos(alfai) sin(thi)*sin(alfai) ai*cos(thi);
sin(thi) cos(thi)*cos(alfai) -cos(thi)*sin(alfai) ai*sin(thi);
0 sin(alfai) cos(alfai) di;
0 0 0 1];
ai=L1;alfai=0;di=0;thi=th1;
H2=[cos(thi) -sin(thi)*cos(alfai) sin(thi)*sin(alfai) ai*cos(thi);
sin(thi) cos(thi)*cos(alfai) -cos(thi)*sin(alfai) ai*sin(thi);
0 sin(alfai) cos(alfai) di;
0 0 0 1];
% Homogeneous transform for the robot
H=simple(H1*H2);
% Full homogeneous transform of robot include sea state
H_full=simple(Hsea*H);
% state vector of robot
q=[th1];
% derivative of state vector
qd=[th1d];
% sea state vector
qs=[pitch;heave;surge];
% sea state velocity
qsd=[pitch_d;heave_d;surge_d];
% sea state acceleration
qsdd=[pitch_dd;heave_dd;surge_dd];
% velocity computation. Each velocity is the velocity of the cg of the link
% wrt the base coordinate system
% velocity of cg of 2nd link
Ht=simple(Hsea*H1*H2); % homogeneous transform from base to cg of link 2
R2c=Ht((1:3),4); % pull out x, y, z (vector from base to link 2 cg)
V2c=jacobian(R2c,[q(1);qs])*[qd(1);qsd]; % velocity of cg of link 2 wrt
% inertial frame (V = dR/dt = dR/dq * dq/dt)
% rotation matrices from base to each associated coordinate system
R1=transpose(H1(1:3,1:3));
R2=transpose(H2(1:3,1:3));
% angular velocity of each link about cg wrt local coordinate frame
Q1=[0;0;th1d]+[0;0;qsd(1)];
% inertia matrix for each link about center of gravity wrt coordinate frame of
% link (same as homogeneous transform, just translated to cg)
I1=[I1x 0 0;0 I1y 0;0 0 I1z];
% Payload information (position/velocity)
Rtip=H_full(1:3,4);
Vtip=jacobian(Rtip,[q;qs])*[qd;qsd];
% total kinetic energy: T = 1/2 qdot' * I * qdot + 1/2 V' M V
T=1/2*M*(transpose(Vtip)*Vtip)+1/2*transpose(Q1)*I1*Q1;
% potential energy due to gravity
V=M*g*Rtip(3);
%====================================================================
% Energy approach: d/dt(dT/dqd)-dT/dq+dV/dq=Q
% Note: d/dt() = d()/dq * qd + d()/dqd * qdd.
% thus, energy expansion: d(dT/dqd)/dqd * qdd + d(dT/dqd)/dq * qd
%                         - dT/dq + dV/dq = Q.
% first term d(dT/dqd)/dqd is the mass matrix; the remaining terms are the
% nonlinear terms (coriolis, centripetal, gravity)
%====================================================================
% calculate dT/dqdot
dT_qdot=jacobian(T,qd);
% extract out mass matrix
MassMatrix= simple(jacobian(dT_qdot,qd));
% now finish off with remaining terms
NLT1=simple(jacobian(dT_qdot,[q])*[qd]);
NLT2=simple(jacobian(dT_qdot,transpose(qs))*qsd);
NLT3=simple(jacobian(dT_qdot,transpose(qsd))*qsdd);
NLT4=simple(transpose(-1*(jacobian(T,q))));
NLT5=simple(((jacobian(V,q))));
NLT=simple(NLT1+NLT2+NLT3+NLT4+NLT5);
Appendix B: Matlab Code for The 3-DOF System, Full Sea State
syms ai alfai di thi mass g
syms q1 q1_d q1_dd
syms roll pitch yaw heave surge sway
syms roll_d pitch_d yaw_d heave_d surge_d sway_d
syms roll_dd pitch_dd yaw_dd heave_dd surge_dd sway_dd
syms th1 th1d th1dd
syms th2 th2d th2dd
syms th3 th3d th3dd
syms th4 % th4 appears in transform H5 below but is not part of the state vector q
syms L1 L2 L3 L4 L1c L2c L3c L3x L3h L3y L4c
syms I1x I2x I3x I1y I2y I3y I1z I2z I3z
syms m1 m2 m3
pi = sym('pi');
% Symbolically derive motion of base of robot on deck of ship experiencing
% 6 dof of sea motion (roll, pitch, yaw, heave, surge, sway). Use DH parameters
% to describe this motion in terms of homogeneous transforms.
R1s=[1 0 0;
0 cos(roll) sin(roll);
0 -sin(roll) cos(roll)];
R2s=[cos(pitch) 0 -sin(pitch);
0 1 0;
sin(pitch) 0 cos(pitch)];
R3s=[cos(yaw) sin(yaw) 0;
-sin(yaw) cos(yaw) 0;
0 0 1];
Hsea=[[simple(R1s*R2s*R3s) [surge;sway;heave;]];[0 0 0 1]];
ai=-L1;alfai=pi/2;di=0;thi=th1;
H1=[cos(thi) -sin(thi)*cos(alfai) sin(thi)*sin(alfai) ai*cos(thi);
sin(thi) cos(thi)*cos(alfai) -cos(thi)*sin(alfai) ai*sin(thi);
0 sin(alfai) cos(alfai) di;
0 0 0 1];
ai=L2;alfai=0;di=0;thi=th2+pi/2;
H2=[cos(thi) -sin(thi)*cos(alfai) sin(thi)*sin(alfai) ai*cos(thi);
sin(thi) cos(thi)*cos(alfai) -cos(thi)*sin(alfai) ai*sin(thi);
0 sin(alfai) cos(alfai) di;
0 0 0 1];
ai=L2c;alfai=0;di=0;thi=th2+pi/2;
H2c=[cos(thi) -sin(thi)*cos(alfai) sin(thi)*sin(alfai) ai*cos(thi);
sin(thi) cos(thi)*cos(alfai) -cos(thi)*sin(alfai) ai*sin(thi);
0 sin(alfai) cos(alfai) di;
0 0 0 1];
ai=L3x;alfai=0;di=0;thi=th3-pi/2;
H3=[cos(thi) -sin(thi)*cos(alfai) sin(thi)*sin(alfai) ai*cos(thi);
sin(thi) cos(thi)*cos(alfai) -cos(thi)*sin(alfai) ai*sin(thi);
0 sin(alfai) cos(alfai) di;
0 0 0 1];
ai=L3h;alfai=0;di=0;thi=th3-pi/2;
H3h=[cos(thi) -sin(thi)*cos(alfai) sin(thi)*sin(alfai) ai*cos(thi);
sin(thi) cos(thi)*cos(alfai) -cos(thi)*sin(alfai) ai*sin(thi);
0 sin(alfai) cos(alfai) di;
0 0 0 1];
ai=L3c;alfai=0;di=0;thi=th3-pi/2;
H3c=[cos(thi) -sin(thi)*cos(alfai) sin(thi)*sin(alfai) ai*cos(thi);
sin(thi) cos(thi)*cos(alfai) -cos(thi)*sin(alfai) ai*sin(thi);
Copyright © 2005 by CRC Press LLC
21
-26 Robotics and Automation Handbook
0 sin(alfai) cos(alfai) di;
0 0 0 1];
ai=L3y;alfai=0;di=0;thi=pi/2;
H4=[cos(thi) -sin(thi)*cos(alfai) sin(thi)*sin(alfai) ai*cos(thi);
sin(thi) cos(thi)*cos(alfai) -cos(thi)*sin(alfai) ai*sin(thi);
0 sin(alfai) cos(alfai) di;
0 0 0 1];
ai=L4;alfai=-pi/2;di=0;thi=th4-pi/2;
H5=[cos(thi) -sin(thi)*cos(alfai) sin(thi)*sin(alfai) ai*cos(thi);
sin(thi) cos(thi)*cos(alfai) -cos(thi)*sin(alfai) ai*sin(thi);
0 sin(alfai) cos(alfai) di;
0 0 0 1];
H=simple(H1*H2*H3*H4*H5); % Homogeneous transform for the robot
H_full=simple(Hsea*H); % Full homogeneous transform of robot include sea state
q=[th1;th2;th3]; % state vector of robot
qd=[th1d;th2d;th3d]; % derivitive of state vector
qs=[roll;pitch;yaw;heave;surge;sway]; % sea state vector
qsd=[roll_d;pitch_d;yaw_d;heave_d;surge_d;sway_d]; % sea state velocity
qsdd=[roll_dd;pitch_dd;yaw_dd;heave_dd;surge_dd;sway_dd]; % sea state acceleration
% velocity computation. Each velocity is the velocity of the cg of the link
% wrt the base coordinate system
% velocity of cg of 2nd link
Ht=simple(Hsea*H1*H2c); % homogeneous transform from base to cg of link 2
R2c=Ht((1:3),4); % pull out x, y, z (vector from base to link 2 cg)
V2c=jacobian(R2c,[q(1:2);qs])*[qd(1:2);qsd]; % velocity of cg of link 2
% wrt inertial frame (V = dR/dt = dR/dq * dq/dt)
% velocity of cg of 3rd link
Ht=simple(Hsea*H1*H2*H3c);
R3c=Ht((1:3),4);
V3c=simple(jacobian(R3c,[q;qs])*[qd;qsd]);
% rotation matrices from base to each associated coordinate system
R1=transpose(H1(1:3,1:3));
R2=transpose(H2(1:3,1:3));
R3=transpose(H3(1:3,1:3));
% angular velocity of each link about cg wrt local coordinate frame
Q1=[0;0;th1d]+qsd(1:3);
Q2=simple(R2*R1*Q1+[0;0;th2d]);
Q3=simple([0;0;th3d]+R3*Q2);
% inertia matrix for each link about center of gravity wrt coordinate frame of
% link (same as homogeneous transform, translated to cg)
I1=[I1x 0 0;0 I1y 0;0 0 I1z];
I2=[I2x 0 0;0 I2y 0;0 0 I2z];
I3=[I3x 0 0;0 I3y 0;0 0 I3z];
% Payload information (position/velocity)
syms M_payload;
Rtip=H_full(1:3,4);
Vtip=jacobian(Rtip,[th1;th2;th3;roll;pitch;yaw;heave;surge;sway])*[th1d;th2d;th3d;
roll_d;pitch_d;yaw_d;heave_d;surge_d;sway_d];
% total kinetic energy: T = 1/2 qdot' * I * qdot + 1/2 V' M V
T=1/2*transpose(Q1)*I1*Q1 + 1/2*transpose(Q2)*I2*Q2 + 1/2*transpose(Q3)*I3*Q3 ...
  + 1/2*m2*transpose(V2c)*V2c + 1/2*m3*transpose(V3c)*V3c ...
  + 1/2*M_payload*(transpose(Vtip)*Vtip);
% potential energy due to gravity
V=m2*g*R2c(3)+m3*g*R3c(3)+M_payload*g*Rtip(3);
% calculate dT/dqdot
dT_qdot=jacobian(T,qd);
% extract out mass matrix
MassMatrix= simple(jacobian(dT_qdot,qd));
% now finish off with remaining terms
NLT1=simple(jacobian(dT_qdot,[q])*[qd]);
NLT2=simple(jacobian(dT_qdot,transpose(qs))*qsd);
NLT3=simple(jacobian(dT_qdot,transpose(qsd))*qsdd);
NLT4=simple(-1*transpose((jacobian(T,q))));
NLT5=simple(((jacobian(V,q))));
% translate to C-code
MassMatrix_cc=ccode(MassMatrix);
NLT1_cc=ccode(NLT1);
NLT2_cc=ccode(NLT2);
NLT3_cc=ccode(NLT3);
NLT4_cc=ccode(NLT4);
NLT5_cc=ccode(NLT5);
% calculation of jacobian from tip frame to joint space
LDRDJacobian=simple(jacobian(H(1:3,4),[th1;th2;th3]));
LDRDJacobian_cc=ccode(LDRDJacobian);
22
A Survey of Geometric Vision

Kun Huang
Ohio State University

Yi Ma
University of Illinois

22.1 Introduction
Camera Model and Image Formation • 3-D Reconstruction Pipeline • Further Readings
22.2 Two-View Geometry
Epipolar Constraint and Essential Matrix • Eight-Point Linear Algorithm • Further Readings
22.3 Multiple-View Geometry
Rank Condition on Multiple Views of Point Feature • Linear Reconstruction Algorithm • Further Readings
22.4 Utilizing Prior Knowledge of the Scene — Symmetry
Symmetric Multiple-View Rank Condition • Reconstruction from Symmetry • Further Readings
22.5 Comprehensive Examples and Experiments
Automatic Landing of Unmanned Aerial Vehicles • Automatic Symmetry Cell Detection, Matching and Reconstruction • Semiautomatic Building Mapping and Reconstruction • Summary
22.1 Introduction
Vision is one of the most powerful sensing modalities. In robotics, machine vision techniques have been
extensively used in applications such as manufacturing, visual servoing [7, 30], navigation [9, 26, 50, 51],
and robotic mapping [58]. Here the main problem is how to reconstruct both the pose of the camera
and the three-dimensional (3-D) structure of the scene. This reconstruction inevitably requires a good
understanding of the geometry of image formation and 3-D reconstruction. In this chapter, we provide
a survey of the basic theory and some recent advances in the geometric aspects of the reconstruction
problem. Specifically, we introduce the theory and algorithms for reconstruction from two views (e.g.,
see [29, 31, 33, 40, 67]), multiple views (e.g., see [10, 12, 23, 37, 38, 40]), and a single view (e.g., see
[1, 3, 19, 25, 28, 70, 73, 74]). Since this chapter can only provide a brief introduction to these topics,
the reader is referred to the book [40] for a more comprehensive treatment. Without any knowledge of
the environment, reconstruction of a scene requires multiple images. This is because a single image
is merely a 2-D projection of the 3-D world, for which the depth information is lost. When multiple
images are available from different known viewpoints, the 3-D location of every point in the scene can be
determined uniquely by triangulation (or stereopsis). However, in many applications (especially those for
robot vision), the viewpoints are also unknown. Therefore, we need to recover both the scene structure
equation:

$$
L E^{s} \doteq \begin{bmatrix}
(x^{1}_{1} \otimes x^{1}_{2})^{T} \\
(x^{2}_{1} \otimes x^{2}_{2})^{T} \\
\vdots \\
(x^{n}_{1} \otimes x^{n}_{2})^{T}
\end{bmatrix} E^{s} = 0, \qquad L \in \mathbb{R}^{n \times 9} \qquad (22.5)
$$

and choosing E^s as the eigenvector of L^T L associated with the eigenvalue 0.² E can then be obtained by
"unstacking" E^s.
After obtaining E, we can project it onto the space of essential matrices and decompose it to extract the
motion, which is summarized in the following algorithm [40].

Algorithm 22.1 (Eight-point structure from motion algorithm.) Given n (≥8) pairs of image
correspondences of points x^j_1 and x^j_2 (j = 1, 2, ..., n), this algorithm recovers the motion [R, T] of the
camera (with the first camera frame being the reference frame) in three steps.

1. Compute a first approximation of the essential matrix. Construct the matrix L ∈ R^{n×9} as in
(22.5). Choose E^s to be the eigenvector of L^T L associated to its smallest eigenvalue: compute the
SVD of L = U_L S_L V_L^T and choose E^s to be the 9th column of V_L. Unstack E^s to obtain the 3×3
matrix E.

2. Project E onto the space of essential matrices. Perform SVD on E such that
$$
E = U \operatorname{diag}\{\sigma_1, \sigma_2, \sigma_3\} V^{T}
$$
where σ_1 ≥ σ_2 ≥ σ_3 ≥ 0 and U, V ∈ SO(3). The projection onto the space of essential matrices is
UΣV^T with Σ = diag{1, 1, 0}.

3. Recover motion by decomposing the essential matrix. The motion R and T of the camera
can be extracted from the essential matrix using U and V such that
$$
R = U R_z^{T}\!\left(\pm\frac{\pi}{2}\right) V^{T}, \qquad
\widehat{T} = U R_z\!\left(\pm\frac{\pi}{2}\right) \Sigma\, U^{T}
$$
where R_z(α) means rotation around the z-axis by α counterclockwise.
The mathematical derivation and justification for the above projection and decomposition steps can be
found in [40].
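For reference, a compact Matlab sketch of Algorithm 22.1 is given below. It is a minimal illustration, not
an optimized implementation; x1 and x2 are assumed to be 3×n matrices of calibrated homogeneous image
points, and only one of the four [R, T] sign/twist combinations is returned.

function [R, T] = eightPointSketch(x1, x2)
% Steps 1-3 of Algorithm 22.1. The physically valid solution among the
% four combinations must still be selected by checking that all
% triangulated depths come out positive.
n = size(x1, 2);
L = zeros(n, 9);
for j = 1:n
    L(j,:) = kron(x1(:,j), x2(:,j))';   % rows of L as in Equation (22.5)
end
[~, ~, VL] = svd(L);
E = reshape(VL(:,9), 3, 3);             % unstack the 9th column of V_L
[U, ~, V] = svd(E);                     % project onto the essential space
if det(U) < 0, U = -U; end              % enforce U, V in SO(3)
if det(V) < 0, V = -V; end
Rz = [0 -1 0; 1 0 0; 0 0 1];            % R_z(+pi/2)
R = U * Rz' * V';                       % one of the two rotations
That = U * Rz * diag([1 1 0]) * U';     % skew-symmetric T-hat
T = [That(3,2); That(1,3); That(2,1)];  % recover T (up to scale)
end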
The above eight-point algorithm in general gives rise to four solutions of the motion [R, T]. However,
only one of them guarantees that the depths of all the reconstructed 3-D points are positive with respect
to both camera frames [40]. Therefore, by checking the depths of all the points, the unique physically
possible solution can be obtained. Also notice that T is recovered only up to a scale. Without any additional
scene knowledge, this scale cannot be determined and is often fixed by setting ||T|| = 1.
Given the motion [R, T], the next step is to recover the structure. For point features, that means
recovering each point's depth with respect to the camera frame. For the jth point with depth λ^j_i in the
ith (i = 1, 2) camera frame, from the fact that λ^j_2 x^j_2 = λ^j_1 R x^j_1 + T, we have

$$
M^{j} \bar{\lambda}^{j} \doteq
\begin{bmatrix} \widehat{x^{j}_{2}}\, R\, x^{j}_{1} & \widehat{x^{j}_{2}}\, T \end{bmatrix}
\begin{bmatrix} \lambda^{j}_{1} \\ 1 \end{bmatrix} = 0 \qquad (22.6)
$$

So λ^j_1 can be solved by finding the eigenvector of M^{jT} M^j associated to its smallest eigenvalue.
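In Matlab, the depth recovery of Equation (22.6) for a single point pair is only a few lines (hat denotes
the skew-symmetric matrix form of a 3-vector):

% Depth of point j in the first camera frame via Equation (22.6).
hat = @(v)[0 -v(3) v(2); v(3) 0 -v(1); -v(2) v(1) 0];
M = [hat(x2)*R*x1, hat(x2)*T];       % the 3x2 matrix M^j
[V, D] = eig(M'*M);
[~, idx] = min(diag(D));             % eigenvector of the smallest eigenvalue
v = V(:, idx);
lambda1 = v(1)/v(2);                 % scale so the second entry equals 1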
² It can be shown that for n (≥8) points in general positions, L^T L has only one zero eigenvalue.
FIGURE 22.3 The two images of a calibration cube and two views of the reconstructed structure. The three angles
are θ_1 = 89.7°, θ_2 = 92.3°, and θ_3 = 91.9°.
Example 22.1
As shown in Figure 22.3, two images of a calibration cube are taken. Twenty-three pairs of feature
points are extracted using the Harris corner detector, and the correspondences are established manually. The
reconstruction is performed using the eight-point algorithm and depth calculation. The three angles for an
orthogonal corner are close to right angles, and the coplanarity of points in each plane is almost preserved.
Overall, the structure is reconstructed fairly accurately.
22.2.2.1 Coplanar Features and Homography
Up to now, we have assumed that all the 3-D points are in general position. In practice, it is not unusual
for all the points to reside on the same 3-D plane, i.e., to be coplanar. The eight-point algorithm will then
fail, because the matrix L^T L (see (22.5)) will have more than one zero eigenvalue. Fortunately,
besides the epipolar constraint, there exists another constraint for coplanar points. This is the homography
between the two images, which can be described using a matrix H ∈ R^{3×3}:

$$
H = R + \frac{1}{d}\, T N^{T} \qquad (22.7)
$$

with d > 0 denoting the distance from the plane to the first camera center and N being the unit normal
vector of the plane expressed in the first camera frame, with λ_1 N^T x_1 + d = 0.³ It can be shown that the

³ The homography is not limited to two image points in two camera frames; it holds for the coordinates of a 3-D point
on the plane expressed in any two frames (with one frame being the reference frame). In particular, if the second frame
is chosen with its origin lying on the plane, then we have a homography between the camera and the plane.
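A quick numerical sanity check of Equation (22.7) in Matlab, as a sketch with made-up R, T, N, and d
(here using the plane convention N^T X = d):

% Verify that H maps the two images of a coplanar point onto each other
% up to scale: x2 ~ H*x1.
hat = @(v)[0 -v(3) v(2); v(3) 0 -v(1); -v(2) v(1) 0];
R = expm(hat([0.10; 0.20; 0.05]));     % an arbitrary rotation
T = [0.3; 0.1; 0.2]; N = [0; 0; 1]; d = 2;
X  = [0.5; -0.4; d];                   % a 3-D point on the plane N'*X = d
H  = R + (1/d)*T*N';                   % homography of Equation (22.7)
x1 = X/X(3); X2 = R*X + T; x2 = X2/X2(3);
err = norm(cross(x2, H*x1));           % vanishes since H*x1 is parallel to x2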
where ⊗ is the Kronecker product between two matrices. It can be shown that P_i has rank 11 if n ≥ 6 points
in general position are present. Therefore, the solution for [R_i, T_i] will be unique up to a scale for n ≥ 6.
Obviously, if no noise is present in the images, the recovered motion and structure will be the same as
what can be recovered from the two-view eight-point algorithm. However, in the presence of noise, it is
desirable to use the data from all the images. In order to use the data simultaneously, a reasonable approach
is to iterate between reconstructions of motion and structure, i.e., initializing the structure or motions,
then alternating between Equation (22.14) and Equation (22.15) until the structure and motion converge.
Motions [R_i, T_i] can then be estimated up to a scale by performing SVD on P_i as in (22.15).⁴
Denote R̃_i and T̃_i to be the estimates from the eigenvector of P_i^T P_i associated to the smallest
eigenvalue. Let R̃_i = U_i S_i V_i^T be the SVD of R̃_i. Then R_i ∈ SO(3) and T_i are given by

$$
R_i = \operatorname{sign}\!\left(\det\!\left(U_i V_i^{T}\right)\right) U_i V_i^{T} \in SO(3)
\quad \text{and} \quad
T_i = \frac{\operatorname{sign}\!\left(\det\!\left(U_i V_i^{T}\right)\right)}{\left(\det(S_i)\right)^{1/3}}\, \tilde{T}_i \in \mathbb{R}^{3}
\qquad (22.16)
$$
In this algorithm, the initialization can be done using the eight-point algorithm for two views. The initial
estimate of the motion of the second frame, [R_2, T_2], can be obtained using the standard two-view
eight-point algorithm. The initial estimate of the point depth is then

$$
\alpha^{j} = -\frac{\left(\widehat{x^{j}_{2}}\, T_{2}\right)^{T} \widehat{x^{j}_{2}}\, R_{2}\, x^{j}_{1}}
{\left\| \widehat{x^{j}_{2}}\, T_{2} \right\|^{2}},
\qquad j = 1, \ldots, n \qquad (22.17)
$$
In the multiple-view case, the least-squares estimates of the point depths α^j = 1/λ^j_1, j = 1, ..., n can
be obtained from Equation (22.14) as

$$
\alpha^{j} = -\frac{\sum_{i=2}^{m} \left(\widehat{x^{j}_{i}}\, T_{i}\right)^{T} \widehat{x^{j}_{i}}\, R_{i}\, x^{j}_{1}}
{\sum_{i=2}^{m} \left\| \widehat{x^{j}_{i}}\, T_{i} \right\|^{2}},
\qquad j = 1, \ldots, n \qquad (22.18)
$$
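Equation (22.18) transcribes directly into Matlab; here is a sketch for one point, where xs{i} holds the
point's image in view i and Rs{i}, Ts{i} the recovered motions (these container names are illustrative):

% Least-squares depth update of Equation (22.18) for one point.
hat = @(v)[0 -v(3) v(2); v(3) 0 -v(1); -v(2) v(1) 0];
num = 0; den = 0;
for i = 2:m
    xiT = hat(xs{i}) * Ts{i};
    num = num + xiT' * (hat(xs{i}) * Rs{i} * xs{1});
    den = den + norm(xiT)^2;
end
alpha = -num/den;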
By iterating between the motion estimation and structure estimation, we expect the estimates of
structure and motion to converge. The convergence criterion may vary for different situations. In practice,
we choose the reprojected image error as the convergence criterion. For the jth 3-D point, the estimate of its
3-D location can be obtained as λ^j_1 x^j_1, and the reprojection onto the ith image is obtained as
x̃^j_i ∼ λ^j_1 R_i x^j_1 + T_i. Its reprojection error is then Σ_{i=2}^{m} ||x^j_i − x̃^j_i||². So the algorithm
keeps iterating until the summation of reprojection errors over all points falls below some threshold ε.
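The convergence test itself is a short loop; a sketch, assuming lambda1(j) is the depth of point j, xs{i}(:,j)
its measured image in view i, and reprojections normalized to the image plane:

% Total reprojection error used as the stopping criterion.
err = 0;
for j = 1:n
    for i = 2:m
        p  = lambda1(j)*Rs{i}*xs{1}(:,j) + Ts{i};  % reproject point j into view i
        xt = p/p(3);                               % normalize
        err = err + norm(xs{i}(:,j) - xt)^2;
    end
end
converged = (err < epsilon);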
The algorithm is summarized below:

Algorithm 22.2 (A factorization algorithm for multiple-view reconstruction.) Given m (≥3)
images x^j_1, x^j_2, ..., x^j_m, j = 1, 2, ..., n of n (≥8) points, the motions [R_i, T_i], i = 2, ..., m and the
structure of the points with respect to the first camera frame α^j, j = 1, 2, ..., n can be recovered as follows:

1. Set the counter k = 0. Compute [R_2, T_2] using the eight-point algorithm, then get an initial estimate
of α^j from Equation (22.17) for each j = 1, 2, ..., n. Normalize α^j ← α^j/α^1 for j = 1, 2, ..., n.
2. Compute [R̃_i, T̃_i] from the eigenvector of P_i^T P_i corresponding to its smallest eigenvalue for
i = 2, ..., m.
3. Compute [R_i, T_i] from Equation (22.16) for i = 2, 3, ..., m.
4. Compute α^j_{k+1} using Equation (22.18) for each j = 1, 2, ..., n. Normalize so that α^j ← α^j/α^1 and
T_i ← α^1_{k+1} T_i. Use the newly recovered α^j's and motions [R_i, T_i]'s to compute the reprojected image
x̃ for each point in all views.
5. If the reprojection error ||x − x̃||² > ε for some threshold ε > 0, then k ← k + 1 and go to
step 2; otherwise stop.
⁴ Now we assume that the cameras are all calibrated, which is the case of Euclidean reconstruction. This algorithm
also works for the uncalibrated case.
FIGURE 22.5 The four images used to reconstruct the calibration cube.
The above algorithm is a direct derivation from the rank condition. There are techniques to improve its
numerical stability and statistical robustness for specific situations [40].

Example 22.2
The algorithm proposed in the previous section has been tested extensively in both simulation [37] and
experiments [40]. Figure 22.5 shows the four images used to reconstruct the calibration cube. The points
are marked with circles. Two views of the reconstruction results are shown in Figure 22.6. The recovered
angles are more accurate than the results from Figure 22.3. Visually, the coplanarity of the points is well
preserved.
FIGURE 22.6 The two views of the reconstructed structure. The three angles are θ_1 = 89.9°, θ_2 = 91.0°, and
θ_3 = 90.6°.