
Mechanical Engineering Series
Frederick F. Ling
Editor-in-Chief



Mechanical Engineering Series
J. Angeles, Fundamentals of Robotic Mechanical Systems:
Theory, Methods, and Algorithms, 2nd ed.
P. Basu, C. Kefa, and L. Jestin, Boilers and Burners: Design and Theory
J.M. Berthelot, Composite Materials:
Mechanical Behavior and Structural Analysis
I.J. Busch-Vishniac, Electromechanical Sensors and Actuators
J. Chakrabarty, Applied Plasticity
K.K. Choi and N.H. Kim, Structural Sensitivity Analysis and Optimization 1:
Linear Systems
K.K. Choi and N.H. Kim, Structural Sensitivity Analysis and Optimization 2:
Nonlinear Systems and Applications
G. Chryssolouris, Laser Machining: Theory and Practice
V.N. Constantinescu, Laminar Viscous Flow
G.A. Costello, Theory of Wire Rope, 2nd Ed.
K. Czolczynski, Rotordynamics of Gas-Lubricated Journal Bearing Systems
M.S. Darlow, Balancing of High-Speed Machinery
J.F. Doyle, Nonlinear Analysis of Thin-Walled Structures: Statics,
Dynamics, and Stability
J.F. Doyle, Wave Propagation in Structures:
Spectral Analysis Using Fast Discrete Fourier Transforms, 2nd ed.
P.A. Engel, Structural Analysis of Printed Circuit Board Systems
A.C. Fischer-Cripps, Introduction to Contact Mechanics
A.C. Fischer-Cripps, Nanoindentation, 2nd ed.


J. Garcia de Jalon and E. Bayo, Kinematic and Dynamic Simulation of
Multibody Systems: The Real-Time Challenge
W.K. Gawronski, Advanced Structural Dynamics and Active Control of
Structures
W.K. Gawronski, Dynamics and Control of Structures: A Modal Approach
G. Genta, Dynamics of Rotating Systems
(continued after index)



R. A. Howland

Intermediate Dynamics:
A Linear Algebraic Approach

Springer



R. A. Howland
University of Notre Dame
Editor-in-Chief
Frederick F. Ling
Earnest F. Gloyna Regents Chair Emeritus in Engineering
Department of Mechanical Engineering
The University of Texas at Austin
Austin, TX 78712-1063, USA
and

Distinguished William Howard Hart
Professor Emeritus
Department of Mechanical Engineering,
Aeronautical Engineering and Mechanics
Rensselaer Polytechnic Institute
Troy, NY 12180-3590, USA

Intermediate Dynamics: A Linear Algebraic Approach
ISBN 0-387-28059-6
e-ISBN 0-387-28316-1
ISBN 978-0387-28059-2

Printed on acid-free paper.

© 2006 Springer Science+Business Media, Inc.
All rights reserved. This work may not be translated or copied in whole or in part without
the written permission of the publisher (Springer Science+Business Media, Inc., 233 Spring
Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or
scholarly analysis. Use in connection with any form of information storage and retrieval,
electronic adaptation, computer software, or by similar or dissimilar methodology now
known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks and similar terms,
even if they are not identified as such, is not to be taken as an expression of opinion as to
whether or not they are subject to proprietary rights.
Printed in the United States of America.
987654321
springeronline.com

SPIN 11317036




Dedicated to My Folks



Mechanical Engineering Series
Frederick F. Ling
Editor-in-Chief

The Mechanical Engineering Series features graduate texts and research monographs to
address the need for information in contemporary mechanical engineering, including
areas of concentration of applied mechanics, biomechanics, computational mechanics,
dynamical systems and control, energetics, mechanics of materials, processing, production systems, thermal science, and tribology.

Advisory Board/Series Editors
Applied Mechanics

F.A. Leckie
University of California,
Santa Barbara
D. Gross
Technical University of Darmstadt

Biomechanics

V.C. Mow
Columbia University


Computational Mechanics

H.T. Yang
University of California,
Santa Barbara

Dynamic Systems and Control/
Mechatronics

D. Bryant
University of Texas at Austin

Energetics

J.R. Welty
University of Oregon, Eugene

Mechanics of Materials

I. Finnie
University of California, Berkeley

Processing

K.K. Wang
Cornell University

Production Systems


G.-A. Klutke
Texas A&M University

Thermal Science

A.E. Bergles
Rensselaer Polytechnic Institute

Tribology

W.O. Winer
Georgia Institute of Technology



Series Preface
Mechanical engineering, an engineering discipline born of the needs of the industrial revolution, is once again asked to do its substantial share in the call for
industrial renewal. The general call is urgent as we face profound issues of productivity and competitiveness that require engineering solutions, among others.
The Mechanical Engineering Series is a series featuring graduate texts and research monographs intended to address the need for information in contemporary
areas of mechanical engineering.
The series is conceived as a comprehensive one that covers a broad range of
concentrations important to mechanical engineering graduate education and research. We are fortunate to have a distinguished roster of consulting editors, each
an expert in one of the areas of concentration. The names of the consulting editors
are listed on page vi of this volume. The areas of concentration are applied mechanics, biomechanics, computational mechanics, dynamic systems and control,
energetics, mechanics of materials, processing, thermal science, and tribology.




Preface
A number of colleges and universities offer an upper-level undergraduate course
usually going under the rubric of "Intermediate" or "Advanced" Dynamics—
a successor to the first dynamics offering generally required of all students.
Typically common to such courses is coverage of 3-D rigid body dynamics and
Lagrangian mechanics, with other topics locally discretionary. While there are
a small number of texts available for such offerings, there is a notable paucity
aimed at "mainstream" undergraduates, and instructors often resort to utilizing sections in the first Mechanics text not covered in the introductory course,
at least for the 3-D rigid body dynamics. Though closely allied to its planar
counterpart, this topic is far more complex than its predecessor: in kinematics,
one must account for possible change in direction of the angular velocity; and
the kinetic "moment of inertia," a simple scalar in the planar formulation, must
be replaced by a tensor quantity. If elementary texts' presentation of planar
dynamics is adequate, their treatment of three-dimensional dynamics is rather
less satisfactory: It is common to expand vector equations of motion in components—in a particular choice of axes—and consider only a few special instances
of their application (e.g., fixed-axis rotation in the Euler equations of motion).
The presentation of principal coordinates is typically somewhat ad hoc, either
merely stating the procedure to find such axes in general, or even more commonly invoking the "It can be shown that..." mantra. Machines seem not to
exist in 3-D! And equations of motion for the gyroscope are derived independently of the more general ones—a practice lending a certain air of mystery to
this important topic.
Such an approach can be frustrating to the student with any degree of curiosity and is counterproductive pedagogically: the component-wise expression of
vector quantities has long since disappeared from even Sophomore-level courses
in Mechanics, in good part because the complexity of notation obscures the
relative simplicity of the concepts involved. But the Euler equations can be
expressed both succinctly and generally through the introduction of matrices.
The typical exposition of principal axes overlooks the fact that this is precisely
the same device used to find the same "principal axes" in solid mechanics (explicitly through a rotation); few students recognize this fact, and, unfortunately,
few instructors take the opportunity to point this out and unify the concepts.
And principal axes themselves are, in fact, merely an application of an even
more general technique utilized in linear algebra leading to the diagonalization




of matrices (at least the real, symmetric ones encountered in both solid mechanics and dynamics). These facts alone suggest a linear algebraic approach to the
subject.
A knowledge of linear algebra is, however, more beneficial to the scientist
and engineer than merely to be able to diagonalize matrices: Eigenvectors and
eigenvalues pervade both fields; yet, while students can typically find these
quantities and use them to whatever end they have been instructed in, few can
answer the simple question "What is an eigenvector?" As the field of robotics
becomes ever more mainstream, a facility with [3-D] rotation matrices becomes
increasingly important. Even the mundane issue of solving linear equations is
often incomplete or, worse still, inaccurate: "All you need is as many equations
as unknowns. If you have fewer than that, there is no solution." (The first of
these statements is incomplete, the second downright wrong!) Such fallacies are
likely not altogether the students' fault: few curricula allow the time to devote
a full, formal course to the field, and knowledge of the material is typically
gleaned piecemeal on an "as-need" basis. The result is a fractionated view with
the intellectual gaps alluded to.
Yet a full course may not be necessary: For the past several years, the
Intermediate Dynamics course at Notre Dame has started with only a 2-3
week presentation of linear algebra, both as a prelude to the three-dimensional
dynamics to follow, and for its intrinsic pedagogical merit—to organize the bits
and pieces of concepts into some organic whole. However successful the latter
goal has been, the former has proven beneficial.
With regard to the other topic of Lagrangian mechanics, the situation is
perhaps even more critical. At a time when the analysis of large-scale systems
has become increasingly important, the presentation of energy-based dynamical techniques has been surprisingly absent from most undergraduate texts
altogether. These approaches are founded on virtual work (not typically the

undergraduate's favorite topic!) and not only eliminate the need to consider the
forces at interconnecting pins [assumed frictionless], but also free the designer
from the relatively small number of vector coordinate systems available to describe a problem: he can select a set of coordinates ideally suited to the one at
hand.
With all this in mind, the following text commits to paper a course which
has gradually developed at Notre Dame as its "Intermediate Dynamics" offering. It starts with a relatively short, but rigorous, exposition of linear systems,
culminating in the diagonalization (where possible) of matrices—the foundation
of principal coordinates. There is even an [optional] section dealing with Jordan
normal form, rarely presented to students at this level. In order to understand
this process fully, it is necessary that the student be familiar with how the [matrix] representation of a linear operator (or of a vector itself) changes with a
transformation of basis, as well as how the eigenvectors—in fact the new axes
themselves—affect this particular choice of basis. That, at least in the case
of real, symmetric, square inertia matrices, this corresponds to a rotation of
axes requires knowledge of axis rotation and the matrices which generate such
rotations. This, in turn, demands an appreciation of bases themselves and,



particularly, the idea of linear independence (which many students feel deals
exclusively with the Wronskian) and partitioned matrix multiplication. By the
time this is done, little more effort is required to deal with vector spaces in
general.
This text in fact grew out of the need to dispatch a [perceived] responsibility to rigor (i.e., proofs of theorems) without bogging down class presentation
with such details. Yet the overall approach to even the mathematical material
of linear algebra is a "minimalist" one: rather than a large number of arcane
theorems and ideas, the theoretical underpinning of the subject is provided
by, and unified through, the basic theme of linear independence—the echelon

form for vectors and [subsequently] matrices, and the rank of the latter. It can
be argued that these are the concepts the engineer and scientist can—should—
appreciate anyhow. Partitioning establishes the connection between vectors and
[the rows/columns of] matrices, and rank provides the criterion for the solution
of linear systems (which, in turn, fold back onto eigenvectors). In order to avoid
the student's becoming fixated too early on square matrices, this fundamental
theory is developed in the context of linear transformations between spaces of
arbitrary dimension. It is only after this has been done that we specialize to
square matrices, where the inverse, eigenvectors, and even properties of determinants follow naturally. Throughout, the distinction between vectors and tensors,
and their representations—one which is generally blurred in the student's mind
because it is so rarely stressed in presentation—is heavily emphasized.
Theory, such as the conditions under which systems of linear equations have
a solution, is actually important in application. But this linear algebra Part is
more than mere theory: Linear independence, for example, leads to the concept of matrix rank, which then becomes a criterion for predicting the number
of solutions to a set of linear equations; when the cross product is shown to
be equivalent to a matrix product of rank 2, indeterminacy of angular velocity and acceleration from the rotating coordinate system equations in the next
Part becomes a natural consequence. Similarly, rotation matrices first appear
as an example of orthogonal matrices, which then are used in the diagonalization of real symmetric matrices culminating the entire first Part; though this
returns in the next Part in the guise of principal axes, its inverse—the rotation
from principal axes to arbitrary ones—becomes a fundamental technique for the
determination of the inertia tensor.
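That rank-2 claim about the cross product can be checked in a few lines. The following sketch is in plain Python (the vectors are invented purely for illustration; the text itself develops this in matrix notation, not code): it builds the skew-symmetric matrix W for which W r = ω × r, and verifies that its determinant vanishes while a 2×2 minor does not, so its rank is 2, with ω itself spanning the null space.

```python
def skew(w):
    """Skew-symmetric matrix W such that matvec(W, r) == cross(w, r)."""
    a, b, c = w
    return [[0, -c, b],
            [c, 0, -a],
            [-b, a, 0]]

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def det3(A):
    return (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))

w = [1.0, 2.0, 3.0]   # an arbitrary angular velocity (illustrative numbers)
r = [4.0, 5.0, 6.0]
W = skew(w)

assert matvec(W, r) == cross(w, r)              # the matrix product reproduces w x r
assert det3(W) == 0.0                           # determinant vanishes: rank < 3 ...
assert W[0][1]*W[1][0] - W[0][0]*W[1][1] != 0   # ... but a 2x2 minor survives: rank 2
assert matvec(W, w) == [0.0, 0.0, 0.0]          # w itself spans the null space
```

The vanishing determinant with a surviving 2×2 minor is exactly the indeterminacy alluded to above: the system W x = b cannot single out a unique angular velocity without further constraints.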
Given the audience for which this course is intended, the approach has been
surprisingly successful: one still recalls the delight of one student who said that
she had decided to attack a particular problem with rotation matrices, and "It
worked!" Admittedly, such appreciation is often delayed until the part on rigid
body dynamics has been covered—yet another reason for trying to have some
of the more technical detail in the text rather than being presented in class.
This next Part on 3-D dynamics starts with a relatively routine exposition of
kinematics, though rather more detail than usual is given to constraints on the
motion resulting from interconnections, and there is a perhaps unique demonstration that the fundamental relation dr = dθ × r results from nothing more

than the fixed distance between points in a rigid body. Theory from the first



Part becomes integrated into the presentation in a discussion of the indeterminacy of angular velocity and acceleration without such constraints. Kinetics is
preceded by a review of particle and system-of-particles kinetics; this is done
to stress the particle foundation on which even rigid body kinetics is based as
much as to make the text self-contained. The derivation of the Euler equations is also relatively routine, but here the similarity to most existing texts
ends: these equations are presented in matrix form, and principal coordinates
dispatched with reference to diagonalization covered in the previous Part. The
flexibility afforded by this allows an arbitrary choice of coordinates in terms of
which to represent the relevant equations and, again, admits of a more transparent comprehension than the norm. 3-D rigid body machine kinetics, almost
universally ignored in elementary presentations, is also covered: the emphasis
is on integrating the kinematics and kinetics into a system of linear equations,
making the difference between single rigid bodies and "machines" quantitative
rather than qualitative. There is also a rather careful discussion of the forces at
[ball-and-socket] connections in machines; this is a topic often misunderstood
in Statics, let alone Dynamics! While the gyroscope equations of motion are
developed de novo as in most texts, they are also obtained by direct application of the Euler equations; this is to overcome the stigma possibly resulting
from independent derivation—the misconception that gyroscopes are somehow
"special," not covered by the allegedly more general theory. There is a brief
section detailing the use of the general equations of motion to obtain the variational equations necessary to analyze stability. Finally, a more or less routine
treatment of energy and momentum methods is presented, though the implementation of kinetic constants and the conditions under which vanishing force
or moment lead to such constants are presented and utilized; this is to set the
scene for the next Part, where Analytical Dynamics employs special techniques
to uncover such "integrals of the motion."
That next Part treats Analytical Mechanics. Lagrangian dynamics is developed first, based on Newton's equations of motion rather than functional minimization. Though the latter is mentioned as an alternative approach, it seems

a major investment of effort—and time—to develop one relatively abstruse concept (however important intrinsically), just to derive another almost as bad by
appeal to a "principle" (Least Action) whose origin is more teleological than
physical! Rather more care than usual, however, is taken to relate the concepts
of kinematic "constraints" and the associated kinematical equations among the
coordinates; this is done better to explain Lagrange multipliers. Also included
is a section on the use of non-holonomic constraints, in good part to introduce
Lagrange multipliers as a means of dealing with them; while the latter is typically the only methodology presented in connection with this topic, here it is
actually preceded by a discussion of purely algebraic elimination of redundant
constraints, in the hopes of demonstrating the fundamental issue itself. Withal,
the "spin" put on this section emphasizes the freedom to select arbitrary coordinates instead of being "shackled" by the standard vector coordinate systems
developed to deal with Newton's laws. The discussion of the kinetic constants
of energy and momentum in the previous Part is complemented by a discussion



of "integrals of the motion" in this one; this is the springboard to introduce Jacobi's integral, and ignorable coordinates become a means of uncovering other
such constants of the motion in Lagrangian systems—at least, if one can find
the Right Coordinates! The Part concludes with a chapter on Hamiltonian
dynamics. Though this topic is almost universally ignored by engineers, it is
as universal in application as Conservation of Energy and is the lingua franca
of Dynamical Systems, with which every modern-day practitioner must have
some familiarity. Unlike most introductions to the field, which stop after having obtained the equations of motion, this presentation includes a discussion of
canonical transformations, separability, and the Hamilton-Jacobi Equation: the
fact that there is a systematic means of obtaining those variables—"ignorable"
coordinates and momenta—in which the Hamiltonian system is completely soluble is, after all, the raison d'être for invoking a system in this form, as opposed
to the previous Lagrangian formulation, in the first place. Somewhat more
attention to time-dependent Hamiltonian systems is given than usual. Additionally, like the Lagrangian presentation, there is a discussion of casting time

as an ordinary variable; though this is often touched upon by more advanced
texts on analytical mechanics, the matter seems to be dropped almost immediately, without benefit of examples to demonstrate exactly what this perspective
entails; one, demonstrating whatever benefits it might enjoy, is included in this
chapter.
A Note on Notation. We shall occasionally have the need to distinguish between "unordered" and "ordered" sets. Unordered sets will be denoted by braces: {a, b} = {b, a}, while ordered sets will be denoted by parentheses: (a, b) ≠ (b, a). This convention is totally consistent with the shorthand notation for matrices: A = (aij).
Throughout, vectors are distinguished by boldface: V; unit vectors are distinguished with a "hat" and are generally lower-case letters: î. Tensor/matrix quantities are written in a boldface sans-serif font, typically in upper-case: I (though the matrix representation of a vector otherwise denoted with a lower-case boldface will retain the case: v ~ v).
Material not considered essential to the later material is presented in a
smaller type face; this is not meant to denigrate the material so much as to
provide a visual map to the overall picture. Students interested more in applications are typically impatient with a mathematical Theorem/Proof format,
yet it surely is necessary to feel that a field has been developed logically. For
this reason, each section concludes with a brief guide of what results are used
merely to demonstrate later, more important results, and what are important
intrinsically; it is hoped this will provide some "topography" of the material
presented.
A Note on Style. There have been universally two comments by students
regarding this text: "It's all there," and "We can hear you talking." In retrospect, this is less a "text" than lectures on the various topics. Examples are
unusually—perhaps overwhelmingly!—complete, with all steps motivated and



virtually all intermediate calculations presented; though this breaks the "one-page format" currently in favor, it counters student frustration with "terseness."
And the style is more narrative than expository. It is hoped that lecturers do

not find this jarring.
R.A. Howland
South Bend, Indiana



Contents

Preface

I Linear Algebra

Prologue

1 Vector Spaces
1.1 Vectors
1.1.1 The "Algebra" of Vector Spaces
1.1.2 Subspaces of a Vector Space
1.2 The Basis of a Vector Space
1.2.1 Spanning Sets
1.2.2 Linear Independence
A Test for Linear Independence of n-tuples: Reduction to Echelon Form
1.2.3 Bases and the Dimension of a Vector Space
Theorems on Dimension
1.3 The Representation of Vectors
1.3.1 n-tuple Representations of Vectors
1.3.2 Representations and Units
1.3.3 Isomorphisms among Vector Spaces of the Same Dimension

2 Linear Transformations on Vector Spaces
2.1 Matrices
2.1.1 The "Partitioning" and Rank of Matrices
The Rank of a Matrix
2.1.2 Operations on Matrices
Inner Product
Transpose of a Matrix Product
Block Multiplication of Partitioned Matrices
Elementary Operations through Matrix Products
2.2 Linear Transformations


Domain and Range of a [Linear] Transformation and their Dimension
2.2.1 Linear Transformations: Basis and Representation
Dyadics
2.2.2 Null Space of a Linear Transformation
Dimension of the Null Space
Relation between Dimensions of Domain, Range, and Null Space
2.3 Solution of Linear Systems
"Skips" and the Null Space
2.3.1 Theory of Linear Equations
Homogeneous Linear Equations
Non-homogeneous Linear Equations
2.3.2 Solution of Linear Systems—Gaussian Elimination
2.4 Linear Operators—Differential Equations

3 Special Case—Square Matrices
The "Algebra" of Square Matrices
3.1 The Inverse of a Square Matrix
Properties of the Inverse
3.2 The Determinant of a Square Matrix
Properties of the Determinant
3.3 Classification of Square Matrices
3.3.1 Orthogonal Matrices—Rotations
3.3.2 The Orientation of Non-orthonormal Bases
3.4 Linear Systems: n Equations in n Unknowns
3.5 Eigenvalues and Eigenvectors of a Square Matrix
3.5.1 Linear Independence of Eigenvectors
3.5.2 The Cayley-Hamilton Theorem
3.5.3 Generalized Eigenvectors
3.5.4 Application of Eigenvalues/Eigenvectors
3.6 Application—Basis Transformations
3.6.1 General Basis Transformations
Successive Basis Transformations—Composition
3.6.2 Basis Rotations
3.7 Normal Forms of Square Matrices
3.7.1 Linearly Independent Eigenvectors—Diagonalization
Diagonalization of Real Symmetric Matrices
3.7.2 Linearly Dependent Eigenvectors—Jordan Normal Form

Epilogue


II 3-D Rigid Body Dynamics

Prologue

4 Kinematics
4.1 Motion of a Rigid Body
4.1.1 General Motion of a Rigid Body
Differentials
4.1.2 Rotation of a Rigid Body
Differential Rigid Body Rotation
Angular Velocity and Acceleration
Time Derivative of a Unit Vector with respect to Rotation
4.2 Euler Angles
4.2.1 Direction Angles and Cosines
Vector Description
Coordinate System Description
4.2.2 Euler Angles
Vector Description
Coordinate System Description
4.3 Moving Coordinate Systems
4.3.1 Relative Motion: Points
4.3.2 Relative Motion: Coordinate Systems
Time Derivatives in Rotating Coordinate Systems
Applications of Theorem 4.3.1
Rotating Coordinate System Equations
Distinction between the "A/B" and "rel" Quantities
The Need for Rotating Coordinate Systems
4.4 Machine Kinematics
4.4.1 Motion of a Single Body
A Useful Trick
The Non-slip Condition
The Instantaneous Center of Zero Velocity
4.4.2 Kinematic Constraints Imposed by Linkages
Clevis Connections
Ball-and-socket Connections
4.4.3 Motion of Multiple Rigid Bodies ("Machines")
Curved Interconnections
General Analysis of Universal Joints

5 Kinetics
5.1 Particles and Systems of Particles
5.1.1 Particle Kinetics
Linear Momentum and its Equation of Motion
Angular Momentum and its Equation of Motion
Energy
A Caveat regarding Conservation
5.1.2 Particle System Kinetics
Kinetics relative to a Fixed System
Kinetics relative to the Center of Mass
5.2 Equations of Motion for Rigid Bodies
5.2.1 Angular Momentum of a Rigid Body—the Inertia Tensor
Properties of the Inertia Tensor
Principal Axes
5.2.2 Equations of Motion
Forces/Moments at Interconnections
Determination of the Motion of a System
5.2.3 A Special Case—the Gyroscope
Gyroscope Coordinate Axes and Angular Velocities
Equations of Motion
Special Case—Moment-free Gyroscopic Motion
General Case—Gyroscope with Moment
5.3 Dynamic Stability
5.4 Alternatives to Direct Integration
5.4.1 Energy
Kinetic Energy
Work
Energy Principles
5.4.2 Momentum
5.4.3 Conservation Application in General

Epilogue

III Analytical Dynamics

Prologue

6 Analytical Dynamics: Perspective
6.1 Vector Formulations and Constraints
6.2 Scalar Formulations and Constraints
6.3 Concepts from Virtual Work in Statics

7 Lagrangian Dynamics: Kinematics
7.1 Background: Position and Constraints
Categorization of Differential Constraints
Constraints and Linear Independence
7.2 Virtual Displacements
7.3 Kinematic vs. Kinetic Constraints
7.4 Generalized Coordinates
Derivatives of r and v with respect to Generalized Coordinates and Velocities


8 Lagrangian Dynamics: Kinetics
8.1 Arbitrary Forces: Euler-Lagrange Equations
Notes on the Euler-Lagrange Equations
8.2 Conservative Forces: Lagrange Equations
Properties of the Lagrangian
8.3 Differential Constraints
8.3.1 Algebraic Approach to Differential Constraints
8.3.2 Lagrange Multipliers
Interpretation of the Lagrange Multipliers
8.4 Time as a Coordinate

9 Integrals of Motion
9.1 Integrals of the Motion
9.2 Jacobi's Integral—an Energy-like Integral
9.3 "Ignorable Coordinates" and Integrals

10 Hamiltonian Dynamics
10.1 The Variables
Solution for q̇(q, p; t)
10.2 The Equations of Motion
10.2.1 Legendre Transformations
10.2.2 q and p as Lagrangian Variables
10.2.3 An Important Property of the Hamiltonian
10.3 Integrals of the Motion
10.4 Canonical Transformations
10.5 Generating Functions
10.6 Transformation Solution of Hamiltonians
10.7 Separability
10.7.1 The Hamilton-Jacobi Equation
10.7.2 Separable Variables
Special Case—Ignorable Coordinates
10.8 Constraints in Hamiltonian Systems
10.9 Time as a Coordinate in Hamiltonians

Epilogue

Index



Part I

Linear Algebra



Prologue
The primary motivation for this part is to lay the foundation for the next one,
dealing with 3-D rigid body dynamics. It will be seen there that the "inertia," I, a quantity which is a simple scalar in planar problems, blossoms into a
"tensor" in three-dimensional ones. But in the same way that vectors can be
represented in terms of "basis" vectors i, j, and k, tensors can be represented as
3x3 matrices, and formulating the various kinetic quantities in terms of these

matrices makes the fundamental equations of 3-D dynamics far more transparent and comprehensible than, for example, simply writing out the components
of the equations of motion. (Anyone who has seen older publications in which
the separate components of moments and/or angular momentum written out
doggedly, again and again, will appreciate the visual economy and conceptual
clarity of simply writing out the cross product!) More importantly, however,
the more general notation allows a freedom of choice of coordinate system.
While it is obvious that the representation of vectors will change when a different basis is used, it is not clear that the same holds for matrix representations.
But it turns out that there is always a choice of basis in which the inertia tensor
can be represented by a particularly simple matrix—one which is diagonal in
form. Such a choice of basis—tantamount to a choice of coordinate axes—is
referred to in that context as "principal axes." These happen to be precisely
the same "principal axes" the student may have encountered in a course in Solid
Mechanics; both, in turn, rely on techniques in the mathematical field of "linear
algebra" aimed at generating a standard, "canonical" form for a given matrix.
Thus the following three chapters comprising this Part culminate in a discussion of such canonical forms. But to understand this it is also necessary
to see how a change of basis affects the representation of a matrix (or even a
vector, for that matter). And, since it turns out that the new principal axes
are nothing more than the eigenvectors of the inertia matrix—and the inertia
matrix in these axes nothing more than that with the eigenvalues arrayed along
the diagonal—it is necessary to understand what these are. (If one were to walk
up to you on the street and ask the question "Yes, but just what is an eigenvector?", could you answer [at least after recovering from the initial shock]?) To
find these, in turn, it is necessary to solve a "homogeneous" system of linear
equations; under what conditions can a solution to such a system—or any other
linear system, for that matter—be found? The answer lies in the number of
"linearly independent" equations available. And if this diagonalization depends



on the "basis" chosen to represent it, just what is all this basis business about

anyhow?
In fact, at the end of the day, just what is a vector?
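As a numerical preview of where this Part is heading, the diagonalization described above can be carried out with standard linear-algebra routines. The sketch below uses NumPy; the symmetric "inertia matrix" is purely illustrative (its entries are not taken from the text), and the eigenvectors it produces play the role of the principal axes:

```python
import numpy as np

# A symmetric "inertia matrix" in some arbitrary body-fixed axes
# (the numerical values here are illustrative only).
I = np.array([[ 4.0, -1.0, 0.0],
              [-1.0,  3.0, 0.0],
              [ 0.0,  0.0, 2.0]])

# eigh is the routine for symmetric matrices: the eigenvalues are the
# principal moments, and the columns of V are orthonormal eigenvectors,
# i.e. the principal axes.
moments, V = np.linalg.eigh(I)

# Changing basis to the eigenvectors renders the matrix diagonal, with
# the eigenvalues arrayed along the diagonal:
I_principal = V.T @ I @ V
assert np.allclose(I_principal, np.diag(moments))
```

The assertion at the end is exactly the claim made above: in the eigenvector basis the inertia matrix is diagonal.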
The ideas of basis and linear independence are fundamental and will pervade
all of the following Part. But they are concepts from the field of vector spaces,
so these will be considered first. That "vectors" are more general than merely
the directed line segments—"arrows"—that most scientists and engineers are
familiar with will be pointed out; indeed, a broad class of objects satisfies exactly
the same properties that position, velocity, forces, et al. (and the way we add and
multiply them) do. While starting from this point is probably only reasonable
for mathematicians, students at this level from other disciplines have developed
the mathematical sophistication to appreciate this fact, if only in retrospect.
Thus the initial chapter adopts the perspective of viewing vectors as those objects
(and the operations on them) satisfying certain properties. It is hoped that this
approach will allow engineers and scientists to be conversant with their more
"mathematical" colleagues who, in the end, do themselves deal with the same
problems they do!
The second chapter then discusses these objects called tensors—"linear transformations" between vector spaces—and how at least the "second-order" tensors
we are primarily interested in can be represented as matrices of arbitrary (non-square) dimension. The latter occupy the bulk of this chapter, with a discussion
of how they depend on the very same basis the vectors do, and how a change
of this basis will change the representations of both the vectors and the tensors
operating on them. If such transformations "map" one vector to another, the
inverse operation finds where a given vector has been transformed from; this is
precisely the problem of determining the solution of a [linear] equation.
Though the above has been done in the context of transformations between
arbitrary vector spaces, we then specialize to mappings between spaces of the
same dimension—represented as square matrices. Eigenvectors and eigenvalues,
inverses, and determinants are treated—more for the sake of completeness than anything else, since most students will likely already have encountered them at this
stage. The criterion to predict, for a given matrix, how many linearly independent eigenvectors there are is developed. Finally all this will be brought together
in a presentation of normal forms—diagonalized matrices when we have available a full set of linearly independent eigenvectors, and a block-diagonal form
when we don't. These are simply the [matrix] representations of linear transformations in a distinguishing basis, and actually what the entire Part has been

aiming at: the "principal axes" used so routinely in the following Part on 3-D rigid-body dynamics are nothing more than those which make the "inertia
tensor" assume a diagonalized form.



Chapter 1

Vector Spaces
Introduction
When "vectors" are first introduced to the student, they are almost universally
motivated by physical quantities such as force and position. It is observed
that these have a magnitude and a direction, suggesting that we might define
objects having these characteristics as "vectors." Thus, for most scientists and
engineers, vectors are directed line segments—"arrows," as it were. The same
motivation then makes more palatable a curious "arithmetic of arrows": we
can find the "sum" of two arrows by placing them together head-to-tail, and
we can "multiply them by numbers" through a scaling, and possibly reversal
of direction, of the arrow. It is possible to show from nothing more than this
that these two operations have certain properties, enumerated in Table 1.1 (see
page 7).
At this stage one typically sees the introduction of a set of mutually perpendicular "unit vectors" (vectors having a magnitude of 1), i, j, and k—themselves
directed line segments. One then represents each vector A in terms of its [scalar]
"components" along each of these unit vectors: A = Ax i + Ay j + Az k. (One
might choose to "order" these vectors i, j, and k; then the same vector can be
written in the form of an "ordered triple" A = (Ax, Ay, Az).) Then the very
same operations of "vector addition" and "scalar multiplication" defined for
directed line segments can be reformulated and expressed in the Cartesian (or
ordered triple) form. Presently other coordinate systems may be introduced—in

polar coordinates or path-dependent tangential-normal coordinates, for example,
one might use unit vectors which move with the point of interest, but all quantities are still referred to a set of fundamental vectors.
[The observant reader will note that nothing has been said above regarding
the "dot product" or "cross product" typically also defined for vectors. There
is a reason for this omission; suffice it to point out that these operations are
defined in terms of the magnitudes and (relative) directions of the two vectors,
so hold only for directed line segments.]
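The reduction to real-number arithmetic is easy to see in code: as ordered triples of components, the sum of two vectors requires no trigonometry at all. A minimal sketch using NumPy, with illustrative component values:

```python
import numpy as np

# Two vectors given as (magnitude, direction) data would require
# spherical trigonometry to add directly; as ordered triples of
# components the sum is immediate (the values are illustrative).
A = np.array([1.0, 2.0,  2.0])   # A = 1 i + 2 j + 2 k
B = np.array([3.0, 0.0, -4.0])   # B = 3 i      - 4 k

C = A + B                        # componentwise "vector addition"
assert np.array_equal(C, np.array([4.0, 2.0, -2.0]))

magnitude = np.linalg.norm(C)    # |C|, recovered from the components
```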




The unit vectors introduced are an example of a "basis"; the expression of
A in terms of its components is referred to as its representation in terms of that
basis. Although the facts are obvious, it is rarely noted that a) the representation
of A will change for a different choice of i, j, and k, but the vector A remains the
same—it has the same magnitude and direction regardless of the representation;
and b) that, in effect, we are representing the directed line segment A in terms
of real numbers Ax, Ay, and Az. The latter is particularly important because it
allows us to deal with even three-dimensional vectors in terms of real numbers
instead of having to utilize trigonometry (spherical trigonometry, at that!) to
find, say, the sum of two vectors.
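Point a) can be checked numerically: rotating the basis changes the components but not the vector itself. A sketch using NumPy, where the particular rotation (90 degrees about k) and the component values are illustrative choices:

```python
import numpy as np

A = np.array([1.0, 2.0, 2.0])        # components in the basis i, j, k

# Rotate the basis 90 degrees about k; each row of R holds one of the
# new unit vectors expressed in the old basis (an illustrative choice).
R = np.array([[ 0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0]])

A_new = R @ A                        # representation in the new basis

# Different components, same vector: its magnitude is unchanged.
assert not np.array_equal(A_new, A)
assert np.isclose(np.linalg.norm(A_new), np.linalg.norm(A))
```

How the matrix R arises from a change of basis is exactly the subject taken up in the third chapter.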
In the third chapter, we shall see how to predict the representation of a vector
given in one basis when we change to another one utilizing matrix products.
While talking about matrices, however, we shall also be interested in other

applications, such as how they can be used to solve a system of linear equations.
A fundamental idea which will run through these chapters—which pervades all
of the field dealing with such operations, "linear algebra"—is that of "linear
independence." But this concept, like that of basis, is one characterizing vectors.
Thus the goal of this chapter is to introduce these ideas for later application.
Along the way, we shall see precisely what characterizes a "basis"—what is
required of a set of vectors to qualify as a basis. It will be seen to be far more
general than merely a set of mutually perpendicular unit vectors; indeed, the
entire concept of "vector" will be seen to be far broader than simply the directed
line segments formulation the reader is likely familiar with.
To do this we are going to turn the first presentation of vectors on its ear:
rather than defining the vectors and operations and obtaining the properties in
Table 1.1 these operations satisfy, we shall instead say that any set of objects,
on which there are two operations satisfying the properties in Table 1.1, are
vectors. Objects never thought of as being "vectors" will suddenly emerge
as, in fact, being vectors, and many of the properties of such objects, rather
than being properties only of these objects, will become common to all
objects (and operations) we define as a "vector space." It is a new perspective,
but one which, with past experience, it is possible to appreciate in retrospect.
Along the way we shall allude to some of the concepts of "algebra" from the
field of mathematics. Though not the sort of thing which necessarily leads
to the solution of problems, it does provide the non-mathematician with some
background and jargon used by "applied mathematicians" in their description
of problems engineers and scientists actually do deal with, particularly in the
field of non-linear dynamics—an area in which the interests of both intersect.

1.1

Vectors


Consider an arbitrary set of elements V. Given a pair of operations [corresponding to] "vector addition," ⊕, and "scalar multiplication," ⊙, V and its
two operations are said to form a vector space if, for v, v1, v2, and v3 in V,
and real numbers a and b, these operations satisfy the properties in Table 1.1.




vector addition:
(G1) v1 ⊕ v2 ∈ V (closure)
(G2) (v1 ⊕ v2) ⊕ v3 = v1 ⊕ (v2 ⊕ v3) (associativity)
(G3) ∃ 0 ∈ V : v ⊕ 0 = v for all v ∈ V (additive identity element)
(G4) for each v ∈ V, ∃ v̄ ∈ V : v ⊕ v̄ = 0 (additive inverse)
(G5) v1 ⊕ v2 = v2 ⊕ v1 (commutativity)

scalar multiplication:
(V1) a ⊙ v ∈ V (closure)
(V2) a ⊙ (v1 ⊕ v2) = (a ⊙ v1) ⊕ (a ⊙ v2) (distribution)
(V3) (a + b) ⊙ v = (a ⊙ v) ⊕ (b ⊙ v) (distribution)
(V4) (a · b) ⊙ v = a ⊙ (b ⊙ v) (associativity)
(V5) 1 ⊙ v = v

Table 1.1: Properties of Vector Space Operations
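Objects never drawn as arrows can satisfy these properties too. As an illustration, polynomials of degree at most 2 can be represented by their coefficient triples, and a few of the Table 1.1 properties can be spot-checked numerically with NumPy (the particular polynomials and scalars are illustrative):

```python
import numpy as np

# Polynomials of degree <= 2, represented by coefficient triples
# (a0, a1, a2) -- not "arrows," yet with the operations below they
# satisfy every property in Table 1.1.
p = np.array([1.0, -2.0, 3.0])    # 1 - 2x + 3x^2
q = np.array([0.0,  5.0, 1.0])    #     5x +  x^2
a, b = 2.0, -3.0

# (G5) commutativity of "vector addition":
assert np.array_equal(p + q, q + p)
# (G3) additive identity: the zero polynomial
assert np.array_equal(p + np.zeros(3), p)
# (V3) distribution of "scalar multiplication" over real addition:
assert np.array_equal((a + b) * p, a * p + b * p)
# (V5) the real number 1 gives p back:
assert np.array_equal(1.0 * p, p)
```

A spot check is of course not a proof; the point is only that "vector" behavior is a matter of the operations, not of the pictures.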

1.1.1

The "Algebra" of Vector Spaces¹

There are several things to be noted about this table. The first is that the five "G"
properties deal only with the "⊕" operation. The latter "V" properties relate that
operation to the "⊙" one.
"Closure" is generally regarded by non-mathematicians as so "obvious" that it
hardly seems worth mentioning. Yet, depending on how the set V is defined, reasonably non-pathological cases in which this is violated can be found. But in property
(V2), for example, (v1 ⊕ v2) is an element of V by (G1), while (a ⊙ v1) and (a ⊙ v2) are
by (V1); thus we know that the "⊙" operation on the left-hand side of this equation
is defined (and the result is known to be contained in V), while the two terms on the
right-hand side not only can be added with ⊕, but their "vector sum" is also in V. It
seems a little fussy, but all legal (and perhaps somewhat legalistic).
In the same way, to make a big thing about "associative" laws always seems a little
strange. But remember that the operations we are dealing with here are "binary": they

operate on only two elements at a time. The associative laws group elements in pairs
(so the operations are defined in the first place), and then show that it makes no
difference how the grouping occurs.
It is important to note that there are actually four types of operations used in this
table: in addition to the ⊙ operation between numbers and vectors, and the ⊕ between
vectors themselves, there appear "+" and "·" between the real numbers; again, we are
just being careful about which operations apply to which elements. Actually,
what all of the equalities in the Table imply is not just that the operations on each
side are defined, but that they are, in fact, equal!
¹ Note (see page xiii in the Foreword) that "optional" material is set off by using a smaller
typeface.




Property (V5) appears a little strange at first. What is significant is that the
number "1" is that real number which, when it [real-]multiplies any other real number,
gives the second number back—the "multiplicative identity element" for real numbers.
That property states that "1" will also give v back when it scalar-multiplies v.
It all seems a little fussy to the applied scientist and engineer, but what is really
being done is to "abstract" the structure of a given set of numbers, to see the essence
of what makes these numbers combine the way they do. This is really what the term
"algebra" (and thus "linear algebra") means to the mathematician. While it doesn't
actually help to solve any problems, it does free one's perspective from the prejudice of
experience with one or another type of arithmetic. This, in turn, allows one to analyze
the essence of a given system, and perhaps discover unexpected results unencumbered

by that prejudice.
In fact, the "G" in the identifying letters actually refers to the word "group"—
itself a one-operation algebraic structure defined by these properties. The simplest
example is the integers: they form a group under addition (though not multiplication:
there is no multiplicative inverse; fractions—rational numbers—are required to allow
this). In this regard, the last property, (G5), defines the group as a "commutative" or
"Abelian" group. We will shortly be dealing with the classic instance of an operation
which forms a group with appropriate elements but which is not Abelian: the set of
nonsingular n × n matrices under multiplication. The [matrix] product of two such
matrices A and B is another square matrix [of the same dimension, so closed], but generally AB ≠ BA.
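This non-commutativity is easy to exhibit numerically; a quick check with NumPy and two illustrative 2 × 2 matrices (each chosen to be invertible, so both belong to the group):

```python
import numpy as np

# Two invertible 2x2 matrices (illustrative values); their product is
# another 2x2 matrix of the same dimension (closure), but the products
# in the two orders differ.
A = np.array([[1, 2],
              [0, 1]])
B = np.array([[1, 0],
              [3, 1]])

AB = A @ B
BA = B @ A
assert not np.array_equal(AB, BA)   # AB != BA: the group is not Abelian
```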
One last note is in order: All of the above has been couched in terms of "real
numbers" because this is the application in which we are primarily interested. But,
in keeping with the perspective of this section, it should be noted that the actual
definition of a vector space is not limited to this set of elements to define scalar
multiplication. Rather, the elements multiplying vectors are supposed to come from a
"field"—yet another algebraic structure. Unfortunately, a field is defined in terms of
still another algebraic structure, the "ring":
Recall that a vector space as defined above really dealt with two sets of elements:
elements of the set V itself, and the [set of] real numbers; in addition, there were
two operations on each of these sets: "addition," "⊕" and "+" respectively, and
"multiplication," "·" on the reals and "⊙" between the reals and elements of V. In
view of the fact that we are examining the elements which "scalar multiply" the vectors,
it makes sense that we focus on the two operations "+" and "·" on those elements:
Definition 1.1.1. A ring is a set of elements R and two operations "+" and "·" such
that elements of R form an Abelian group under "+," and, for arbitrary r1, r2, and
r3 in R, "·" satisfies the properties in Table 1.2:

(RM1) r1 · r2 ∈ R (closure)
(RM2) (r1 · r2) · r3 = r1 · (r2 · r3) (associativity)
(RM3) (r1 + r2) · r3 = (r1 · r3) + (r2 · r3) (distribution)
(RM4) r1 · (r2 + r3) = (r1 · r2) + (r1 · r3) (distribution)

Table 1.2: Properties of Multiplication in Rings
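Square matrices furnish a concrete ring: they form an Abelian group under matrix addition, and the matrix product is closed, associative, and distributive over "+". A spot check of the Table 1.2 properties with NumPy, using illustrative integer matrices:

```python
import numpy as np

# Three 2x2 integer matrices (illustrative values) standing in for
# r1, r2, r3; "+" is matrix addition, "." is the matrix product.
r1 = np.array([[1, 2], [3, 4]])
r2 = np.array([[0, 1], [1, 0]])
r3 = np.array([[2, 0], [0, 5]])

# (RM2) associativity of "."
assert np.array_equal((r1 @ r2) @ r3, r1 @ (r2 @ r3))
# (RM3) and (RM4) distribution of "." over "+"
assert np.array_equal((r1 + r2) @ r3, r1 @ r3 + r2 @ r3)
assert np.array_equal(r1 @ (r2 + r3), r1 @ r2 + r1 @ r3)
```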
Note that the "·" operation is not commutative (though the "+" operation is, by
the ring definition that the elements and "+" form an Abelian group); that's why

