Research Design
Explained
SEVENTH EDITION
Mark L. Mitchell
Janina M. Jolley
Australia • Brazil • Japan • Korea • Mexico • Singapore • Spain • United Kingdom • United States
Research Design Explained, Seventh
Edition
Mark L. Mitchell and Janina M. Jolley
Senior Sponsoring Editor, Psychology: Jane Potter
Assistant Editor: Rebecca Rosenberg
Editorial Assistant: Nicolas Albert
Media Editor: Rachel Guzman
Marketing Manager: Tierra Morgan
Marketing Assistant: Molly Felz
Executive Marketing Communications Manager:
Talia Wise
Senior Content Project Manager: Pat Waldo
Creative Director: Rob Hugel
Art Director: Vernon Boes
Print Buyer: Karen Hunt
Rights Acquisitions Account Manager, Text:
Margaret Chamberlain-Gaston
Rights Acquisitions Account Manager, Image:
Don Schlotman
Production Service: Babita Yadav,
Macmillan Publishing Solutions
Photo Researcher: Nina Smith, Pre-PressPMG
Copy Editor: MPS
Illustrator: MPS


Cover Designer: Lisa Henry
Cover Image: © Getty Images/Art Wolfe
Chapter Opening Photograph: © Getty Images/
Mark Segal
Compositor: Macmillan Publishing Solutions
© 2010, 2007 Wadsworth, Cengage Learning
ALL RIGHTS RESERVED. No part of this work covered by the
copyright herein may be reproduced, transmitted, stored, or used
in any form or by any means graphic, electronic, or mechanical,
including but not limited to photocopying, recording, scanning,
digitizing, taping, Web distribution, information networks, or
information storage and retrieval systems, except as permitted
under Section 107 or 108 of the 1976 United States Copyright Act,
without the prior written permission of the publisher.
For product information and technology assistance, contact us at
Cengage Learning Customer & Sales Support, 1-800-354-9706.
For permission to use material from this text or product,
submit all requests online at www.cengage.com/permissions.
Further permissions questions can be e-mailed to

Library of Congress Control Number: 2008943371
Student Edition:
ISBN-13: 978-0-495-60221-7
ISBN-10: 0-495-60221-3
Wadsworth
10 Davis Drive
Belmont, CA 94002-3098
USA
Cengage Learning is a leading provider of customized learning
solutions with office locations around the globe, including
Singapore, the United Kingdom, Australia, Mexico, Brazil, and
Japan. Locate your local office at www.cengage.com/international.
Cengage Learning products are represented in Canada by Nelson
Education, Ltd.
To learn more about Wadsworth visit www.cengage.com/Wadsworth
Purchase any of our products at your local college store or at our
preferred online store www.ichapters.com.
Printed in the United States of America
1 2 3 4 5 6 7    13 12 11 10 09
We dedicate this book to our parents—Anna, Glen, Zoë,
and Neal—and to our daughter, Moriah.
BRIEF CONTENTS
PREFACE XVII
ABOUT THE AUTHORS XXIII
1 Science, Psychology, and You 1
2 Validity and Ethics: Can We Know, Should We Know, and Can We Afford Not to Know? 35
3 Generating and Refining Research Hypotheses 61
4 Reading, Reviewing, and Replicating Research 96
5 Measuring and Manipulating Variables: Reliability and Validity 126
6 Beyond Reliability and Validity: The Best Measure for Your Study 175
7 Introduction to Descriptive Methods and Correlational Research 203
8 Survey Research 253
9 Internal Validity 304
10 The Simple Experiment 334
11 Expanding the Simple Experiment: The Multiple-Group Experiment 382
12 Expanding the Experiment: Factorial Designs 416
13 Matched Pairs, Within-Subjects, and Mixed Designs 463
14 Single-n Designs and Quasi-Experiments 504
15 Putting It All Together: Writing Research Proposals and Reports 543
APPENDIX A APA Format Checklist 570
APPENDIX B Sample APA-Style Paper 581
APPENDIX C A Checklist for Evaluating a Study’s Validity 595
APPENDIX D Practical Tips for Conducting an Ethical and Valid Study 604
For help on almost all the “nuts and bolts” of planning and conducting a
study, go to www.cengage.com/psychology/mitchell
APPENDIX E Introduction to Statistics 605
For help on choosing, interpreting, or conducting statistical tests, go to
www.cengage.com/psychology/mitchell
APPENDIX F Statistics and Random Numbers Tables 606
GLOSSARY 622
REFERENCES 631
INDEX 641
CONTENTS
PREFACE XVII

ABOUT THE AUTHORS XXIII
1 Science, Psychology, and You 1
Chapter Overview 2
Why Psychology Uses the Scientific Approach 3
Science’s Characteristics 3
Psychology’s Characteristics 11
The Importance of Science to Psychology: The Scientific Method
Compared to Alternative Ways of Knowing 19
Why You Should Understand Research Design 25
To Understand Psychology 25
To Read Research 26
To Evaluate Research 27
To Protect Yourself From “Quacks” 27
To Be a Better Psychologist 27
To Be a Better Thinker 28
To Be Scientifically Literate 28
To Increase Your Marketability 29
To Do Your Own Research 30
Concluding Remarks 30
Summary 32
Key Terms 33
Exercises 33
Web Resources 34
2 Validity and Ethics: Can We Know, Should We Know, and Can We Afford Not to Know? 35
Chapter Overview 36
Questions About Applying Techniques From Older Sciences
to Psychology 37

Internal Validity Questions: Did the Treatment Cause a Change in
Behavior? 39
Construct Validity Questions: Are the Variable Names Accurate? 43
External Validity Questions: Can the Results Be Generalized? 48
Ethical Questions: Should the Study Be Conducted? 49
Concluding Remarks 58
Summary 58
Key Terms 59
Exercises 59
Web Resources 60
3 Generating and Refining Research Hypotheses 61
Chapter Overview 62
Generating Research Ideas From Common Sense 62
Generating Research Ideas From Previous Research 64
Specific Strategies 65
Conclusions About Generating Research Ideas From Previous
Research 69
Converting an Idea Into a Research Hypothesis 69
Make It Testable 70
Make It Supportable 71
Be Sure to Have a Rationale: How Theory Can Help 71
Demonstrate Its Relevance: Theory Versus Trivia 72
Refine It: 10 Time-Tested Tips 73
Make Sure That Testing the Hypothesis Is Both Practical and Ethical 90
Changing Unethical and Impractical Ideas Into Research
Hypotheses 90
Make Variables More General 91
Use Smaller Scale Models of the Situation 92
Carefully Screen Potential Participants 92
Use “Moderate” Manipulations 93

Do Not Manipulate Variables 93
Concluding Remarks 93
Summary 94
Key Terms 94
Exercises 94
Web Resources 95
4 Reading, Reviewing, and Replicating Research 96
Chapter Overview 97
Reading for Understanding 97
Choosing an Article 98
Reading the Abstract 98
Reading the Introduction 99
Reading the Method Section 104
Reading the Results Section 106
Reading the Discussion 110
Developing Research Ideas From Existing Research 111
The Direct Replication 112
The Systematic Replication 115
The Conceptual Replication 120
The Value of Replications 122
Extending Research 122
Concluding Remarks 123
Summary 123
Key Terms 124
Exercises 124
Web Resources 125
5 Measuring and Manipulating Variables: Reliability and Validity 126

Chapter Overview 127
Choosing a Behavior to Measure 128
Errors in Measuring Behavior 129
Overview of Two Types of Measurement Errors: Bias and Random
Error 130
Errors Due to the Observer: Bias and Random Error 133
Errors in Administering the Measure: Bias and Random Error 137
Errors Due to the Participant: Bias and Random Error 137
Summary of the Three Sources and Two Types of Measurement
Error 142
Reliability: The (Relative) Absence of Random Error 143
The Importance of Being Reliable: Reliability as a Prerequisite to
Validity 143
Using Test–Retest Reliability to Assess Overall Reliability: To What Degree
Is a Measure “Random Error Free”? 144
Identifying (and Then Dealing With) the Main Source of a Measure’s
Reliability Problems 147
Conclusions About Reliability 155
Beyond Reliability: Establishing Construct Validity 157
Content Validity: Does Your Test Have the Right Stuff? 157
Internal Consistency Revisited: Evidence That You Are Measuring One
Characteristic 158
Convergent Validation Strategies: Statistical Evidence That You Are
Measuring the Right Construct 159
Discriminant Validation Strategies: Showing That You Are Not Measuring
the Wrong Construct 161
Summary of Construct Validity 164
Manipulating Variables 165
Common Threats to a Manipulation’s Validity 165

Pros and Cons of Three Common Types of Manipulations 169
Conclusions About Manipulating Variables 171
Concluding Remarks 171
Summary 171
Key Terms 172
Exercises 172
Web Resources 174
6 Beyond Reliability and Validity: The Best Measure for Your Study 175
Chapter Overview 176
Sensitivity: Will the Measure Be Able to Detect the Differences You
Need to Detect? 178
Achieving the Necessary Level of Sensitivity: Three Tips 178
Conclusions About Sensitivity 185
Scales of Measurement: Will the Measure Allow You to Make the
Kinds of Comparisons You Need to Make? 186
The Four Different Scales of Measurement 187
Why Our Numbers Do Not Always Measure Up 192
Which Level of Measurement Do You Need? 193
Conclusions About Scales of Measurement 198
Ethical and Practical Considerations 199
Concluding Remarks 200
Summary 200
Key Terms 201
Exercises 201
Web Resources 202
7 Introduction to Descriptive Methods and Correlational Research 203

Chapter Overview 204
Uses and Limitations of Descriptive Methods 205
Descriptive Research and Causality 205
Description for Description’s Sake 209
Description for Prediction’s Sake 209
Why We Need Science to Describe Behavior 209
We Need Scientific Measurement 210
We Need Systematic, Scientific Record-Keeping 210
We Need Objective Ways to Determine Whether Variables Are
Related 210
We Need Scientific Methods to Generalize From Experience 211
Conclusions About Why We Need Descriptive Research 212
Sources of Data 212
Ex Post Facto Data: Data You Previously Collected 212
Archival Data 213
Observation 219
Tests 222
Analyzing Data From Descriptive Studies: Looking at
Individual Variables 224
Analyzing Data From Descriptive Studies: Looking at Relationships
Between Variables 229
Comparing Two Means 229
Correlation Coefficients 234
The Coefficient of Determination 238
Determining Whether a Correlation Coefficient Is Statistically
Significant 239
Interpreting Significant Correlation Coefficients 241
Interpreting Null (Nonsignificant) Correlation Coefficients 244
Nonlinear Relationships Between Two Variables 245

Relationships Involving More Than Two Variables 246
Concluding Remarks 249
Summary 250
Key Terms 251
Exercises 251
Web Resources 252
8 Survey Research 253
Chapter Overview 254
Questions to Ask Before Doing Survey Research 255
What Is Your Hypothesis? 255
Can Self-Report Provide Accurate Answers? 260
To Whom Will Your Results Apply? 262
Conclusions About the Advantages and Disadvantages of Survey
Research 263
The Advantages and Disadvantages of Different Survey
Instruments 263
Written Instruments 263
Interviews 267
Planning a Survey 272
Deciding on a Research Question 272
Choosing the Format of Your Questions 272
Choosing the Format of Your Survey 276
Editing Questions: Nine Mistakes to Avoid 278
Sequencing Questions 280
Putting the Final Touches on Your Survey Instrument 283
Choosing a Sampling Strategy 283
Administering the Survey 289
Analyzing Survey Data 290
Summarizing Data 290

Using Inferential Statistics 293
Concluding Remarks 301
Summary 301
Key Terms 302
Exercises 302
Web Resources 303
9 Internal Validity 304
Chapter Overview 305
Problems With Two-Group Designs 308
Why We Never Have Identical Groups 308
Conclusions About Two-Group Designs 319
Problems With the Pretest–Posttest Design 319
Three Reasons Participants May Change Between Pretest and
Posttest 321
Three Measurement Changes That May Cause Scores to Change Between
Pretest and Posttest 323
Conclusions About Trying to Keep Everything Except the Treatment
Constant 326
Ruling Out Extraneous Variables 328
Accounting for Extraneous Variables 328
Identifying Extraneous Variables 329
The Relationship Between Internal and External Validity 329
Concluding Remarks 331
Summary 331
Key Terms 332
Exercises 332
Web Resources 333
10 The Simple Experiment 334
Chapter Overview 335

Logic and Terminology 335
Experimental Hypothesis: The Treatment Has an Effect 337
Null Hypothesis: The Treatment Does Not Have an Effect 337
Conclusions About Experimental and Null Hypotheses 340
Manipulating the Independent Variable 340
Experimental and Control Groups: Similar, but Treated Differently 341
The Value of Independence: Why Control and Experimental Groups
Shouldn’t Be Called “Groups” 342
The Value of Assignment (Manipulating the Treatment) 344
Collecting the Dependent Variable 345
The Statistical Significance Decision: Deciding Whether to Declare That a
Difference Is Not a Coincidence 345
Statistically Significant Results: Declaring That the Treatment Has an
Effect 346
Null Results: Why We Can’t Draw Conclusions From Nonsignificant
Results 347
Summary of the “Ideal” Simple Experiment 349
Errors in Determining Whether Results Are Statistically
Significant 349
Type 1 Errors: “Crying Wolf” 350
Type 2 Errors: “Failing to Announce the Wolf” 352
The Need to Prevent Type 2 Errors: Why You Want the Power to Find
Significant Differences 352
Statistics and the Design of the Simple Experiment 353
Power and the Design of the Simple Experiment 353
Conclusions About How Statistical Considerations Impact Design
Decisions 356
Nonstatistical Considerations and the Design of the Simple
Experiment 357
External Validity Versus Power 357

Construct Validity Versus Power 358
Ethics Versus Power 359
Analyzing Data From the Simple Experiment: Basic Logic 360
Estimating What You Want to Know: Your Means Are Sample
Means 361
Why We Must Do More Than Subtract the Means From Each Other 362
How Random Error Affects Data From the Simple Experiment 362
When Is a Difference Too Big to Be Due to Random Error? 365
Analyzing the Results of the Simple Experiment: The t Test 368
Making Sense of the Results of a t Test 369
Assumptions of the t Test 374
Questions Raised by Results 376
Questions Raised by Nonsignificant Results 376
Questions Raised by Significant Results 377
Concluding Remarks 377
Summary 377
Key Terms 379
Exercises 380
Web Resources 381
11 Expanding the Simple Experiment: The Multiple-Group Experiment 382
Chapter Overview 383
The Advantages of Using More Than Two Values of an Independent
Variable 383
Comparing More Than Two Kinds of Treatments 383
Comparing Two Kinds of Treatments With No Treatment 385
Comparing More Than Two Amounts of an Independent Variable to
Increase External Validity 386

Using Multiple Groups to Improve Construct Validity 393
Analyzing Data From Multiple-Group Experiments 398
Analyzing Results From the Multiple-Group Experiment: An Intuitive
Overview 399
Analyzing Results From the Multiple-Group Experiment: A Closer
Look 401
Concluding Remarks 412
Summary 412
Key Terms 413
Exercises 413
Web Resources 415
12 Expanding the Experiment: Factorial Designs 416
Chapter Overview 417
The 2 × 2 Factorial Experiment 419
Each Column and Each Row of the 2 × 2 Factorial Is Like a Simple
Experiment 421
How One Experiment Can Do More Than Two 422
Why You Want to Look for Interactions: The Importance of Moderating
Variables 425
Examples of Questions You Can Answer Using the 2 × 2 Factorial
Experiment 431
Potential Results of a 2 × 2 Factorial Experiment 433
One Main Effect and No Interaction 434
Two Main Effects and No Interaction 439
Two Main Effects and an Interaction 440
An Interaction and No Main Effects 443
An Interaction and One Main Effect 444
No Main Effects and No Interaction 446
Analyzing Results From a Factorial Experiment 446

What Degrees of Freedom Tell You 447
What F and p Values Tell You 447
What Main Effects Tell You: On the Average, the Factor Had an
Effect 448
What Interactions Usually Tell You: Combining Factors Leads to Effects
That Differ From the Sum of the Individual Main Effects 449
Putting the 2 × 2 Factorial Experiment to Work 450
Looking at the Combined Effects of Variables That Are Combined in Real
Life 450
Ruling Out Demand Characteristics 450
Adding a Replication Factor to Increase Generalizability 450
Using an Interaction to Find an Exception to the Rule: Looking at a
Potential Moderating Factor 452
Using Interactions to Create New Rules 453
Conclusions About Putting the 2 × 2 Factorial Experiment to Work 453
Hybrid Designs: Factorial Designs That Allow You to Study
Nonexperimental Variables 454
Hybrid Designs’ Key Limitation: They Do Not Allow Cause–Effect
Statements Regarding the Nonexperimental Factor 454
Reasons to Use Hybrid Designs 454
Concluding Remarks 459
Summary 459
Key Terms 460
Exercises 460
Web Resources 462
13 Matched Pairs, Within-Subjects, and Mixed Designs 463
Chapter Overview 464
The Matched-Pairs Design 466
Procedure 466
Considerations in Using Matched-Pairs Designs 466

Analysis of Data 471
Conclusions About the Matched-Pairs Design 473
Within-Subjects (Repeated Measures) Designs 474
Considerations in Using Within-Subjects Designs 474
Four Sources of Order Effects 476
Dealing With Order Effects 478
Randomized Within-Subjects Designs 481
Procedure 481
Analysis of Data 482
Conclusions About Randomized Within-Subjects Designs 482
Counterbalanced Within-Subjects Designs 483
Procedure 483
Advantages and Disadvantages of Counterbalancing 484
Conclusions About Counterbalanced Within-Subjects Designs 492
Choosing the Right Design 493
Choosing a Design When You Have One Independent Variable 493
Choosing a Design When You Have More Than One Independent
Variable 494
Concluding Remarks 500
Summary 501
Key Terms 502
Exercises 502
Web Resources 503
14 Single-n Designs and Quasi-Experiments 504
Chapter Overview 505
Inferring Causality in Randomized Experiments 505
Establishing Covariation: Finding a Relationship Between Changes in the
Suspected Cause and Changes in the Outcome Measure 505
Establishing Temporal Precedence: Showing That Changes in the Suspected
Cause Come Before Changes in the Outcome Measure 506
Battling Spuriousness: Showing That Changes in the Outcome Measure
Are Not Due to Something Other Than the Suspected Cause 506
Single-n Designs 507
Battling Spuriousness by Keeping Nontreatment Factors Constant:
The A–B Design 511
Variations on the A–B Design 515
Evaluation of Single-n Designs 518
Conclusions About Single-n Designs 522
Quasi-Experiments 522
Battling Spuriousness by Accounting for—Rather
Than Controlling—Nontreatment Factors 523
Time-Series Designs 528
The Nonequivalent Control-Group Design 535
Conclusions About Quasi-Experimental Designs 539
Concluding Remarks 540
Summary 540
Key Terms 541
Exercises 541
Web Resources 542
15 Putting It All Together: Writing Research Proposals and Reports 543
Chapter Overview 544
Aids to Developing Your Idea 544
The Research Journal 544
The Research Proposal 545
Writing the Research Proposal 546
General Strategies for Writing the Introduction 546

Specific Strategies for Writing Introduction Sections for Different Types of
Studies 550
Writing the Method Section 556
Writing the Results Section 559
Writing the Discussion Section 560
Putting on the Front and Back 561
Writing the Research Report 563
What Stays the Same or Changes Very Little 563
Writing the Results Section 564
Writing the Discussion Section 567
Concluding Remarks 568
Summary 568
Key Terms 569
Web Resources 569
APPENDIX A APA Format Checklist 570
APPENDIX B Sample APA-Style Paper 581
APPENDIX C A Checklist for Evaluating a Study’s Validity 595
APPENDIX D Practical Tips for Conducting an Ethical and Valid Study 604
For help on almost all the “nuts and bolts” of planning and conducting a
study, go to www.cengage.com/psychology/mitchell
APPENDIX E Introduction to Statistics 605
For help on choosing, interpreting, or conducting statistical tests, go to
www.cengage.com/psychology/mitchell
APPENDIX F Statistics and Random Numbers Tables 606
GLOSSARY 622
REFERENCES 631
INDEX 641
PREFACE
This book focuses on two goals: (1) helping students evaluate the internal,
external, and construct validity of studies and (2) helping students write a
good research proposal. To accomplish these goals, we use the following
methods:

• We use numerous, clear examples—especially for concepts with which students have trouble, such as statistical significance and interactions.
• We focus on important, fundamental concepts; show students why those concepts are important; relate those concepts to what students already know; and directly attack common misconceptions about those concepts.
• We show the logic behind the process of research design so that students know more than just terminology—they learn how to think like research psychologists.
• We explain statistical concepts (not computations) because (a) students seem to have amnesia for what they learned in statistics class, (b) some understanding of statistics is necessary to understand journal articles, and (c) statistics need to be considered before doing research, not afterward.
FLEXIBLE ORGANIZATION
We know that most professors share our goals of teaching students to be able
to read, evaluate, defend, and produce scientific research. We also know that
professors differ in how they go about achieving these goals and in the
emphasis professors place on each of these goals. For example, although
about half of all research methods professors believe that the best way to
help students understand design is to cover nonexperimental methods first,
about half believe that students must understand experimental methods first.
To accommodate professor differences, we have made the chapters relatively
self-contained modules. Because each chapter focuses on ethics, construct
validity, external validity, and internal validity, it is easy to skip chapters or
cover them in different orders. For example, the first chapter that some
professors assign is the last.
CHANGES TO THE SEVENTH EDITION
The changes to this edition, although extensive, are evolutionary rather than
revolutionary. As before, our focus is on helping students think scientifically,
read research critically, and write good research proposals. As before, we
have tried to encourage students to think along with us; consequently, we
have tried to make the book sound more like a persuasive essay or a “how-
to” book than a textbook. However, in this edition, we have made this book
a more powerful and flexible tool for improving students’ thinking, reading,
writing, and researching skills by

• making each chapter a stand-alone module,
• providing many additional modules on the book’s website,
• integrating the book more closely with its website, and
• adding more examples from recent journal articles.
CHAPTER-BY-CHAPTER CHANGES
Chapter 1, “Science, Psychology, and You,” now emphasizes the distinction
between the scientific method and other ways of knowing (e.g., see the new
box: Box 1.2), explains why psychology is a science (including a new section
that explains the wisdom of applying general rules to individual cases), and
explains how students can benefit from understanding research methods.
You can link this chapter to two web appendixes: (1) one on the value of
research design for getting a job and getting into graduate school, and
(2) another that responds to Kuhn and other critics of science.
Chapter 2, “Validity and Ethics: Can We Know, Should We Know, and
Can We Afford Not to Know?,” has been revised to help students understand
the connection between validity and ethics. In addition, it has been expanded
to help students understand more about (a) the history of ethics in research
(e.g., see the new box: Box 2.1), (b) obstacles to establishing internal validity,
and (c) how randomized experiments can be internally valid. You can link
this chapter to our extensive discussion of how to deal with IRBs in our web
appendix on conducting ethical research (Appendix D) and to our web
appendix on the debate between quantitative and qualitative research.
Chapter 3, “Generating and Refining Research Hypotheses,” was revised
to give students even more help in developing experimental hypotheses. In
addition, because so much research today involves either mediating or
moderating variables, we expanded our discussion of the distinction between those
two types of variables. You can link this chapter to our Web Appendix F:
Using Theory to Generate Hypotheses and to Web Appendix D: Practical
Tips for Conducting an Ethical and Valid Study.
Chapter 4, “Reading, Reviewing, and Replicating Research,” was revised
to make it a self-contained module. Material that students might not have
had the background to understand prior to reading the rest of the book was
either rewritten or moved to Appendix C: Checklist for Critically Reading
Articles. You can link this chapter to Appendix C, as well as to Web Appendix B:
Searching the Literature.
Chapter 5, “Measuring and Manipulating Variables: Reliability and
Validity,” was changed to add more practical tips for evaluating, improving,
and using measures.
Chapter 6, “Beyond Reliability and Validity: The Best Measure for Your
Study,” was reorganized to help students better understand how to refine and
select measures. Students can now use this chapter in conjunction with the
student section of this chapter’s website to download and evaluate measures.
Chapter 7, “Introduction to Descriptive Methods and Correlational
Research,” was made clearer and more engaging by using more and better
examples from current research (e.g., research from Psychological Science on
happiness) as well as from current controversies (e.g., the autism–vaccine
link). In addition, we provided more tips on how to develop descriptive
hypotheses, and we explained many of the technical terms that students will
see in published reports of correlational studies.
Chapter 8, “Survey Research,” has been updated to continue to keep up
with the technological changes (cell phones, web surveys) that have affected
survey research. In addition, we have provided even more practical tips on
how to conduct survey research.
Chapter 9, “Internal Validity,” is a discussion of Campbell and Stanley’s
eight threats to validity. Although this chapter may be skipped, it helps
students understand (a) why they should not leap to cause–effect conclusions,
(b) why they should appreciate simple experiments (Chapter 10), and
(c) why researchers using within-subject designs (Chapter 13), as well as
researchers using either single-n or quasi-experimental designs (Chapter 14)
cannot merely assume that they will have internal validity. We improved this
chapter by (a) putting more emphasis on the value of causal research, (b)
adding real-life examples to illustrate the importance of understanding regression
toward the mean, (c) putting more emphasis on how mortality can harm
internal validity, (d) adding real-life examples to illustrate the importance of
understanding the testing effect, and (e) providing additional examples and
explanations to help students understand why, in many circumstances,
researchers prize internal validity over external validity.
Chapter 10, “The Simple Experiment,” was revised to give students even
more heuristics for generating research ideas for simple experiments and now
includes examples from recent, interesting research articles—articles that
students can read using the guides on the book’s website. We have also
expanded our discussion of power to include more about choosing levels of
the independent variable and about trade-offs between power and validity.

Professors can link this chapter to our web appendix on field experiments.
Chapter 11, “Expanding the Simple Experiment: The Multiple-Group
Experiment,” was improved in two ways. First, we included even more tips
to help students design multiple-group experiments. Second, we included
more examples of published multiple-group experiments, especially examples
that illustrated the value of control groups for (a) boosting construct validity
and (b) determining whether one group was scoring higher than another
because of a positive effect of its treatment or because of a negative effect of
the other group’s treatment.
Chapter 12, “Expanding the Experiment: Factorial Designs,” was improved
by providing even more (a) explanations and examples of interactions, (b)
tips for helping students interpret 2 × 2 tables, and (c) strategies students
can use to develop ideas for factorial experiments. Professors who want to
go into more depth about interactions can assign Web Appendix F: Ordinal
Interactions.
Chapter 13, “Matched Pairs, Within-Subjects, and Mixed Designs,” was
edited to accommodate professors who assigned this chapter early in the
term. Although we did not delete any material, we added a few examples
that make the material easier to understand.
Chapter 14, “Single-n Designs and Quasi-Experiments,” now includes a
better explanation of how single-n designs differ from case studies and a new
box highlighting the problems with case studies. Professors can link this
chapter to our web appendix on field experiments.
Chapter 15, “Putting It All Together: Writing Research Proposals and
Reports,” because reviewers were so pleased with it, is essentially unchanged.
Appendix A, “APA Format Checklist,” is also, due to reviewer demand,
essentially unchanged. If you have your students hand in a filled-out copy of
this checklist along with their paper, the quality of their papers will improve.
The old Appendix B, “Searching the Literature,” has been put online so
that students can access it while doing their online searches and to make it
easier for students to use the links to other online resources. The new
Appendix B, “Sample APA-Style Paper,” is a good model for students to follow—
and an interesting article to read.
Appendix C: A Checklist for Evaluating a Study’s Validity is a new
appendix that we hope will be as successful as our APA Format Checklist. If
you use Appendix C with our web guides that help students read particular
articles, students will develop confidence and competence in reading and
critically evaluating research.
Appendix D: Practical Tips for Conducting an Ethical and Valid Study
not only discusses the APA ethical code and IRB issues but also gives
practical advice for how to conduct an ethical and valid study.
Appendix E: Introduction to Statistics provides an introduction to statistics.
In addition to helping students understand and conduct analyses that students
often use (e.g., t tests), we have included material that might help
students understand statistical issues (e.g., our box discussing the statistical
significance controversy), logic (e.g., how researchers make the case for a
mediator variable and how some correlational researchers make the case that
a variable has an effect), and techniques (e.g., multiple regression and factor
analysis) that students will encounter when they read journal articles. Please
note that the Test Bank contains test items for Appendix E.
Appendix F: Statistics and Random Numbers Tables contains statistical
tables and instructions on how to use those tables. For example, Appendix F
tells students how to draw random samples, how to randomly assign
participants, and how to do post hoc tests.
THE STUDENT WEBSITE
The student website includes many goodies that make it almost impossible for
a diligent student to get lower than a “C” in the course. For each chapter, the
site contains a concept map, a crossword puzzle, learning objectives, a pretest
and a posttest quiz for each chapter based on those learning objectives, and
answers to the text’s even-numbered exercises.
THE PROFESSOR’S WEBSITE
The professor site has PowerPoint® lectures, chapter summaries, learning
objectives, crossword puzzles, demonstrations, and links to videos. In
addition, for each chapter, we have a list of articles to assign, a summary of each
article, and a “reading guide”—a handout that defines terms, explains
concepts, and translates particularly tough passages—so that students can read
and understand those articles.
ACKNOWLEDGMENTS
Writing Research Design Explained was a monumental task that required
commitment, love, effort, and a high tolerance for frustration. If it had not
been for the support of our friends, family, publisher, and students, we could
not have met this challenge.
Robert Tremblay, a Boston journalist, and Lee Howard, a Connecticut
journalist, have our undying gratitude for the many hours they spent
critiquing the first six editions of this book. We are also grateful to Darlynn Fink,
an English professor, and Jamie Phillips, a philosophy professor, for their
work on this edition, as well as to the folks at Cengage for sharing and
nurturing our vision. In addition to thanking Bob, Lee, Darlynn, Jamie, and
Cengage, we need to thank three groups of dedicated reviewers, all of whom
were actually coauthors of this text.
First, we would like to thank the competent and conscientious professors
who shared their insights with us. We are grateful to the reviewers whose
constructive comments strengthened this seventh edition: Jeff Adams, Trent
University; Anne DePrince, University of Denver; Karen Fragedakis, Campbell
University; Glenn Geher, SUNY–New Paltz; Paula Goolkasian, University of
North Carolina–Charlotte; Shelia Kennison, Oklahoma State University;
Eugene Packer, William Paterson University; Jodie Royan, University of
Victoria; and Donna Stuber-McEwen, Friends University. In addition, we
thank the reviewers of past editions: Ruth Ault, Davidson College; Louis
Banderet, Quinsigamond Community College; James H. Beaird, Western Oregon
State College; John P. Brockway, Davidson College; Tracy L. Brown,
University of North Carolina–Asheville; Edward Caropreso, Clarion University;
Walter Chromiak, Dickinson College; James R. Council, North Dakota State
University; Helen J. Crawford, University of Wyoming; Raymond Ditrichs,
Northern Illinois University; Patricia Doerr, Louisiana State University; Linda
Enloe, Idaho State University; Mary Ann Foley, Skidmore College; George
Goedel, Northern Kentucky University; George L. Hampton III, University of
Houston–Downtown; Robert Hoff, Mercyhurst College; Lynn Howerton,
Arkansas State University; John C. Jahnke, Miami University; Randy Jones,
Utah State University; Sue Kraus, Fort Lewis College; Scott A. Kuehn, Clarion
University; R. Eric Landrum, Boise State University; Kenneth L. Leicht, Illinois
State University; Charles A. Levin, Baldwin-Wallace College; Joel Lundack,
Peru State College; Steven Meier, University of Idaho; Charles Meliska,
University of Southern Indiana; Kenneth B. Melvin, University of Alabama;
Stephen P. Mewaldt, Marshall University; John Nicoud, Marion College of
Fond du Lac; Jamie Phillips, Clarion University; David Pittenger, Marietta
College; Carl Ratner, Humboldt State University; Ray Reutzel, Brigham Young
University; Andrea Richards, University of California–Los Angeles; Margaret
Ruddy, Trenton State College; James J. Ryan, University of Wisconsin–La
Crosse; Rick Scheidt, Kansas State University; Gerald Sparkman, University of
Rio Grande; Sylvia Stalker, Clarion University; Ann Stearns, Clarion University;
Sandra L. Stein, Rider College; Ellen P. Susman, Metropolitan State College
of Denver; Russ A. Thompson, University of Nebraska; Benjamin Wallace,
Cleveland State University; Paul Wellman, Texas A&M University; and
Christine Ziegler, Kennesaw State College.
Second, we would like to thank our student reviewers, especially Susanne
Bingham, Mike Blum, Shannon Edmiston, Chris Fenn, Jess Frederick, Kris
Glosser, Melissa Gregory, Barbara Olszanski, Shari Poza, Rosalyn Rapsinski,
Katelin Speer, and Melissa Ustik.
Third, we would like to thank the English professors who critiqued the
previous editions of our book: William Blazek, Patrick McLaughlin, and
John Young. In addition to improving the writing style of the book, they
also provided a valuable perspective—that of the intelligent, but naïve,
reader.
Finally, we would like to thank our daughter Moriah for allowing us the
time to complete this project.
ABOUT THE AUTHORS
After graduating summa cum laude from Washington and Lee University,
Mark L. Mitchell received his MA and PhD degrees in psychology at The
Ohio State University. He is currently a professor at Clarion University.
Janina M. Jolley graduated with “Great Distinction” from California State
University at Dominguez Hills and earned her MA and PhD in psychology
from The Ohio State University. She is currently an executive editor of The
Journal of Genetic Psychology and Genetic Psychology Monographs. Her
first book was How to Write Psychology Papers: A Student’s Survival Guide
for Psychology and Related Fields, which she wrote with J. D. Murray and
Pete Keller.
In addition to working on this book for more than 100 dog years, Dr. Mitchell
and Dr. Jolley coauthored Developmental Psychology: A Topical Approach.
More recently, they collaborated with Robert O’Shea to write Writing for
Psychology: A Guide for Students (3rd ed.).
Dr. Mitchell and Dr. Jolley are married to research, teaching, and each
other—not necessarily in that order. You can write to them at the
Department of Psychology, Clarion University, Clarion, PA 16214, or send e-mail
to them at either or