LECTURE 9.1

LOOK BEFORE LEAPING TO CAUSAL CONCLUSIONS


I. Three prerequisites for inferring causality
A. Observe change
B. Know that treatment occurred before change, not after
C. Everything except treatment stayed the same

II. Keeping everything else the same is very difficult (psychology research can’t be done in a vacuum chamber). Millions of things can change. Fortunately, these millions of things fall into eight categories:
A. History: Events other than the treatment that change during the course of the study, such as world and local events.

B. Maturation: Biological events unrelated to the treatment manipulation, such as fatigue, illness, physical development.

C. Testing: The act of being tested may cause changes in the participant; these changes will then be reflected on the retest.
For example, being tested on some vocabulary words may cause you to look up words you didn’t know. Thus, when you are retested on the list, you may do better (whether or not you are given a pill designed to improve memory).

D. Instrumentation: The measuring instrument itself changes during the course of the study, and those changes result in changes in participants’ scores.
Examples: Between pretest and posttest, a rater may become more lenient, an interviewer may ask more probing questions, a scale may be revised. Thus, even if the participant’s behavior stays the same, the participant’s score may change.

E. Regression (to the mean): Participants who have extreme scores will tend to have less extreme scores when retested because some extreme scores are extreme, in part, due to random error. Since random error is inconsistent, chances are that random error will not make participants’ scores quite so extreme the second time around. Thus, lucky streaks, shooting streaks, runs of "heads" when flipping coins, etc. all end. Similarly, if you took a group of people who got 100% on the first exam in a class, you would find that, on average, they scored lower on the second exam.
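The following is a minimal simulation sketch (not part of the original lecture) of how random error produces regression to the mean. It assumes each exam score is a stable true ability plus independent random error; the specific numbers (ability mean of 75, error SD of 8, 95-point cutoff) are illustrative assumptions.

    import random

    random.seed(1)

    # Hypothetical sketch: each exam score = stable true ability + random error.
    # The numbers (ability mean 75, SD 8; error SD 8) are made up for illustration.
    N = 10_000
    true_ability = [random.gauss(75, 8) for _ in range(N)]
    exam1 = [t + random.gauss(0, 8) for t in true_ability]
    exam2 = [t + random.gauss(0, 8) for t in true_ability]

    mean = lambda xs: sum(xs) / len(xs)

    # Select only the students who scored extremely well on exam 1.
    top = [i for i in range(N) if exam1[i] >= 95]

    print("Exam 1 mean of top scorers:  ", round(mean([exam1[i] for i in top]), 1))
    print("Exam 2 mean of same students:", round(mean([exam2[i] for i in top]), 1))
    # The exam 2 mean is noticeably lower: the extreme exam 1 scores were partly
    # good luck (random error), and luck does not repeat, so the scores regress
    # back toward the group mean.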

F. Selection: The groups were different to start with. The researcher was comparing "apples with oranges."
Research that compares people who volunteer for a program with people who don’t participate in a program is particularly vulnerable to selection.

G. Selection by maturation interactions: Groups that are similar to start with may naturally grow apart (e.g., two groups of prisoners may be matched on type of crime committed, but the treatment group may be older).

H. Mortality: Participants dropping out of the study. Often, participants may drop out of the treatment group, but not the no-treatment group. Thus, statements like "Graduates of our program..." may be meaningless if most people who started dropped out.


III. Problems in comparing a treatment group to a no-treatment group. Group differences may not be due to the treatment, but to:
A. Selection: Groups being different before the study began.
Sources of selection bias:
1. Self-assignment to group (volunteers differ from non-volunteers).
Ex: Studies comparing drug users to non-users.

2. Researcher-assignment to group: The researcher may put the higher scorers in the treatment group, thus "stacking the deck" in favor of the hypothesis.

3. Arbitrary assignment to group: The researcher assigns participants to groups based on some pre-existing characteristic (rather than at random), building differences into the groups.
Ex: J. V. Brady found that "executive monkeys" were more likely to have ulcers whereas Seligman found that having control decreased stress. Why the difference? Brady arbitrarily assigned the monkeys that were quickest to learn to avoid the shock to be "executive monkeys," whereas Seligman used random assignment.

B. Selection by maturation: Even if groups started out similarly, they may naturally grow apart.
Ex: (1) a longitudinal study matching nursing home patients and preschoolers on a memory task; (2) a long-term Head Start research project comparing middle-class and disadvantaged children.

C. Regression (to different means): Even if groups started out with similar pretest scores, they may score differently on the posttest because each group regresses toward its own group mean (illustrated in the sketch below).
Ex: Some Head Start studies comparing disadvantaged children matched with middle-class children find that, after Head Start, the Head Start children do worse than the middle-class children.
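As an illustration, here is a small simulation sketch (not from the lecture or the Head Start studies) of regression toward different means: two groups drawn from populations with different averages are matched on pretest scores, yet their posttest scores drift apart with no treatment at all. The population means (40 and 60) and the 48-52 matching band are hypothetical.

    import random

    random.seed(2)

    mean = lambda xs: sum(xs) / len(xs)

    def simulate(group_mean, n=50_000):
        """Pretest and posttest = stable true score + independent random error."""
        true = [random.gauss(group_mean, 10) for _ in range(n)]
        pre = [t + random.gauss(0, 10) for t in true]
        post = [t + random.gauss(0, 10) for t in true]
        return pre, post

    pre_a, post_a = simulate(40)   # hypothetical lower-scoring population
    pre_b, post_b = simulate(60)   # hypothetical higher-scoring population

    # "Match" the groups by keeping only children whose pretest scores fall in
    # the same narrow band (48-52), so the two samples look identical at pretest.
    a = [post_a[i] for i in range(len(pre_a)) if 48 <= pre_a[i] <= 52]
    b = [post_b[i] for i in range(len(pre_b)) if 48 <= pre_b[i] <= 52]

    print("Posttest mean, group A (population mean 40):", round(mean(a), 1))
    print("Posttest mean, group B (population mean 60):", round(mean(b), 1))
    # With no treatment at all, the matched groups drift apart at posttest
    # because each group regresses toward its own population mean.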

D. Selective mortality: Participants may drop out of the treatment group, but not out of the no-treatment group. Consequently, the researcher may be comparing only the best of the treatment group with the entire no-treatment group.

IV. Problems with before-after designs
A. Three reasons participants may change between pretest and posttest--even without the treatment.
1. History
2. Maturation
3. Testing (Imagine taking the same psychology test over and over.)
B. Three ways that measurement changes may cause scores to change between pretest and posttest--even though participants themselves don’t change

1. Changes in how participants are measured: Instrumentation (a real threat when you haven’t standardized the administration and/or scoring of your measure)

2. Changes in the extent to which measurement is affected by random error: Regression (Because of regression, beware of studies comparing people who have extreme attitudes, extreme scores on a test, or who have "hit bottom.")

3. Changes in how many participants are measured: Mortality (The lower scoring participants may leave the study due to failure to follow instructions, lack of motivation, or failing health.)


