Chapter 11 Glossary

**between-groups variance
(treatment variance, variability between group means, Mean Square Treatment,
Mean Square Between):** at one level, between-groups variance is just a measure
of how much the group means differ from each other. Thus, if all the groups had
the same mean, between-groups variance would be zero. At another level,
between-groups variance is an estimate of the combined effects of the two
factors that would make group means differ: treatment effects plus random error.
(p. 433)

**within-groups variance (error
variance, variability within groups, Mean Square Error, Mean Square
Within):** at one level, within-groups variance is just a measure of the
degree to which scores within each group differ from each other. A small
within-groups variance means that participants within each group are all
scoring similarly. At another level, within-groups variance is an estimate of
the effects of random error (because participants in the same treatment group
score differently due to random error, not due to treatment). Thus, within-groups
variance is also called **error variance**. (p. 436)

**analysis of variance
(ANOVA):** a statistical test that is especially useful when data are
interval and there are more than two groups. For the experiments discussed in
this chapter, ANOVA involves dividing between-groups variance by within-groups variance.
(p. 439)

**F ratio:** at the numerical level, the *F* ratio is the result of dividing
the between-groups variance by the within-groups variance. If
the treatment has no effect, the *F* ratio will tend to be close to 1.0,
indicating that the difference between the groups could be due to random error.
If the treatment has an effect, the *F* ratio will tend to be substantially
above 1.0, indicating that the difference between the groups is bigger than
would be expected if only random error were at work. To find out whether an
*F* is significant, you can consult an *F* table. (p. 440)
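The arithmetic behind the *F* ratio can be sketched in a few lines of Python. The three groups and all scores below are hypothetical, made-up data used only for illustration:

```python
from statistics import mean

# Hypothetical scores for three treatment groups (made-up data).
groups = [
    [4, 5, 6, 5],   # control
    [6, 7, 8, 7],   # low dose
    [9, 8, 10, 9],  # high dose
]

k = len(groups)                      # number of groups
n = sum(len(g) for g in groups)      # total number of scores
grand_mean = mean(x for g in groups for x in g)

# Between-groups (treatment) sum of squares: how far each group's mean
# is from the grand mean, weighted by group size.
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)

# Within-groups (error) sum of squares: how far each score is from
# its own group's mean.
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)    # Mean Square Between
ms_within = ss_within / (n - k)      # Mean Square Within (error)

f_ratio = ms_between / ms_within     # F = between-groups / within-groups
print(round(f_ratio, 2))
```

With these made-up scores, *F* comes out to 24: far above 1.0, so the differences between the group means are much larger than random error alone would be expected to produce.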

**confounding variables:** variables,
other than the independent variable, that may be responsible for the differences
between your conditions. There are two types of confounding variables: ones that
are manipulation-irrelevant and ones that are the result of the manipulation.

Confounding variables that are irrelevant to the treatment manipulation threaten internal validity. For example, the difference between groups may be due to one group being older than the other, rather than to the treatment. Researchers deal with treatment-irrelevant variables by (1) using random assignment to turn the effects of treatment-irrelevant variables into random effects and (2) using statistics to account for those random effects.

Confounding variables that are produced by the treatment manipulation hurt the construct validity of the study. They hurt the construct validity because even though we may know that the treatment manipulation had an effect, we do not know what it was about the treatment manipulation that had the effect. For example, we may know that an "exercise" manipulation increases happiness (internal validity), but not know whether the "exercise" manipulation worked because people exercised more, got more encouragement, had a more structured routine, practiced setting and achieving goals, or met new friends. In such a case, construct validity is harmed because we do not know what variable(s) are being manipulated by the "exercise" manipulation. (p. 427)

**hypothesis-guessing:** participants
trying to figure out what the study is designed to prove. Hypothesis-guessing can
hurt a study's construct validity. (p. 430)

**empty control group:** a group
that gets no treatment, not even a placebo. Usually, you should try to avoid
empty control groups: They hurt construct validity because they do not allow you
to discount the effects of treatment-related confounding variables. For
example, empty control groups may make your study vulnerable to
hypothesis-guessing. (p. 430)

**functional relationship:** the
shape of the relationship between two variables. For example, the functional relationship
between the independent and dependent variables might be a straight line (linear) or
a line with at least one bend in it (nonlinear). (p. 421)

**linear relationship:** a
functional relationship between an independent and dependent variable that is
graphically represented by a straight line. (p. 421)

**nonlinear relationship (curvilinear
relationship):** a functional relationship between an independent and dependent
variable that is graphically represented by a line that has at least one bend in
it: a curved line. (p. 422)

**post hoc test:** a statistical
test done **after** (1) doing a general test such as an ANOVA and (2) finding a
*significant* effect. Post hoc tests are used to **follow up on significant
results** obtained from a more general test. Because a significant ANOVA says only that
at least two of the groups are significantly different from one another, post
hoc tests may be performed to find out which groups are significantly different
from one another. (p. 446)
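One simple post hoc approach (sometimes called Fisher's protected *t* test) compares each pair of group means with a *t* statistic that uses the within-groups variance as its error term. The sketch below uses made-up data for illustration; the resulting *t* values would then be checked against a *t* table, much as an *F* is checked against an *F* table:

```python
from itertools import combinations
from math import sqrt
from statistics import mean

# Hypothetical scores for three treatment groups (made-up data).
groups = {
    "control":   [4, 5, 6, 5],
    "low dose":  [6, 7, 8, 7],
    "high dose": [9, 8, 10, 9],
}

k = len(groups)
n = sum(len(g) for g in groups.values())

# Within-groups variance (Mean Square Within): the error term shared
# by every pairwise comparison.
ss_within = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)
ms_within = ss_within / (n - k)

# A t statistic for each pair of group means, pooled over all groups.
results = {}
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    se = sqrt(ms_within * (1 / len(a) + 1 / len(b)))  # standard error of the difference
    t = (mean(b) - mean(a)) / se
    results[(name_a, name_b)] = round(t, 2)
    print(f"{name_a} vs. {name_b}: t = {round(t, 2)}")
```

Each *t* here would be evaluated with n − k = 9 degrees of freedom. More conservative post hoc procedures, such as Tukey's HSD or a Bonferroni correction, additionally adjust for the number of comparisons being made.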

**post hoc trend analysis:** a
type of post hoc test designed to determine whether
a linear or curvilinear relationship is statistically significant
(reliable). Note that a graph of your data may look like a curvilinear
relationship, but you would need to do a post hoc trend analysis to see whether
there really is such a relationship between your variables. (p. 447)