I. The problem with random error
A. Plays havoc with individual scores
B. Like static or other unwanted noise, random error hides the message the data are sending you, but it does not bias the results.
C. Sources of random error
1. Observers vary (in how they record and score behavior)
2. Testing situations vary
3. Individuals vary (weight, mood, etc. fluctuate)
II. Assessing random error to see if it is a problem
A. The technique for assessing random error is based on one assumption about the nature of people: underlying traits and behaviors are basically stable and consistent.
B. Test-retest reliability coefficient ranges from 0-1, with higher numbers indicating higher reliability
1. What to do if it's over .90
2. What to do if it's under .60
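The test-retest coefficient described above is just the correlation between people's scores on the first testing and their scores on a later retesting. A minimal sketch with made-up scores (the six participants and their numbers below are hypothetical, not from the text):

```python
# Test-retest reliability: correlate scores from two administrations
# of the same measure. Scores below are hypothetical.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [12, 15, 9, 20, 17, 11]   # scores at first testing
time2 = [13, 14, 10, 19, 18, 12]  # same people, retested later
r = pearson_r(time1, time2)
print(round(r, 2))  # prints 0.98
```

Because people's relative standing barely changes between the two testings, the coefficient here lands near 1.0, the "stable and consistent" pattern the technique assumes.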
III. Assessing random error due to the observer
A. Interjudge agreement
B. Inter-observer reliability
1. ranges from 0-1
2. puts a ceiling (but not a floor) on test-retest reliability
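One simple index of interjudge agreement is the proportion of observations on which two independent raters assign the same category. A minimal sketch with hypothetical ratings (the category labels and trials are invented for illustration):

```python
# Interjudge agreement: proportion of trials on which two
# independent raters code the behavior the same way.
rater_a = ["aggressive", "calm", "calm", "aggressive", "calm", "aggressive"]
rater_b = ["aggressive", "calm", "aggressive", "aggressive", "calm", "aggressive"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
agreement_rate = agreements / len(rater_a)
print(round(agreement_rate, 2))  # raters match on 5 of 6 trials
```

Like the test-retest coefficient, this index runs from 0 to 1; if the observers themselves disagree this much, scores cannot be more consistent over time than the observers are with each other.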
IV. What to do if there is too much random error due to the observer
A. Train and motivate raters
B. Use instruments
V. Dealing with random error due to non-observer sources
A. Reduce random error due to the testing situation by standardizing how the measure is administered
B. Reduce random error due to the participant by
1. Interviewing participants to find out if certain questions are confusing or hard to answer. Since participants may be guessing on those items, that guessing will lead to random error. Consequently, deleting those items may increase reliability.
2. Using more items or larger samples of behavior so that random error will have more chances to balance out. That is, scores on a 1-item multiple-choice test will be more influenced by random error than scores on a 100-item test.
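The balancing-out idea in point 2 can be shown with a small simulation (the true-score value and test lengths below are made up for illustration): a person's observed score on a 1-item test swings wildly from testing to testing, while the average over 100 items stays close to the true score.

```python
# Sketch of why longer tests are more reliable: random error on each
# item tends to cancel out when many items are averaged.
# All numbers are simulated, not from any real test.
import random

random.seed(0)
TRUE_SCORE = 0.70  # person's true probability of answering an item correctly

def observed_score(n_items):
    """Proportion correct on an n-item test subject to random item error."""
    return sum(random.random() < TRUE_SCORE for _ in range(n_items)) / n_items

one_item = [observed_score(1) for _ in range(1000)]
hundred_items = [observed_score(100) for _ in range(1000)]

def spread(scores):
    """Standard deviation: how much observed scores bounce around."""
    m = sum(scores) / len(scores)
    return (sum((s - m) ** 2 for s in scores) / len(scores)) ** 0.5

print(spread(one_item), spread(hundred_items))
```

The spread of the 100-item scores comes out roughly a tenth of the spread of the 1-item scores: same true score, far less random error in each observed score.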
VI. Reliability and validity
A. Reliability does not guarantee validity; it is only a prerequisite for validity. A reliable measure may be consistently measuring (or consistently influenced by) the wrong thing, such as:
1. Another construct
2. Observer bias
B. Low reliability waters down, but does not poison, validity.
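Point B can be put in numbers using the classical attenuation formula: the observed validity correlation is the true correlation between the constructs, shrunk by the square root of the product of the two measures' reliabilities. The values below are illustrative assumptions, not figures from the text:

```python
# Sketch of how low reliability "waters down" validity:
# observed r = true r * sqrt(reliability of measure * reliability of criterion).
# All three input values are hypothetical.
from math import sqrt

true_validity = 0.60        # correlation between the underlying constructs
reliability_measure = 0.50  # an unreliable measure
reliability_criterion = 0.90

observed_validity = true_validity * sqrt(reliability_measure * reliability_criterion)
print(round(observed_validity, 2))  # prints 0.4
```

The unreliable measure cuts an underlying correlation of .60 down to about .40: diluted, but not destroyed, which is the sense in which low reliability waters down rather than poisons validity.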