[qdeck] [q] operational definition [a] recipe for how you are going to measure or manipulate a variable [q] bias [a] systematic errors that push scores in a certain direction [q] observer or scorer bias [a] seeing, remembering, or counting in a way that supports the prejudice of the observer [q] blind or masked technique [a] preventing observer bias by making the observer unaware of participant characteristics related to the hypothesis [q] random measurement error [a] inconsistent and unsystematic errors that may be due to carelessness [q] reliable [a] consistent, relatively unaffected by random error [q] test-retest reliability [a] testing and retesting the same participants to see if the measure is reliable [q] interobserver agreement, interjudge agreement [a] a measure of observer reliability computed by looking at the percentage of times the raters agree. If it is low, you probably want to take steps to make your measure more objective [q] interobserver reliability [a] a measure of observer reliability computed by looking at correlations between observers' ratings [q] internal consistency [a] the degree to which answers to questions on a test agree with each other. If all the questions are measuring the same thing, answers to them should agree with each other. [q] subject (participant) biases [a] participants biasing the results by either acting to make themselves look good or acting to support what they think the hypothesis is. [q] social desirability bias [a] a type of participant bias in which the participant tries to act better than he or she is. [q] obeying demand characteristics [a] a type of participant bias in which the participant tries to act in a way that supports what he or she thinks the hypothesis is. [q] unobtrusive measurement [a] trying to record a behavior without letting the participant know you are measuring that behavior. [q] construct validity [a] the degree to which an operational definition captures the construct it was intended to capture. 
Making a case for a measure's reliability and for its content, convergent, and discriminant validity are all ways of making a case for its construct validity. [q] content validity [a] expert judgment that the measure covers the right material. [q] convergent validity [a] showing that a measure correlates with other measures of the construct or with other behaviors associated with the construct. [q] known-groups technique [a] a convergent validity technique involving seeing that groups known to differ on your construct also differ on your measure. [q] discriminant validity [a] showing that the measure is not measuring the wrong construct. [/qdeck]
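The two observer-reliability statistics defined in the cards above can be sketched in Python. This is an illustrative example only: the function names and the sample ratings are made up, not part of the deck.

```python
# Sketch of the two observer-reliability statistics from the cards:
# percent agreement and the correlation between observers' ratings.

def interobserver_agreement(ratings_a, ratings_b):
    """Percentage of trials on which the two raters agree exactly."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 100.0 * matches / len(ratings_a)

def interobserver_reliability(ratings_a, ratings_b):
    """Pearson correlation between the two raters' scores."""
    n = len(ratings_a)
    mean_a = sum(ratings_a) / n
    mean_b = sum(ratings_b) / n
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(ratings_a, ratings_b))
    var_a = sum((a - mean_a) ** 2 for a in ratings_a)
    var_b = sum((b - mean_b) ** 2 for b in ratings_b)
    return cov / (var_a * var_b) ** 0.5

# Hypothetical 1-to-5 ratings of six participants by two observers.
rater1 = [3, 4, 4, 2, 5, 3]
rater2 = [3, 4, 5, 2, 5, 3]
print(interobserver_agreement(rater1, rater2))
print(interobserver_reliability(rater1, rater2))
```

High values on both statistics support the claim that the measure is relatively free of random error and observer bias; a low percent agreement is the cue, per the card, to make the measure more objective.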