Review Sheet for Exam 1


Preparing for Exam 1

What is the Format?

  The test will have 50 multiple-choice questions and 10 true-false questions.

What will be covered?

Chapters 1-6 will be on the test, so be sure you read and reread those chapters and go through the web activities (especially the quizzes) on the student website.

What are the most important points to know from each chapter?

From Chapter 1, you should have learned what the characteristics of science are, and you should be able to explain to a skeptical layperson that psychology has those qualities. For example, you should be able to explain how operational definitions help psychology be an objective science, and you should be able to ask basic questions of a claim, such as "What is the objective evidence for and against the claim?"

From Chapter 2, you should have learned what constructs are, what internal, external, and construct validity are, what the main threats to those three validities are, and what the main ways of dealing with those threats are. In addition, you should have learned the main principles of the APA ethical code as it relates to research for both human and nonhuman animals. By this point, you should be more sophisticated in your ability to evaluate claims. Specifically, you should be able to go beyond just asking whether there is objective evidence for a claim to assessing the internal, external, and construct validity of a claim.

From Chapter 3, you should have learned what a hypothesis is, how to improve a hypothesis, what a null hypothesis is, and why the null hypothesis cannot be proven. In addition, you should be able to define and give examples of the following terms: moderator variable, mediating variable, and functional relationship. You should also understand the relationship between moderator variables and external validity.

From Chapter 4, you should be able to apply what you learned about construct, internal, and external validity in Chapters 2 and 3 to evaluate and improve existing research. In addition, you should have learned what the parts of an article are, what the different kinds of replication are, and which validity (or validities) each type of replication can help establish. Finally, you should know how to extend a study.

From Chapter 5, you should have learned the difference between random error and bias (e.g., that one of these errors is much more serious than the other and that blind techniques that eliminate one error may not eliminate the other), the difference between social desirability bias and obeying demand characteristics (i.e., that not all subject biases are the same), the difference between subject bias and observer bias (e.g., a rating scale might reduce one of these errors but not the other), the difference between reliability and validity (consistency is different from accuracy), what the relationship is between reliability and validity (one is usually a prerequisite for the other), what steps to take to evaluate and improve reliability, and how to determine whether a measure or manipulation is valid. You should know why we want discriminant validity and how it differs from convergent validity. Do not confuse internal validity with internal consistency, content validity with construct validity, or internal consistency with content validity. Making a concept map (or referring to one of our concept maps) may help you avoid confusing similar-sounding terms. You should know what steps we take to validate a manipulation and how those steps are similar to those we use to validate a measure. For example, you should know why researchers use manipulation checks.

Note that Chapter 5 is all about construct validity: the challenge of having valid operational definitions of your variables. Consequently, all the other validities discussed in that chapter (content validity, convergent validity, discriminant validity) are all just ways to build the case for a measure's construct validity. Similarly,  making your measure  reliable, internally consistent (note that internal consistency has nothing to do with internal validity), and free of bias are all ways of increasing the chances that your measure has construct validity.

From Chapter 6, you should know that at least six different factors should affect your choice of a measure: validity, sensitivity, scales of measurement (yes, scales of measurement and sensitivity are two different things), ethical concerns, practical concerns, and the measure's susceptibility to bias. You should understand why we value sensitivity, how we can maximize sensitivity, what the different scales of measurement are, which types of measures produce which scale of measurement, and how your research hypothesis dictates which level of measurement you need (e.g., if you need to know that one group changes more than another, you need to be able to measure "how much" change, so you need at least interval data).


Is there another way of thinking about what we have covered?

Chapter 1 argues that psychology is a science because it meets the criteria of science (see Table 1.2 on p. 22 for a review).

Chapter 1 also argues that although some other ways of knowing sometimes seem easier or faster routes to truth than the scientific method, they all have serious flaws (see Table 1.3, pp. 23-25 for a review).

Admittedly, when we try to use the scientific method to study the mind, we must make inferences about mental states (e.g., anger, love, intelligence). Because we really can't directly observe constructs, we must rely on operational definitions.

Chapter 2 admits that there are two basic problems with doing research to get answers to questions about human behavior:

 1. The study you do may be unethical.

 2. The study you do may not answer the question.

At a more fundamental level, there is only one problem: Is the study ethical? (A study that cannot answer its research question wastes participants' time, which is itself an ethical problem.)

To address this question, start by reducing the potential for harm. Following APA's nine recommendations/guidelines (see Box 2.1, p. 59) can help reduce the potential for harm.  

Second, make sure you have the validity you need to address your research question.

Depending on the research question, you may be interested mainly in only one of the three kinds of validity (internal, external, or construct). Sometimes, you may want two of these kinds of validity. Rarely, however, will a study have all three types of validity.

Chapter 3 discusses ways to generate research ideas. One technique for generating research ideas is to simply question assumptions that other people have made--whether those assumptions were made in editorials, old sayings, songs, popular magazines, self-help books, or television commercials.

If you have an idea about the general relationship between two variables, expand on this idea by:

 1. seeing if you can make any interesting predictions about the specific functional relationship between the two variables;

 2. seeing if you can postulate any moderator variables that would weaken or maybe even reverse the relationship between your variables;

 3. seeing if you can identify the mediating variables that account for the relationship (the cognitive or physiological variables that are the mechanism for the connection between your two variables).

Once you have a research idea, you can refine it into a research hypothesis by responding to the questions in Table 3.3 (p. 105). The biggest problems that students have with their hypotheses are that they often

a. cannot explain why they expect their hypothesis to be supported.

 b. can't explain why their hypothesis is interesting (perhaps because they aren't familiar with the relevant research and theory and have limited experience applying psychology to real-life situations).

 Once you have an idea, pages 104-107 describe ways to develop your ideas into ethical, practical, testable research hypotheses.

Chapter 4 focuses primarily on four ways of generating research ideas from reading published research: 

 1. See if the internal validity can be strengthened. For example, if we find that towns whose radio stations play lots of country music have higher suicide rates than other towns, we can't conclude that country music causes depression. However, we might want to do an experiment to find out whether, in a lab or field situation, country music causes people to be less happy.

2. See if the external validity can be strengthened. Often, the study will have a small, biased, or unusual sample of subjects, be done in a non-real-world setting, use extremely unusual levels of the treatment, or look only at short-term effects. Thus, it may be dangerous to generalize the results of the study to other people or other situations. In such cases, you could redo the study using better samples, more realistic settings, and more realistic amounts of the treatment, and you could look for longer-term effects. Put another way, you might suspect that type of participant (e.g., working versus retired), type of setting (workplace versus lab), amount of treatment, or time is a moderator variable.

3. See if the construct validity should be improved. Improving construct validity may involve doing a conceptual replication that uses better measures than the original study used. Alternatively, you might improve construct validity by doing a systematic replication that adds blind techniques to the original design, thereby reducing participant bias.

4. Take advantage of the fact that science involves building on other people's work by doing studies that (a) the authors suggest, (b) look for variables that moderate or mediate the relationship the authors found, (c) test practical applications of the findings, or (d) pinpoint the nature of a relationship involving a general variable (e.g., IQ) by using measures that tap specific dimensions of that variable (e.g., vocabulary, short-term memory capacity, etc.). In addition, you may want to repeat the study to see if the results actually do replicate.

Chapter 5 focuses on construct validity.

One of the hardest things in psychology is to establish construct validity. For example, how do you show that you really are measuring love? You start off by getting an operational definition--a recipe, a concrete set of steps or procedures that you will follow to get a score for each subject. Your operational definition should, at least, get you an objective measure. However, don't just assume that your measure is objective. You try to see if observer bias is a threat to your measure. If observer bias is a threat, you either modify your measure or you try to get objectivity by making your observers "blind."

Once you establish that your measure is not vulnerable to observer bias, you still can't say that your measure is perfect. One common first step is to see if your measure is reliable. That is, does it produce consistent scores? If you are measuring a construct that is stable over time (IQ), then subjects who take your test today and six months from now should get basically the same score each time. Test-retest reliability can tell us to what extent subjects are getting the same score each time. Typically, you expect a high test-retest reliability coefficient (between .85 and 1.0) for an established measure.
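A test-retest reliability coefficient is simply the correlation between the scores from the two administrations. As a rough sketch (the scores below are invented for illustration, not taken from the text), you could compute it like this:

```python
# Sketch: a test-retest reliability coefficient computed as the Pearson
# correlation between two administrations of the same measure.
# The scores below are made-up illustration data.

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

time1 = [98, 105, 110, 121, 93, 130, 102, 115]   # scores today
time2 = [101, 103, 112, 119, 95, 128, 100, 117]  # scores six months later

r = pearson_r(time1, time2)
print(f"test-retest reliability: r = {r:.2f}")
# A coefficient near 1.0 means subjects are getting roughly the same
# score each time; a low coefficient means something besides the
# stable construct is affecting scores.
```

Because each subject's two scores track each other closely in this made-up data, the coefficient comes out well above the .85 benchmark mentioned above.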

But what if you got a low test-retest reliability coefficient? For example, what if it was below .60? Then, your measure is being affected by something other than the stable construct.

What is this something else? Inconsistent, erratic, random error!

What do you do about this problem due to inconsistent, unstable random error?

 1. Ditch the measure.

 2. See if the random error is due to inconsistencies in the observers' ratings. Calculating interobserver reliability will tell you if this is a problem. If interobserver reliability is low, you can apply any of the remedies suggested in Table 5-1 (p. 153).

 3. See if you can reduce any inconsistencies in how the measure is administered. In technical terminology, try to standardize the administration of your measure.
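To make the second step concrete: one simple index of interobserver reliability is the percentage of observations on which two observers agree. The observers, behaviors, and category codes below are all invented for illustration:

```python
# Sketch: two hypothetical observers code the same 10 behaviors;
# percentage agreement is one simple index of interobserver reliability.
# (All category codes here are invented for illustration.)

observer_a = ["aggressive", "neutral", "helpful", "neutral", "aggressive",
              "helpful", "neutral", "neutral", "aggressive", "helpful"]
observer_b = ["aggressive", "neutral", "helpful", "aggressive", "aggressive",
              "helpful", "neutral", "neutral", "neutral", "helpful"]

# Count the observations on which the two observers gave the same code.
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
percent_agreement = 100 * agreements / len(observer_a)
print(f"interobserver agreement: {percent_agreement:.0f}%")
# Low agreement points to observer-based random error -- a cue to
# train observers better or sharpen the coding categories.
```

Here the observers agree on 8 of 10 observations (80%); disagreements like those on the 4th and 9th behaviors are exactly the observer-based inconsistency that the remedies in Table 5-1 target.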



1. Random error and bias are both errors, but bias is much more serious threat to validity than random error.

 2. Reliability and validity are two different concepts. A valid measure will be reliable, but a reliable measure is not necessarily valid.

Next, you get to worry about subject biases. Instead of giving you their true feelings, your participants may try to make themselves look good (social desirability) or make you look good (by following demand characteristics). Table 5-2 (p. 157) suggests some ways to avoid subject biases. Note that unobtrusive measures are one particularly clever way of reducing subject biases.  

Once you have shown that your measure is reliable (that it is not too influenced by random error), you can start to make a case that your measure is measuring what you say it is measuring. That is, you can make the case that your measure has construct validity. However, to do so, you may be called upon to show that:

 Your measure has content validity: it has items that measure all the relevant dimensions of your construct and there are enough items for each dimension.

 Your measure has internal consistency: all the items seem to be measuring the same thing. The evidence for this is that participants respond to all the items in a similar way. For example, participants who strongly agree with item 1, should also strongly agree with item 2, and item 3, etc.

Your measure has convergent validity: the measure correlates with other indicators of the construct. (If it walks like a duck and looks like a duck, it may be a duck). For example, people who score high on your measure should also score higher on other measures of the construct, high scorers should do more of the behaviors associated with your construct than low scorers, and people who are known to be high on your construct should score higher on your measure than people known to be low.

Your measure has discriminant validity. Thus, people who score high on your IQ measure should not also score high on outgoingness, modesty, social desirability, moodiness, etc. If you can show that you're not measuring the wrong thing, it helps build the case that you may be measuring the right thing.
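The internal consistency idea above is often summarized with a single index, Cronbach's alpha. As a rough sketch (the item responses below are invented: 5 participants answering 4 items on a 1-5 agreement scale):

```python
# Sketch: Cronbach's alpha as one common index of internal consistency.
# The responses are invented (rows = participants, columns = items,
# each item rated on a 1-5 agreement scale).

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

responses = [
    [5, 4, 5, 4],
    [4, 4, 4, 5],
    [2, 3, 2, 2],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]

k = len(responses[0])  # number of items
item_vars = [variance([row[i] for row in responses]) for i in range(k)]
total_var = variance([sum(row) for row in responses])

# alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Because participants who agree strongly with one item also agree strongly with the others in this made-up data, alpha comes out high, which is the pattern of "responding to all the items in a similar way" described above.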

That brings you up to Chapter 6 and notes relating to it. The Chapter 6 material will also be on the test.