Chapter 8 Glossary

Types of survey research


questionnaire: a survey that respondents read. (p. 286)


self-administered questionnaire: a questionnaire filled out in the absence of an investigator. (p. 286)


interview: a survey in which the researcher orally asks questions. (p. 286)


unstructured interview: an interview in which the interviewer has no standard set of questions that he or she asks each participant. The unstructured interview is a virtually worthless approach if the goal is to collect scientifically valid data. (p. 302)


semi-structured interview: an interview constructed around a core of standard questions; however, the interviewer may expand on any question in order to explore a given response in greater depth. (p. 301)


structured interview: an interview in which all respondents are asked a standard list of questions in a standard order. For collecting valid data, the structured survey is superior to the semi-structured and unstructured surveys. (p. 301)


survey research: a nonexperimental design in which you design a questionnaire, test, or interview and administer it to either the group you are interested in or to a sample of that group. This nonexperimental design can be useful for describing what people are thinking, feeling, or doing.

     Note that using a questionnaire, test, or interview in your study does not mean that you are doing survey research. For example, in experimental research, you may estimate the effect of a treatment by administering a questionnaire, test, or interview after administering the treatment manipulation. Many experiments use questionnaires, tests, and interviews rather than observation to measure participants' responses. (p. 276)


Sampling Techniques and External Validity

population: everyone to whom you want to generalize your results. Depending on your goals, a population could be everyone who votes in the presidential election, everyone at your school, or everyone who spends more than $100 a year on videos. Because you usually do not have the time to survey everyone in your population, you will usually survey a sample of those individuals. (p. 276)


convenience sampling: choosing to include people in your sample simply because they are convenient to survey. It is hard to generalize the results accurately from a study that used convenience sampling. (p. 312)


quota sampling: making sure you get the desired number of (meet your quotas for) certain types of people (certain age groups, minorities, etc.). This method does not involve random sampling and usually gives you a less representative sample than random sampling would. It may, however, be an improvement over convenience sampling. (p. 313)


random sampling: a technique in which the sample is randomly selected from the population, so that every member of the population has an equal chance of being chosen. If you want to generalize your results, random sampling is better than both quota sampling and convenience sampling. (p. 309)
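As an illustration, simple random sampling can be sketched with Python's standard library; the population of ID numbers below is hypothetical.

```python
import random

# A minimal sketch of simple random sampling. The "population" here is
# just a hypothetical list of member ID numbers.
population = list(range(1000))           # everyone you want to generalize to
sample = random.sample(population, 50)   # each member equally likely to be drawn
```

Because `random.sample` draws without replacement, no member appears twice in the sample.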


proportionate stratified random sampling (often called stratified random sampling): like quota sampling, a strategy for ensuring that the proportion of certain subgroups (e.g., men and women) is the same in the sample as it is in the population. However, it goes beyond quota sampling because it involves random sampling. For example, if the population of interest was 75% women and 25% men, you might obtain a list of all the women and randomly sample 75 from that list. Then, you would obtain a list of all the men in that population and randomly sample 25 from that list. Proportionate stratified random sampling has all the advantages of random sampling with even greater accuracy. Thus, a relatively small proportionate stratified random sample may more accurately represent the population than a larger, non-stratified random sample. (p. 311)
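The 75% women / 25% men example above can be sketched in Python; the stratum labels and ID strings are hypothetical, and with these round proportions the per-stratum counts sum exactly to the desired sample size.

```python
import random

def proportionate_stratified_sample(strata, n):
    """Draw a sample whose stratum proportions match the population's.

    strata: dict mapping stratum name -> list of population members
    n: total desired sample size
    """
    total = sum(len(members) for members in strata.values())
    sample = []
    for name, members in strata.items():
        # Each stratum contributes in proportion to its population share.
        k = round(n * len(members) / total)
        sample.extend(random.sample(members, k))
    return sample

# Hypothetical population that is 75% women and 25% men, sampled with n = 100.
women = [f"W{i}" for i in range(750)]
men = [f"M{i}" for i in range(250)]
sample = proportionate_stratified_sample({"women": women, "men": men}, 100)
```

Within each stratum the draw is still random, which is what distinguishes this from quota sampling.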


nonresponse bias: the problem caused by people who were in your sample refusing to participate in your study. Nonresponse bias is one of the most serious threats to a survey design's external validity. (p. 286)


demographics: characteristics of an individual or group, such as gender, age, and social class. If you know the demographics of the population, you can compare those to the demographics of your sample. A big discrepancy would indicate sampling bias or nonresponse bias. (p. 280)

Threats to Construct Validity


double-barreled question: a single questionnaire item that actually (because of a conjunction like "and" or "but") contains at least two questions. Responses to a double-barreled question are difficult to interpret. For example, if someone answers "No" to the question "Are you hungry and thirsty?", we do not know whether he is hungry, but not thirsty; not hungry, but thirsty; or neither hungry nor thirsty. (p. 303)


leading question: a question that leads respondents to the answer the researcher wants (such as, "You like Research Design Explained, don't you?"). (p. 303)


interviewer bias: bias that occurs when the interviewer influences participants' responses. For example, the interviewer might verbally or nonverbally reward the participant for giving responses that support the hypothesis. Alternatively, the interviewer might adopt a more enthusiastic tone of voice when reading the desired response than when reading the less desired response. (p. 291)


demand characteristics: aspects of the research situation that let participants know that behaving a certain way will help "prove" the researcher's hypothesis. Because participants want to help the researcher, they may act in accordance with the demand characteristics rather than act the way they really feel. A leading question shows participants what the demand characteristics are for that question. (p. 285)


social desirability bias: the tendency for participants to give answers that make them look good rather than accurate answers. For example, they may overstate the extent to which they help others out and understate the extent to which they lose their temper. (p. 285)


response set: a pattern of responding to questions that is independent of the particular question's content (for instance, a participant might always check "agree" no matter what the statement is). (p. 285)


retrospective self-report: participants telling you what they said, did, or believed in the past. In addition to problems with ordinary self-report (response sets, giving the answer that a leading question suggests, etc.), retrospective self-report is vulnerable to memory biases. Thus, retrospective self-reports should not be accepted at face value. (p. 284)

Types of questions

fixed-alternative items: questions on a test or questionnaire in which a person must choose an answer from among a few specified alternatives. Multiple-choice, true-false, and rating-scale questions are all fixed-alternative items. (p. 297)


open-ended items: questions that do not provide fixed-response alternatives. Essay and fill-in-the-blank questions are open-ended questions. (p. 300)


nominal-dichotomous items: items that lead to concluding that the participant belongs to a certain category or has a certain quality. Data from nominal items cannot be scored in terms of saying that one participant has more of a certain quality than another. Questions asking about gender and group membership are nominal items. Data from nominal items are analyzed differently from data from Likert-type items. (p. 297)


Likert-type item: a question that asks participants whether they strongly agree, agree, are neutral, disagree, or strongly disagree with a certain statement. Psychologists often assume that Likert-type items produce interval data. (p. 298)


summated scores: when you have several Likert-type questions that all tap the same dimension (such as attitude toward democracy), you could add up (sum) each participant's responses from the different questions to get an overall (summated) score. (p. 299)
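The summing described above can be sketched in Python. The numeric coding (1 = strongly disagree through 5 = strongly agree) follows the usual Likert convention; the reverse-keying option is a common refinement for items whose wording runs in the opposite direction, not something this glossary entry requires.

```python
# Map Likert responses to numbers, then sum each participant's answers
# across the items that tap the same dimension.
SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

def summated_score(responses, reverse_keyed=()):
    """responses: list of Likert answers, one per item.
    reverse_keyed: indices of items whose wording is reversed, so that
    'strongly agree' should count as 1 rather than 5."""
    score = 0
    for i, answer in enumerate(responses):
        value = SCALE[answer]
        if i in reverse_keyed:
            value = 6 - value  # flip the 1..5 scale
        score += value
    return score

print(summated_score(["agree", "strongly agree", "neutral"]))  # 4 + 5 + 3 = 12
```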


parameters: characteristics of populations rather than of samples. For example, the mean of a sample is not a parameter, but the mean of the entire population is a parameter. (p. 320)


parameter estimation: using measurements obtained from a sample to estimate the true characteristics of the entire population. For example, you might use your sample mean to estimate the population mean. (p. 320)


standard error of the mean: an index of the degree to which random error may cause the sample mean to be an inaccurate estimate of the population mean. (p. 320)


95% confidence interval: a range, computed from the sample mean, that should contain the true population mean 95% of the time. Often, the 95% confidence interval extends from two standard errors of the mean below the sample mean to two standard errors of the mean above the sample mean. (p. 320)
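The two entries above translate directly into a short calculation: the standard error is the sample standard deviation divided by the square root of the sample size, and the interval below uses the mean ± 2 SEM approximation described in the glossary entry. The scores are hypothetical.

```python
import math

def mean_sem_ci(xs):
    """Return (mean, standard error of the mean, approximate 95% CI)
    for a list of sample scores, using the mean +/- 2 SEM rule."""
    n = len(xs)
    m = sum(xs) / n
    # Sample standard deviation (n - 1 in the denominator).
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    sem = sd / math.sqrt(n)          # standard error of the mean
    return m, sem, (m - 2 * sem, m + 2 * sem)

# Hypothetical sample of eight ratings.
m, sem, (lo, hi) = mean_sem_ci([4, 5, 6, 5, 4, 6, 5, 5])
```

Note that larger samples shrink the standard error, and therefore the interval, because of the square root of n in the denominator.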


hypothesis testing: using inferential statistics to determine whether the relationship found between two or more variables in a particular sample holds true in the population. (p. 320)


power: the ability of a study to detect relationships among variables that actually exist. If your study has little power, hypothesis testing will probably not be productive. (p. 298)


chi square: a statistical test used to do hypothesis testing on nominal data (e.g., data from nominal-dichotomous questions). If the data were interval, you would probably use a t test instead. (p. 325)
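A minimal sketch of the chi-square statistic on nominal counts: sum (observed − expected)² / expected over the categories. The yes/no counts below are hypothetical; comparing 60/40 against an expected 50/50 split gives a statistic of 4.0, which exceeds 3.84, the critical value for 1 degree of freedom at the .05 level.

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E across categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical data: did 60 "yes" vs. 40 "no" answers differ from a 50/50 split?
stat = chi_square([60, 40], [50, 50])   # (10**2 / 50) + (10**2 / 50) = 4.0
```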


factor analysis: an advanced statistical technique used to determine whether questions seem to be measuring the same construct. (p. 323)