The Five Ordered Steps of Problem-Solving

Step 1: Define the problem.

Why is this step the most important?

Because defining the problem defines the solution. That is, the diagnosis determines the treatment (if you are diagnosed with the flu, you get a different treatment than if you are diagnosed with a cold). 

    Quotes illustrating the importance of defining the problem:

    Examples of insights due to re-defining the problem

Why is this step so difficult? 7 pitfalls in defining the problem.

  1. Not accepting that there is a problem
  2. Defining the problem too narrowly, in a way that eliminates options, or not realizing that a problem can be defined in several ways.
  3. Defining the problem in a way that is too vague. Example: A student says "I am having trouble with the course" or "I did not do well on the last exam." This would be like a physician telling a patient that "there is something wrong" or a psychologist saying that a patient "has issues."
  4. Biases may cause us to misidentify the cause of the problem. As Maslow wrote, "it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."
    Examples:
    Five particularly common and powerful biases that hurt our ability to find the real cause of a problem:
    1. The "Not me" bias: We often don't take responsibility for our contribution to the problem. For example, you have heard people say things like:

      • "It's not my fault." 
      • "Look at what you made me do!"
      • "You are making me mad."
      • "That's a nasty question."
      • "Fake news!"
      One way to own your problem is suggested by Timothy Ferriss: "...tell my story to myself from the perspective of a victim, then I tell the exact same story from a place of 100 percent responsibility."
    2. The fundamental attribution error. Personalizing problems: Blaming people rather than situations. As anyone who has been stuck in traffic or in a bad job knows, bad environments can make even mature, rational people do immature, irrational things.
    3. Preconceptions bias interpretations: We see what we expect to see. If you expect Joe to be a troublemaker, you may interpret his behavior more negatively than if you expect Joe to be a team player.
    4. Preconceptions bias perception and memory, as shown by the confirmation bias: We look for and remember evidence that is consistent with our beliefs. If we believe that Joe is a bad employee, we will be more likely to notice and remember the times when Joe makes mistakes than we are to notice and remember times when Joe does an average or good job.
    5. Preconceptions create reality: If the teacher expects a student to do poorly, that student is more likely to do poorly than if a teacher expects that student to do well. 
  5. We are misled because  
  6. Incorrectly identifying what kind of problem we have, so we try to solve one type of problem when we should be solving another.
    Example 1: What rule is determining the sequence of these numbers? 8, 5, 4, 9, 1, 6, 7, 10, 3, 2

    Two other examples: Think of the last time you applied the wrong formula to a word problem or heard of a friend who was misdiagnosed by a doctor.

    Sometimes, we misidentify the kind of problem we have because of the representativeness heuristic: a general rule used when people decide whether something is an example of a category. If what we are looking at matches our memory of a typical instance of a category, we classify that thing as a member of that category. For example, you determine whether someone is a child or an adult based on their appearance matching your memorized examples of children and adults. The advantage of the representativeness heuristic is that people can take advantage of their experience and expertise. For example, a doctor can quickly diagnose a patient who has a disease that the doctor has seen hundreds of times before. However, problems may look similar but be different. So, the representativeness heuristic may lead not only to stereotyping, but also to misdiagnosing the problem by overlooking key differences between the new problem and old problems.

    The representativeness heuristic may also cause us to ignore important information. For example, a doctor might, seeing that a patient's symptoms matched malaria, use the representativeness heuristic to diagnose the patient as having malaria, even though the patient probably didn't have malaria given one important fact: There hadn't been a malaria case where the patient lived in over 50 years. (A Bayes'-rule sketch of why that base rate matters follows this list.)

  7. Not testing your assumptions about the cause of the problem because of the confirmation bias.
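As promised above, here is a minimal Bayes'-rule sketch of the malaria example. The specific probabilities are hypothetical, invented only for illustration; they are not from these notes:

```python
# Hypothetical numbers: how a near-zero base rate overwhelms a good symptom match.
def posterior(prior, p_symptoms_if_disease, p_symptoms_if_healthy):
    """Bayes' rule: P(disease | symptoms)."""
    numerator = prior * p_symptoms_if_disease
    denominator = numerator + (1 - prior) * p_symptoms_if_healthy
    return numerator / denominator

prior = 1e-6                  # hypothetical: no local malaria cases in 50 years
p_symptoms_if_malaria = 0.95  # the symptoms match malaria very well
p_symptoms_if_other = 0.05    # but flu, etc., can produce similar symptoms

print(posterior(prior, p_symptoms_if_malaria, p_symptoms_if_other))
# ~0.000019 -- even a strong symptom match leaves malaria extremely unlikely here
```

In other words, matching the pattern (representativeness) is not enough; the base rate should dominate the diagnosis.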

 


Step 2: Generate options

Using existing solutions:

1. Algorithms: a problem-solving strategy that--if all the steps are followed correctly--is guaranteed to eventually lead to a solution.

Problems with algorithms:

1. They may involve many steps (and doing many steps takes time and uses up the limited space in short-term memory). Because they are often time-consuming and involve many steps, they are called "inefficient." However, they are not ineffective: Indeed, they are foolproof formulas.

2. They only fit problems where there is one right answer. Thus, there are algorithms for solving some math problems and for playing certain simple games like tic-tac-toe, but not for problems involving human relationships.

3. So, even though they are often called "foolproof formulas", you can go wrong using an algorithm if

  1. The algorithm is not the right one for that problem
  2. You skip one of the steps (easy to do when there are many steps)
  3. You mess up one of the steps

2. Heuristics: a general rule that guides problem-solving but does not guarantee a perfect solution. You can think of heuristics as mental shortcuts, hunches, or educated guesses. (A sketch contrasting algorithms and heuristics appears below.)

 Examples of useful heuristics:
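To make the algorithm/heuristic contrast concrete, here is a minimal sketch. The three-digit lock and the list of "common codes" are hypothetical, invented only for illustration: exhaustive search is an algorithm (guaranteed but slow), while trying a few likely codes first is a heuristic (fast but fallible).

```python
SECRET = "739"  # hypothetical lock code

def algorithm_crack(secret):
    """Algorithm: try every 3-digit code in order. Slow, but guaranteed."""
    for n in range(1000):
        guess = f"{n:03d}"
        if guess == secret:
            return guess

def heuristic_crack(secret):
    """Heuristic: try a few 'likely' codes first. Fast, but no guarantee."""
    common_codes = ["000", "123", "111", "777"]  # hypothetical shortcut list
    for guess in common_codes:
        if guess == secret:
            return guess
    return None  # the shortcut can simply fail

print(algorithm_crack(SECRET))  # always finds '739' (after up to 1,000 tries)
print(heuristic_crack(SECRET))  # None -- this time the shortcut missed
```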

6 Barriers to generating new solutions

  1. Fixation/Set: a rigidity in problem-solving due to wanting to continue to do things the old way.
    Examples:

  2. We don't think of as many options as we should. This is partly because short-term memory is so limited that we can't think of many options at once (but some of it is laziness and arrogance). One way around the problem of not coming up with enough options is to force yourself to write down at least 3 options. For interpersonal problems (e.g., dealing with a messy roommate), you usually have at least three options: (1) Adjust to the situation: tolerate the mess; (2) Change the situation: make the roommate clean up; and (3) Avoid the situation: stay in your room or move out.
  3. Putting limits on yourself, such as saying you can't do it (due to learned helplessness, depression, or lack of self-efficacy) or that you can't change (due to having a fixed mindset rather than a growth mindset).
  4. Putting limits on the solution by seeing the problem in win/lose terms when there might be a win/win solution.
  5. "All or none" thinking -- Looking only at extreme options ("I will quit school or quit the band" when less extreme options are available such as going to school part time or devoting more time to the band during the summer).
  6. Prematurely dismissing solutions. We reject an idea rather than developing it. Remember, evaluating ideas should come after--not during--the generation step.


Step 3: Evaluate options

("For every problem, there is a solution that is simple, quick, and wrong" -- Paul Ylvisaker. True, even before Trump suggested the COVID could be cured with chloroquine or reportedly suggested that hurricanes should be nuked.)

Why we "satisfice" (choose the first satisfisfactory option)

rather than "optimize" (choose the best [optimum] option)

What it takes to optimize:

  1. Consider all the options
  2. Consider all the pros and cons of all the options
  3. Determine the probabilities of each of those pros and cons
  4. Correctly weight the importance of all those pros and cons
  5. Combine all the information about the pros and cons of all the options to arrive at the best (optimal) choice
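Here is a minimal sketch of the satisfice/optimize contrast. The apartment names and single overall scores are hypothetical, used only for illustration:

```python
# Hypothetical options, each reduced to one overall score for simplicity.
options = {"Apartment A": 6, "Apartment B": 8, "Apartment C": 9}

def satisfice(opts, good_enough=5):
    """Return the first option whose score clears the threshold."""
    for name, score in opts.items():
        if score >= good_enough:
            return name  # stops at Apartment A; never sees the better ones

def optimize(opts):
    """Examine every option and return the best one."""
    return max(opts, key=opts.get)

print(satisfice(options))  # 'Apartment A' -- satisfactory, not best
print(optimize(options))   # 'Apartment C' -- best, but required checking all
```

Real optimizing is much harder than this sketch because, as the steps above show, each option has many pros and cons, each with its own probability and weight.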

Table illustrating complexity of making an optimal choice: An oversimplified example of choosing among apartments. Note that there are probably more than 3 places that you could consider and that you probably care about more than price, proximity to campus, and landlord. For example, you probably care about how quiet it is, how safe it is, how big it is, and how nice it is. However, even this oversimplified example shows you how complicated optimizing is.

 

Option | Price      | Price Score | Price Importance | Location            | Location Score | Location Importance | Landlord's Reputation | Landlord Score | Landlord Importance | Total Score
1      | $500/month | 3           | 4                | 2 miles from campus | 2              | 2                   | Excellent             | 5              | 4                   | 36 = (3 * 4) + (2 * 2) + (5 * 4)
2      | $400/month | 4           | 4                | 5 miles from campus | 1              | 2                   | Average               | 3              | 4                   | 30 = (4 * 4) + (1 * 2) + (3 * 4)
3      | $700/month | 1           | 4                | next to campus      | 5              | 2                   | Poor                  | 1              | 4                   | 18 = (1 * 4) + (5 * 2) + (1 * 4)
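The arithmetic in the table reduces to "multiply each criterion's score by its importance, then add." Here is a minimal sketch using exactly the numbers above:

```python
# Weighted-scoring sketch using the apartment numbers from the table above.
importance = {"price": 4, "location": 2, "landlord": 4}

apartments = {
    "Option 1": {"price": 3, "location": 2, "landlord": 5},
    "Option 2": {"price": 4, "location": 1, "landlord": 3},
    "Option 3": {"price": 1, "location": 5, "landlord": 1},
}

def total_score(scores):
    """Multiply each criterion's score by its importance, then sum."""
    return sum(scores[c] * importance[c] for c in importance)

for option, scores in apartments.items():
    print(option, total_score(scores))  # Option 1: 36, Option 2: 30, Option 3: 18
```

Notice how quickly the bookkeeping grows: with more options and more criteria (quiet, safety, size), the table outruns short-term memory, which is one reason people satisfice instead.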

Why we fail to optimize (besides the fact that optimizing is stressful):

  1. Partly because of the limits of short-term memory, we do poorly at:

    Considering all the options (thinking of more than 7 is tough)

    Considering all the pros and cons of each option (even with 2 options, considering the pros and cons could give us much more than 7 things to keep in mind)

  2. To get around some of the limits of short-term memory, you might just write down all your options as well as their pros and cons.

    To get around more of the limits of short-term memory, you could use this decision making program to help you make decisions.

  3. We are bad at estimating the frequency of events (and thus how likely something is to happen) for a variety of reasons, including
    1. the availability heuristic: estimating how often something happens based on how easy it is to remember examples of it happening. The problem is that some events, even if they don't occur very often, are easy to recall. So, recent and vivid events are seen as more likely than they really are (e.g., airplane crashes).
    2. How politicians and some in the media have used the availability heuristic against us.

      • In 2016, Trump ran on a vision of America being unsafe due to violent crime, but, in fact, America's violent crime rate was almost half of what it had been in 1990.
      • Trump acted like cities near the Mexican border are extremely dangerous places, largely due to undocumented immigrants from Mexico. In fact, it seems that immigrants are less likely to commit crimes and that some southern border towns (e.g., El Paso) are among the safest cities in the country whereas cities far from the Mexican border (e.g., Baltimore and Detroit) are among the most dangerous U.S. cities.
      • Trump has convinced some people that ANTIFA are a bunch of murderers. In fact, as of this writing, ANTIFA is responsible for only one death (and that may have been in self-defense). In general, right-wing extremists are responsible for much more violence than left-wing extremists.
      • Some have argued that police are being gunned down at high rates and that COVID-19 is a hoax. However, recent figures show 101 police officers died from COVID-19 and 82 died from all other causes combined (e.g., car accidents, being shot, etc.). In fact, some reports have 5X as many police officers dying from COVID as from gunfire.
      • Police are about 3/4 as likely to die from a car crash as from a shooting; yet many officers do not wear seat belts.
      • Being a police officer is a dangerous job. However, there are at least 18 jobs that are more dangerous. Jobs that are more than 2X as dangerous as being a police officer include commercial fishermen and fisherwomen (more than 7X as dangerous as the police officer job), loggers (more than 6X as dangerous), pilots (more than 3X as dangerous), roofers, steel workers, truck drivers, and garbage collectors.
      • As Kristof (March 23, 2021) writes, "In a typical year in the U.S., more preschoolers are shot dead in America (about 75) than police officers are."
      • Note that the availability heuristic also fools us about whether we have a problem--or what the problem is. For example, many people think the U.S. has an immigration problem due to having too many immigrants. In fact, the U.S. does have an immigration problem--due to a lack of immigrants. The number of Americans who were born in a foreign country has shrunk by more than half since the 1990s, and, if not fixed, this immigrant shortage will have dramatic negative effects on Social Security (and, probably, on the entire economy: Japan has learned how a lack of immigrants sinks an economy).

    3. We have trouble using base-rate information: what typically happens.
      • We often incorrectly assume that averages don't apply to us because we are unique. Usually, we exhibit an optimism bias: we believe we are uniquely unlikely to have bad things happen to us. This is especially true when we have some control over the outcome. Thus, many people prefer to drive rather than fly, even though driving is riskier. You may be able to control the optimism bias by asking what the risk would be to other people.
      • However, sometimes averages do not apply to our situation (e.g., although wearing a mask to prevent the spread of COVID-19 was, in general, a good idea in 2020, it was probably not necessary for a young person biking in a rural area where COVID rates were low).
    4. The anchoring effect: Even an obviously wrong bit of information (a bad anchor) can influence us.
    5. The confirmation bias: Once you get the idea that something is risky--or not risky--your tendency will be to find evidence that supports your view. To fight this tendency, seek out information that opposes your view. Thanks to Google, this is easy to do.
    6. We give some information too little or too much weight.
      • Fear and greed, especially when it comes to putting too much weight on extremely unlikely outcomes (e.g., that you will die from getting a COVID vaccine or that buying a lottery ticket will make you a millionaire); see the expected-value sketch after this list.
      • Loss aversion: losses feel about twice as bad as gains feel good.
      • Our own stuff is overvalued (the endowment effect).
      • People care about the price of gas because they are more likely to pay cash for gas.
      • Weighting characteristics that are important now but may not be important later.
      • If we are verbally justifying our decision to ourselves or to others, we overweight information and dimensions that we can easily verbalize and underweight information that is harder to verbalize. As a result, we may ignore our intuition (our implicit knowledge). Note: Although intuition is not completely trustworthy, if our intuition is based on experience and rapid feedback, it can be accurate.
      • We look at price in comparison to other things rather than in comparison to absolute value.
      • We compare against irrelevant information--we may buy an item because the "sale price" is lower than the "manufacturer's price."
    7. We don't combine all the information necessary to make a good decision
      • Satisficing: Going with the first option that is satisfactory rather than optimizing: going for the best (optimum) option.
      • Mindlessness: Thinking that we are thinking when we are really just on autopilot. May be more likely to occur if we are multitasking.
      • Decision fatigue: If we are tired of making decisions, we may not look at all our options or evaluate them carefully. For example, as Myers (2021) notes, "Physicians also more often prescribe quick-fix opioids if tired and time-pressed at the day’s end, rather than at its beginning."
      • Looking at only one criterion when many factors have to be taken into account. Examples:
        • "I will take the one with the lowest price"--disregarding quality, convenience, and other criteria.
        • "I will take Dr. X's class because it fits into my schedule."
      • Looking only at the solution's effect on your immediate problem. That is, not engaging in systems thinking--realizing that everything is connected to everything else. The good systems thinker is thinking several steps ahead.
        • We look at short-term effects rather than long-term effects.
        • We don't consider side effects and unintended consequences.
          • People may change their behavior in ways that defeat our solution (e.g., they may find loopholes in rules, rebel against rules, or retaliate against sanctions imposed for breaking the rules).
          • Your solution affects more people than you considered (e.g., eliminating homelessness might hurt hotel and other business owners)
          • Heroin was once considered a solution to opium addiction.
          • President Trump urged people to take chloroquine, arguing, essentially, "what do you have to lose?" However, chloroquine has side effects and at least one study suggested that people taking chloroquine were more likely to die than those not taking that drug.
          • We may exercise more to try to lose weight but end up eating more (because we are hungrier or because we "earned it"), so we end up gaining weight.
      • "All or nothing" thinking: We see a solution as all good or all bad. The result may be that we see all the options as terrible. For example, since both Democrats and Republicans have some dishonest members, we may say, "They are all terrible." As the saying goes, "the perfect is the enemy of the good." Similarly, we may hold out for the perfect solution--which will never come. Examples of all-or-nothing thinking hurting decision making
        • "I will only buy it if it is absolutely free."
        • "There must be no risk whatsoever." (So, we won't consider a vaccine that will save thousands of lives if it may kill 10 people.)
        • "No compromising."
      • Having a hard time making tradeoffs between options.
      • We are vulnerable to framing effects: the way a problem is worded affects the decision we make. This is because we are loss averse: the pain of losing something is much greater than the pleasure of gaining something. We like to gain, but we HATE to lose (insurance companies and bankers love us for this). Because we are loss averse, we can be manipulated by how the problem is framed (stated). For example, patients (and doctors!) are more likely to endorse a surgery in which 90% of patients survive than one in which 10% of patients die (obviously, if 90% survive, 10% die).
      • Overconfidence: We are not nearly as accurate about predicting the future as we think we are. So, we confidently make bad predictions.
      • We have an optimism bias, so we overestimate the chances that our solution will work. Examples of optimism bias:
        • Businesses think that mergers will be successful, even though 84% of merger deals did not boost shareholder return.
        • President Trump said that COVID-19 would go away by April 2020.
        • People die after taking unproven cancer "cures" when they could have been saved by traditional medicine.
        • The planning fallacy: Things take much longer than we think that they will.
      • Group decision making has additional problems, including
        • Peer pressure (conformity)
        • Overemphasizing information that all group members know.
        • Ignored members pouting or sabotaging the group.
        • Less consideration of options if group members are fairly similar to each other.
        • Groupthink: People do not raise objections because they want to get along.
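As promised above, here is a minimal expected-value sketch showing why overweighting an extremely unlikely outcome misleads us. The jackpot, odds, and ticket price are hypothetical round numbers, not real lottery figures:

```python
# Hypothetical lottery: the expected value of a ticket.
ticket_price = 2.00        # dollars (hypothetical)
jackpot = 100_000_000      # dollars (hypothetical)
p_win = 1 / 300_000_000    # hypothetical odds of winning the jackpot

expected_value = p_win * jackpot - ticket_price
print(f"{expected_value:.2f}")  # -1.67: the average ticket loses about $1.67
```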

Step 4: Make a decision

Due to impulsivity, being pressured by others, or feeling like you have to do something, you may rush into decisions when you could wait.

On the other hand, you may have trouble making a decision and acting on it because you worry that you may make the wrong decision or that your decision may have consequences that you did not anticipate.
To help you make decisions with less stress,

Step 5: Evaluate the Solution: Is it working?

We may not be able to ask "Is our solution working?" because

We may not bother to ask "Is it working?" because

We may ask "Is it working?" but get the wrong answer because


By now, you should be able to:

  1. List the 5 steps of the problem-solving model.

  2. Explain why defining the problem is the most important step in problem solving.

  3. Give at least one example of a "problem" that our society may have incorrectly defined.

  4. Explain three errors that people commonly make in defining a problem.

  5. Describe how expert problem solvers differ from non-expert problem solvers.

  6. Describe the difference between algorithms and heuristics.

  7. Give two reasons why people tend to use heuristics rather than algorithms.

  8. Describe the advantages and disadvantages of using the representativeness heuristic.

  9. Describe the phenomenon of "set."

  10. Explain how functional fixedness is a particular type of set.

  11. Explain why STM's limitations interfere with our ability to generate solutions to problems.

  12. Explain why people "satisfice" rather than optimize.

  13. Tell someone a strategy they could use so that they could optimize.

  14. Explain why knowing the probability of different outcomes is essential to being able to make the best choice among alternatives.

  15. Explain how the availability heuristic may cause us to make poor decisions.

  16. Explain how people can persuade us to do things by taking advantage of framing effects.
  17. Explain how we can use computers to get around our limited ability to judge how much we should weight information.
  18. Consult a decision-making site (like this one) to get some tips on how to make better decisions.
  19. Use this decision making program to get around some of the STM problems that limit decision making.

 

