Examples of insights due to re-defining the problem
Example 1:
What rule is determining the sequence of these numbers? 8, 5, 4, 9, 1, 6, 7, 10, 3, 2
Two other examples: Think of the last time you applied the wrong formula to a word-problem or heard of a friend who was misdiagnosed by a doctor.
Sometimes, we misidentify the kind of problem we have because of the representativeness heuristic: a general rule used when people decide whether something is an example of a category. If what we are looking at matches our memory of a typical instance of a category, we will classify that thing as being a member of that category. For example, you determine whether someone is a child or an adult based on their appearance matching your memorized examples of children and adults. The advantage of the representativeness heuristic is that people can take advantage of their experiences and their expertise. For example, a doctor can quickly diagnose a patient who has a disease that the doctor has seen hundreds of times before. However, problems may look similar, but be different. So, the representativeness heuristic may lead not only to stereotyping, but to misdiagnosing the problem by overlooking key differences between this new problem and old problems.
The representativeness heuristic may also cause us to ignore important information. For example, a doctor might, seeing that the patient's symptoms matched malaria, use the representativeness heuristic to diagnose the patient as having malaria, even though the patient probably didn't have malaria given one important fact: There hadn't been a malaria case where the patient lived in over 50 years.
1. Algorithms: a problem-solving strategy that--if all the steps are followed correctly--is guaranteed to eventually lead to a solution. Problems with algorithms:
1. They may involve many steps (and doing many steps takes time and uses up the limited space in short-term memory). Because they are often time-consuming and involve many steps, they are called "inefficient." However, they are not ineffective: Indeed, they are foolproof formulas.
2. They only fit problems where there is one right answer. Thus, there are algorithms for solving some math problems and playing certain simple games like tic-tac-toe, but not for problems with human relationships.
3. So, even though they are often called "foolproof formulas", you can go wrong using an algorithm if
- The algorithm is not the right one for that problem
- You skip one of the steps (easy to do when there are many steps)
- You mess up one of the steps
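To make these points concrete, here is a minimal sketch of an algorithm in Python (the example problem--"find two numbers in a list that add up to a target"--and the function name are illustrative, not from these notes). If every step is carried out correctly, it is guaranteed to find an answer whenever one exists, but it checks every possibility, which is why algorithms are often called inefficient:

```python
# A brute-force algorithm: guaranteed to find a solution (if one exists)
# because it systematically checks every possibility -- but that can take
# many steps, which is why algorithms are often slow.
def find_pair_summing_to(numbers, target):
    """Return two numbers from the list that add up to target, or None."""
    for i in range(len(numbers)):
        for j in range(i + 1, len(numbers)):  # check every possible pair once
            if numbers[i] + numbers[j] == target:
                return (numbers[i], numbers[j])
    return None  # the search was exhaustive, so None means no such pair exists

print(find_pair_summing_to([8, 5, 4, 9, 1], 13))  # (8, 5)
```

Note that an algorithm fits this problem only because the problem has one checkable right answer; there is no such guaranteed procedure for, say, resolving a disagreement with a roommate.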
2. Heuristics: a general rule that guides problem-solving but does not guarantee a perfect solution. You can think of heuristics as mental shortcuts, hunches, or educated guesses.
Examples of useful heuristics:
- Change how you view the problem
- Try to solve a simpler version of the problem.
- Break the big problem into several smaller problems.
- Make a picture or diagram of the problem.
- Think of an analogy or metaphor for the problem
- Imagine the problem solving itself.
- Think of the problem as an opportunity.
- Use or adapt a solution that has worked in the past.
- Google it -- or ask an AI program like ChatGPT
- Ask a friend what to do.
- Ask "what would (a successful person or someone you admire) do?" For example, you might use Kobe Bryant's approach to problem-solving.
- Ask "How have I solved similar problems?"
- Trial and error
- Ask "How could I make the problem worse?" -- then do the opposite.
- Work backwards--Think of the final result you want (i.e., imagine the problem perfectly solved) and then figure out what steps it would take to get that result.
Fixation/Set: a rigidity in problem-solving due to wanting to continue to do things the old way.
Why we "satisfice" (choose the first satisfactory option) rather than "optimize" (choose the best [optimum] option)
What it takes to optimize:
Table illustrating complexity of making an optimal choice: An oversimplified example of choosing among apartments. Note that there are probably more than 3 places that you could consider and that you probably care about more than price, proximity to campus, and landlord. For example, you probably care about how quiet it is, how safe it is, how big it is, and how nice it is. However, even this oversimplified example shows you how complicated optimizing is.
- Consider all the options
- Consider all the pros and cons of all the options
- Determine the probabilities of each of those pros and cons
- Correctly weight the importance of all those pros and cons
- Combine all the information about the pros and cons of all the options to arrive at the best (optimal) choice
Option | Price | Price Score | Price Importance | Location | Location Score | Location Importance | Landlord's Reputation | Landlord Score | Landlord Importance | Total Score
1 | $500/month | 3 | 4 | 2 miles from campus | 2 | 2 | Excellent | 5 | 4 | 36 = (3 * 4) + (2 * 2) + (5 * 4)
2 | $400/month | 4 | 4 | 5 miles from campus | 1 | 2 | Average | 3 | 4 | 30 = (4 * 4) + (1 * 2) + (3 * 4)
3 | $700/month | 1 | 4 | next to campus | 5 | 2 | Poor | 1 | 4 | 18 = (1 * 4) + (5 * 2) + (1 * 4)
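The "Total Score" column in the table is just a weighted sum: each criterion's score is multiplied by that criterion's importance, and the products are added. Here is a short Python sketch (the weights and scores are the ones in the table; the function name is illustrative):

```python
# Weighted-sum scoring, as in the apartment table: for each option,
# total = sum over criteria of (score on criterion * criterion's importance).
WEIGHTS = {"price": 4, "location": 2, "landlord": 4}  # importance of each criterion

def total_score(scores, weights=WEIGHTS):
    """scores: dict mapping each criterion to the option's 1-5 score on it."""
    return sum(scores[c] * weights[c] for c in weights)

apartments = {
    1: {"price": 3, "location": 2, "landlord": 5},
    2: {"price": 4, "location": 1, "landlord": 3},
    3: {"price": 1, "location": 5, "landlord": 1},
}
for option, scores in apartments.items():
    print(option, total_score(scores))  # 1 -> 36, 2 -> 30, 3 -> 18
```

Even this tiny sketch shows why optimizing is hard without help: with realistic numbers of options and criteria, there is far too much to score, weight, and combine in your head.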
Why we fail to optimize (besides the fact that optimizing is stressful):
- Partly because of the limits of short-term memory, we do poorly at:
Considering all the options (thinking of more than 7 is tough)
Considering all the pros and cons of each option (even with 2 options, considering the pros and cons could give us much more than 7 things to keep in mind)
To get around some of the limits of short-term memory, you might just write down all your options as well as their pros and cons (Example).
To get around more of the limits of short-term memory, you could use a decision-making program to help you make decisions.
- We are bad at estimating the frequency of events (and thus how likely something is to happen) for a variety of reasons, including
- the availability heuristic: judging how often something happens (and thus how likely it is to happen) by how easy it is to remember examples of it happening. The problem is that some events, even if they don't occur very often, are easy to recall. So, recent and vivid events are seen as more likely than they really are (e.g., airplane crashes).
How politicians and some in the media have used the availability heuristic against us.
- In 2016, Trump ran on a vision of America being unsafe due to violent crime, but, in fact, America's violent crime rate was almost half of what it had been in 1990.
- Trump acted like cities near the Mexican border are extremely dangerous places, largely due to undocumented immigrants from Mexico. In fact, it seems that immigrants are less likely to commit crimes and that some southern border towns (e.g., El Paso) are among the safest cities in the country whereas cities far from the Mexican border (e.g., Baltimore and Detroit) are among the most dangerous U.S. cities.
- Trump has convinced some people that ANTIFA are a bunch of murderers. In fact, as of this writing, ANTIFA is responsible for only one death (and that may have been in self-defense). In general, right-wing extremists are responsible for much more violence than left-wing extremists.
- Some have argued that police are being gunned down at high rates and that COVID-19 is a hoax. However, recent figures show 101 police officers died from COVID-19 and 82 died from all other causes combined (e.g., car accidents, being shot, etc.). In fact, some reports have 5X as many police officers dying from COVID as from gunfire.
- Police are about 3/4 as likely to die from a car crash as from a shooting; yet many officers do not wear seat belts.
- Being a police officer is a dangerous job. However, there are at least 18 jobs that are more dangerous. Jobs that are more than 2X as dangerous as being a police officer include commercial fishermen and fisherwomen (more than 7X as dangerous as the police officer job), loggers (more than 6X as dangerous), pilots (more than 3X as dangerous), roofers, steel workers, truck drivers, and garbage collectors.
- As Kristof (March 23, 2021) writes, "In a typical year in the U.S., more preschoolers are shot dead (about 75) than police officers are."
*Note that the availability heuristic also fools us about whether we have a problem--or what the problem is. For example, many people think the U.S. has an immigration problem due to having too many immigrants. In fact, the U.S. does have an immigration problem--due to a lack of immigrants. The number of Americans who were born in a foreign country has shrunk by more than half since the 1990s--and, if not fixed, this immigrant shortage will have dramatic negative effects on Social Security (and, probably, on the entire economy--Japan has learned how a lack of immigrants sinks an economy).
- We have trouble using base-rate information: what typically happens.
- We often incorrectly assume that averages don't apply to us because we are unique. Usually, we exhibit an optimism bias: the belief that we are uniquely unlikely to have bad things happen to us. This is especially true when we have some control over the outcome. Thus, many people prefer to drive rather than fly, even though driving is riskier. You may be able to counter the optimism bias by asking what the risk would be to other people.
- However, sometimes, averages do not apply to our situation (e.g., Although, in general, wearing a mask to prevent the spread of COVID-19 was a good idea in 2020, it was probably not necessary for a young person biking in a rural area where COVID rates were low to wear a mask).
- The anchoring effect: Even an obviously wrong bit of information (a bad anchor) can influence us.
- The confirmation bias: Once you get the idea that something is risky--or not risky--your tendency will be to find evidence that supports your view. To fight this tendency, seek out information that opposes your view. Thanks to Google, this is easy to do.
- We give some information too little or too much weight.
- Fear and greed, especially when it comes to putting too much weight on extremely unlikely outcomes (e.g., that you will die from getting a COVID vaccine or that buying a lottery ticket will make you a millionaire).
- Loss aversion: losses feel about twice as bad as gains feel good.
- Our own stuff is overvalued.
- People care about the price of gas because they are more likely to pay cash for gas.
- Weighting characteristics that are important now but may not be important later.
- If we are verbally justifying our decision to ourselves or to others, we overweight information and dimensions that we can easily verbalize and underweight information that is harder to verbalize. As a result, we may ignore our intuition (our implicit knowledge). Note: Although intuition is not completely trustworthy, if our intuition is based on experience and rapid feedback, it can be accurate.
- We look at price in comparison to other things rather than in terms of absolute value.
- We compare against irrelevant information--we may buy an item because the "sale price" is lower than the "manufacturer's price."
- We don't combine all the information necessary to make a good decision
- Satisficing: Going with the first option that is satisfactory rather than optimizing: going for the best (optimum) option.
- Mindlessness: Thinking that we are thinking when we are really just on autopilot. May be more likely to occur if we are multitasking.
- Decision fatigue: If we are tired of making decisions, we may not look at all our options or evaluate them carefully. For example, as Myers (2021) notes, "Physicians also more often prescribe quick-fix opioids if tired and time-pressed at the day's end, rather than at its beginning."
- Looking at only one criterion when many factors have to be taken into account. Examples:
- "I will take the one with the lowest price"--disregarding quality, convenience, and other criteria.
- "I will take Dr. X's class because it fits into my schedule."
- Looking only at the solution's effect on your immediate problem. That is, not engaging in systems thinking--realizing that everything is connected to everything else. The good systems thinker is thinking several steps ahead.
- We look at short-term effects rather than long term effects.
- We don't consider side effects and unintended consequences.
- People may change their behavior in ways that defeat our solution (e.g., they may find loopholes in rules, rebel against rules, or retaliate against sanctions imposed for breaking the rules).
- Your solution affects more people than you considered (e.g., eliminating homelessness might hurt hotel and other business owners)
- Heroin was once considered a solution to opium addiction.
- President Trump urged people to take chloroquine, arguing, essentially, "What do you have to lose?" However, chloroquine has side effects, and at least one study suggested that people taking chloroquine were more likely to die than those not taking the drug.
- We may exercise more to try to lose weight but end up eating more (because we are hungrier or because we "earned it"), so we end up gaining weight.
- "All or nothing" thinking: We see a solution as all good or all bad. The result may be that we see all the options as terrible. For example, since both Democrats and Republicans have some dishonest members, we may say, "They are all terrible." Similarly, we may hold out for the perfect solution--which will never come. As the saying goes, "the perfect is the enemy of the good." Examples of all-or-nothing thinking hurting decision making:
- "I will only buy it if it is absolutely free."
- "There must be no risk whatsoever." (So, we won't consider a vaccine that will save thousands of lives if it may kill 10 people.)
- "No compromising."
- Having a hard time making tradeoffs between options.
- We are vulnerable to framing effects: the way a problem is worded affects the decision we will make. We are vulnerable to framing effects because we are loss averse: the pain of losing something is much greater than the pleasure of gaining something. We like to gain, but we HATE to lose--insurance companies and bankers love us for this. Because we are loss averse, we can be manipulated by how a problem is framed (stated). For example, patients (and doctors!) are more likely to endorse a surgery in which 90% of patients survive than one in which 10% of patients die (obviously, if 90% survive, 10% die).
- Overconfidence: We are not nearly as accurate about predicting the future as we think we are. So, we confidently make bad predictions.
- We have an optimism bias, so we overestimate the chances that our solution will work. Examples of optimism bias:
- Businesses think that mergers will be successful, even though 84% of merger deals did not boost shareholder return.
- President Trump said that COVID-19 would go away by April 2020.
- People have died taking unproven cancer "cures" when they could have been saved by traditional medicine.
- The planning fallacy: Things take much longer than we think that they will.
- Group decision making has additional problems, including
- Peer pressure (conformity)
- Overemphasizing information that all group members know.
- Ignored members pouting or sabotaging the group.
- Less consideration of options if group members are fairly similar to each other.
- Groupthink: People do not raise objections because they want to get along.
Due to impulsivity, being pressured by others, or feeling like you have to do something, you may rush into decisions when you could wait.
On the other hand, you may have trouble making a decision and acting on it because you worry that you may make the wrong decision or that your decision may have consequences that you did not anticipate.
To help you make decisions with less stress,