Comparison Errors: 4 Critical Thinking Errors We Make When Recommending
Jan 05, 2022

Critical thinking is about providing a robust answer to a question. One of the most basic and widely used ways we come to answers is to learn from the past and from others. We call these comparison-based recommendations – when your recommendation for the present situation is based on your beliefs about what happened either in the past or elsewhere. For example:
“We tried upgrading our sales software to that product 5 years ago and it didn’t generate the savings we expected, so it’s not worth trying it again now.”
Comparison-based recommendations are excellent critical thinking shortcuts. Rather than analyze the current situation and project a decision’s expected outcomes into an uncertain future, we can apply lessons already learned to our current situation. Here’s another example:
“Our main competitor has achieved a 3% margin on their primary product line so we should be able to as well.”
When Comparison-Based Recommendations Become Critical Thinking Errors
However, the simplicity and efficiency of comparison-based recommendations lead to comparison errors: applying lessons from one situation to another to which they don’t apply. We compare apples to oranges without realizing we’re talking about two different fruits. (This isn’t the only time critical thinking shortcuts lead to critical thinking errors. Here’s another example.)
To revisit an example from above, maybe the sales software implementation failed 5 years ago because the sales team at the time consisted of significantly older salespeople who struggled to adapt to new technology – something the current team would not experience. Or maybe the software of 5 years ago contained significant glitches that have since been resolved.
Comparison errors occur when we assume we can apply lessons from one situation to another, but the situations are different enough that the lessons don’t apply. To avoid these critical thinking errors, we must slow down and analyze the two situations to determine whether they are similar enough to make the lessons from one transferable to the other.
To slow down and encourage yourself to do a real comparison of the two scenarios, change your default response to comparison-based recommendations to ‘Yes, but…’
“Yes, the sales software didn’t lead to revenue gains 5 years ago, but the context was different then because…”
4 Types of Comparison Errors
The ‘Yes, but’ response guides you to identify the differences between the situations being compared and determine whether they are different enough to make the lesson in question non-transferable.
To identify these differences, analyze 4 elements of the situations:
- People: Do the people have different skills or capacities?
- Processes: Is the organization using different routines or protocols?
- Technology: Does the technology have different features or functionality?
- Externalities: Is the external environment different in a meaningful way?
Let’s look at an example of people comparison errors.
In college and professional sports, it is relatively common to see a very successful coach transition to a new team and struggle for a few years. While there can be many reasons for this, one of the key explanations is a people comparison error. The coach assumes that the strategy and game plan that worked with the former team should work with the new team without acknowledging that the people (i.e., the players) are different. The plays and strategies that made the coach successful with the previous team were designed around a set of players with specific strengths. The new team often has different strengths. As a result, it can take a successful coach 4 or 5 years to recruit the players who make the former strategy work again.
Now, let’s look at an example of process comparison errors.
Your marketing team has achieved a 25% conversion rate on online sales of new products during launch week for the last 3 years. As a result, you build your staffing and inventory projections on a 25% conversion rate. However, over the last 3 years, the marketing team has always run an in-person event pre-launch. Because of COVID, you decide to skip the in-person pre-launch event. You get a 12% conversion rate instead. The process you used was different from the previous product launches, making the conversion rate non-transferable.
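To make the cost of that error concrete, here is a minimal sketch with entirely hypothetical traffic and inventory numbers (the article only gives the two conversion rates), showing how far an inventory plan built on the old rate lands from reality:

```python
# Minimal sketch with hypothetical numbers: how far off an inventory plan
# lands when the assumed conversion rate doesn't transfer.

launch_week_visitors = 40_000     # hypothetical traffic forecast
assumed_conversion = 0.25         # rate from prior launches (with in-person event)
actual_conversion = 0.12          # rate without the in-person event

projected_orders = launch_week_visitors * assumed_conversion  # units stocked for
actual_orders = launch_week_visitors * actual_conversion      # units actually sold

excess_inventory = projected_orders - actual_orders
print(f"Projected orders: {projected_orders:,.0f}")
print(f"Actual orders:    {actual_orders:,.0f}")
print(f"Over-stocked by:  {excess_inventory:,.0f} units "
      f"({excess_inventory / projected_orders:.0%} of the plan)")
```

With these illustrative figures, more than half of the stocked inventory goes unsold – not because the math was wrong, but because the 25% rate was carried over from a process that no longer matched.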
Technology comparison errors can be easier to spot.
In your work for a private contractor that helps a government agency launch a new service to its beneficiaries, you achieve your goal of a 10-day average turnaround time for claims processing. Using this turnaround time to project future rates, you help the agency hire a specific number of customer service representatives. However, when you offload customer service responsibilities to state employees, the turnaround time increases to 14.5 days. To maintain the 10-day turnaround time, the state must hire more staff, exceeding their budget.
What caused the 4.5-day difference between the two turnaround times? The state employees are using older computers with older software that includes significant lags that your team – with its updated technology – didn’t experience.
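As a rough back-of-the-envelope sketch (the headcount figure and the assumption that turnaround time scales roughly inversely with staff are illustrative, not from the article), the 4.5-day gap translates into a sizeable hiring gap:

```python
# Rough sketch with illustrative numbers (headcount and the inverse-scaling
# assumption are assumptions, not the article's): if turnaround time falls
# roughly in proportion to added staff, holding the 10-day target requires
# a noticeably larger team.

current_staff = 20          # hypothetical number of state representatives
observed_turnaround = 14.5  # days, on the older computers and software
target_turnaround = 10.0    # days, the level the contractor achieved

required_staff = current_staff * (observed_turnaround / target_turnaround)
print(f"Staff needed to hold {target_turnaround:.0f} days: ~{required_staff:.0f}")
print(f"Increase over current headcount: {required_staff / current_staff - 1:.0%}")
```

Under that simple assumption, the state would need roughly 45% more representatives than the contractor's projection suggested – a budget difference created entirely by the technology gap.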
Finally, let’s look at an externality comparison error.
Your global expansion team at a beverage company develops a market-entry plan for a country the company hasn’t sold drinks in before. To estimate projected sales, your team averages the company’s first-month sales in surrounding countries and normalizes the average by population size.
However, the first month’s sales in this country come in at a quarter of the normalized average of sales in the surrounding countries. Subsequent research shows that a large portion of people in this country don’t drink alcoholic beverages for religious reasons.
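The team’s normalization logic itself is straightforward; here is a minimal sketch with made-up sales and population figures. The arithmetic is sound – the error lies in assuming the external environment is comparable:

```python
# Minimal sketch of the normalization logic with made-up sales and population
# figures. The arithmetic mirrors the team's approach: average first-month
# sales per capita across neighboring markets, then scale by the new
# country's population.

neighbors = {
    "Country A": {"sales": 600_000, "population": 30_000_000},
    "Country B": {"sales": 450_000, "population": 25_000_000},
    "Country C": {"sales": 800_000, "population": 45_000_000},
}

per_capita_rates = [c["sales"] / c["population"] for c in neighbors.values()]
avg_per_capita = sum(per_capita_rates) / len(per_capita_rates)

new_market_population = 50_000_000           # hypothetical
projected_sales = avg_per_capita * new_market_population
actual_sales = projected_sales * 0.25        # "a quarter of the normalized average"

print(f"Projected first-month sales: {projected_sales:,.0f} units")
print(f"Actual first-month sales:    {actual_sales:,.0f} units")
```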
How Scientists Minimize These Critical Thinking Errors
Critical thinking requires us to be more wary about the transferability of lessons from one situation to the next. In scientific research, scientists work hard to ensure that the people they select for their trials are representative of the populations to which they intend their findings to apply. Is the sample group comparable enough to the population for the trial findings to apply?
Understand the Situation in Question
To ensure that their sample is representative of the broader population of interest, scientists first get clear about that population: what are the characteristics of the people they want the trial findings to apply to? In the same way, we must be clear and conscious of the characteristics of the situation we are analyzing before looking to other comparisons. Specifically, we should spell out:
- The skills and capacities of the people involved
- The processes we are using
- The technologies, their features, and functionalities
- The external market conditions
Consider How You Select Comparisons
Next, when scientists select a sample for their trial, they often try to select it randomly. Random selection outperforms other selection methods because non-random methods tend to overrepresent the people who can most easily access them. For example, a survey of veterans administered via a mobile app will likely over-include younger vets, while a survey administered at a local VFW Hall is likely to over-include older veterans.
Just as scientists must be careful that their selection process doesn’t cause their sample to be unrepresentative, we must be careful that the way we pick comparisons doesn’t lead us to make repeated critical thinking errors. For example, if all of your company’s commercial real estate work has been in cities with populations over half a million, but now you’re starting work in smaller cities, you’re likely to make externality comparison errors repeatedly: wrongly transferring lessons from big cities to small cities.
Review More Than One Comparison
Scientists also pay significant attention to the size of their sample. If the sample size is too small, not only are the results more likely to be statistically insignificant, but the sample is also more likely to misrepresent the population of interest.
When making comparison-based recommendations, we may be in the habit of looking to just one comparison. This increases the likelihood of comparison errors. Instead, we should base our recommendations on 3 or more comparisons. We talk more about the importance of selecting the right number of alternatives here.
This attention to the differences between trial participants and the broader population of interest helps scientists avoid these types of critical thinking errors. We would do well to follow their example. Comparison-based recommendations offer a wonderful critical thinking shortcut, but we will cost ourselves more than we save if we gloss over differences between the situation in question and the comparison situation.
Instead, make your default response to comparison-based recommendations ‘Yes, but…’ and analyze the differences between people, processes, technology, and externalities in the two situations.