Interpreting Data & Statistics
It seems that as soon as one election season ends, the next one begins the very next day, even though the actual election is still two years away. During that two-year gap, news agencies and pollsters bombard us with all sorts of data and statistics predicting one thing or another. But, as you’re going to learn in this lesson, you need to be very careful when interpreting data and statistics, because looks can be very deceiving.
Example: The 1936 Presidential Election
The 1936 Presidential Election is a notable example of this. It pitted Democratic nominee Franklin D. Roosevelt against Republican nominee Alf Landon. Prior to Election Day, The Literary Digest compiled polling data that, in summary, predicted Alf Landon would defeat FDR, 57% to 43%. The sample came from a very large number of people, via questionnaires mailed to addresses taken from telephone directories.
How can we interpret this data? Well, the sample certainly seems large enough to justify the statistics. That’s good, but what about the methodology involved here? This is the hidden part behind all those lies, darn lies, and statistics. The way the stats were compiled misrepresented the voting public’s true overall opinion. Questionnaires were sent to people with telephones, but back then telephones were something of a luxury that few poor households had, and many of FDR’s supporters were poor.
The outcome of the election? FDR won with a massive 61% of the vote. So the next time you read any statistics telling you one thing or another, find out exactly how the data was obtained, because the collection method alone can really skew the results.
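If you’d like to see how this kind of sampling bias plays out numerically, here is a minimal Python sketch. All of the numbers in it (the split between lower- and higher-income voters, their support rates, and their phone ownership rates) are invented purely for illustration; they are not the real 1936 figures.

```python
import random

random.seed(0)

# Hypothetical electorate: lower-income voters lean FDR but rarely own phones;
# higher-income voters lean Landon and usually do own phones.
voters = []
for _ in range(100_000):
    if random.random() < 0.70:                # 70% lower-income (assumed)
        backs_fdr = random.random() < 0.75    # most back FDR (assumed)
        has_phone = random.random() < 0.20    # few own a phone (assumed)
    else:                                     # 30% higher-income (assumed)
        backs_fdr = random.random() < 0.35    # most back Landon (assumed)
        has_phone = random.random() < 0.80    # most own a phone (assumed)
    voters.append((backs_fdr, has_phone))

true_fdr_share = sum(backs_fdr for backs_fdr, _ in voters) / len(voters)

# A Literary Digest-style poll: only voters reachable through the phone book.
phone_sample = [backs_fdr for backs_fdr, has_phone in voters if has_phone]
polled_fdr_share = sum(phone_sample) / len(phone_sample)

print(f"True FDR support in the electorate: {true_fdr_share:.1%}")
print(f"FDR support among phone owners:     {polled_fdr_share:.1%}")
```

With these made-up numbers, the full electorate favors FDR by a wide margin, yet the phone-book sample shows him losing, which is exactly the kind of distortion a biased sampling frame produces.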
Example: The Blood Clot Pill
Here’s another wicked example of why you need to interpret data and statistics correctly. What if I told you that a newly approved, second-generation pill increases the risk of getting a life-threatening blood clot by 100%? In other words, it literally doubles your chances of getting a blood clot compared to a now-discontinued, first-generation pill. How likely would you be to take the new pill? Not likely, right? It’s a pretty scary thought: a new pill that increases your chances of a possibly fatal complication by 100%, double that of the old pill.
But what if I were to give you the real data behind this? Of the 7,000 people who took the first-generation pill, only 1 had a life-threatening blood clot. Of the 7,000 people who took the second-generation pill, 2 did. An increase from 1 to 2 is indeed a ‘doubling’ – an increase of 100%. However, the actual risk of getting a blood clot only rose from about 0.014% to about 0.029%. Would you still be scared of taking the second-generation pill, knowing that the chance of a life-threatening blood clot remains far less than 1%? Probably not.
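To make the arithmetic behind those percentages concrete, here’s a small Python sketch using the figures quoted above (1 clot versus 2 clots per 7,000 users); only the variable names are made up.

```python
# Figures from the example above: 1 and 2 blood clots per 7,000 users.
first_gen_clots, second_gen_clots, users = 1, 2, 7_000

absolute_risk_old = first_gen_clots / users    # risk on the old pill
absolute_risk_new = second_gen_clots / users   # risk on the new pill

relative_increase = (absolute_risk_new - absolute_risk_old) / absolute_risk_old
absolute_increase = absolute_risk_new - absolute_risk_old

print(f"Old pill absolute risk: {absolute_risk_old:.3%}")   # 0.014%
print(f"New pill absolute risk: {absolute_risk_new:.3%}")   # 0.029%
print(f"Relative increase:      {relative_increase:.0%}")   # 100% -- the scary headline
print(f"Absolute increase:      {absolute_increase:.3%}")   # 0.014% -- the real-world change
```

The same data supports both the frightening “100% increase” and the reassuring “still well under 1%”; the difference is simply relative versus absolute risk.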
Example: The Popular Toothpaste
And here’s another cool example. Not long ago, a famous toothpaste company (we’ll just call it Brand X) claimed in its ads that 80% of dentists recommend its toothpaste. Logically, you might think this means the other 20% recommend an entirely different brand – in other words, that 80% of dentists recommend Brand X in preference to all other brands. Well, again, you need to be careful about how you interpret statements like this. If you looked carefully at the survey’s collection methodology, you’d see that dentists could choose more than one brand they liked – not just one. That meant Brand X wasn’t recommended in preference to all other brands; it was just one of many brands recommended. In fact, because of the survey methodology, another brand received almost as many recommendations as Brand X.
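Here’s a small Python sketch of how a check-all-that-apply survey can produce numbers like these. The individual responses below are made up purely for illustration; they are not from any real dental survey.

```python
# Each survey response is the set of brands one dentist recommends.
# These responses are invented so that Brand X lands at 80%.
responses = (
    [{"Brand X", "Brand Y"}] * 7   # 7 dentists recommend both X and Y
    + [{"Brand X"}]                # 1 recommends only X
    + [{"Brand Z"}] * 2            # 2 recommend only Z
)

def recommendation_share(brand):
    """Fraction of respondents whose picks include the given brand."""
    return sum(brand in picks for picks in responses) / len(responses)

for brand in ("Brand X", "Brand Y", "Brand Z"):
    print(f"{brand}: {recommendation_share(brand):.0%}")
# Brand X: 80%, Brand Y: 70%, Brand Z: 20%
```

Because every dentist could pick more than one brand, the shares add up to well over 100%, so “80% recommend Brand X” says nothing about Brand X being preferred over everything else.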