These days we are constantly bombarded by vast amounts of data that can be interpreted (and misinterpreted) in many different and confusing ways. Misleading data has become a hot topic of late. This past month the Washington Post published an interview with Alberto Cairo, a professor of visual journalism at the University of Miami, about his book, “How Charts Lie,” which is geared toward helping people make better sense of the charts and data that surround us. The interview got me thinking: how can we, as research professionals, help clients better navigate this complicated space?
To that end, here are some easy techniques anyone can use to separate credible findings from the “fake” ones that could negatively impact decision making.
-
Check the scales! Manipulating a chart’s scale is a fairly simple way to change a storyline. For example, take the graphs below:
Both charts use the exact same data. The only differences are the headline and scale on the x-axis.
Chart A uses a wide dollar range and a headline noting that the two companies have the same overall average sales, so at a quick glance the companies appear nearly identical. Chart B, in contrast, uses a much tighter x-axis scale and an exaggerated headline, so the same data shows large differences that may or may not be meaningful, but certainly appear to be.
When interpreting charts, always look beyond the graphics, no matter how obvious the story seems.
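To see how much the axis range alone can change the story, here is a minimal sketch in Python using matplotlib. The sales figures are hypothetical stand-ins for the charts above: the same two numbers are plotted once on a wide dollar scale and once on a tight, truncated one.

```python
# Hypothetical figures: the same data looks "nearly identical" or "dramatically
# different" depending only on the axis range chosen.
import matplotlib.pyplot as plt

companies = ["Company 1", "Company 2"]
avg_sales = [102, 98]  # hypothetical average sales, in $ thousands

fig, (ax_a, ax_b) = plt.subplots(1, 2, figsize=(8, 3))

# Chart A: wide dollar range -- the bars look almost the same
ax_a.barh(companies, avg_sales)
ax_a.set_xlim(0, 200)
ax_a.set_title("Companies 1 and 2 have similar sales")
ax_a.set_xlabel("Average sales ($K)")

# Chart B: tight, truncated range -- the identical data now looks far apart
ax_b.barh(companies, avg_sales)
ax_b.set_xlim(95, 105)
ax_b.set_title("Company 1 sales far outpace Company 2!")
ax_b.set_xlabel("Average sales ($K)")

plt.tight_layout()
plt.show()
```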
-
Check your base! Especially when results include a percentage increase or decrease, knowing what the change is based on is vital for understanding the implications of research findings. For example:
Here, Chart C appears to show clearly how much better Brand Y is doing than Brand X. However, look what happens to the findings when we add the base (dollar sales) to the chart.
By including base sales and not just the percent change, we see that Brand X actually had the greater increase in dollar sales, creating a much different picture with significant implications for the business decisions that follow.
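As a quick illustration of why the base matters, here is a short sketch with made-up sales figures (the actual numbers behind Chart C are not shown here): Brand Y posts the larger percent change, yet Brand X adds more dollars.

```python
# Made-up sales figures: Brand Y's percent growth looks far stronger,
# but Brand X actually added more dollars.
prior = {"Brand X": 10_000_000, "Brand Y": 1_000_000}    # hypothetical prior-year sales ($)
current = {"Brand X": 11_000_000, "Brand Y": 1_500_000}  # hypothetical current-year sales ($)

for brand in prior:
    pct_change = (current[brand] - prior[brand]) / prior[brand] * 100
    dollar_change = current[brand] - prior[brand]
    print(f"{brand}: {pct_change:+.0f}% change, {dollar_change:+,.0f} dollars change")

# Output:
# Brand X: +10% change, +1,000,000 dollars change
# Brand Y: +50% change, +500,000 dollars change
```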
-
Know your audience! Reliable research results will always include a description of the sample, since who is included in the sample has significant implications for the results.
For this example, let’s say you are a tool manufacturer and have decided to update one of the products in your portfolio to make it more durable without adding cost. You want to measure interest in buying this “new and improved” version, so you field an online survey of the updated tool concept among n=400 participants who report that they regularly purchase tools used in construction. When the results come in, they are unexpected: only 10% of respondents say they would be interested in purchasing.
There are many reasons why unexpected results occur (if you always knew what to expect, there would be no reason for research), but in this case the problem was that inappropriate respondents were included in the analysis. Of the 400 “builders” surveyed, it turns out 75% were DIYers, while the updated tool was created with the professional builder in mind. When the data was re-cut to include only the appropriate audience, purchase intent jumped from 10% to 40%.
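For readers who like to see the mechanics of a re-cut, here is a toy example in Python with pandas. The dataset is constructed so the toplines match the figures above (400 respondents, 75% DIYers, 10% overall intent, 40% among professionals); the column names and exact respondent mix are illustrative assumptions, not the real survey data.

```python
# Toy survey data constructed to reproduce the reported toplines.
import pandas as pd

survey = pd.DataFrame({
    "respondent_type": ["Professional"] * 100 + ["DIYer"] * 300,
    "would_purchase": ([True] * 40 + [False] * 60   # professional builders: 40% interested
                       + [False] * 300),            # DIYers: not interested
})

# Topline result across everyone who "regularly purchases tools"
print(f"All respondents: {survey['would_purchase'].mean():.0%} purchase intent")

# Re-cut to the audience the product was actually designed for
pros = survey[survey["respondent_type"] == "Professional"]
print(f"Professional builders only: {pros['would_purchase'].mean():.0%} purchase intent")

# All respondents: 10% purchase intent
# Professional builders only: 40% purchase intent
```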
Charts can lie and results can be misleading, but if you keep these three quality checks in mind, you are well on your way to weeding out the “fake news” in research.