Tips for presenting credible data
The information technology revolution has resulted in an explosion in the amount of business data available. To become useful, these data must be carefully gathered, processed, and summarized. Accordingly, means, medians, and modes must be computed and measures of data dispersion examined. Then, these summary and descriptive numbers can be interpreted and presented in forms such as graphs and tables. Numeracy—the ability to effectively manipulate and present numbers—is a critical skill. A good business leader knows what to look out for when numbers are presented—and how to most effectively and honestly interpret and present statistics to facilitate sound decision-making.
As business leaders, we receive and deliver presentations daily that are saturated with statistics. Unfortunately, the business world is awash in misleading or useless numbers. Sometimes, unscrupulous operators skew the data to achieve a desired outcome. Other times, a distorted perspective results from simple misinterpretation.
Telling a story
It is critical that the audience of any report or presentation believes that prudent steps were taken to gather, manipulate, and analyze the data. This entails identifying the sources, explaining how and why the statistics were computed, and discussing sample size. Failure to adequately explain the statistical processes will cast doubt on the conclusions and recommendations.
Similarly, the assumptions made during analysis should be carefully documented. For example, a financial model might assume a specific future inflation rate, which cannot be known with certainty. While assumptions cannot be proven, they should have a basis in reality. But finding this basis can be tricky. When choosing an inflation rate, for instance, one might examine government indices such as the consumer price index or a historical rate. When using a historical rate, the period over which inflation might be averaged must be carefully chosen. An effective leader understands the need to think carefully through such issues and document the resulting assumptions.
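As a minimal sketch of why the averaging window matters, consider how much the window alone can move a "historical" inflation assumption. The annual rates below are hypothetical, not actual CPI figures:

```python
# Illustrative only: hypothetical annual inflation rates (percent),
# most recent year last. Not actual CPI data.
rates = [0.1, 1.5, 3.0, 4.7, 8.0, 4.1, 3.4]

def avg(xs):
    """Simple arithmetic mean."""
    return sum(xs) / len(xs)

# The choice of averaging window materially changes the assumption:
print(f"3-year average: {avg(rates[-3:]):.1f}%")  # 5.2%
print(f"7-year average: {avg(rates):.1f}%")       # 3.5%
```

A model built on the three-year average would assume inflation nearly two percentage points higher than one built on the seven-year average, which is exactly why the chosen window belongs in the documented assumptions.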
It is important to define your metrics, justify their use over other possibilities, and clearly identify biases. In the previous inflation example, if using the government’s core inflation indices, a successful business leader would be sure to note that energy and food prices generally are excluded from these measures because of their volatility. When food and energy are key inputs to the business, core inflation indices likely are not the best measure of inflation to use.
Identifying any biases is equally critical. For example, defining poverty as having income that is less than 50 percent of the median family income is a relative measure that implicitly incorporates the notion of income inequality.
Using numbers to justify a business decision is akin to the elementary school exercise of connecting dots to reveal a pattern. As with a single dot, a single statistic seldom tells the whole story. Business leaders should be wary of arguments that rely on just one or a few statistics to make a case for action.
The recent US health care debate revealed the dangers of this overreliance on a single statistic. During the debate, it often was reported that the number of uninsured Americans was almost 50 million. But this figure is nearly meaningless and requires digging deeper. It overstates the problem, in that only a fraction of the 50 million are chronically uninsured. The figure includes those eligible for government health insurance but who have not enrolled; it includes illegal immigrants; it includes many who have the means to pay for insurance, but choose not to buy it; and it includes Americans who go without insurance for relatively short periods. But the figure also understates the problem, in that it does not consider the underinsured—those workers who face substantial health care costs not covered by their policies.
Business leaders should be alert to the credibility busters they may encounter when working with and presenting statistics. These are poor statistical practices that can cast doubt on conclusions and recommendations. One credibility buster is false precision when presenting statistics; that is, expressing a number with more significant digits—non-placeholder digits that carry meaning—than is appropriate.
Doing so implies additional accuracy that is false, and it can cause overconfidence in the numbers being presented. Although usually inadvertent and often harmless, such false precision can create doubt among the more discerning members of your audience and further resistance from those predisposed against your conclusions and recommendations. The solution is to use a number of significant digits commensurate with the accuracy of the data or to provide a range rather than a single figure.
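Rounding to a chosen number of significant digits is mechanical. Here is a minimal Python sketch; the helper name `round_sig` is illustrative, not a standard library function:

```python
# Round a value to a given number of significant digits so a reported
# figure does not imply more accuracy than the data support.
from math import floor, log10

def round_sig(x: float, sig: int) -> float:
    """Round x to `sig` significant digits."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

# A raw estimate of 12,347.6218 implies accuracy the underlying data
# rarely support; three significant digits convey it honestly.
print(round_sig(12347.6218, 3))  # 12300.0
print(round_sig(0.0456789, 2))   # 0.046
```

Presenting "about 12,300" (or a range such as 12,000 to 12,600) signals the true accuracy far better than the eight-digit figure.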
A second credibility buster is the improper use of graph scales to exaggerate or understate variations in the data—for example, upward or downward historical trends. This includes starting the y-axis scale at a number other than zero or using a tight plotting range for the y-axis variable. Nearly every data set can be made to look highly variable if the plotting range is set tightly enough. Conversely, a data set can be made to show little variability by setting a wide plotting range for the y-variable. Starting the y-axis scale at a number other than zero is acceptable if it is clearly annotated and necessary for the data. Similarly, it is permissible to set a tight or loose plotting range, as long as it is not taken so far that it distorts the picture.
Another credibility buster is presenting figures that lack statistical significance. Statistical significance refers to the degree of confidence that can be placed in numbers. This level of confidence can in fact be computed, usually taking the form of a percentage (for example, a 95 percent confidence level). A lack of statistical significance can result from any number of factors, but it usually arises when sample sizes are too small. Fortunately, there are formulas and easy-to-compute methods to determine minimum sample sizes for specific confidence levels.
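One such formula, sketched below in Python, is the standard minimum sample size for estimating a proportion under simple random sampling: n = z²p(1 − p)/e², where z is the standard-normal critical value for the desired confidence level, p the expected proportion (0.5 is the conservative worst case), and e the acceptable margin of error.

```python
# Minimum sample size for estimating a proportion at a given
# confidence level, assuming simple random sampling.
from math import ceil

def min_sample_size(z: float, p: float, e: float) -> int:
    """n = z^2 * p * (1 - p) / e^2, rounded up to a whole respondent."""
    return ceil(z * z * p * (1 - p) / (e * e))

# 95 percent confidence (z = 1.96), worst-case p = 0.5, and a margin
# of error of 5 percentage points:
print(min_sample_size(1.96, 0.5, 0.05))  # 385
```

This is why credible surveys at the 95 percent level and a five-point margin of error typically report samples of a few hundred respondents, not a few dozen.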
A final credibility buster is confusing correlation with causality. When two variables change in a predictable way in relation to one another, they are said to be correlated. However, the relationship does not necessarily mean that a change in one variable causes a change in the other variable. Many such non-causal relationships exist in business. In these situations, a third, often unidentified, causal factor is at work, driving the correlated behavior in the other variables. The goal is to tease out the underlying causal factors and their effects on the dependent variables.
Statistics can provide powerful support for business decisions, but they must be used in the right way. You and the key people in your organization should strive to become literate in quantitative concepts and gain the ability to effectively convey them.
Tim Becker is the founder of Probity Business Group LLC, a consultancy focused on strategy, growth, technology, and operations improvement. He was previously a partner with Accenture. Prior to that, he was a consultant with AT Kearney and Halliburton. Becker began his career as a US Navy nuclear submarine officer. He may be contacted at firstname.lastname@example.org or +1-404-401-2653.