Without statistics, we don’t know anything; with statistics, we don’t know everything

This will be a short essay in which I’ll try to explain and justify the modern social sciences’ focus on statistical and quantitative analysis. I wanted to write this both to clarify a number of misunderstandings I hear in my conversations with others, and also to explain why I’ve recently become so much more interested in social science, as opposed to the humanities that so animated me in college. I also recently read a short book on the philosophy of modern social sciences, The Logic of Causal Order, to help inform my thinking about this.

To begin: social science and the statistical methods on which it is based have a proper place. Their proper place is to help us gain descriptive (or ‘positive’) knowledge about how the world works. They cannot answer normative, moral, or aesthetic questions like, “What is beauty? What literature is the most meaningful and powerful? What are our moral obligations to other people? What forms and stages of life deserve protection? What should our and our society’s ideals be?” These are not questions about what the world is; they are questions about what we want it to be and how we ought to live as humans. They can’t be answered with descriptive analysis, because they’re not descriptive questions. As such, these questions are the province of the humanities and not the social sciences. I want to concede and insist upon all of this up front, because I fear that the strongest objections most people have to social science come from those areas where social science collides with fraught aesthetic, moral, and political questions. If social science pretends to answer questions like these, then it’s not really being social science.

But for most descriptive questions that we need to answer, our best hope for getting a good answer is sophisticated statistical analysis — and we ought to put as many of our beliefs to statistical tests as possible. Most arguments against this idea are poorly reasoned and deeply problematic. The way I want to articulate this position is to argue against my imaginary friend, Bob:

Bob is a local businessman, and a proponent of the idea that statistical analysis and theory are generally rubbish. He believes in the practical knowledge he has accumulated throughout his life. He has experience, not statistics — you can prove anything with statistics, but experience is solid. He gets practical results — no abstract theories required. He knows that he can tell a good wine from a bad one (even though social psychologists have shown that wine connoisseurs fail in blind taste tests), because he tastes the difference every time he picks up a glass. And he also knows quite a bit about business, which is why, after his long career as a sole proprietor, he now hires himself out as a consultant. For example, Bob says he has practical knowledge that “the essence of running a good retailing business is inventory turnover.” When he took over a failing retailing business 5 years ago, he accelerated its inventory turnover to 40 turnovers per year, and the company quickly turned around.

Problem question: Does Bob have the knowledge he claims? Definitely not. It’s never fun or nice (or very good for your career) to puncture the self-confidence of successful businessmen, but when somebody claims ‘practical knowledge’ while disdaining statistical proof, you should turn away and never hire that person. Why do I say this?

Let’s start with Bob’s claim about his business knowledge, and think very carefully and explicitly about what he has actually claimed. First, he has a historical claim: he increased the inventory turnover rate at his retail business, and the company flourished. Fine. But what should be obvious is that Bob can’t have any confidence that the increased turnover was the cause of the company’s turnaround. Any company’s prospects at any moment are influenced by thousands of factors. Maybe his company turned around because economic conditions improved globally — the recession came to an end and consumer demand increased. Maybe things just got better locally — fracking started nearby, say. Maybe some socially influential person in town happened to discover his store and told all of his friends. Maybe some innovation in textiles pushed down the cost of raw materials, and that decrease didn’t get passed on to consumers right away. Maybe some other factor had been artificially depressing demand for the retailer’s wares the year before Bob took over, and it suddenly went away that year. Given all of these possibilities (and thousands more), can Bob say with confidence that he has practical *knowledge* that inventory turnover was the key to his business’s success? No way.

But now, let’s give Bob credit and suppose he has thought about all of these factors and accounted for them — he’s fairly sure that, in this case, nothing else changed in the year of the retailer’s turnaround, and nothing else can account for how well it did. So he goes beyond his simple historical account, and stands by his generalization that “the essence of good retailing business is fast inventory turnover.” He advises his consulting clients to turn over their inventory accordingly. What can we say about Bob now? Well, first, despite his claims to the contrary, Bob is now doing theory — he’s made a claim about cause and effect in general between two variables in the abstract. And also — and this is the important part — Bob has made what is essentially a statistical claim, and the logical next step is to subject his claim to statistical scrutiny. Essentially, Bob has said that faster inventory turnover by itself has a positive impact on a firm’s profits. Another way of putting this is that if we had 100 businesses that were similar in all other respects, and half of them sped up their inventory turnover, we would expect businesses in that half to, in general, enjoy more profits and success than the others.

See what we’ve done here? What Bob has phrased as a claim about the ‘essence’ of business we can just rephrase as a statistical claim. And the advantage of rephrasing as a statistical claim is that, in this form, we can test it, and figure out if it’s actually true, as opposed to the folk-wisdom of one man who just happened to have a lucky experience.
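To make this concrete, here is a minimal, stdlib-only sketch of how the rephrased claim could be tested. Everything here is synthetic: the profit numbers, the group sizes, and the assumed +2 effect of faster turnover are invented purely to illustrate the logic of a permutation test, not to report any real result.

```python
import random
import statistics

random.seed(0)

# Hypothetical data: 100 similar firms, half of which sped up inventory
# turnover. The +2 "true effect" is an assumption baked into the fake data.
control = [random.gauss(10, 3) for _ in range(50)]
treated = [random.gauss(12, 3) for _ in range(50)]

observed_diff = statistics.mean(treated) - statistics.mean(control)

# Permutation test: if turnover made no difference, the group labels are
# arbitrary, so shuffle them and count how often chance alone produces a
# difference at least as large as the one we observed.
pooled = control + treated
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:50]) - statistics.mean(pooled[50:])
    if diff >= observed_diff:
        extreme += 1
p_value = extreme / trials

print(f"observed difference: {observed_diff:.2f}, one-sided p: {p_value:.4f}")
```

If the labels really didn’t matter, shuffled differences as large as the observed one would be common and the p-value would be large; if Bob’s claim is right, they will be rare.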

So does Bob actually know what he thinks he knows? I’m not sure, but modern social scientists could find out, using the sophisticated, well-justified, well-audited statistical methods their fields have developed. We could dig through the financial data of thousands of retailers, find a cross-section of firms that are very similar in most respects except inventory turnover, and use regression analysis to estimate the average impact of a given increase in inventory turnover on a firm’s profits. Ideally, we could even find a natural experiment, where some set of firms was effectively assigned at random to increase its inventory turnover — while other variables were untouched — so we could see the impact of that variable alone. We know that these methods are valid, because they’re just applied math, which is grounded in logic. If you think the methods are flawed, there are mathematical proofs that can show you otherwise. None of this is voodoo or rocket science, and it’s not take-it-or-leave-it — it’s just a matter of taking Bob’s claim seriously, explicitly stating its measurable implications, and then testing to see if they check out. If he’s right, then they will check out: regression analysis will show a positive influence of inventory turnover on profits. If the regression analysis doesn’t show this, then — while it’s still conceivable that he was right about his particular experience — his claim that higher inventory turnover is better for firms in general is just plain wrong, and there’s no way around it. He should accept that his experience was a statistical quirk and that, as such, he’s not justified in advising other companies accordingly. If his knowledge doesn’t generalize — i.e., if it can’t be applied in other contexts — then it’s not at all useful.

So to review: To do better, we need generalizable knowledge. If ‘practical knowledge’ has not been tested systematically (think: pre-20th century medicine), it is highly, highly suspect. If practical knowledge can be tested statistically, it should be, and the statistics must have the last word.


That’s the basic idea. Now I just want to argue back against a lot of the complaints about and slogans against statistical methods that I hear from the people around me.

(1) ‘Numbers leave a lot out — even descriptive (positive) things about the world.’ This is a true, good, and important claim. And it’s not something social scientists are oblivious to. When we can get good data, we have sound statistical methods for doing the right thing with them. But when we just can’t get reliable, measurable data on something, statistics is useless. And this has implications for the limitations of descriptive social science. For example, a big part of understanding why one brand succeeds and another fails probably has to do with highly complex social influence. One cool group of friends decides to like the brand and uses it, causing other people to think it’s cool, and it slowly catches on. These things probably have a bigger impact on a company’s success than its inventory turnover does. But we’ve generally had scant ability to measure how cool and socially influential a company’s customers are, or how people’s vague attitudes over time go from, “What is that?” to “Hmm, this seems like the it thing,” to “Okay, I’m buying this.” These things are all inside people’s heads, and hence hard to measure — but no less important. As such, there’s always the risk that social scientists will — because statistically significant results are what get them published in journals — place too much emphasis, in their understandings of how the world works, on the things that they can measure.

(1a) BUT — I hasten to add this question — who does have knowledge about the things we can’t measure? The social scientists don’t have reliable data to ply their methods on, but should we therefore conclude that armchair philosophers, magazine pundits, gurus, or academics in softer cultural-studies and media-theory type fields *do* have this knowledge? I see no reason why we should trust any of these people in lieu of statistical checks. (Indeed, my own bias is to think that, if anything, social scientists who are trained to think rigorously about social systems probably have more likely hypotheses than any of these other groups.)

(2) ‘Correlation does not imply causation.’ Yep, statisticians and social scientists know this better than anybody else in the world, which is why we have sophisticated methods for distinguishing causation from mere correlation — controlling for confounding variables, path analysis, etc., etc. Believe me — if you’ve thought of it, so have modern methodologists.

(3) A general pet peeve of mine is that many people think that the social sciences have been discredited by some of the incorrect ideas social scientists had in the early 20th century. These same people usually do not think that all of literary theory has been discredited by, e.g., the noxious, wrong and crazy ideas that Freud and Lacan — to name just two thinkers whose influence persists in lit theory — had about, e.g., women. This double-standard is obvious, blatant, and suggests its own correction. And there is another obvious point to make. Yes, many so-called social scientists in the 20th century had noxious and incorrect beliefs. Do you know how we now know those beliefs were wrong and how we have since corrected them? Statistical demonstrations.

(4) But probably the main reason people object to statistical analysis is that, very plainly, they don’t like what the statistics are telling them. I would place this disliking into two separate categories.

(4a) They doubt the descriptive truth of the statistics. For example, suppose your friend says, “I read XYZ statistics about my home state/some other group I identify with, but I just don’t think that’s correct — it doesn’t jibe with my experience.” What do we tell this person? Well, first, if this person is correct, and the statistics are not right, then statistics itself provides methods for their correction. Indeed, it is only sound statistical methods that can correct bad statistics. If statistics have been incorrectly obtained, this does not reflect upon statistics itself (which provides ample warnings about, and methods for dealing with, things like selection bias), but upon the researchers. However, if the statistics our friend does not like actually were well obtained and are correct, then, well, (s)he’s wrong, and the statistics are right, and we just have to accept that; there’s no way around it. If you don’t like what the statistics are telling you, then you should change your beliefs, rather than expecting the truth to change to accommodate your sensitivities.
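Selection bias, mentioned above, is easy to demonstrate with a toy simulation (all numbers invented): if we can only sample the firms that survived, the naive average overstates the true one.

```python
import random
import statistics

random.seed(3)

# Hypothetical survivorship bias: firms with returns below a threshold go
# out of business, so they never show up in our sample.
true_returns = [random.gauss(0.0, 5.0) for _ in range(10_000)]
survivors = [r for r in true_returns if r > -2.0]  # failures drop out

print(f"true mean return:   {statistics.mean(true_returns):.2f}")
print(f"survivors-only mean: {statistics.mean(survivors):.2f}")
```

The survivors-only average is well above the true average, even though nothing about any individual firm was misrecorded. This is a flaw in how the sample was obtained, not in statistics itself — and it is statistical reasoning that diagnoses it.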

(4b) They fear the moral or political implications of a particular statistic. This is the tricky space where people confuse and conflate positive (descriptive) and normative reasoning. Let me work with a somewhat tense example. Suppose we were to discover new evidence that many of our behavioral traits and mental qualities were relatively strongly determined by age 5 (a finding, by the way, that I doubt will turn up — as one who is learning lots of new fields and techniques at a relatively advanced age, I am a believer in brain plasticity). I think some of my very progressive political-activist friends might object and worry that such a finding could be used to undermine support for public education — they would worry that people would think, “If we’re pretty fixed by kindergarten, what’s the point of, e.g., investing more in good elementary schools for the less privileged?” But I think this would be the wrong way to approach the finding. We could take plenty of other political lessons away from such a study — we could say it strengthens the case for a huge government push for pre-K education, or better care for pregnant and nursing mothers and newborns, because it would show how key the first five years of life are and just how much they do to reproduce inequality. In other words, we shouldn’t fear the possible implications of the statistics — we should face them head on, and work with them as best we can. Or suppose some new study were to show that boys are less likely than girls to have very high facility and proclivity for verbal reasoning. Would this undermine the goal of trying to make boys work hard and be confident in their English and Literature courses? Not necessarily.
If anything, evidence that boys have less of a proclivity for English could strengthen the case for making an especially hard push to teach them to read literature well — after all, it’s important to our society that all people, of both genders, develop their language skills, and so if boys are less likely to do it on their own, that urges us to push them harder.

Are there some cases where the kinds of re-interpretations I have suggested won’t work out, and the statistics will inevitably undermine some political or moral end that you support? Maybe. In that case, you just have to change your view. If you cannot make a case for a moral or political goal while simultaneously acknowledging demonstrably true statistics, then your case does not stand up to scrutiny, and you should find a new one.

(5) Finally, many people see social scientists and their statistical methods as overbearing and arrogant, going beyond their proper station. Does social science itself, properly defined, go beyond its proper station? No. Do social scientists themselves? Yes, sometimes. So I’ll conclude with the two things social scientists and their statistical methods cannot do:


(1) While social science can tell us about effective ways to achieve particular moral goals, it cannot tell us what our fundamental morality should be in the first place. If an economist comes on T.V. and tells us that ‘Our evidence suggests that raising the minimum wage will have xyz impact on the employment rate and government revenues,’ we should listen respectfully. If she tells us that ‘the minimum wage is intrinsically wrong, because consenting adults should be able to enter into economic contracts on any terms that they like,’ she is no longer being an economist, but a political philosopher. If an economist tells us, ‘An engineering degree brings a college graduate a wage premium of xyz in the labor market,’ that is a fact we should be aware of. If she tells us, ‘A liberal arts degree is completely pointless,’ she is being a cultural critic (there could be a value to a liberal arts degree that is unmeasurable, hence outside the purview of social science).

(2) Finally, statistical methods alone are not always sufficient to establish causal order. When we come in to test a hypothesis with regression analyses, we usually have a vague picture of the mechanics of the underlying system we want to analyze. We have general ideas that ‘A impacts B impacts C,’ and not the other way around. A lot of these ideas about causal order are common sense — ‘B’ cannot cause ‘A’ if ‘A’ happens first, for example. But a lot of them are less obvious. In these cases, statistics can only test causal orders that are assumed a priori, and those assumptions are borrowed from experts in the area under study.

