Is a rising stock market a good thing?

The answer is, “usually, yes.” But the reasons why are somewhat subtle.

Some of my friends have been sharing this article by Jeff Sommer in the NYT; the article points out how well the stock market has done under the current President’s tenure. The article focuses on the Dow Jones Industrial Average, a dinosaur of an index for measuring stock performance (it tracks an arbitrary and small number of companies and uses an absurd weighting scheme). But the more logical (and popular among economists) measure of stock market performance, the S&P500, has done incredibly well—up 163% since Obama’s inauguration, according to Yahoo Finance data. People like to look at how the stock market has performed under various presidents, at least partly because they think it is a measure of how well they have done on economic policy. People also like to look at how the market reacts during closely contested elections: If one party or the other unexpectedly wins the presidency or control of the House or Senate, the stock market’s reaction the next day is seen by some people as a measure of the market’s perception of the economic competence of the two parties.

In this post, I want to use just the most basic financial theory identities to show how to think through the question of when and why a rising stock market is a good thing. As I referenced above, the bottom line is that it generally is a good thing, but the reasons why are, I think, slightly more subtle than may be assumed by those who use the stock market to score political points. By exploring what, precisely, the stock market’s value represents, and why that matters, we can learn about finance and the economy, and also have more intelligent conversations about evaluating policy makers.

 

***

 

The fundamental identity of finance:

When people buy stocks, they do so in the hope of turning their spare cash today into more cash in the future. So, when we buy a stock, we’re paying for the future cash flows from that stock. Some of those cash flows could come from payouts from the company—in the form of dividends and share repurchases. And some of those could come from reselling the asset/stock to other investors (yielding a return from ‘capital appreciation’). But those other investors to whom we resell the stock will, in turn, also be hoping to profit via dividends, buybacks, and capital appreciation. All capital appreciation has to come from some other market participant being willing to pay more for the stock in the future, ad infinitum. What this means is that, even though capital appreciation is an important part of the gains that any investor can expect to make in her own lifetime, for the market as a whole, we can think of stock valuations and prices as a function of just dividends and repurchases.*

For simplicity, let us from now on just use ‘dividends’ in place of ‘dividends and repurchases.’ Since the market values stocks for their expected future cash flows—which we are now calling ‘dividends’—it makes sense to write the price using the following simple identity:

Price = [Expected dividends] / [Some discount rate]

I am making an abuse of notation here.** But the basic concept is there in this simple notation: Price today reflects expectations for future dividends, discounted by some amount. This is an identity. It is true by virtue of how we are defining the terms: Whatever the market’s expectation of future dividends, there must be some discount rate at which those future expected dividends are being discounted that can rationalize why the market is trading the asset at the current Price.
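To make the identity concrete, here is a minimal numerical sketch (the dividend and price figures are entirely made up): whatever price and dividend expectation you start from, there is always some discount rate that reconciles them.

```python
# A minimal sketch of the identity, treating the expected dividend as a level
# perpetuity (Price = D / r, so r = D / Price). All numbers are hypothetical.

expected_annual_dividend = 5.0   # expected dividend per share, in dollars
market_price = 100.0             # price the stock trades at today

implied_discount_rate = expected_annual_dividend / market_price
print(f"Implied discount rate: {implied_discount_rate:.1%}")   # 5.0%

# Read the other way: if the market discounted the same dividends less heavily,
# the same identity would imply a higher price.
price_at_4_percent = expected_annual_dividend / 0.04
print(f"Price at a 4% discount rate: ${price_at_4_percent:.2f}")   # $125.00
```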

So that’s just an identity—what can it tell us? Well, it tells us that significant*** changes in the value of stocks can come from one of two sources: Changes in expectations for future dividends and changes in discount rates. Let’s look at each of these in turn.

 

***

 

Dividends:

Dividends are just cash that firms can choose to pay out to shareholders. Legally speaking, shareholders are ‘residual claimants’ of the firm, meaning that they are entitled to be paid after the firm has met its obligations to pay contracting parties such as suppliers, debtholders, etc. This means that, to a rough approximation, the money that the firm can pay out to shareholders is determined by its profits. Empirically, dividends tend to reflect a moving average of firms’ profits over time. Profits that are not paid out to shareholders can be reinvested inside the firm. If the company’s internal reinvestments have the same rate of return as the company’s discount rate—and in economic equilibrium, they should be approximately equal, on average—then these internal reinvestments are just as valuable to shareholders as the cash payout would be. So, assuming that equality, short-term ‘payout policy’ (the choice of what fraction of profits to pay as dividends vs. reinvest) has no effect on the share value.
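To see why the timing of the payout washes out, here is a minimal sketch with made-up numbers, under the stated assumption that internal reinvestment earns exactly the discount rate:

```python
# A minimal sketch of payout irrelevance. Assumes the firm's internal
# reinvestment earns exactly the shareholders' discount rate.

r = 0.08            # discount rate = assumed return on internal reinvestment
profit = 100.0

value_if_paid_out_now = profit
value_if_reinvested = profit * (1 + r) / (1 + r)   # grow for a year, then discount back

print(f"Pay the profit out today:      ${value_if_paid_out_now:.2f}")   # $100.00
print(f"Reinvest a year, then pay out: ${value_if_reinvested:.2f}")     # $100.00
# As long as reinvestment earns the discount rate, shareholders are indifferent.
```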

What does this mean? It means that, to a close approximation, we can use profits and dividends interchangeably in thinking about firm valuation. Practically speaking, the things that increase firm profits are things that increase dividends. It also means that we can change our exact identity above to an approximate identity:

Price ≈ [Expected profits] / [Some discount rate]

(Indeed, you may already know that “fundamental value” investors and analysts typically use some accounting measure of profits (such as EBITDA) or free cash flows to value firms, rather than explicitly modeling their future dividend flows. The tight theoretical and empirical link between profits and dividends is the reason why they can do this.)

So, all that said, what are the implications? The bottom line is that one reason why stock prices increase is that expectations for future profits increase.

Is it a good thing when that happens? The answer is, I think, mostly yes. Usually, if market participants expect firm profits as a whole to grow, it’s because they’ve become more optimistic about consumer spending—and the things that tend to drive consumer spending, namely employment and GDP, tend to correlate with better outcomes in life for people as a whole.

But there could be some special circumstances when increased corporate profitability could be a bad thing. Suppose that some new policy were adopted that protected incumbent firms from competition by innovative startups. Since S&P500 firms are, by definition, large firms, expectations for their future profitability would, in this thought experiment, increase. The value of the S&P500 would increase, even as consumers would be hurt, and the value of privately-held and small-cap startups would decrease. Or, suppose that some new law were passed that greatly extended various patent protections. Assuming that S&P500 firms are net suppliers of patents, their profits would benefit, while the effects on consumers and the economy as a whole would be more ambiguous. Thus, if we used the value of the S&P500 as our summary statistic of economic well-being, we would be misled. In some policy circles, people draw a distinction between being “pro-business” and being “pro-market,” and this thought experiment captures one of the ways in which there can be a difference. Various types of policies could benefit certain corporations’ profits while being bad for the economy as a whole.

So the bottom line is that increases in the value of the stock market that are driven by what economists call “cash flow news” are usually, but not always, indicative of good news for the economy as a whole. It turns out that news about macro variables that will affect consumer spending (which tends to affect all firms in all industries) tends to swamp news about, say, legislation that will protect one industry or another from competition, etc. So most changes in expectations for corporate profitability reflect good news. But it’s worth remembering that this doesn’t always have to be the case.

 

***

 

Discount rates:

Discount rates capture the fact that, even if I expect some stock to pay me $100, on average, next year, I’m not willing to pay $100 for it today. There are two reasons for this: First, even if I expect it to pay $100, there’s some uncertainty—some probability that it could pay more or less—and most of us are risk averse and thus willing to pay less than the average payoff. Second, it has traditionally been asserted (though the era of negative interest rates may be casting doubt on this) that there is a time value of money, such that even riskless future cash flows are discounted in the present. For the purposes of this question—since risk-free interest rates have been low and stable for the past 8 years—the first factor is most important. Let’s leave aside the time value of money for now.

What will determine by how much I discount a risky payoff of $100? Necessarily, the two things that will determine that are (1) the riskiness of the payoff, i.e., how widely dispersed the possible outcomes are around my expectation (e.g., does it pay $50 vs. $150 with probability .5 each, or does it pay $0 vs. $200 with probability .5 each?), and (2) my attitudes towards that risk, i.e., how risk-averse or risk-seeking I am.
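To see how dispersion alone changes what I would pay, here is a minimal sketch that assumes a square-root utility function (just one simple way of modeling risk aversion, not anything from the text); the two gambles are the ones described above:

```python
import math

# Certainty equivalent under an assumed square-root utility: the sure payment
# that gives the same expected utility as the gamble.

def certainty_equivalent(outcomes, probs):
    expected_utility = sum(p * math.sqrt(x) for x, p in zip(outcomes, probs))
    return expected_utility ** 2

narrow = certainty_equivalent([50, 150], [0.5, 0.5])   # about $93
wide   = certainty_equivalent([0, 200],  [0.5, 0.5])   # about $50

print("Both gambles have an expected payoff of $100.")
print(f"Willing to pay about ${narrow:.0f} for the $50/$150 gamble.")
print(f"Willing to pay about ${wide:.0f} for the $0/$200 gamble.")
# Wider dispersion around the same expectation means a bigger discount.
```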

So what this means is that the other major source of changes in the value of the stock market is changes in discount rates, driven by changes in perceptions of riskiness as well as attitudes towards risk.

The prevailing consensus in modern finance is that most major aggregate (that is, market-wide) changes in the value of stocks are driven by ‘discount rate news’ rather than ‘cash flow news’—that is, changes in perceptions and attitudes towards risk. (Note that discount rate means the same thing as [required] rate of return, expected return, etc. All these terms can be used interchangeably, but I think that ‘discount rate’ is the most intuitive in the context of valuation.)

Is it a good thing, per se, when discount rates decrease—i.e., when perceptions of risk and aversion to risk decrease?

I actually think this is a tough philosophical question. Normatively, how can we say what our preferences and attitudes towards risk should be? Moreover, is it even possible to say how we should perceive the amount of risk that there is? Presumably, in judging changes in perceived riskiness, we would want to separate true riskiness from inaccurate perceptions of riskiness. Perhaps we think it is good when true riskiness decreases, and good when an inaccurately high perception of riskiness decreases to a correct perception—but that it is not good when the perception of riskiness becomes inaccurately low. But how do we make such a distinction between the truth and the perception? Indeed, in a deterministic world, it’s unclear what it even means to talk about the true riskiness!

If we could make such a distinction, between true riskiness and perceived riskiness, then it would seem that decreases in discount rates driven by decreases in true riskiness were a good thing. If the future path of GDP, consumer spending, and thus corporate profits, all become more reliable, and less risky, that would be desirable. But if discount rates decrease, we can never know whether the economy truly became less risky, or whether the market fallaciously perceives it as less risky. In the end, if the economy continues to do well, low discount rates will be proclaimed, ex post, to have been justified; if not, then the previous market high will be proclaimed, ex post, to have been an obvious bubble. But we can never know with certainty ex ante.

When the current president took office in 2009, the U.S. was still amid a major and largely unprecedented financial crisis. The low stock market valuations at the time likely reflected general macroeconomic pessimism—expectations that corporate profits might be low for some time—but also, more significantly, very high discount rates, reflecting uncertainty (perceptions of riskiness) and risk aversion. There could be a mix of institutional reasons (e.g., interlocking financial constraints) and psychological reasons for it, but the current academic finance literature is in agreement that discount rates/expected returns/required rates of return tend to be very high during recessions. It’s no surprise when the stock market bounces back from a recession low. The value of the stock market goes so low in recessions precisely because those are the time periods in which investors are most sensitive to downside risks—and that very fact, in turn, makes the stock market cheap, and thus likely to bounce back.

So the performance of the stock market over the past 8 years would seem to reflect two separate periods: First, a resolution of the extreme uncertainty of the financial crisis, which allowed discount rates to go from being very high to moderate. Most people would say that the smooth resolution of the financial crisis and its fear and uncertainty was a good thing. And second, the period of the last several years, in which decreased perceptions of and aversion to riskiness (themselves, in turn, influenced by monetary policy) allowed discount rates to go from being moderate to being very, very low. Are these very low discount rates a good thing?

We don’t know, and we won’t know, until it’s too late.  🙂

 

***

 

Conclusion:

Significant*** changes in the value of stocks are driven by changes in (i.) expectations for corporate profits and (ii.) discount rates. Increases in expectations for corporate profitability usually, but not always, reflect good economic news. Decreases in discount rates could reflect a desirable decrease in fear and uncertainty, but might also reflect fallacious overconfidence and risk tolerance. To find out whether today’s very low discount rates are a good thing, you’ll just have to wait and see.

 

 

______________________________________________________________________________

* Somewhat technically, for any positive discount rate, the discounted ‘terminal value’ of the asset asymptotes to zero as the horizon increases.

** In reality, since dividends are paid out over many future periods, and since discount rates can vary between periods, I should really have t-subscripts and a summation out to infinity. Also, depending on what notation you prefer, you can write the discount rate as the thing you multiply cash flows by (something like .94), or as the thing you divide them by (something like 1.05). We also have flexibility with whether to write the divisor as [discount rate] or as [1 + discount rate]. My goal is to focus on putting the high-level concepts in English, so my apologies if I irritate some precise readers, or those who have been previously exposed to one notation or the other.
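(In that fuller notation, and assuming for simplicity a single constant discount rate r, the identity would read: Price today = E[Dividend in year 1]/(1 + r) + E[Dividend in year 2]/(1 + r)² + E[Dividend in year 3]/(1 + r)³ + …, with the sum running out to infinity.)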

***Technically, where I write ‘significant,’ it should be ‘unexpected,’ to reflect the fact that, in the theoretical absence of news, the stock market would still be expected to increase by its expected rate of return.

Here’s all the interesting stuff in Nate Silver’s The Signal and the Noise

I’ve been immersing myself in statistics textbooks and software recently, as a part of a class and my general career interests. So over a weekend ski trip, I took on a lighter version of the work I’m doing by reading Nate Silver’s The Signal and the Noise: Why So Many Predictions Fail—but Some Don’t. Silver has been thoroughly and well-reviewed since his book was published shortly after the presidential election. So I won’t need to introduce him or the basics of what he does. My post will just highlight some of the more interesting, surprising, and difficult-to-articulate stuff in the book, particularly the parts related to topics in economics we’ve already discussed.

***

At the heart of the book is a powerful and important idea: The truth about the world, as best as we can understand it, is probabilistic and statistical, but we humans are unsuited to statistical and probabilistic thinking. What does this mean? Let me give a couple of examples. People say things like, ‘Uncle Lenny died of cancer because he was a smoker.’ The unjustified certainty with which we use the word ‘because’ here reveals a lot about how we think. After all, a large fraction of us—smokers and non-smokers alike—will die of cancer. We know with certainty that smoking statistically increases one’s risk of developing cancer, but we can’t say for sure that Uncle Lenny in particular wouldn’t have developed cancer if he weren’t a smoker. A more rational thing to say would be ‘Uncle Lenny died after being long weakened by cancer. As a smoker, he was more likely than the general population to contract cancer, and so it’s likely that his smoking was a significant contributor among other risk factors to his development of a cancer that was sufficient to contribute to his death.’ But that lacks a certain pith. See the problem? We all know that the underlying reality is that a wide variety of different risk factors contribute to and explain cancer, but we humans like to trade in certain and definitive statements about linear causation, rather than thinking about a complex system of inputs that take on different values with different probabilities and interact with each other dynamically to produce distributed outputs with certain probabilities. In other words, we humans like to reason with narratives and essences, but the truth of the world has more to do with statistical distributions, probabilities, and complex systems.

Other examples of essentialist thinking are: When we have a hot summer, we often say that it was caused by global warming; on the other hand, global-warming deniers will say that we cannot make any such attribution because we had hot summers from time to time even before the industrial revolution. The most realistic thing to say would be, “global warming is increasing our chances of experiencing such a hot summer, and thus the frequency of them.” Another example: people will say that, “Kiplagat is an excellent long-distance runner because he is a member of the Kalenjin tribe of Kenya.” This ‘because’ is not entirely justified, but neither is the offense that sensitive people take to this claim, when they say things like, “Not every Kalenjin is a fast runner! And some Americans from Oregon are great runners, too!” The most precise way of putting the underlying truth would be, “Kiplagat is an excellent long-distance runner. He is a member of the Kalenjin tribe, which is well-known to produce a hugely disproportionate share of the world’s best long-distance runners, so this is one major factor that explains his ability.”

Why are we bad at probabilistic thinking, and locked into definite, essentialist, narrative styles of thinking? The axiomatic part of the explanation is that our brains have evolved to reason the way they do because these styles of reasoning were advantageous throughout our evolutionary history. We humans have been built as delivery mechanisms for our masters and tyrants—our genes. They encode instructions to make us and our brains work in the ways that helped them (the genes) get passed down through the generations, rather than working in ways that lead us to the strict truth. Probabilistic thinking takes a lot of information gathering and computational power—things that either weren’t around in our evolutionary history, or were costly luxuries. So our brains have evolved mental shortcuts or ‘heuristics’—ways of thinking that give us the major advantages of surviving and reproducing in the probabilistic world, without all of the costs. Our ancestors did not think, and we do not think, ‘Three out of the last five of our encounters with members of this other tribe have ended badly; so we can conclude with X% certainty that the members of this other tribe that we see here are between Y% and Z% likely to have a hostile attitude toward us.’ Rather, our brains tell us, ‘This enemy is evil and dangerous; either run away or fight—look, I’ve already elevated your heart-rate and tightened up your forearms for you!’ I.e., they give us an essential claim and a definitive takeaway. In the modern age, public authorities say, ‘Smoking will give you cancer,’ which gets across the main takeaway point and influences behavior in important ways, more powerfully than ‘Lots of smoking generally contributes to a higher probability of developing cancer.’

Our brains are also wired to see a lot of patterns, causation, and agency when they aren’t there. As Silver notes, the popular evo-psych explanation for this is that it is more costly to erroneously attribute a rustling in the woods to a dangerous predator, and to take occasional unnecessary precautions against it, than it is to erroneously always assume that all rustling just comes from the wind, and get eaten alive when a predator actually does appear. Since missing a real pattern is more costly than falsely seeing an unreal one, we tend to see more patterns than there really are, and believe that we can discern a predictable pattern in the movement of stock prices, or get impressed by models that, e.g., predict presidential elections using only two small metrics, finding them more impressive than predictions that rely on aggregates of on-the-ground polling. Our basic innate failures at thinking statistically are reinforced by the culture around us, which accommodates/manipulates us (in good ways and in bad) by appealing to our need for narrative approaches to understanding the world.

But now we live in the modern age. Our needs are different than they were in our evolutionary history, and our evolved psychology should not be destiny. We need to learn to reason more truly—which means probabilistically and statistically. Silver explores how we have succeeded and failed in doing this with examples drawn from baseball, online poker, politics, meteorology, climate science, finance, etc.

***

Why is it so important that we learn to reason probabilistically and statistically?  There are two main reasons. The first is very practical, and the second is more theoretical but ultimately very important. First, we obviously base our plans for and investments in the future around our predictions of what the future will be like. But the future cannot be known with absolute certainty, so we need to make rational decisions around a probable distribution of outcomes. For example, in chapter 5, Silver recounts an example of a flood which the public authorities predicted would rise to 48 feet—since the levees protecting a neighboring area were 51 feet tall, locals assumed they were being told that they were safe. But the 48 foot prediction obviously had a margin of error and, this time, it was off by more than 3 feet, and the levees were overrun. Given how dangerous it is to be in a flooded area, the local residents, had they understood the margin of error in the prediction and the probability of the levees being overrun, would have decided it was worth evacuating as a precaution—but they weren’t made to understand that the authorities’ prediction in fact entailed a range of possibilities. This is a very concrete example of the ubiquitous problem of reasoning, planning, and acting around a single true expectation, rather than weighting a range of possible outcomes.
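To put rough numbers on the flood example: the 48-foot forecast and 51-foot levees are from the text, but the error model below (a normal distribution with a guessed standard deviation) is purely an illustrative assumption, not a figure from the book.

```python
from scipy.stats import norm

forecast_crest = 48.0      # feet, the point forecast
levee_height = 51.0        # feet
forecast_std_error = 4.0   # feet; assumed here for illustration only

# Probability the actual crest exceeds the levees under the assumed error model.
p_overtop = norm.sf(levee_height, loc=forecast_crest, scale=forecast_std_error)
print(f"Chance the levees are overtopped: {p_overtop:.0%}")   # roughly 23%

# A bare '48 feet' headline hides this: a point forecast plus a margin of error
# implies a very real probability of the outcome that actually matters.
```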

Another example of this is how climate scientists don’t feel like they can give probabilistic statements to the public, like, ‘The most likely outcome is that, on our current path, global temperatures will rise on average 2 degrees Celsius over the next 100 years, and we have 90% certainty that this increase will range between .5 and 3 degrees. Additionally, we fear the possibility that there could be as-yet-imperfectly-understood feedback loops in the climate which could, with 5% probability, raise temperatures by as much as 8 degrees over the next century–while the chance of this is low, the potential costs are so high that we must consider it in our public-policy responses. Additionally, the coming decade is expected to be hotter than any in the last 100 years, but there is a 10% possibility that it will be a cool decade, from random chance.’ The public—you and I—are not good at dealing with these kinds of probabilistic statements. We demand stronger-sounding, definitive predictions—they resonate with us and persuade us, because they’re what our brains are comfortable dealing with. And a lot of the confusion in public debates surrounding scientific matters comes from our demand for definitive answers, where science can only offer a range of probabilities and influences. Climate scientist Michael Mann was quoted in the book as saying, “Where you have to draw the line is to be very clear about where the uncertainties are, but to not have our statements be so laden in uncertainty that no one even listens to what we’re saying.”

But the second, more fundamental reason for why we need to get better at probabilistic prediction is that offering and then testing predictions is the basis of scientific progress. Good models, particularly those that model dynamic systems, should offer a range of probable predictions—if we can’t deal with those ranges, we can’t test which models are the best. That is, we as a society would be ill-advised to say to climate scientists, ‘You predicted that temperatures would rise this decade, but they didn’t—neener neener.’ Rather, we should be savvy enough to understand that there’s a margin of error in every prediction, and that the impact of some trends can be obscured by random noise in the short run, and so the climate scientists’ claim that temperatures are rising is true even if it did not appear in this particular decade.

The rest of this post consists of some of the more interesting of the book’s ideas about statistical reasoning, and some of the barriers thereto, after a brief digression on economics.

***

I’ve written in the past about the Efficient Market Hypothesis and about the value of short-selling, so it piqued my interest when Nate made some interesting points that related the two. One challenge Nate presents to the EMH is the two ‘obvious’-seeming bubbles that we have experienced in recent memory—the late 90s tech bubble, and the mid-2000s housing bubble. Now, it’s obviously very easy to call bubbles in retrospect, with the benefit of hindsight. But let’s accept for the sake of argument that we really could have seen these bubbles coming and popping—in the 90s, P/E ratios were hugely out of whack, and in the 2000s, housing prices had accelerated at rates that no underlying factor seemed to explain. The question is, why didn’t people short these markets and correct their exorbitant prices earlier?

Well, part of the problem is that in certain markets it can be difficult to accumulate a short position large enough to move prices to a more rational level without incurring huge transaction costs. But Silver’s more interesting argument is that institutional traders are too rational and too risk-averse relative to their own incentives. Counterintuitive, right? What does Silver mean? Let’s imagine that we’re in a market that looks a little overheated. Suppose there’s a market index that currently stands at 200, and you’re an analyst at a mutual fund and you think there’s a 1/3 chance that the market will crash to an index of 50 this year. That’s a big deal. But there’s still a 2/3 chance that the party won’t stop just this year, and the market index will rise to 220 (a 10% return—not bad). In this scenario, the bet with the highest expected return is to short the market, a bet with an expected return of about $.18 on the dollar ((1/3 * 150 – 2/3 * 20) / 200). Going long in the market has an expected loss of the same size. So if your goal is to maximize your expected return, you go short, obviously.
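Here is the same expected-return arithmetic spelled out, using the numbers from the scenario above:

```python
# Expected return to shorting (or going long on) the hypothetical index.

index_today = 200
p_crash, index_if_crash = 1/3, 50     # market crashes to 50
p_boom,  index_if_boom  = 2/3, 220    # party continues, market rises to 220

# Payoff to a short position: gain when the market falls, lose when it rises.
expected_short_payoff = (p_crash * (index_today - index_if_crash)
                         + p_boom * (index_today - index_if_boom))
expected_short_return = expected_short_payoff / index_today

print(f"Expected return to shorting:   {expected_short_return:+.1%}")   # about +18%
print(f"Expected return to going long: {-expected_short_return:+.1%}")  # about -18%
```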

The problem is, institutional traders don’t have an incentive to maximize their expected return, because the money they trade is not their own. Their first incentive is to cover their asses, so they don’t get fired. And if, in this scenario, you prophesy doom and a market crash, and short the whole market two years in a row while the market is still rising, you’ll have a lot of outraged clients and you will get fired. And that’s the most likely outcome—the 2/3 probability that the bull market will continue and return another 10% this year. If you go along with the crowd, and continue to buy into a bull market that has become overpriced, then, well, when the music stops and the bubble pops, you’ll lose your clients’ money, but you won’t look any worse than any of your competitors. So this may be why a lot of bubbles don’t get popped in timely fashion. It’s not that institutional investors are irrational—it’s that they are being rational relative to their career incentives, which are not well-aligned with market efficiency as a whole.

What’s the solution to this problem? Well, part of it is to get more really good short-sellers. One interesting tradeoff here is that the market is most efficient when people are (1) smart and (2) putting their own money on the line. Right now, we’re seeing a transition in which mutual funds and such are becoming more and more common, and so a larger portion of trading that is done in financial markets comes from institutions rather than from individual retail investors. These institutional traders may be smarter than independent retail investors, but they’re not betting their own money, which means their incentives are not well-aligned with market efficiency—the mutual fund’s first incentive is to avoid losing clients, who will bail out if the fund misses out on a bull market in the short term. So institutional investors will face a lot of pressure to keep buying into bull markets even when they know better. In short: don’t expect bubbles to go away anytime soon.

***

Silver discusses some of the implications of the fact that predictions themselves can change the behavior they aim to predict. This is particularly pertinent in epidemiology and economics. For example, if the public authorities successfully inform the public that, this year, the flu is expected to be especially virulent and widespread in Boston, Bostonians will be especially inclined to get vaccinated, which will then, in turn, cancel the prediction. So was the prediction wrong? Maybe, but thank God it was! In economics, if the economics establishment sees that some developing country is implementing all of the ‘right’ policies, it will predict lots of economic growth from that country—this will cause a lot of investments and optimism and talent to flow into that country which could ‘fulfill’ the prediction. On the most practical level, this means that in these scenarios it’s very difficult to issue and then assess the accuracy of predictions. On a philosophical level it may mean that a perfect prediction that involves human social behavior may be impossible, because it would require a recursive model in which the prediction itself was one of the inputs.

A lot of the reasoning here raises a moral quandary. Should forecasters issue their predictions strategically? We know that public-health authorities’ predictions about how bad a flu outbreak will be will influence how many people get immunizations. The Fed’s predictions about the future of the economy influence companies’ plans for the future, plans which can then fulfill the Fed’s predictions (i.e., if a company is persuaded by the Fed that there will be an economic recovery, then it will ramp up its production and hiring right now, in order to meet that future demand, which will help fulfill that prophecy). Should these and similarly situated agencies therefore issue their predictions not descriptively, but strategically, i.e., with an eye to influencing our behavior in positive ways? In practice, I assume the agencies definitely do. The Fed has consistently over-predicted the path of the U.S. economy since the financial crisis. This is embarrassing for it, but any cavalier expression of pessimism from the Fed very well could have tilted the U.S. into a double-dip recession. The obvious problem is that when public agencies make their predictions strategically rather than descriptively, they could, over the long run, dilute the power of their predictions in the eyes of the public—i.e., people might start to automatically discount the authorities’ claims, thinking “this year’s outbreak of avian flu, much like last year’s, will affect 10^3 fewer people than the authorities suggest, so I don’t actually need to get a vaccination.”

***

Silver offers a lot of helpful reminders that rationality requires us to go beyond ‘results-oriented thinking.’ On televised poker, for example, commentators praise the wisdom and perspicacity of players who bet big when their hands weren’t actually all that strong, statistically speaking, and who win either because (1) they caught a break on the last cards dealt (in Texas hold’em style) or (2) they were lucky enough that everyone else had even weaker hands. But while commentators may praise these players’ prescience, we should call these bets what they are—dumb luck. We shouldn’t evaluate people’s decisions after the fact using perfect information, what we know now. We should evaluate how rationally they acted given the information they had access to at the time. And betting big with a weak hand, without any information that other players’ hands are even weaker, is never the smart or rational thing to do—even though it will pay off on some occasions by sheer luck.

***

‘Big Data’ is a modish term right now. An essayist in Wired claimed a few years ago that as we gain more and more data, the need for theory will disappear. Silver argues that this is just the opposite of the truth. As the amount of information we have multiplies over and over again, the amount of statistical ‘noise’ we’ll get will multiply exponentially. With all this data, there will be more spurious correlations, which data alone will not be able to tease out. In the world of Big Data we’re going to need a lot of really sound theory to figure out what the causal mechanisms (or lack thereof) in the data are, and which impressive-seeming correlations are spurious, explained by random chance. So theory will become more important, not less.

***

One big takeaway for me, as I read Silver’s accounts of how statistical methods have been applied to improve a variety of fields, is that we are very easily impressed, sometimes intimidated, by mathematical renderings of ideas, but statistics really is not rocket science. The computations that statistical software can do at first seem complex, but they’re all ultimately built on relatively easy, intuitive, concrete logical steps. Same with models: the assumptions on which we build models, and the principles we use to tease out causation and such from within the wealth of data, are ultimately pretty intuitive and straightforward and based in basic logical inference. In reading Silver’s account of how the ratings agencies got mortgage-backed securities wrong in the run-up to the financial crisis, I was astonished by just how simple the models the agencies were using were. That is, even those of us who like to bash the financial sector still tend to assume there’s some sophisticated stuff beyond our ken going on in there. But Silver reports, for example, that the ratings agencies had considered the possibility that the housing market might decline in the U.S., but continued to assume that defaults on mortgages would remain uncorrelated through such a period. The idea that mortgage defaults would always exhibit independence—and that the rate of default as a whole could not be changed by global economic conditions—is flatly ridiculous to anybody who takes a moment to think imaginatively about how a recession could affect a housing market. But because the ratings agencies’ ratings were dressed up in Models based around Numbers on Spreadsheets, Serious People concluded that they were Serious Ratings. A lesson for the future: Don’t let yourself be bullied into denying the obvious truths or accepting obvious falsehoods just because they have been formulated in mathematical notation. A seemingly sophisticated mathematical model is in truth a very bad one if its basic assumptions are incorrect.
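To see how much heavy lifting the independence assumption does, here is a minimal simulation with illustrative numbers (not the agencies' actual model): in both worlds each mortgage defaults about 5% of the time on average, but in the correlated world a common housing-market shock can hit every mortgage at once.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mortgages, n_trials, threshold = 100, 100_000, 10

# Independent world: every mortgage is its own 5% coin flip.
indep_defaults = rng.binomial(n_mortgages, 0.05, size=n_trials)

# Correlated world: a 10%-probability housing crash raises every mortgage's
# default probability to 30% at once; otherwise it is about 2.2%, which keeps
# the unconditional default rate near 5%.
crash = rng.random(n_trials) < 0.10
p_default = np.where(crash, 0.30, 0.0222)
corr_defaults = rng.binomial(n_mortgages, p_default)

print(f"P(10+ of 100 default), independent: {np.mean(indep_defaults >= threshold):.3f}")  # ~0.03
print(f"P(10+ of 100 default), correlated:  {np.mean(corr_defaults >= threshold):.3f}")   # ~0.10
```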

The lesson here is not that we should eschew statistical methods—it’s that we should get in on the game and improve the models, instead of being cowed by the people who wield them. Indeed, another striking part of the book was Silver’s admission that his own famous political-prediction model on his FiveThirtyEight blog is not terribly sophisticated—it’s only been so successful because everyone else’s standards in the political world have been so low. And the statistical methods that revolutionized baseball drafting and trading, as recalled in Moneyball, weren’t that sophisticated either—they were just low-hanging fruit that hadn’t been eaten yet.

***

The more polemical parts of the book center on Silver’s righteous claim that pundits be held to account for their predictions. Silver points out that political pundits, like those who appear on the McLaughlin Group, regularly get their forecasts wrong in very predictable ways, and never get called out on them or punished. As one who, like Silver, gets angry when people make plainly descriptively untrue statements about the world, I did enjoy his righteous outrage. But I think that in this, he (and I) get something basically wrong—namely, being a political pundit and appearing on the McLaughlin Group are fundamentally not truth-seeking activities, and so their failure to deliver truth should be completely unsurprising and probably doesn’t even qualify as a real indictment in the pundits’ minds. The goal of the people engaged in these activities is not to uncover the truth, but to root for their team. So of course the Republican pundits on the McLaughlin Group always predict Republican electoral victories, as the Democrats predict Democratic victories. That’s what they’re there for.

More fundamentally, I think Silver underestimates how uncommon it is for people to think about the world in a descriptive truth-seeking manner. Most of us most of the time are not engaged in truth-seeking activity. Most of us typically choose the utterances we issue about the world on the basis of loyalties, emotional moral commitments, etc. Thinking about the world descriptively is just not the natural mode for most people. When a Red Sox fan, in the middle of a bad season, says something like, “The Red Sox are going to win this game against the Yankees,” we shouldn’t actually take him to mean, “The Red Sox are certain to win this game” or even necessarily “The Red Sox have a better than even chance of winning this game.” Rather, the real content of his statement is better translated as, “Rah, rah, goooo Red Sox!” For most people, statements that they phrase as predictions are not a matter of descriptive analysis of the world—they’re statements of affiliation, hope, moral self-expression, etc. The social scientific and descriptive mindsets are very rare and unnatural for humans, and if we’re going to get angry about people’s failures in this respect, we’re going to be angry pretty much all the time.

But I do agree with the basic takeaway from this polemic: Silver wants to make betting markets a more common, acceptable, and widely-expected thing. If we were forced to publicly put our money where our mouths are, we might be more serious and humble about the predictions we make about the future, which should improve their quality. I’ve long relied on Intrade to give me good, serious predictive insights into areas where I have no expertise, and do wish liquid betting markets like it, where I can gain credible insights into all kinds of areas, were more common and entirely legal.

***

A lot of expert reasoning goes into building a good model with which to make a prediction. But what about us general members of the public who don’t have the time to acquire expertise and build our own models? How should we figure out what to believe about the future? Silver provides some evidence that aggregations of respectable forecasters’ predictions (i.e., those of forecasters who have historically done very well) are almost always better than any individual’s forecasts. E.g., an index that averages the predictions of 70 economists consulted on their expectations for GDP growth over the next year does much better than the predictions of any one of those economists. So in general, when we’re outside of our expertise, our best bet is to rely on weighted averages of expert estimates.

But there’s an interesting catch here: While aggregates of expert predictions generally do better than any individual experts, this fact depends upon the experts doing their work independently. For example, Intrade has done even better than Nate Silver in predicting the most recent election cycles, according to Justin Wolfers’ metrics. So does that mean that Nate Silver should throw away his blog, and just retweet Intrade’s numbers? No. And the reason is that Intrade’s prices are strongly affected by Silver’s predictions. So if Silver were, in turn, to base his model around Intrade, we would get a circular process that would amplify a lot of statistical noise. An aggregation ideally draws on the wisdom of crowds, the law of large numbers, and the cancelling-out of biases. This doesn’t work if the forecasts you’re aggregating are based on each other.
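A small simulation makes the independence point concrete; the forecast-error sizes here are made up purely for illustration.

```python
import numpy as np

# Averaging 70 forecasts works well when each forecaster's error is her own;
# it works badly when the forecasts share a common (herded) error component.

rng = np.random.default_rng(1)
truth, n_forecasters, n_sims = 2.0, 70, 20_000

own_noise = rng.normal(0, 1.0, size=(n_sims, n_forecasters))
shared_noise = rng.normal(0, 1.0, size=(n_sims, 1))

independent_avg = (truth + own_noise).mean(axis=1)
herded_avg = (truth + 0.8 * shared_noise + 0.2 * own_noise).mean(axis=1)

rmse = lambda x: np.sqrt(np.mean((x - truth) ** 2))
print(f"RMSE of the average, independent forecasters: {rmse(independent_avg):.2f}")  # ~0.12
print(f"RMSE of the average, herded forecasters:      {rmse(herded_avg):.2f}")       # ~0.80
```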

Aggregations of predictions are also usually better than achieving consensus. Locking experts together and forcing them to all agree may give outsized influence to the opinions of charismatic, forceful personalities, which undermines the advantages of aggregation.

***

Nate argues, persuasively, that we actually are getting much better at predicting the future in a variety of fields, a notable example of which is meteorology. But one interesting and telling Fun Fact is that while meteorologists’ actual predictions are getting very good, the predictions that they are compelled to present to the public are not so strong. For example, the weather forecasts we see on T.V. have a ‘wet bias.’ When there is only a 5-10% chance of rain, the T.V. forecasters will say that there is a 30% chance, because when people hear 5-10% chance they treat it as essentially an impossibility, and become outraged if they plan a picnic that subsequently gets rained on, etc. So to abate righteous outrage, weather forecasters have found it necessary to over-report the probability of rain.

Meteorologists’ models are getting better. We humans just aren’t keeping pace, in terms of learning to think in probabilities.

***

But outside of the physical sciences, whose systems are regulated by well-known laws, we tend to suck at forecasting. Few political scientists forecast the downfall of the Soviet Union. Nate attributes this failure to political biases—right-leaning people were unwilling to see that Gorbachev actually was a sincere reformer, while left-leaning people were unwilling to see how deeply flawed the USSR’s fundamental economic model was. Few economists ‘predicted’ the most recent recession even at points in time when, as later statistics would reveal, we were already in the midst of it. Etc., etc.

***

Silver points out that predictions based on models of phenomena with exponential or power-law properties seem hugely unreliable to us humans who evaluate these models’ predictions in linear terms. A slight change in a model’s parameters can have huge implications for its predictions if the model is exponential. This can cause a funny dissonance: a researcher might think her model is pretty good, if its predictions come within an order of magnitude of observations, because this indicates that her basic parameters are in the right ballpark. But to a person who thinks in linear terms, an order-of-magnitude error looks like a huge mistake.
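A toy example, with entirely made-up numbers, of how this dissonance arises:

```python
import math

# A modest change in the growth parameter of an exponential model compounds
# into a difference of roughly an order of magnitude in the level predicted
# at the end of a long horizon.

initial_value = 1.0
horizon = 100   # periods ahead

for growth_rate in (0.05, 0.06, 0.07):
    prediction = initial_value * math.exp(growth_rate * horizon)
    print(f"growth rate {growth_rate:.2f} -> prediction {prediction:,.0f}")

# growth rate 0.05 -> prediction 148
# growth rate 0.06 -> prediction 403
# growth rate 0.07 -> prediction 1,097
```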

***

Silver briefly gestures at a thing that the economist Deirdre McCloskey has often pointed out—that our use of ‘statistical significance’ in social science is arbitrary and philosophically unjustified. What is statistical significance? Let me back up and explain the basics: Suppose we are interested in establishing whether there is a relationship, among grown adults, between age and weight—i.e., are 50-year-olds generally heavier than 40- and 35-year-olds? Suppose we sampled, say, 200 people between 35 and 50, and wrote down their ages and weights, and then constructed a dataset. Suppose we did a linear regression analysis on the data, which revealed a positive ‘slope,’ representing the average impact that an extra year of life had on weight in the sample. Could we be confident that in general, for the population of people between 35 and 50 as a whole, this relationship holds? Not necessarily. Theoretically, there’s always a chance that our sample set is different—by pure chance—from the general population, and so the relationship in our sample cannot be generalized. There’s a possibility that the relationship we observed between age and weight is not a true relationship at all, but was just a matter of chance. And (as long as our sample was truly randomly selected from the population) we can actually calculate how likely it would be to observe a relationship this strong by chance alone, using the data’s standard deviation and the size of our sample. In statistics, we call this the p-value: a p-value of .05 means that, if there were really no relationship between age and weight in the population, a sample relationship at least as strong as the one we observed would arise by chance only 5% of the time. In contemporary academe, social scientists by convention will generally publish results with a ‘statistical significance’ of 95%—i.e., where the p-value is lower than .05. But applying this rule mechanically actually doesn’t make much sense. It means that today, a statistical analysis that produces a result with a p-value of .050 will get published, while one with a p-value of .051 will not, even though the underlying realities are almost indistinguishable. There’s no fundamental philosophical reason for setting our general p-value cutoff at .05—indeed, the basic reason we do this is that we have 5 fingers. In practice, this contributes to the rejection of some true results and the acceptance of some false results. If we accept all findings that establish ‘statistical significance,’ then we’ll accept a lot of false results. For example, if researchers test 100 relationships that do not in fact exist, we would expect about 5 of them to clear the p < .05 bar by chance alone and get published as ‘findings,’ illusions of the samples from which they were built. (This is, by the way, after controlling for the possibility of the data being incorrectly obtained.)
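For the curious, here is what the age-and-weight exercise looks like in practice, on simulated data (the sample and the true effect size are invented for illustration):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)
n = 200
ages = rng.uniform(35, 50, size=n)
# Assume a small true effect (about half a pound per year of age) plus noise.
weights = 150 + 0.5 * ages + rng.normal(0, 20, size=n)

result = linregress(ages, weights)
print(f"Estimated slope: {result.slope:.2f} lbs per year of age")
print(f"p-value for the hypothesis that the true slope is zero: {result.pvalue:.3f}")

# Whether this sample 'counts' as a finding under the usual convention comes
# down to whether that p-value happens to land below the arbitrary .05 line.
```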

***

On page 379, Silver has what is possibly the greatest typo in history: “ ‘At NASA, I finally realized that the definition of rocket science is using relatively simple psychics to solve complex problems,’ Rood told me.” (I am envisioning NASA scientists carefully scribbling down the pronouncements of glazy-eyed, slow-spoken Tarot-card readers.)

***

The final chapter in the book, on terrorism, was fascinating to me, because with terrorism, as with other phenomena, we can find statistical regularities in the data, with no obvious causal mechanism to explain those regularities. In the case of terrorism, there is a power-law distribution relating the frequencies and death tolls of terrorist attacks. One horrible feature of the power-law distribution of terrorist attacks is that we should predict that most deaths from terrorism will come from the highly improbable, high-impact attacks. So over the long term, we’d be justified in putting more effort into preventing, e.g., a nuclear attack on a major city that may never happen, as opposed to a larger number of small-scale terrorist attacks. Silver even argues that Israel has effectively adopted a policy of ‘accepting’ a number of smaller-scale attacks, freeing the country to put substantial effort into stopping the very large-scale attacks—he shows data suggesting that Israel has been able to thus ‘bend the curve,’ reducing the total number of deaths from terrorism in the country below what we would otherwise expect.
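To illustrate the tail property driving this conclusion, here is a minimal comparison between a power-law (Pareto) model and a thin-tailed model of attack sizes; the parameters are illustrative, not Silver's fitted values.

```python
from scipy.stats import pareto, expon

typical = 10    # a 'typical' attack size, in deaths (illustrative)
huge = 1000     # a catastrophic attack, 100x the typical size

power_law = pareto(b=1.5, scale=typical)   # heavy tail
thin_tail = expon(scale=typical)           # thin tail with the same scale

print(f"P(deaths > {huge}) under the power law: {power_law.sf(huge):.1e}")   # ~1e-03
print(f"P(deaths > {huge}) under the thin tail: {thin_tail.sf(huge):.1e}")   # ~4e-44

# Under the power law, rare catastrophic attacks are plausible enough that they
# dominate the expected death toll; under a thin-tailed model they are
# effectively impossible.
```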

***

But the big thing I was hoping to get from this book was a better understanding of the vaunted revolution in statistics in which Bayesian interpretations and ideas are supplanting the previously-dominant ‘frequentist’ approach. But I didn’t come away with a sound understanding of Bayesian statistics beyond the triviality that it involves revising predictions as new information presents itself. Silver told us that the idea can be formulated as a simple mathematical identity: It requires us to give weights to the ‘prior’ probability of a thing being true; the probability that the new information would present itself if the thing were true; and the probability of the information presenting itself but the thing still being false. With these three we can supposedly calculate a ‘posterior probability,’ or our new assessment of the phenomenon being true. While I will learn more about the Bayesian approach on my own, Silver really did not convey this identity on a mathematical level, or help the reader understand its force on a conceptual level.
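For what it is worth, the identity Silver describes fits in a few lines; the three inputs are the probabilities he names, and the numbers below are arbitrary placeholders.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: updated probability the claim is true, given the new evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# E.g., start out giving a claim a 20% chance of being true; the new evidence
# is 80% likely to appear if the claim is true and 10% likely if it is false.
print(f"Posterior: {posterior(0.20, 0.80, 0.10):.0%}")   # 67%
```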

Overall, then, I found the book disappointing in its substantive, conceptual, and theoretical content. A lot of the big takeaways of the book are moral-virtue lessons, like, “Always keep an open mind and be open to revising your theories as new information presents itself”; “Consult a wide array of sources of information and expertise in forming your theories and predictions”; “We can never be perfect in our predictions—but we can get less wrong.” All great advice—but not what I wanted to get from the time I put into the book. The sections on chess and poker are interesting and good journalism, too, but they will do little to advance the reader’s understanding of statistics, model-building, or the oft-heralded “Bayesian revolution” in statistics, etc. But maybe I’m being a snob and wanting more of a challenge than a book could pose if it expected to sell.

–Matthew Shaffer

What good is short-selling? (Econ for poets)

If you follow the business press, you’ve probably seen the raucous unfolding story about Herbalife, a company whose share-price has tumbled then oscillated ever since Bill Ackman, the hedge-fund manager, took a short position in the stock a couple months ago. Ackman has alleged that Herbalife is actually a pyramid scheme — i.e., that its revenue primarily comes not from its sale of actual goods, but from its ‘multi-level marketing’ strategy in which its distributors recruit new individuals to sign up as distributors, and take a portion of that sign-up fee in return. That is, Ackman alleges that Herbalife distributors are only making money by taking one-off payments from new distributors, which is obviously not a sustainable business strategy over the long run (how will Herbalife’s distributors make money once all 7 billion people on earth have been recruited, if it can’t make money by actually selling its goods?). Ackman wants the authorities to investigate Herbalife’s business model. Others, like Carl Icahn, have come to Herbalife’s defense, saying that Ackman’s allegations are misplaced and, more, since these false allegations have unjustly driven the company’s stock-price downward, the company is now a very good buy.

This story will, no doubt (as is every story’s wont), continue to unfold. But I wanted to use this opportunity to explain and explore the basic theory of short-selling in financial markets. Short-sellers don’t have a good reputation with the companies they target, or with members of the public who think that short-sellers hurt those companies, profit from others’ losses, or hurt the market. But I want to argue that short-sellers play a very valuable role. This post will have four basic parts: (1) I will explain what short-sellers do, emphasizing that they do not directly ‘take capital away’ from companies, and therefore do not directly hurt them. (2) I’ll argue that we as a society do not want the stock market just to go up and up as high as possible, but, rather, we want it to be correctly priced. In part (3) I’ll combine points (1) & (2) to argue that short-sellers play a valuable role. And (4) I’ll caveat my roseate view, and acknowledge and address some criticisms of short-sellers.

***

(1) What is short-selling? The basics: Suppose you have a very good reason to think that a stock is underpriced — that, all things considered, it will return more than the market rate in the future. What should you do? Obviously, you should buy it, which amounts to placing a bet on the stock’s rise. Colloquially, we call this ‘taking a long position’ in the stock. Now suppose that, after a few years, other investors fall in love with the stock, and, now, you think it’s overpriced. What do you do? Obviously, you sell. But what if you see a stock that is overpriced, but you don’t own the stock in the first place? What do you do? It would obviously make no sense to buy it in order to then sell it — that would just incur two fees from your broker. So what can you do? Is there any way that you can bet on the decline of a stock’s price if you don’t own the stock in the first place (or bet on its decline beyond just selling off all of your shares)?

As you’ve probably guessed…yes, you can short-sell or ‘short’ the stock. How do I do that? Technically, what happens when I short a stock is that I borrow it from someone else for a contracted time and at a contracted price (colloquially, we call this ‘taking a short position’ in the stock). This allows me to profit from the stock’s decline over the period of the contract. Here’s how it works: Say that stock in QWERT is trading for $100 a share. I could pay somebody else $10 to ‘borrow’ their stock for 1 year — if they expect the stock to rise, stay flat, or even only fall a little bit, then this is a great deal from their perspective. Then, I could immediately sell the stock at the market rate of $100. Then, at the end of the year, if the price of QWERT’s stock has declined to, say, $70 a share, I could repurchase the stock at this new, lower price, before returning it to the party I borrowed it from. So I paid $10 to borrow it for the year, sold it for $100, and then bought it back for $70 — I made a cool $20 while effectively investing only $10 of my own money for the year. (Modern markets are sufficiently sophisticated that I don’t actually write up individual contracts to borrow every stock I short — I can do it with a click of a button. But this transaction is legally happening somewhere underneath my click.)
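The arithmetic of the QWERT example, spelled out:

```python
# P&L of the hypothetical QWERT short described above.

borrow_fee = 10.0        # paid up front to borrow the share for a year
sale_price = 100.0       # the borrowed share is sold immediately
repurchase_price = 70.0  # bought back a year later to return to the lender

profit = sale_price - repurchase_price - borrow_fee
print(f"Profit on the short: ${profit:.2f}")                        # $20.00
print(f"Return on the $10 put at risk: {profit / borrow_fee:.0%}")  # 200%
```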

If I’ve belabored this explanation a bit, it’s because I want to make clear a couple of key points: First, a ‘short position’ (just like a ‘long position’) is simply a transaction in secondary financial markets between consenting adult investors that doesn’t directly impact the capital that the company itself can access. A short-sale is the flip-side to any long position in a stock — long investors are gambling that the stock is underpriced, while short investors are gambling that it’s overpriced. What does this mean? Well, first it means that the common financial metaphors that compare short-sellers to sharks and predators are misleading. Short-sellers aren’t hurting other investors without their consent — when you borrow a stock to short it, your counterparty knows exactly what you’re doing, and makes a deal with you anyways, because s/he disagrees with your assessment. And short-sellers don’t directly harm the businesses they target (N.B. I’ll caveat this later). A company gets the equity capital that it needs in order to grow and function from its initial public offering (IPO) and other direct share offerings. But as soon as a company sells shares to the public, the money it receives on the sale belongs to it. So increases and decreases in the price that those shares trade for in secondary financial markets (i.e., fluctuations in the stock price) have no direct effect on the company’s store of and access to capital. To repeat: The equity that gets traded in secondary financial markets is completely distinct from the equity that is on the company’s Balance Sheet.

So what is short selling? Here’s another way to define it: It’s a legal transaction in which I’m a nice guy who takes the opposite side of two trades, paying a fee to a guy who wants to loan out his share for a year, and selling to a gal who really wants to take a long position in the stock. If I’m lucky, I make a profit on the trades. If I’m not, he and she do. The company’s day-to-day operations are usually completely unaffected by my trade.

***

(2) What do we want the stock market to do? Some theory: When I was a wee lad, before I understood basic financial theory, I thought of the stock market — as represented by the S&P500 index charted on TV screens and newspaper front pages — as a sort of agentic and determined creature, struggling admirably and valiantly against adversity to move uphill. The S&P 500 chart was, I thought, a measure of the economy as a whole, and the higher it climbed, the better the world was. And wee-me was not the only one to think this way. Indeed, there’s interesting research at the intersection of cognitive science and financial theory that shows how even sophisticated financial commentators imbue their descriptions of stock prices with normative and agentic metaphors — an increase in stock prices is described as “the market vaulted to new heights,” while a decrease is inevitably written up as “another slip in a faltering market.” The basic metaphor that this language embeds and subconsciously conveys to us is: “The market is a self-willed agent, and it is an excellent thing when it ‘rises,’ and a sad thing when it ‘falls.'”

But this is not actually a rational way to think about the stock market. We don’t necessarily want stock prices to ‘climb’ higher and higher. Rather, we just want stocks to be priced correctly. Why? Well, there’s one very obvious and practical reason, and another less-obvious but more fundamental reason. The obvious reason is that when asset valuations just climb and climb, that causes a bubble, and bubbles usually pop, and cause a lot of instability and hell when they do. Bubbles are bad on the way up and on the way down — on the way up, I look at my stock portfolio and think I’m wealthier than I truly am and spend way too much; on the way down, I get upset about how much wealth I’ve lost and become risk averse and don’t buy enough. But is ‘popping’ the only problem with over-high asset valuations? What if we had a magic-wizard policy that could stop bubbles from popping by banning short-selling, etc., to keep bubbles permanently inflated? Would super-high asset valuations be a good thing in this magical world? Even here, economists would say ‘no,’ because even in this magic world, over-high valuations lead to a ‘misallocation of resources.’

What does this mean? Let’s explore a very simple model. Suppose that I have $100 in savings, which I’m considering investing in the IPO of PetApps.com, a new startup website that sells Apps that help your furry friend keep tabs on what other pets in the neighborhood are up to. (Yes, I’m being derisive.) It’s a ‘roaring’ ‘bull’ market that everyone wants a piece of, driving up equity valuations ‘through the roof.’ I realize that the PetApps.com IPO is overpriced relative to its true, fundamental value, but the market has so much ‘momentum’ that I can cash out of my investment while it’s still moving upwards. Should I invest in PetApps.com? Well, the answer depends on who’s asking the question. From my own selfish perspective, I should — I can make a profit by buying and selling quickly during this frenzy. But from society’s perspective, this investment would be a bad thing. Why? Well, when we say that shares of PetApps.com are ‘overpriced,’ we’re saying that, for their cost, these shares will not earn good returns in the future. In other words, capital invested in PetApps.com will not generate as much value as it would elsewhere; more colloquially, the management of PetApps.com is too incompetent to handle all that money wisely. That means that we would all be better off if I invested my cash in some company that was undervalued, or in municipal bonds, or even if I just spent it on a vacation now, which would generate income for airlines, etc.

To generalize this thought experiment: At any given time, we have a lot of good options for what we can do with our money. Given that, we don’t want to just put more and more value into any particular asset, because that would detract from the money that we could use for the other goods. Rather, we want to price each good correctly; this is what economists mean by ‘allocating capital efficiently.’ We as a society don’t just want company shares to sell for high prices per se during their IPOs — we want them to sell for correct prices, providing the company with exactly as much capital as it can use efficiently.

What about the secondary market (i.e., the buying and selling of stocks that you and I can do through E-Trade after the IPO)? As we noted above, trading in the secondary market does not directly affect the capital available to a company. So does it matter to the real economy? I think so. One way to think of secondary markets is as one big ecosystem that supports the ‘primary’ equity markets of IPOs. That is, primary investors in IPOs only invest in the first place because they are counting on the fact that they’ll be able to cash out by selling their shares, whenever they want, into a liquid market. If they made a wise investment decision during the IPO, investing capital in a company that went on to use it to do something transformative (like Apple), they’ll cash out into rising secondary markets, and make a killing. If they invested unwisely, wasting society’s scarce resources on Pets.com, they’ll lose a lot in secondary markets. Trading in secondary markets thus rewards and punishes investors for making efficient and inefficient investment decisions. It’s the ecosystem that is essentially supporting the basic business of getting good investments into good companies.

Also, companies often sell new share issues, well after their IPOs. Those shares will be sold at a price that reflects the total ‘market capitalization’ of the company, calculated as the share price times shares outstanding (i.e., if your total market cap is $100,000 on 100 existing shares, your stock trades at $1,000 a share, and that is roughly the price you can expect to get for each new share you issue, since new shares sell at about the prevailing market price). And so the same basic principles apply: We want secondary markets to price shares correctly, not highly, in order to prevent destabilizing bubbles, and to properly reward and punish good and bad allocation of capital. A rising S&P 500 is thus only a good thing if the S&P 500 had formerly been undervalued; a declining S&P 500 is a good thing if it had been overvalued. This is why the normative and agentic metaphors that our financial commentators use for the ‘climbing’ and ‘slipping’ market are problematic and misleading.
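To make the new-issue arithmetic in that parenthetical concrete, here is a minimal sketch in Python, using the hypothetical numbers above; the only point is that the secondary-market price pins down how much capital a follow-on offering can raise:

    # A hypothetical follow-on offering, using the made-up numbers above.
    shares_outstanding = 100
    market_cap = 100_000
    price_per_share = market_cap / shares_outstanding   # $1,000, set by the secondary market

    new_shares = 100
    cash_raised = new_shares * price_per_share           # ~$100,000 at the prevailing price
    new_market_cap = market_cap + cash_raised            # the old business plus the new cash

    print(price_per_share, cash_raised, new_market_cap)  # 1000.0 100000.0 200000.0

If the secondary market prices the stock too high or too low, the company raises more or less capital than it can actually use efficiently, which is exactly the misallocation problem described above.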

***

(3) Does short-selling help the market do what we want it to do? An argument: So let’s put the pieces of the puzzle together. How do short-sellers help support the economy? Well, the most obvious and commonly cited way is that they help provide ‘liquidity.’ If you want to buy a stock in financial markets, you’ll need to buy it from a seller — often a short-seller. But the more fundamental good they provide is that they help correct over-valued stocks. Recall that a short-sale is only profitable if the stock price actually does end up dropping, as the short-seller predicts. Short-sellers, by entering the market and becoming sellers, increase the for-sale supply of a stock, bidding its price down. Short-sellers also have an incentive to publicly reveal their short positions, to persuade other investors and so drive the price down further. Thusly do short sellers correct the prices of stocks that they believe are overpriced. This provides the indirect good of supporting the efficient allocation of capital, as discussed above. And often it has more direct, tangible benefits. Short-sellers have historically been very good — often better than the official regulators — at sniffing out and exposing accounting fraud at large companies. The threat of drawing the attention of short-sellers can help scare management teams (whose compensation is typically tied to the stock price) into behaving, being honest and transparent with analysts, not paying out lots of company cash for ‘consulting’ from shell companies they themselves (the managers) own, etc., etc. Short-sellers may be the best and most effective regulators the market has.

***

(4) Can there be abusive, harmful short-selling? Some caveats: I’ll admit that my perspective here is generally pretty positive about the value short-sellers provide, and I hope I’ve persuaded my readers to share this general feeling. But I think there are special and marginal cases in which short-selling can be harmful. What are these? The first is, most obviously, when a short-seller is wrong and deceptive about his evaluation of a stock’s true value. Suppose Bill Ackman is completely deluded about Herbalife, and the business is truly sound. If this turns out to be the case, then the stock price will eventually rise again, and Ackman’s hedge fund will suffer greatly, punishing him for his false assessment, and Herbalife will be fine. But what if Bill Ackman, having realized that Herbalife was sound, quietly exited his short position, without telling anybody? He might profit from doing this, since Herbalife’s stock has already dropped a great deal just on his allegations. If this happened, Ackman would hypothetically profit from, essentially, tricking the markets; Herbalife would have wasted a lot of its valuable capital on its legal threats to Ackman, its PR campaign, etc. The markets would have, in this special case, rewarded the wicked and punished the good. Could this happen? Theoretically, yes, but in practice it’s unlikely. If he did this, Ackman would ruin his reputation and credibility on Wall Street for the rest of his life, which would be very costly to him in the end — so it’s very improbable that he or any other investor of significance would. Moreover, delusion and deception are native to the human condition, and hardly unique to short-sellers. And why should we say that erroneously shorting a stock is so much worse than erroneously boosting a stock?

In theory, as we’ve noted, a short-sale can only profit you if the company’s stock price actually does drop, as your short-sale predicts. So a short-sale does not reward you for just attacking fundamentally sound companies. Are there exceptions to this? One possibility is that there could, in some marginal cases, be powerful self-fulfilling prophecies. Again, as I’ve emphasized, trading equity and balance-sheet equity are distinct; so short-sellers don’t deprive companies of the capital they need to do business. But my understanding is that for some companies, lending terms and other obligations are tied to their shares’ trading prices. E.g., a company might be required to pay a higher interest rate on debt if its shares fall below a certain price. Or a bank that has entered into lots of derivatives contracts might be required to post lots of extra collateral immediately if its share price falls; posting this collateral could, in turn, force the bank to sell off other assets in a fire-sale, which would hurt its core business, initiating a downward spiral. These sorts of effects can be particularly harmful in major economic or financial crises — and so sound regulation should guard against these sorts of systemic and spiraling risks.

But we also hear this concern voiced outside of crisis situations. Some people worry about more mundane ways in which this self-fulfilling prophecy can work its evil. I.e., there’s a fear that short-sellers put heavy pressure on management teams to pay too much attention to short-term stock prices, which could cause them to lose track of sound management for the long run. Is there truth in this idea? Honestly, I’m pretty skeptical. On the most basic theoretical level, the value of a share consists in a slice of all the future profits of a company — so placing ‘the long-term value’ of a company in opposition to ‘its short-term stock price’ is a false dichotomy. More practically, it seems that it would require heroic skill to take down a fundamentally sound business just by psyching out the management. I suspect that this ‘concern’ about ‘harmful short-term pressure’ from short-sellers is largely mongered by management teams who aren’t very good at what they do, and want some pre-fabbed catch-phrases to take to the press, so that the big bad mean short-sellers will leave their company alone!

***

(5) Things I didn’t just write: This post has not argued that everything in the financial sector, and in our public-policy approaches to it, is a-okay. It has not argued that equity-trading is necessarily the most morally worthy of professions (I will also not say it is particularly morally unworthy). It has not argued that there is no excess of high-frequency trading in the markets. It has not argued that it is no problem that so much intellectual talent in the U.S. is pulled into the financial sector as against, say, engineering or teaching. It has not argued against circuit-breakers to prevent the massive crashes that can come from panic psychology or algorithm-driven trading. It has argued that short-sellers provide a valuable service that is essential to a modern economy, and that the language and metaphors we use to describe them are misleading.

–Matthew Shaffer

What’s so important and interesting about accounting

In January, I took a very short intensive introductory course in financial accounting. When I first signed up for it, I cringed to think what my old Nietzsche-thesis advisor would think of such a practical and putatively boring endeavor. But I actually – I really mean this and I’m not just writing this to impress some future employer – found it very intrinsically interesting. So this will just be a brief post in which I’ll share what I learned, by telling you what’s so interesting and important about accounting. After that, I want to give a brief intro to some of the basic ideas and concepts of accounting, partly because I had been frustrated, when I first started the course, that they are not usually explained well to people who are not already familiar with the field.

So first, what is financial accounting? I would define it as the set of rules and concepts we use to prepare relatively simple financial statements that capture or represent important truths about a firm and convey them to outsiders—both what the firm is worth now and what it’s doing on an ongoing basis. This definition suggests both (1) why accounting is important and (2) why I think accounting is interesting.

First, financial accounting is very important because the information that firms convey to outsiders determines whether, how much, and on what terms those outsiders lend to or invest in those firms. We as a society have limited amounts of capital to invest and lend. So we have an interest in that capital being used very wisely. We want it to be lent to or invested in firms that will use it productively, doing innovative and transformative things, and not lent to or invested in, e.g., hopeless companies overseen by feckless managers who are desperate for another lifeline when, in fact, their business models are outdated. The world would be better off today if more capital had gone to Apple and less to Pets.com during the 1990s. We also want investors and creditors of firms – after they have invested or lent – to continue to have an accurate picture of what’s going on inside a firm, so that they can monitor and pressure managers to behave and do well. In this vein of thought, it’s really not an overstatement to say that modern capitalism—in which public companies, owned by outside shareholders, must compete in capital markets to access the capital they need to grow—depends on good accounting. (Lastly: we as a society also increasingly want information about things like, e.g., how a firm is affecting and harming the natural environment, so we can figure out how best to regulate and efficiently abate these costs—this will become a more significant part of accounting in the future.)

Second, accounting is interesting because capturing and representing the truth about a firm is an intellectually challenging and fraught endeavor. The fundamental truth about a firm is actually fairly chaotic—a million different things of varying importance are going on at once—and we need to figure out rules for distilling some simple but accurate summary from this chaos. Accounting is in this sense a philosophical enterprise. The world itself does not label things as ‘assets’ or ‘expenses’; rather, we humans decide which labels we apply to which things; and we set rules for doing so based off of imperfect intuitions and ideas that involve human social ideas like justice (i.e., we want accounting standards that will promote fair and beneficial outcomes for society as a whole via efficiency and transparency) and conservatism (i.e., we want standards that will prevent managers of firms from doing self-servingly optimistic reporting, because we think that people can be selfish). I think it’s also helpful to analogize accounting to statistics, which is also about distilling useful trends from the chaos of data. How tall are women compared to men? Obviously, the fundamental truth is that there are 3.5 billion+ men in the world, each of a different height; and 3.5 billion+ women, each of a different height. Chaos, in other words. But if we have a research project that hinges on the relationship between gender and height, we have to find some simple way to describe the general relationship between these two populations. Statistics provides us with a way to extract out of the real world some useful artifices: The heights of the “average man” and the “average woman” (neither of whom actually exist as real things in the real world), and the standard deviation of each population—four simple numbers that capture most of what we need to know.

That’s why there’s actually a surprising amount of contention in accounting. For example, in the U.S., publicly listed firms issue financial statements according to U.S. GAAP (Generally Accepted Accounting Principles); in the EU, most countries require their firms to use IFRS (International Financial Reporting Standards). The U.S. has been planning, for years, to ‘converge’ its accounting standards with the IFRS—but this convergence has been slowed and stopped at various points, due to ineliminable disagreements. If representing the truth about a company were a simple, scientific enterprise, this would not be the case. Both U.S. GAAP and IFRS are constantly updated to keep up with business and financial innovation. How do firms account for some new complicated financial transaction, when the underlying goods don’t have the words ‘asset’ or ‘expense’ or ‘liability’ branded on them? That’s up to the bodies that oversee GAAP and IFRS—and both bodies pay lots of very smart people very good money to debate these rules all year.

A more practical introduction to financial accounting:

In practice, financial accounting results in the production of four financial statements. These statements are produced by accountants within a firm, and checked (or ‘audited’) by independent accountants outside the firm, and then disclosed in companies’ official public filings, such as quarterly and annual reports. Under GAAP, these four financial statements are: (1) the Balance Sheet, (2) the Income Statement, (3) the Statement of Shareholders’ Equity (sometimes presented as a Statement of Retained Earnings), and (4) the Statement of Cash Flows. The most important, in my view, are the Balance Sheet and the Income Statement; so I want to just describe the basic concepts of, and sources of confusion around, these two financial statements, and then describe the basic idea of the other two more briefly.

The Balance Sheet

The Balance Sheet is supposed to capture what a firm is worth at a single point in time. It reports the company’s Assets, its Liabilities, and its Shareholders’ Equity. The Balance Sheet is based around the “accounting equation,” which you may have come across: Assets = Liabilities + Shareholders’ Equity. To understand this, you first need to know that Shareholders’ Equity, in accounting, is not the equity that gets traded in stock markets. Rather, Shareholders’ Equity in accounting is an accounting contrivance that is sort of defined as Assets – Liabilities. This means that the accounting equation is a simple identity. I just wanted to clarify that up front, because it tripped me up for two whole days when I first started with accounting, and not every textbook conveys it explicitly. Now, I think the best way to make the accounting equation clear, from here, is to illustrate it with a stylized story:

Suppose you start your own company. At the beginning, you invest $10,000 of your own money. The $10,000 you invested immediately becomes an asset of the company—cash on hand that the company can use. And because you, the owner, have invested this money yourself, and not taken out any loans, that $10,000 in assets is your equity in the company—if you shut the company down tomorrow, you could take the whole $10,000. So when you first invest $10,000 in your own company, the company has $10,000 in assets (cash) and $10,000 in equity, with no liabilities. $10,000 = 0 + $10,000. Get it? Now, suppose your next move is to pay $10,000 up front to rent a storefront for two years. You might think this $10,000 payment is an expense, but on the Balance Sheet, we consider this ‘prepaid rent’ an asset, because you’ll be able to use that storefront over the next two years in ways that will help you earn money. (Prepaying for rent is, in this sense, conceptually similar to buying, say, an annuity—both will pay out income for a set period, so both are assets.) So what did we do to the Balance Sheet and accounting equation? We just changed $10,000 worth of the asset ‘cash’ into $10,000 worth of the asset ‘prepaid rent.’ So the accounting equation is still at $10,000 = 0 + $10,000; on the Balance Sheet, all we did was change the name of the asset. How do we know that the ‘prepaid rent’ is truly worth $10,000? We don’t. In fact, you might hope that you’ve gotten a great deal on this storefront, and it’s truly worth $12,000. But we can’t just let you report the value of an asset at what you think its true worth is—you’ll probably inflate the value of all of your assets if we do. So GAAP requires you to be conservative, and report the value of your assets at the cost of their purchase—and to hold onto the receipt so you can prove it.

Next, suppose that you now need to buy inventory to fill up your store with goods that you can sell. You don’t have any cash left, so you go to the bank, get a $10,000 loan, and then use that loan to buy $10,000 worth of inventory. What just happened to our accounting equation? Well, inventory is an asset, because you can sell it to generate income, and the loan is a liability, because you’re liable for paying it back to the bank. So now you have: assets of $10,000 in prepaid rent and $10,000 in inventory… a $10,000 liability… and $10,000 in equity. $20,000 = $10,000 + $10,000. The accounting equation is still in balance. Get it? Now, things will get slightly harder. Suppose that over the first year, you sell half of your inventory for $25,000. What do we report at the end of the year? Well, since you’ve sold half your inventory, what you have left is only worth $5,000; in addition, you’ve now used up half the value of your prepaid rent. So those two assets are now worth only $10,000 combined. Meanwhile, you’ve just earned $25,000 in cash—an asset. So your total assets are now worth $35,000. But you still owe the bank $10,000, a liability, so your equity in the total assets owned by your company is now $25,000. And it makes sense that your equity increased by $15,000 total over the course of the year, because you just earned $25,000 by expending $10,000.

So you can see how, here, the accounting equation Assets = Liabilities + Shareholders’ Equity, must always be true, simply because of how we’ve defined the terms. It’s an identity. When you purchase assets in the first place, that purchase must have been financed by either debt or equity; if you purchase a new asset using the income you generated (i.e., reinvesting earnings), then that income has technically flowed through equity (since owners are entitled to profits), and so, again, your assets increase by the same amount as your equity. Hopefully you can use your imagination to see how this identity will still hold up when, e.g., the owner sells her shares to the public in an IPO; or the company has negative income for a year (say, using up $5,000 of prepaid rent and $5,000 of inventory, and bringing in only $8,000 in revenue, thereby reducing Shareholders’ Equity by $2,000).
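If it helps to see the whole stylized story in one place, here is a minimal Python sketch of it, with equity computed as the residual, which is exactly why the identity can never break (the transactions and numbers are just the hypothetical ones from the story above):

    # The stylized store from above, tracked transaction by transaction.
    assets = {"cash": 10_000}                    # owner invests $10,000 of her own money
    liabilities = {}

    assets["prepaid rent"] = assets.pop("cash")  # prepay two years of rent: one asset swapped for another
    assets["inventory"] = 10_000                 # inventory bought with a bank loan...
    liabilities["bank loan"] = 10_000            # ...which is a liability

    # Year 1: sell half the inventory for $25,000 cash; half the prepaid rent gets used up.
    assets["cash"] = 25_000
    assets["inventory"] -= 5_000
    assets["prepaid rent"] -= 5_000

    # Shareholders' Equity is defined as the residual: Assets minus Liabilities.
    equity = sum(assets.values()) - sum(liabilities.values())
    print(sum(assets.values()), sum(liabilities.values()), equity)  # 35000 10000 25000

At every step the equation balances, because equity is simply whatever is left of the assets after the liabilities are netted out.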

I’ve skipped over pretty much all of the actual details about how you put together a Balance Sheet—how you ‘depreciate’ the value of an asset over time, etc. But I hope I’ve conveyed the conceptual gist. The Balance Sheet is supposed to capture the value of a firm at a point in time—what assets does the company control, what portion of the value of those assets is owed to creditors, and, hence, how much of that value do we say belongs to equity owners?

But as I hinted above, Balance Sheet ‘Equity’ is not actually equal to the equity we’re used to—the equity that trades on stock markets at prices graphed on CNBC. In fact, usually they’re radically different. And this fact is a key to understanding the virtues and the limitations of the Balance Sheet. The difference means that investors do not value a company in the same way that the Balance Sheet does. I.e., the ‘market value’ usually does not equal the ‘book value’; or, investors disagree with the Balance Sheet about what a company is worth. Why is this the case? What accounts for this difference? In my understanding, there are two basic components to the difference:

First, the ‘true’ market value of assets and liabilities is different from their accounting or ‘book’ values, and investors are interested in market values. For example, suppose your company bought an office building in Williamsburg, Brooklyn, for $4 million in 1993. You might be required to value this asset on your Balance Sheet at $2 million right now (the historical purchase price, minus 20 years of depreciation expenses); but because Williamsburg has gentrified and New York City in general has revived so much since 1993, chances are the actual market value of your building is well over $4 million. (Alternatively, if you bought a building in downtown Tokyo during the height of the Japanese real-estate bubble in 1988, chances are your balance sheet overstates the value of that asset—which is one reason Japanese banks keep holding onto old real-estate investments. Similarly, some U.S. banks have been trading at below their book values, suggesting that investors think they will have to recognize losses on many of the assets they purchased before the financial crisis.) So while we have good reason for accounting for assets at their historical cost—namely, stopping managers from over-optimistically over-representing the value of their assets—this requirement means that assets are not reported at their ‘true’ value.

Second, and more importantly, investors do not simply value a company according to how much they would get if the company were to liquidate today. Rather, equity investors are also interested in owning a slice of all of the company’s prospective future profits. And a company’s ability to generate those future profits hinges on things like (1) the reputation it has gained with customers and (2) the margins in the particular market space it’s entering, to name just two—things that are not captured on the Balance Sheet. That is, investors are interested not just in a company’s assets right now, but in its ability to generate income and profits on an ongoing basis into the future. And this, dear readers (thanks for your patience!), brings us to the Income Statement.

 

The Income Statement

The income statement is the financial statement that’s supposed to represent how the company is doing on an ongoing basis—the proverbial ‘bottom line’ refers to the company’s ‘net income,’ which is listed, literally, on the bottom line of the income statement. Because net income is earned on an ongoing basis, the income statement covers a period of time (the last fiscal year, in the annual report), rather than a particular point in time—i.e., it represents a flow, rather than a stock. The basics of the income statement are actually quite straightforward: for a firm, as for you and me, ‘net income’ is just revenue minus the expenses incurred in earning that revenue. The Income Statement just lists all the firm’s revenues, all of its expenses (including things like taxes), and then subtracts, and reports net income on the bottom line. It’s basically that simple. But there are a couple of extra interesting things we need to understand in order to get the significance of the Income Statement:

First, the Income Statement is fundamentally linked up with the Balance Sheet. For example, in each year, when you earn a positive net income (‘earnings’), you either pay out that income to owners as dividends or reinvest those earnings in the company, thereby increasing Shareholders’ Equity on the Balance Sheet. If you suffer a loss in a year, then the loss (by definition) reduces your assets without reducing your liabilities; so the loss is reflected in a decrease in Shareholders’ Equity. This is all laid out explicitly in the Statement of Shareholders’ Equity (see below); but it’s important to understand conceptually how the ‘slice in time’ valuation/financial position of a company  in the Balance Sheet is constantly being ‘updated’ by its flow of profits and losses as reported by the Income Statement. I.e., profits and losses flow through the Income Statement onto the Balance Sheet.

Second, the major counterintuitive thing about the Income Statement is that income is reported on an ‘accrual basis’ rather than a ‘cash basis.’ That is, to calculate your net income for a year, you don’t just subtract the cash you’ve paid out from the cash you’ve received (this would be ‘cash basis’); rather, you calculate the revenues you’ve ‘earned’ and subtract the expenses you’ve ‘accrued’. Let’s illustrate using the example company we worked with above, in the section on the Balance Sheet: At the beginning of the first year, I had paid out $10,000 for ‘prepaid rent,’ right? But on the income statement, we don’t record a $10,000 expense in year 1; we only record a $5,000 rent expense at the end of the year, because this is the amount of the asset that I ‘used up’ in earning my revenues in that year. Similarly, since I only sold half of my inventory during year 1, I only record half the cost of purchasing the inventory as an ‘expense’ on my income statement—because this is all the inventory I’ve ‘used up’ in earning my revenues. Finally, if I were to sell some inventory to a customer ‘on account’ (they promise to pay me in three months), I’ve already ‘earned’ this revenue, and so that makes it into the income statement even before they actually pay up.
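To make the accrual arithmetic concrete, here is a minimal sketch of year 1 for the example store, with a cash-basis calculation included purely for contrast (the numbers are the ones from the story above):

    # Year 1 of the example store: accrual basis vs. cash basis.
    revenue_earned = 25_000      # half the inventory sold, for cash
    rent_expense = 5_000         # half of the $10,000 prepaid rent 'used up' this year
    inventory_expense = 5_000    # cost of the half of the inventory actually sold

    accrual_net_income = revenue_earned - rent_expense - inventory_expense
    print(accrual_net_income)    # 15000, matching the $15,000 increase in equity

    # Cash basis, for contrast: the rent and the inventory were both fully paid for this year.
    cash_basis_income = 25_000 - (10_000 + 10_000)
    print(cash_basis_income)     # 5000, which understates how the business actually did this year

The accrual figure matches the change in Shareholders’ Equity on the Balance Sheet, which is the whole point of ‘matching’ expenses to the revenues they helped earn.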

Why do we do it this way? Well, there are a couple of theoretical and practical reasons. The most abstract theoretical reason is that if I have, e.g., a promise from someone that (s)he will pay me in one month, this promise is technically a financial asset right now, in that I have already secured a good guarantee of a future cash flow. And so, theoretically, I’ve impacted my company’s financial position (Balance Sheet) the moment I’ve earned the promise to pay from somebody else, and not in the moment when (s)he actually hands over the cash. Since the whole conceptual idea of the Financial Statements is that the Income Statement ‘flows in’ to the Balance Sheet, the Income Statement should reflect that I have earned the asset ‘promise to pay $X,’ right away—it shouldn’t wait for the exchange of one asset (cash) for another (the promise).

The more down-to-earth theoretical reason is that the Income Statement is supposed to give a good picture of what you can expect a company’s typical yearly income to be. If I invest $1 million in a building that I can use to earn $200,000 a year for the next 10 years, it doesn’t make sense for me to report an $800,000 loss this year, and a $200,000 profit for the next 9 years. I’m doing the same basic business in each year, so it would be a better representation of my true yearly income to recognize the building-purchase as an investment, and therefore to ‘allocate the expense’ of it over the next 10 years—meaning that I recognize $200,000 of revenues and $100,000 of expenses, for $100,000 net income, for each of the 10 years.
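The building example works the same way; here is a quick sketch of the two ways of booking it, assuming straight-line allocation over the ten years, as in the paragraph above:

    # A $1,000,000 building used to earn $200,000 a year for 10 years.
    cost, years, revenue_per_year = 1_000_000, 10, 200_000

    cash_basis = [revenue_per_year - (cost if year == 0 else 0) for year in range(years)]
    accrual = [revenue_per_year - cost / years for year in range(years)]

    print(cash_basis[:3])  # [-800000, 200000, 200000]: a huge 'loss', then nine fat years
    print(accrual[:3])     # [100000.0, 100000.0, 100000.0]: the same income in every year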

And another practical reason to do it this way is that it prevents certain kinds of opportunistic and deceptive ‘earnings management.’ Suppose that you’re a manager of a company. You’ve had a very good year, but you have reason to believe that things are about to turn sour. You might be tempted, if we used cash-basis accounting, to do some creative accounting: For example, you could purchase all of the inventory you’ll need for next year up front (right now); that way, you would increase your ‘expenses’ in this year, and reduce your ‘expenses’ in the next year, smoothing out your earnings over the two years. That way, next year, your investors might not catch on that, actually, your company is going downhill fast, and so you could exercise your stock options at a high price well into your company’s downfall. Good for you; bad for everyone else. See the problem? Accrual accounting—by forcing managers to ‘match’ expenses to the period in which revenues are earned—prevents some of this opportunistic timing of expenses.

So that’s the basic conceptual gist of the Income Statement. The actual implementation is tricky business. ‘Accrual-basis’ accounting has some big advantages, but the downside is that it is much harder to verify than ‘cash-basis’ accounting, where you could just check the statement against actual cash receipts. With ‘accrual-basis’ accounting, we need lots of complex and debatable rules about how to ‘allocate the expense’ of various investment-purchases over time. And we can’t just match these expenses to reality using tangible cash receipts. The rules that accountants consequently use can get complex, debatable, and subject to judgment and discretion. This is why accounting is a serious profession involving a serious professional exam, etc.

 

The Statement of Shareholders’ Equity

This is the simplest financial statement, and, in my view, one that doesn’t really convey much extra information, but is just needed to bridge a technical gap between the Income Statement and the Balance Sheet, by reporting dividends, retained earnings, and the company’s transactions with its own owners (new share issues and repurchases). Basically, the Statement of Shareholders’ Equity just explains any changes in the Shareholder Equity figure (as reported on the Balance Sheet) from one year to the next. That figure is affected in intuitive ways by the company’s net income, dividends, and share repurchases/issues. If a firm earns a positive net income, it can distribute those earnings to its shareholders as cash dividends (in which case the money is taken off the company’s balance sheet entirely, because the cash now belongs to whomever it was paid to—the company is a distinct ‘entity’); or it can retain and reinvest those earnings, which increases Shareholder Equity on the balance sheet accordingly. If a company suffers a loss in a year, this detracts from Shareholder Equity directly. So in most years the Statement of Shareholders’ Equity just reports earnings and dividends, subtracts the latter from the former, and adds the difference to the old Shareholder Equity number to get the new Shareholder Equity number. In years in which the company issues new shares, or repurchases outstanding shares, this also shows up on the Statement of Shareholders’ Equity.
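In other words, the statement is essentially a roll-forward of one Balance Sheet line. A minimal sketch of a typical year, with hypothetical numbers of my own (not the store example):

    # The roll-forward that the Statement of Shareholders' Equity performs.
    beginning_equity = 25_000
    net_income = 8_000        # flows in from the Income Statement
    dividends = 3_000         # paid out to owners, so it leaves the company entirely
    net_share_issues = 0      # plus any new shares sold, minus any repurchases

    ending_equity = beginning_equity + net_income - dividends + net_share_issues
    print(ending_equity)      # 30000: the new Shareholders' Equity figure on the Balance Sheet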

 

The Statement of Cash Flows

The final statement is the Statement of Cash Flows. What does it do? Well, if we want our financial statements to give a good picture of the truth about a company, this statement should hopefully plug any gaps of information that other financial statements left out. As the name suggests, the Statement of Cash Flows reports the flow of cash in and out of the company over the past year—how much cash did you have then?; how much do you have now?; what accounts for the difference?; where did it all go?; how much went to investments?; how much was paid out in operations? In theory, you can get all of the information that is presented on the Statement of Cash Flows from the other financial statements. But there are a couple of reasons why it is useful to have a separate Statement of Cash Flows that focuses just on this cash information:

First, if you’re doing business with another firm—lending to them, or servicing or selling to them on account—you’ll want to be paid in cash. And since the Balance Sheet and Income Statement are technically based around the inflow and outflow of assets—not just cash—they may not clearly present all of the information you need. For example, suppose you’re at a bank, and debating whether to lend to a hedge fund. The hedge fund might look great on the Income Statement (earned a 30% ROA last year) and great on the Balance Sheet (a debt-to-equity ratio of only 2-to-1). But if the hedge fund isn’t keeping much cash on hand—indeed is paying a lot of it out to post collateral—and many of its assets are illiquid investments in, e.g., Australian timberland, the hedge fund could easily get into a situation where it just couldn’t summon the cash to make its interest payments to you. Or suppose you’re doing some contract work for a startup firm that earned a lot of income last year, but hasn’t been able to collect the cash from the other firms it serviced—you might worry that, since they can’t turn their ‘accounts receivable’ into cash, they won’t be able to pay you cash for your work. So there are a lot of situations in which outsiders want to know about the cash situation of a company specifically; but the Income Statement and Balance Sheet focus on assets in general, not cash specifically. So the Cash Flow Statement plugs the gap there.

Second, and finally, the Statement of Cash Flows is also useful for monitoring and guarding against a couple of kinds of misbehavior related to imperfections of the other financial statements. When we went over the Income Statement, I explained why the Income Statement reports revenues and expenses on an ‘accrual basis’; when we talked about the Balance Sheet, I explained why it reports asset and liability values at their ‘historical cost.’ The way I think about the design of the Cash Flow Statement is that it is a useful check on the kinds of mischief and abuse that can come from those requirements in those statements. For example, because you must record assets such as buildings at their historical cost minus their depreciation, the ‘book’ value of these assets can be very different from their ‘true’ or market value. This provides a very ripe opportunity for earnings manipulation. Suppose your company’s basic business model is falling apart, and every day you’re losing money on your actual core operations—in this year, you’ll lose $6 million on operations. Suppose also that you own that building in Williamsburg whose ‘book value’ is now $2 million, but whose real, market value is some $10 million. If you sell off that office building, you can report an $8 million ‘gain’ on the sale, which will make up for your $6 million loss on operations, giving you $2 million in positive net income. With this phony liquidation, you can make things look good this year, increasing your assets, and bringing home big net income, even though this business model is clearly not sustainable. But whereas your Balance Sheet and Income Statement will look fine if you use this strategy, your Cash Flow Statement will reveal what you’re doing. The reason is that Cash Flow statements are divided into three separate sections: cash flows from operations (at top); cash flows from investing activities; and cash flows from financing activities. By clearly decomposing cash flows into these three separate categories (as opposed to the aggregation in the income statement), the Cash Flow Statement helps outsiders better monitor the success of your actual day-to-day operating activities.
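Here is a minimal sketch of that hypothetical year, decomposed the way a Cash Flow Statement would decompose it (I am treating the $6 million operating loss as a cash loss for simplicity, and assuming the building was sold for cash at its $10 million market value):

    # The phony-liquidation year, split into the Cash Flow Statement's three sections.
    cash_from_operations = -6_000_000   # the core business is bleeding cash
    cash_from_investing = 10_000_000    # proceeds from selling the Williamsburg building
    cash_from_financing = 0

    # The Income Statement aggregates everything: the $8M 'gain' on the sale offsets the operating loss.
    net_income = -6_000_000 + (10_000_000 - 2_000_000)
    print(net_income)            # 2000000: looks like a profitable year
    print(cash_from_operations)  # -6000000: the top section of the Cash Flow Statement tells the real story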

***

This post doesn’t even scratch the surface of the detailed processes through which accounting actually happens. I just hoped to convey the theoretical concepts and an outsider’s appreciation for (a) how the four financial statements work together to represent the truth about a firm and (b) how our economy as a whole depends on that. The big takeaways from this post, I hope, are (1) accounting is interesting, because it involves a lot of complex and philosophical questions about ‘what is the truth about a company’s value, and how do we capture and distill it in a few numbers?’; (2) accounting is important, because the rules we use to convey information about companies’ value will impact which companies we invest in and which management teams we reward with big bonuses, etc., and so it’s a foundational structure for our economy; and (3) learning some basic accounting is worthwhile, because if you really want to understand what’s going on inside a business, beyond borrowing a few lines from the business press, you need to be able to understand a company’s financials and what they reveal—and, more importantly, what they don’t.

The Solar Panel Trade War

There has been an ongoing conflict between the U.S. and China, taking place in WTO courts, private meetings, and policy debates in D.C. and Beijing, over China’s exports of solar panels to the United States. Basically, both China and the U.S. are WTO members. This means they’re both supposed to support free trade in the interest of overall global progress, and to eschew old-fashioned efforts to erect massive tariffs and trade barriers in the interest of helping their own domestic industries while hurting foreign ones. The basic theory, dating to David Ricardo, is that the world as a whole does best when everyone can freely trade, across borders, for goods in which other nations enjoy a comparative advantage, while exporting those goods in which they enjoy a comparative advantage. Trade barriers hurt human welfare by, most obviously, preventing us from making use of the best quality/best priced goods, if they happen to come from another country, and, over the long run, unnaturally distorting markets by artificially propping up particular industries in particular nations where they do not have a comparative advantage.

Recently, the U.S. has slapped substantial tariffs on imports of solar panels from China. China claims this is a WTO violation. However, the U.S. claims to be doing this only to cancel out the effect of Chinese state subsidies on these panels — we, Washington claims, are not the aggressors in a trade war, but merely responding to China’s aggressions. With our tariffs, Chinese solar panels now reach American shores at the same price they would have sold for in the absence of Chinese state subsidies. Now the international law aspects of this are beyond my ken — one interesting point of contention is that, given the state’s ubiquitous involvement in the economy in China, it’s hard to clearly delineate the boundaries between the state and private firms, and consequently difficult to prove just how extensive Chinese subsidies to its solar industry are. But I want to avoid this legal debate, and take an economic approach to this trade war. My bottom line is that I do think the U.S.’s strategic response to China is justified here, but it is justified for reasons that are more complicated than the economically naive would assume, so exploring this issue will be educational.

First, why is it a problem, from the U.S.’s perspective, that China is subsidizing its solar exports? Let’s abstract away from this question. Suppose China were to massively subsidize its green-tea industry. This would, in my view, be an unquestionably good thing for the American economy. The reasons are very simple. I like green tea. A lot of us do. If we can get more of it, cheaper, that makes us better off. It’s as if China is just giving stuff away to us. If your friend gives you something for free, that’s a great thing from your perspective; and it’s the same with another country. It’s really that simple. Would anybody get hurt by this subsidized Chinese green tea? Yes: American domestic green-tea producers, and them alone. They would have more trouble competing with the lower-priced Chinese green-tea exports, and likely go out of business. Still, from the perspective of America as a whole, this is a good thing. If China is going to permanently produce lower-priced green tea than our domestic manufacturers will, regardless of whether that low price has to do with state subsidies or natural comparative advantage (superior climate, lower agricultural labor costs, etc.), then there’s no good reason for green-tea manufacturers to exist in the United States.

How are solar panels different? In a lot of ways it’s the same; in some ways, it’s an even better deal. With Chinese taxpayers subsidizing their solar exports, it’s as if China is just giving us some solar panels for free. And, indeed, given the threat of climate change, the use of solar panels has large positive externalities–that is, benefits that accrue to society as a whole, not just to individual users of solar panels. Since the cheaper solar panels become, the faster and more eagerly they should be adopted by domestic firms, China’s subsidized solar exports are good for the environment, too. If China wants to pay us to buy their solar panels, and if American consumers want to use them to reduce their carbon footprints, why on earth would we want to stop any of them?

So this is a very serious argument that we should not only not fight Chinese solar subsidies, but actually be grateful for them.

But the Obama administration is pretty smart and environmentally conscious. So surely they must be aware of that argument, and must, in turn, have reasons of their own for slapping these tariffs on. What are those reasons? There are two possible interpretations: one cynical and one more intellectual. The cynical interpretation is that America’s domestic green-tech industry is politically aligned with the Democratic party, and an important source of donations, etc. So the Obama administration is seeking to protect its donors from foreign competition, even though America as a whole would benefit from being able to import solar panels more cheaply from China. As a pessimist/realist, I do think this is, descriptively, an important explanatory factor.

However, some more complex economic theory shows that there is a way in which Chinese solar subsidies could be bad for America in the long run, and protecting our solar industry could benefit us. This economic theory depends on the economics of “clusters,” which I have discussed in some of my other posts on economic geography.

Basically, the argument goes something like this: Energy is a very big deal. A bigger deal than green tea. The world runs on energy, and will require more and more of it as Asia and Africa develop. And because oil is non-renewable and threatens to exacerbate climate change, clean-tech is an even bigger deal. We can reasonably expect that in a few decades, green energy, particularly solar energy, will be incredibly important to the global economy. But this still doesn’t explain why we should dislike China’s subsidies: sure, clean tech is a big deal, but why not let the Chinese pay for this big deal thing as long as they want to?

This is where clusters come in. Industries tend to develop in clusters. The traditional example of this is how all of the Big Three auto manufacturers located in Detroit. Each had an incentive to locate there, because the city had workers with the knowledge and skills relevant to the auto industry — and those workers, in turn, had an incentive to stay in Detroit, where the auto employers all were. So this kept the American auto industry “locked in” in Detroit, and any other city that tried, after this cluster developed, to win back some of the auto business was very, very hard pressed, and generally unsuccessful (until Japan competed with superior technology and human capital, and then China, particularly Shenzhen, competed with lower labor costs). We can expect the same of green tech. As more and more firms attempt to enter the green tech market, they will disproportionately want to locate in geographies that already have the workers with skills and experience in that industry. And those workers will increasingly move to geographies where those firms are located. Also, the “knowledge spillovers” between green-tech firms in that region will help those green-tech firms just do better, helping beat back foreign competitors.

So whichever country gets a “first-mover” advantage in developing a green-tech industrial cluster can hope to get “locked in” to a pretty permanent advantage in that industry, which will provide lots of employment for that country. This is why the U.S. and China are engaged in a trade war over solar panels, and why they would not get locked in a trade war over green tea. Each wants to get a “first mover” advantage in producing the major green-tech cluster within its own borders, in order to lock itself into advantages for a century or more to come, in an industry that we can expect to be really huge in the future.

There are also higher-level clustering externalities involved here. Green-tech firms tend to employ lots of highly-skilled, highly-educated people, and so they both (1) attract highly skilled foreigners and (2) provide incentives to Americans to pursue higher skill and educational levels. So clusters in high-skilled industries cause their local area to become more highly-educated, which makes pretty much everything else there better as well. (This is why, for example, Rochester, N.Y.’s decline has not been as bad as Detroit’s decline–both of them have lost major employers, but Rochester’s old industries had attracted more highly skilled workers, who were thus better equipped to create new firms and employment opportunities in the old firms’ wakes.)

So overall, I’ll make an exception here to my generally laissez-faire trade principles, and say that this is a situation in which the U.S. is justified in raising tariffs–the long-term stakes are high enough to justify the short-term costs this will impose on the American economy. But I can’t say I’m certain this will work out in the end. There are good theoretical reasons for America to try to nurture its green-tech industry. But what’s good in theory often fails in the government’s hands. Witness the complete boondoggle of biofuel in the U.S., which, in my understanding, hard-core environmentalists think is very bad for the environment, because the subsidies are so large that they, e.g., incentivize firms to use large quantities of non-renewable energy sources to make ethanol — complete, utter insanity. Every reasonable person seems to agree it’s a big boondoggle. But it’s hard to get rid of ethanol subsidies, because once you subsidize it once, you create entrenched incumbent interest groups who use their lobbyists to keep the money flowing. That kind of public-choice problem is definitely a risk in using industrial policy to nurture a solar-energy cluster in the U.S., and so we should be intensely aware of that risk and guard against it as we move forward.

Does language reflect or reconstruct reality?

Recently, I had a Facebook dialog with an old Yale friend, who just graduated this year with high honors. In my senior year, I had written an exposition of Nietzsche’s philosophy of language, which my friend had asked to read. A little background: In my essay, I wrote that Nietzsche made persuasive arguments that language does not actually reflect nature as it is. Rather, all linguistic conventions are arbitrary, all of the words we have chosen to use are grounded in metaphor, and so our linguistic world is ‘anthropomorphic’ in that it organizes and taxonomizes the world according to human needs and wants, rather than objective reality. Finally, I argued that Nietzsche’s philosophy of language explained his aphoristic, literary style, because it suggested that a scientific, analytic representation of the truth about the world in language was impossible. So Nietzsche urged his “new philosophers” to speak like him — using aphorisms, ironies, puzzles, declarations, and stories to deconstruct old, conventional, hardened ways of seeing and speaking about the world, in order to force us off our conventional taxonomies and cliches, to explore new, more imaginative and original metaphors and ways of talking about the world. So I concluded that my exposition of Nietzsche’s philosophy of language argued against the form in which I presented it (an analytical, academic essay). I thought my friend’s questions were interesting enough that I might publish our dialog.

***

J: I’m writing a paper on Nietzsche, and I just read over your essay for some guidance on his philosophy of language. A few thoughts: if your non-ironical, clarity-aspiring paper recommends its own destruction, why is it worth reading? Is clarity a ladder to be kicked away? And why does the conventionality of language render it arbitrary?

…..

MS: I’m glad you’ve found my essay useful. I in contrast (I’m sure you know what I mean) cringe to read anything I wrote more than a few minutes ago, it included, and am blushing at the idea that you have a digital copy. But regardless… Toward a response to your questions: (1) There’s only an awkward half-defense. We really should just get what Nietzsche is doing in his later work, understand his implicit critique of language, and learn to speak in his style. But since we don’t all get that, my clarity-aspiring approach, complete with its appeals to our human weakness for taxonomies and structures, is needed to make the point clear, at which point we can finally move on. (2) So yes. A ladder to be kicked away, in your fine metaphor. (3) In my use of the words, it is almost tautological that the ‘conventionality’ of language renders it ‘arbitrary.’ A ‘conventional’ thing is, etymologically, something we humans have just ‘come together’ around—it is justified by broad agreement and choice rather than its status in nature itself. And since language is not tied to anything in nature, i.e., outside of convention, how language develops and evolves is necessarily the product of human arbitration. But I use ‘arbitrary’ in a non-normative, certainly non-pejorative, sense. Indeed, language is a useful metaphor for all other social conventions—they’re all arbitrary, and yet absolutely essential for our sanity and society’s functioning.

…..

J: Another thought on Nietzsche: are all linguistic “conventions” equally arbitrary? Suppose I make up a word – “grark” – to refer to my left toe, the moon, and the set of prime numbers under 30. Isn’t there a sense in which this doesn’t “fit” nature in the same way the word “leaf” does? I’m not convinced that every abstraction is equally violating of the natural order…

…..

MS: Tough, good, pressing question. And you’re surely, unavoidably correct—our linguistic taxonomies and categories definitely do fit the real, natural world, better than a randomly assigned lexicon would. But let’s work with your own example, the word ‘leaf’ — we use that noise to refer to both the photosynthetic organs of flora and sheets of paper. This makes sense to us, because both are thin and flat and light. But I can imagine an intelligent extraterrestrial for whom that pairing wouldn’t make sense. Maybe, in her world, flat and thin things are trivially common, but each flat and thin thing has a radically different function, and these differences are essential to survival. Her eyes and brain would not have evolved to taxonomize things according to a flat and thin shape as we do. So our pairing of the paper with the photosynthetic organ just might not register with her. Or maybe this extraterrestrial’s civilization has been technologically advanced for so long that their language has evolved to make no distinction between natural and artificial technologies. They refer to their solar panels as ‘leaves’ because both turn sunlight into usable energy — and they would think us curiously backward for not doing so.

So I guess our taxonomies mostly have some grounding in nature, but always have an anthropomorphic inflection as well.

…..

J: I’m on board with what you say here about language being grounded but anthropomorphic. It’s not clear, though, that you’re still being a Nietzschean. Awfully realist about properties, nature, some things “fitting” the world better than others, etc. Let’s talk more this summer.