Matthew Yglesias’ The Rent is Too Damn High

The latest short e-book in the wonkosphere is Matthew Yglesias’ The Rent is Too Damn High. I recommend it and following Yglesias at Slate. Yglesias is one of the clearest and most accessible economics writers to meet the criterion of seriousness.

The basic idea of the book is this: Right now, in the U.S., legislation and regulations at all levels of government are punishing and thwarting density. Local zoning boards and activists make it difficult for developers to build new high-rises in up-and-coming (‘gentrifying’) areas of cities. And, even where new buildings are permitted, government-mandated parking-space requirements and height restrictions artificially suppress supply. Meanwhile, other political constituencies protect policies like the Home Mortgage Interest Deduction, which incentivize and subsidize life in single-family, owner-occupied homes in the suburbs.

This has a number of bad consequences. Most obviously, it makes housing much more expensive. When the supply of housing in desirable urban areas isn’t allowed to expand to meet demand, prices necessarily soar. And insofar as people are priced out of those areas, they seek housing elsewhere, in suburbs and exurbs, thereby bidding up housing prices everywhere else, raising costs for the country as a whole. These rising housing prices take a big chunk out of people’s paychecks, which is bad on its own — Yglesias makes a compelling case that much of the American middle class’s economic gains in the past decades have been completely wiped out by rising housing prices. But they also have a lot of broader, subtler costs: Housing is most expensive in the most productive areas of the country, i.e., cities in the northeast corridor and California. Which means people are being incentivized by housing policy to move to less-productive areas, like the sunbelt (and yes, sunbelt cities are much less productive than Boston — if you thought the opposite, that’s just an illusion of relative movement over the past few decades). And that’s a national problem: If we care about Gross Domestic Product, we must care about the policies and incentives that influence individual people’s levels of productivity. It’s good for people to live in high-productivity areas: A young professional will learn more and make more connections and therefore become a better professional if he lives in Manhattan than if he lives in Phoenix; and the wages that people in service jobs earn move with the productivity of their region as a whole (which is why nannies in Manhattan earn much more than nannies in Phoenix, even with equivalent skill sets). So a country with better housing policy, and consequently cheaper housing, would be more densely concentrated around major high-productivity cities and wealthier as a whole.

Denser life is, additionally, greener life. It involves less driving, both by residents and by the suppliers of the grocery stores that feed them (on a per-customer basis). Heating 1,000 families in a single city high-rise is naturally more efficient (less carbon-intensive) than heating them all in separate homes. And, finally, the more densely we’re packed in major metropolises, the easier it is for us to preserve the great outdoors elsewhere — there is a contradiction in being a conservationist who thwarts development at dense cores. In short, if we just let people build housing more easily and more densely — much more densely — it would bring a host of benefits.

In laying this all out, Yglesias takes on some really elementary but common misconceptions about housing policies. As a center-left liberal, Yglesias is particularly miffed that progressive-type advocates are so often behind efforts to keep developers out of gentrifying areas. They, like Yglesias, say they want to keep housing prices low so that working and middle-class people can afford to continue living in their long-time neighborhoods. And they see that development and higher rents frequently go hand-in-hand. But, as Yglesias points out, their assumption, and their response, get it exactly backwards. The causation goes the other way around — developers come into a neighborhood because rents are rising; development per se does not cause higher rents (on its own, density should put downward pressure on rents). Keeping developers out is exactly the wrong way to keep housing prices low. This is basic economics — if demand for housing increases in an area, but supply is not allowed to expand, then rents will rise. While I think — and I had the sense Yglesias would agree — there are sometimes good reasons for preserving historically significant buildings and districts, we should be honest that we are doing so for aesthetic/nostalgic reasons, rather than dressing it up with a risibly opposite-of-truth economic justification like “modern high-rises cause higher rents because they’re modern!” Yglesias also exposes the lunacy of making so many housing decisions matters of policy when there doesn’t seem to be any good justification for not just leaving them up to markets — case in point, parking requirements. Some people will want to park their cars in their buildings; and developers and owners will respond to meet that demand. Other people won’t want parking. It makes a lot more sense to let markets — via the decisions and tradeoffs of individual people and developers — decide how much private parking there should be, rather than to impose a one-size-fits-all requirement on everyone.

Finally, in arguing all of this, Yglesias warms my heart by arguing that identity politics have hopelessly muddled the policy issues. Urban progressives really should promote more development — as a way to keep rents down, so middle- and working-class people can still live in cities if they choose (and to make us greener) — but they don’t, because they are instinctively distrustful of business, hence developers. Conservatives really ought to support freer housing markets and deregulation, but they don’t, because they are skeptical of a policy that smells too pro-urban. And intelligent discussion of the substance of housing policy is rendered impossible, as the conversation inevitably devolves to the level of symbolic identity politics — a big national debate over “Which is the better way of life: suburban or urban?”

Personally, I think both are the better way of life, for different stages — high density makes sense for early adulthood, when one is figuring out career paths and looking to meet new people; lovely leafy suburbs with sledding hills in the front yard and ferny woods in the back, and a public high school that sends dozens of kids to the Ivy League, are good for once we’re settled in our careers and raising kids. So the debate needn’t be about symbolic identity politics, or elevating one type of lifestyle over the other. It should actually just be about arranging institutions and policies to best allow our country’s hundreds of millions of people to coordinate and trade-off their individual preferences for where they want to live in the most efficient, broadly beneficial way possible. And that kind of coordination, broadly speaking, is best achieved through markets. Insofar as deregulating housing and densifying will also bear fruits in making us greener, then that’s terrific, too.

Two proto-critical thoughts: Yglesias chides ‘suburban conservatives’ for failing to work to dismantle local anti-density housing regulations, as the free-market ideology they claim to endorse would demand. But (1) as a political generalization, this is imperfect (which Yglesias may have conceded in passing — I don’t recall): the affluent suburbs of northeast-corridor and west-coast metropolises are majority-Democratic and, it certainly seems, have plenty of anti-density “let’s keep this town socio-economically homogeneous” laws on the books. And (2) Yglesias, as I remember, didn’t give much ink to unpacking the main, very well-known reason why suburbanites try to drive up real estate prices in their towns through anti-density regulations: they want to keep their public schools excellent, for their kids. If public education weren’t an issue, suburbanites might worry about new construction on their cul-de-sac, but not in the town (i.e., school district) as a whole. So our metonymic ‘suburban conservative’ could say in her defense:

“Sure, I guess I’m anti-free market when it comes to my school district’s housing regulations. But that’s just a response to government control over my number one priority right now — my child’s education. If, as I advocate, we had a more extensive voucher system, I wouldn’t have to ‘pay twice’ to send my kids to private schools. I could more easily afford to send them to schools where they could test in to classes with other gifted students. But, in lieu of that, I need to keep this school district good — which means keeping life here expensive.”

(Because, after all, what child of upper-middle class professionals isn’t “gifted?”)

My point really is not to say anything about public education per se (I’ll save it for another post). It’s just that the picture Yglesias presents looks a little different when we keep in mind that it is a reaction to the fact that we have limited choice, a lack of a market, in primary and secondary education. This isn’t a refutation, and there’s no obvious policy takeaway that comes out of this — it’s just something I wish he had chewed over a bit more.

Was WWII a natural experiment? Are there any such in economics?

(Note to readers: The first five paragraphs here are introductory, for readers new to economics. The analysis starts in the sixth paragraph, “So which is correct…”)

What should government do during a time of financial and economic crisis? Well, first, most people agree, the central bank should lower interest rates. The way this works in the U.S. is that the Federal Open Market Committee (FOMC), a committee of the Federal Reserve chaired by the Fed chairman, votes to set a target for new, lower short-term interest rates. But the Fed can’t just issue declarations that ‘set’ those interest rates. U.S. Treasury securities are bought and sold in competitive markets. So the Fed becomes a buyer — by purchasing lots of Treasuries, it increases the demand for Treasuries, which raises their price. A rising price of Treasuries (or any other bond) equates to a decreasing interest rate — i.e., if you could formerly buy a promise from the government to give you $100 in a year for $95, that means the interest rate was 5.26% ([100-95]/95); but if the Federal Reserve bids up the price of that promise to $99, your interest rate is 1.01% ([100-99]/99). This is why the price of a bond and its interest rate always move in opposite directions — they’re just two ways of describing the same phenomenon.
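The bond arithmetic above can be sketched in a few lines of Python (a toy calculation for a one-year, $100 promise, not a general bond-pricing formula):

```python
# Toy version of the arithmetic above: the yield implied by paying
# `price` today for a promise of `face_value` in one year.

def simple_yield(face_value, price):
    return (face_value - price) / price

# Buying a $100 promise for $95:
print(f"{simple_yield(100, 95):.2%}")  # 5.26%
# After the Fed bids the price up to $99:
print(f"{simple_yield(100, 99):.2%}")  # 1.01%
```

Note how the yield falls as the price rises: the inverse relationship the paragraph describes.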

Why does lowering interest rates on Treasuries help? Since most financial assets in the U.S. are in some way linked to interest rates on Treasuries (i.e., the “risk-free rate”), a lower interest rate on Treasuries decreases interest rates throughout the economy — on your car loan, on your mortgage, on your business financing. This has to do with simple supply and demand — if there is no longer a supply of super-safe Treasuries that provide a yield of 5.26%, people with loanable funds will have to seek out other (riskier) options — like financing your house — to get that 5% return. So the Fed’s purchases of Treasuries, called “Open Market Operations,” lower interest rates throughout the economy. (They also put more cash that can be lent to the private sector back into the hands of banks.) This makes it easier for businesses to get corporate financing to, say, expand their R&D facilities and hire more people to work in them. It makes it easier for you to afford a mortgage, which will make you more likely to buy a house, which, in turn, will be a boon to the construction industry, and its workers, who will consequently have more spending money, which they can spend on other industries that will also be boosted. So lower interest rates should bring more hiring and more spending, which should help the economy recover.

So why isn’t the Federal Reserve lowering interest rates right now? The trouble is, interest rates on U.S. Treasuries can’t really go any lower — they’re already near zero in nominal terms (and negative once you account for inflation). Moreover, given that interest rates have been so low for so long, some fear that the good investment opportunities have already been undertaken — and sustained lower interest rates will just lead to easy money and cheap credit for unworthy enterprises, leading to mal-investment that could lead to another crash.

So what do we do now? Well, once the central bank is ‘out of ammo’ in terms of interest rates, economists disagree on what to do. Some — generally referred to as Keynesians — believe that the government itself should effect ‘fiscal stimulus.’ That means that the government suddenly takes on lots of new projects, like rebuilding national infrastructure, which would employ lots of workers, and put money in their pockets, which would help fight against the recession. Every dollar spent by the government, they argue, boosts the total economy by more than a dollar, since the workers the government employs will spend their money on haircuts and televisions that will also employ other people, who will then spend their money on… etc., etc. So government spending has a ‘multiplier effect.’ The Keynesians also highlight the fact that, because interest rates are so low right now, the government can issue debt to fund these projects especially cheaply.
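The multiplier story is just a geometric series, which is easy to sketch in code (the MPC figure below is an illustrative assumption, not an empirical estimate):

```python
# Toy model of the 'multiplier effect': a dollar of government spending
# becomes someone's income; a fraction of it (the marginal propensity to
# consume, MPC) is re-spent, then a fraction of that, and so on.

def total_boost(initial_spending, mpc, rounds=1000):
    # Sum the successive rounds of re-spending (a geometric series).
    return sum(initial_spending * mpc ** n for n in range(rounds))

mpc = 0.5  # hypothetical: each recipient re-spends half of any new income
print(total_boost(1.0, mpc))  # approaches the closed form 1 / (1 - mpc) = 2.0
```

A higher MPC means a bigger multiplier, which is part of why empirical estimates of the multiplier loom so large in this debate.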

But others — referred to as ‘austerians’ or advocates of ‘austerity’ — disagree. If you undertake large fiscal stimulus, the austerians argue, businesses will see the government’s growing debt and consequently anticipate future tax increases that, in the future, will inevitably cut into the profits of projects they were considering starting. So they’ll invest and expand less. Plus, they argue that the ‘multiplier effect’ is overstated. In the chaotic political process, special interests will ‘capture’ the benefits of any stimulus program. Instead, they argue, governments should cut taxes and concordantly cut spending, to restore businesses’ confidence that the country will be economically stable and hospitable to business into the future, and that interest rates will be kept low. With that confidence, banks will lend more, and businesses will invest and expand and spend more. The debate between these two camps is, to put it mildly, voluminous and heated. Since the 2008 financial crisis, the Keynesian view has gained popularity among the economic commentariat, but countries have varied in their actions — China and the U.S. undertook large stimulus programs, while the U.K. has generally leaned more toward austerity.

So which is correct, stimulus or austerity? Well, as above, each position seems to have a logic to it; each seems reasonable in theory, in writing. So let’s ask another question — how would we go about testing each thesis? Well, we could find countries that have experienced crises, find out whether they implemented stimulus or austerity, and then see how they did. From there, we could figure out which fare better — countries under austerity or countries under stimulus. And this is the sort of debate we see on editorial pages every week — there’s a new economic report from one country or another, and the economic commentariat pounces upon it as proof of what they knew all along. But this isn’t exactly scientific. Republicans in the United States have argued that the disappointing economic recovery is proof that the Democrats’ 2009 stimulus did not work. But Obama’s defenders point out that we need to consider the counterfactual: They argue that the economy would be even worse, much worse, without the stimulus, and so the stimulus ‘worked’ in bringing us from ‘unthinkably horrible’ to merely ‘disappointing’ economic conditions. Some, such as Paul Krugman, argue that the economic theory centering on the ‘multiplier effect’ demanded much more stimulus than Congress passed in 2009, and that only a full stimulus would really work. So the U.S.’s difficulties are no proof that stimulus doesn’t work. What about austerity? Many countries that have implemented austerity have not consequently seen robust growth, and some have even experienced new crises (whether Greece actually effected ‘austerity’ is debatable; in 2001, Argentina’s attempts at austerity prompted a public backlash so extreme that the resulting instability and loss of confidence caused a financial crisis). Can we conclude that austerity is harmful? Not quite. We need, again, to consider counterfactuals.
Coming to either conclusion from these ‘data’ would be like concluding that, since death rates are higher among people who have gone to an emergency room lately than among the general population, emergency rooms must kill people.

So, again, how do we test the competing theses? Let’s make it even easier: How do we go about testing just the stimulus thesis alone? There are two difficulties: (1) Any time that a government enacted stimulus in response to a crisis, we’ll always have the counterfactual question, “But couldn’t things have been even worse without it?” (2) Modern states and hence modern fiscal policy are relatively novel — so our ‘sample’ of fiscal stimuli isn’t very deep. In economics we can’t rerun experiments multiple times, adjusting key variables in isolation and observing the effects, as we can in, say, chemistry. So how can we attribute causation?

The best we can do is what economists call a natural experiment — some historical event that ‘isolated’ the variable ‘stimulus’ from the context of ‘recession,’ so we can observe the former variable without contamination. That could be some set of circumstances in economically normal times that made policy-makers enact policy changes that were equivalent to a stimulus, but that weren’t a response to a recession. For advocates of the stimulus thesis, such as Paul Krugman, that natural experiment is WWII. From the stock-market crash of 1929, through the Great Depression, up until the U.S. began gearing up for WWII, economic growth in the U.S. was anemic or negative. In 1940, the U.S. started building new bases and munitions and planes, etc., in response to the possibility of war, not due to an economic downturn — and from there onward, for a long stretch, the economy took off. So when the variable ‘stimulus’ was isolated from the context of an already-advancing recession, it worked, and, according to the economists’ calculations, had a very large multiplier effect indeed.

Let me be clear that I take this argument extremely seriously. The case is compelling. In response to it, I want to call into question not this particular economic argument, but economic methodology — and hence, economics — as a whole.

Allow me to draw a comparison to one of my other pet preoccupations: exercise physiology. I love to run; this morning, I ran a 5K race. I ran my hardest and did decently well (17:10), but I could have potentially gone slower or faster. Why do I think that? Well, my pace varied throughout the race in ways that don’t seem to be explained purely by ability — at mile 2 I was running faster with less effort than I had been at mile 1.5, even though I wasn’t yet inspired by the sight of the finish line, and even though my muscles were more worn and soaked in lactic acid by that point. What can explain the difference? Not exercise physiology. Exercise physiology can give me a lot of very good basic guidelines for becoming a better runner — it can tell me what kinds of nutrients I need in my body, it can explain why I should mix fast track workouts on some days with long, ‘aerobic’ runs on others. But it can explain very little about why I felt I could run faster with less effort at mile 2. What’s the actual explanation? As far as I can tell, I sometimes have surges of adrenaline, where I feel motivated to catch the person ahead of me out of pride; while other times my adrenaline falls, without my control, as I get distracted by thoughts of other obligations and begin to think that the race doesn’t really matter. I feel the way I did at mile 2 when some inspiring thought wiggles its way into my head, through no action of my own. To stretch the metaphor, we could say that exercise physiology can describe my ‘potential output,’ and that potential output constrains how fast I can run, but my actual pace at any moment has to do with my motivation, my identification with my task, with choruses replaying in my head that bring goosebumps and adrenaline — what you might call, as a catch-all, ‘spirit’ — a term that evokes ‘animal spirits’ but goes beyond just financial swings, and also connotes human choice, agency, and motivation more generally.

My point here is that WWII probably had a lot of effects on ‘spirit.’ In fact, only an economist would really think of WWII as a clinically isolated boost in government expenditures, whose effects would therefore be replicable in any other context. American pride was pricked as never before by the attack on Pearl Harbor; millions of men were sent overseas, so those who stayed home no doubt felt they needed a really good reason why; families at home read harrowing letters from their soldiers; America as a whole thought, correctly, that the fate of the free world depended upon it; Americans saw, everywhere, government propaganda that reminded them, correctly, how much their work and productivity mattered; and toward the end of the war, America felt itself emerging as the new global superpower.

My thesis is that all of this could amount to, in effect, a big national adrenaline surge — a shift in spirits that made people suddenly identify with their work in billions of small ways that, in aggregate, had enormous effects on GDP. Workers became more productive, business owners suddenly became more ambitious, because, well, suddenly, it really mattered. I don’t have any data on national adrenaline levels; I can’t run any regressions on any of these variables or their effects; but I can’t believe they don’t matter, because, well, I’m human.

And what this suggests is that it’s really implausible to imagine that we can isolate the effects of ‘government purchases’ from the context of “The Greatest, Goodest War Ever Fought, From Which our Heroic Boys Emerged Victorious.” Have I demolished the case for stimulus? Hardly. I think the logic of the case for stimulus is compelling; but I think the most widely-cited natural experiment is neither. And this problem would seem to contaminate empirical economics research more generally: Just as my running speed depends on my mysterious state of feeling ‘psyched up,’ my productivity at work varies with mysterious states of alertness, interest, and enthusiasm about the idea of being a hard worker. Since macro-economies are built on micro-economic behaviors, it really could be the case that the reason one nation is wealthier than another is actually explained by how psyched up and enthused about hard work the people in each country are. But since we can’t measure these, the regressions economists run will always attribute the difference to some other, measurable variable.

This ties in to an interesting, telling surprise in development economics recently, which came through a paper by a few Dutch economists. For many years now, development economists have focused on “Randomized Controlled Trials” to test their theories of development. They find instances in which a form of aid intervention is distributed across a population completely randomly — that way, they can isolate the effect of the intervention itself. Consider the alternative: If we observed a bunch of localities, some of whose leaders required all children to go to school, and others whose leaders didn’t, and then saw that the former localities were wealthier by X amount, we couldn’t conclude that all of X was explained by the education-policy difference. Rather, it seems likely that the local leaders who required education would be more enlightened in other ways, too, which could explain most of the difference. Likewise, we can’t just observe the impact of a World Bank program that distributes aid non-randomly — because then it’s likely that the people who received the aid were selected for their exceptionally dire straits, or because they were more politically well-connected, both of which would bias the sample. So development economists more and more require “Randomized Controlled Trials” (RCTs), which are frequently compared to ‘double-blind’ experiments in medicine, to provide proof of the efficacy of any aid intervention.

But the problem, as the paper argues, is that RCTs are not actually double-blind. The subjects know they are receiving aid, and may change their behavior accordingly. There could be an ‘aid placebo effect.’ And, indeed, the economists find statistical evidence of just that: When groups were randomly selected, and some were provided with ‘modern cowpea seeds’ while others were provided with traditional cowpea seeds, the groups with the modern seeds had yields about 20% greater than the others. Amazing! But, more amazingly, this effect completely disappeared when the group with the modern cowpea seeds was not told they were getting modern seeds. In other words, it appears that the whole difference could be accounted for by behavioral changes — by farmers who worked harder and more optimistically at the thought that they had special, advanced seeds that would bring them enormous advantages.
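The logic of that three-way comparison can be sketched with placeholder numbers (the yields below are invented for illustration; they are not the paper’s data):

```python
# Decomposing a naive treatment effect into a 'real' seed effect and an
# 'aid placebo effect', using three randomized arms. All yields here are
# hypothetical placeholders.

def mean(xs):
    return sum(xs) / len(xs)

traditional    = [100, 95, 105, 98]    # traditional seeds
modern_told    = [120, 118, 125, 117]  # modern seeds, farmers told so
modern_blinded = [101, 99, 104, 97]    # modern seeds, farmers not told

# The unblinded comparison mixes the seeds' effect with behavior:
naive_effect = mean(modern_told) - mean(traditional)
# The blinded comparison isolates the seeds themselves:
seed_effect = mean(modern_blinded) - mean(traditional)
# Whatever remains is behavioral -- the 'aid placebo effect':
placebo_effect = naive_effect - seed_effect
```

In the actual study, the blinded comparison showed essentially no seed effect; nearly the whole naive gap was behavioral.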

In short, I’m joining those who critique the modern economics profession for becoming overly quantitative, while insufficiently nuanced and open in its approach to human behavior. It seems strange that economic theorists spend so much time making extraordinarily mathematically sophisticated extrapolations of the assumptions of microeconomic theory, when ordinary people would report that those assumptions are clearly untrue. It seems strange, too, that the Dutch study above caught so many (me included) by surprise — have we seriously never noticed, in our own work lives, how we all work harder and better and more productively (our yields increase, as it were) when we feel optimistic about the end result? How could we not assume the same of poor people who are told “these are modern seeds”?

***

(For those who are interested, my actual (currently completely unqualified) views on fiscal stimulus are: What fiscal stimulus? On the theory side of things, I lean more toward the stimulus than the austerian view, because I think the austerian assumptions about human behavior are unrealistic — i.e., I really doubt, and the evidence gives reason to doubt, that people fully take account of potential future tax hikes and decrease their expenditures to match the government’s increases. So an ideal stimulus could rescue the economy from a downward spiral of declining spending, declining profits, declining confidence, declining investment, job losses, and more declining spending ad infinitum. So I believe the theory that supports an ideal stimulus. But in practice, stimulus is rarely implemented ideally or effectively: Government can’t just execute an order that boosts demand for the economy as a whole. It has to procure funding for various agencies that have their own, independent motives. Indeed, a very large portion of the 2009 ‘stimulus’ money was not spent in even a moderately timely manner, because the people who had final control over its expenditure had their own goals and interests aside from simply boosting aggregate demand. And then, stimulus has a lot of bad second-order effects: Every ‘temporary’ government program creates constituents who will demand that its expenditures be continued even long after the ‘stimulus’ justification has disappeared — planting the seeds of a new crisis, years on, driven by debt and loss of competitiveness. At the same time, I think it’s likely a bad idea to take austerity to the point of laying off government workers during an already-declining economy, which could add to a downward spiral.
The correct position, then, seems to me to involve a mix of boosting demand through more aggressive deficit-financed tax cuts, having counter-cyclical social insurance policies in place, and using anomalously-low interest rates to undertake public-infrastructure projects that actually need to be done anyway and actually will be temporary. More importantly, politicians should take advantage of crises to undermine incumbent interest groups and cartels that drag on innovation and growth: A time when people are intensely worried about their pocketbooks is the right time to tell them that their local friendly doctor, yes him, is driving up medical costs by blocking reforms that would let nurses take on simpler tasks and reforms that would make some really simple and harmless medications available over the counter; or that their haircuts are so expensive partly because their state government got the idiotic idea that barbers need to be licensed. These kinds of growth-spurring reforms tend to get left out of the macroeconomic debate.)

Policy Rationality vs. Cultural Identity Heuristics

(Another note on method: I’ve continued to debate, with myself, what form this blog should take. Lately, I’ve started to think that much of it should be devoted to incredibly basic economics, as an introduction to — as cliche as it sounds — ‘the economic way of thinking.’ There are three reasons for that: (1) I don’t have the ability to write at a very high, technical level now, (2) some of the keenest interest in what I’ve been writing recently has come from friends who know very little economics and are full of wonder, and (3) I think I will learn a lot just by revisiting and trying to accessibly articulate the very basic stuff; since this blog is just getting started, now may be the time to do that. So here goes.)

***

“Should we defund NPR?”

There are two ways that we can go about thinking about this question. We can think about it rationally. Or we can think about it the way almost all of us actually do — using heuristics. What is a heuristic? It’s a sort of cognitive short-cut or rule of thumb. When we have a question that is very complex, abstract, and difficult to answer, we mentally replace it with another, easier question. When someone asks “Should we defund NPR?” our brain hears, “Is NPR good or bad?” And that question then becomes “Are my friends and I NPR-type people, or not-NPR-type people?”

So let me start out answering the heuristic question: I think NPR is not just good but great. I love NPR’s programming. I subscribe to 3 NPR podcasts, read one NPR blog, and tune in whenever I have access to a car. I think it’s terrific.

But I also love my neighborhood Indian restaurant, the cigar shop by Harvard Square, and red socks from J Press. I think they’re all terrific, too. But none of those three deserves public funding; and it would be naked selfishness for me to advocate subsidies for any of them.

So, the obvious point is, there’s a really basic but really important distinction between liking a thing and thinking it’s terrific, and having a good reason to think it should be government-subsidized. (Indeed, usually the way we express our liking for a good is to take money out of our own bank accounts to pay for it.) This is all very obvious when written down in a blog post, but the problem is that most of us do a really poor job internalizing this idea. And this is a big problem, because the way to get the correct answer to any question is to think about it rationally.

So let’s just take a few steps in that direction. “Should we defund NPR?” This question hinges on the question of “what deserves public funding?” Clearly, not all good things. Most good things, like Indian take-out and red socks from J Press, are provided by markets — by businesses eager to satisfy people’s likings at a price that people think is worth it (if you don’t think you’re getting more than you’re paying out for your Indian food, you shouldn’t buy it).

Is NPR different? Yes, of course. But how so? Is it different in a way that means it shouldn’t be simply left to the market? Well, it’s possible that there’s a “market failure” here. The classic example of a market failure is a lighthouse. Everyone enormously benefits from lighthouses — people on ships that don’t get sunk, and regions that therefore benefit from the trade they bring. But it’s hard to get the people who benefit to willingly pay for lighthouses — each ship captain can claim that she knew the waters so well she didn’t really need the lighthouse that night, while no individual townsperson can be directly charged for the more general prosperity the town has gained — because there’s no way to exclude non-payers from its benefits (i.e., it would be impossible to only cast light for pre-subscribed ships). So lighthouses won’t be adequately provided by markets. If a thing has enormous public benefits, but experiences a market failure, then almost everyone agrees that the government should step in to provide it at public expense. A classic market-failure/public good happens when a good is (i) non-excludable (there’s no way to stop people from enjoying its benefits) and (ii) non-rivalrous (my gain from the lighthouse doesn’t detract from yours).

Is NPR a public good? It is easily the most intelligent and informative radio we have in the U.S. But if intelligent and informative radio is valuable, why can’t we all — XM-radio style — just individually subscribe and pay for this intelligent and informative radio, just like we pay for intelligent and informative books? With the rise of satellite radio, there is no longer a tenable argument that public radio is a classic market failure — radio is now very much ‘excludable.’ And, indeed, since the average NPR listener is better-educated and more affluent than the general population, (1) shouldn’t she be able to pay for the content she desires? and (2) isn’t public funding for NPR, then, truly regressive, because it uses taxpayer dollars collected from all economic classes to satisfy the tastes of the best-educated and most affluent tier?

Again, maybe. But there’s also another response to that. Let’s make the implausible assumption that it costs NPR $2 per listener to produce its content each month. But suppose that individual prospective listeners are unwilling to pay this fee. Every individual person says that NPR is only worth $1 a month to him. Is this proof that NPR is not “worth it” — i.e. that the costs of producing it outweigh its benefits? Not always. There might be what economists call “externalities.” An externality happens when you are affected by my consumption of a good. When I smoke a cigarette, I’m not just smoking a cigarette — I’m also getting some tar in your lungs, too, which could drive up your medical costs. When I get immunized, I’m not just getting myself immunized — I’m also helping people who aren’t immunized, because I’m decreasing the likelihood of an outbreak that would affect them. Smoking has negative externalities; immunizations have positive externalities. If a good has an externality, its costs to society are different from its costs to you. In other words, if a good has a positive externality, it is worth more to society as a whole than you personally are willing to pay for it — which means, if we all just act individually and selfishly, then we won’t get enough of it.

Does NPR have positive externalities? It could. Maybe intelligent and informed people are more likely to contribute to society’s flourishing, by working and voting and interacting more intelligently. In that sense, maybe I benefit simply by you listening to NPR. In the example above, NPR costs $2 per listener per month, but each person only values it at $1. But suppose that society as a whole would gain benefits equivalent to $3 per month for each new listener because of these externalities. In that case, it makes sense for the government to fully fund NPR’s costs for a very basic reason: society as a whole can gain $3 of benefits per listener for only $2 of costs per listener; the market won’t provide those $2 of costs per listener; and government is supposed to look out for the interests of society as a whole. If we’re concerned that NPR mostly appeals to more affluent people, and that funding therefore effectively redistributes toward the top, then there’s another easy solution: The government can tax away some of the benefits generated for society as a whole, and use that extra revenue to fund other goods for lower-income people.
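The toy arithmetic above can be made concrete in a few lines of Python. All of the dollar figures (the $2 cost, the $1 private value, the $3 social value) are the made-up numbers from the example, not real estimates:

```python
# Toy externality arithmetic from the NPR example above.
# All dollar figures are the invented per-listener, per-month
# numbers from the text, not real estimates.

cost_per_listener = 2.0   # what it costs NPR to serve one listener
private_value = 1.0       # what each listener would willingly pay
external_benefit = 2.0    # spillover benefit to everyone else
social_value = private_value + external_benefit  # $3 total

# A market only provides the good if listeners' own willingness
# to pay covers the cost:
market_provides = private_value >= cost_per_listener     # False

# Public funding makes sense if total social value covers the cost:
subsidy_worthwhile = social_value >= cost_per_listener   # True

net_social_gain = social_value - cost_per_listener       # $1 per listener
```

The gap between `market_provides` and `subsidy_worthwhile` is exactly the market-failure argument: the good is worth funding collectively even though no individual will pay for it on her own.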

There are a lot more questions and complications here, including a lot of second-order effects. Consider the fact that there are no other intelligent radio stations on the airwaves today. Why is this? Ironically, part of the reason may be precisely that we have publicly subsidized NPR — thereby scaring away potential competitors who might’ve liked to compete with it. This competition, theoretically, could have forced NPR to be even better.

But the point is, there are a *lot* of difficult, complex, and abstract questions we have to answer in order to answer the question “Should we defund NPR?” And not one of those questions is “Are my friends and I NPR-type people?” Who you are, or how you identify, doesn’t matter — what matters is the correct answers to the questions above. There’s no inconsistency in a pickup-driving, rural-dwelling right-winger supporting NPR because he thinks it contains information that could make the nation more prosperous but that markets won’t supply; and there’s absolutely no inconsistency in a square-glasses Cambridge resident supporting defunding because she thinks it will invite challengers to compete for highbrow and ‘progressive’ niche markets. We really shouldn’t be surprised by either person.

But we are surprised by both people above. And, indeed, both people are in practice surpassingly rare. And the reason is that, when people think they are thinking about policy, they’re very rarely actually thinking about policy. They’re usually asserting their identities and symbolically defending the status of the groups they identify with. And NPR is held, in the popular imagination, as a symbol, a sort of religious talisman, of urban progressivism. And so, accordingly, more funding for NPR is held in the popular imagination as a kind of elevation of the symbols of this group, and defunding as their degradation. Questions about public goods, externalities, or potential competitors never enter the popular imagination — until an economist comes on the air, in which case she is assumed simply to be rationalizing her own cultural sentiments toward the talisman.

Do ‘identity politics’ and ‘symbolic politics’ sound like pet peeves? They are. Am I basically saying that most people are stupid? Well, yes and no. Most people are stupid about public policy, because learning about public policy takes a lot of time, and most people have jobs and relationships with real people and — equally importantly — very limited influence over public policy. So it’s just not rational for them to be rational about policy. This is what economists call “rational irrationality.” People are stupid about policy in the same way that I’m stupid about theoretical chemistry: When my brother talks to me about his research in that field, my thoughts go no further than, “That sounds cool!” I take his word for everything he tells me about chemistry, never really question it, and move on. You could say I use the “kin heuristic” to learn about chemistry. This is — does it sound condescending? well, it’s plainly true — precisely how most people think about policy and how to vote, except they rely on the social groups they identify with rather than just kin.

And so perhaps I should be more sympathetic toward people who are stupid about policy, if I expect my brother and his friends to tolerate my ignorance of chemistry. Maybe. But sympathy doesn’t imply permissiveness.

There are two big differences, as far as I see it, between chemistry and policy. The first, obvious, one, is that we’re all responsible for policy because we live in a democracy. Your policy knowledge has externalities in ways that my chemistry knowledge doesn’t. If you vote for shitty politicians with shitty policies, I’ll get hurt; if I still believe in phlogiston, I’m not sure that’s really a problem. The second is that identity politics are intensely divisive and — call me old-fashioned and sentimental — I’m still committed to the idea that we should be good neighbors to each other. There are people who won’t get lunch with you if you have the wrong political identity, because politics in their mind — whether they are cognizant of it or not — is about in-group and out-group tribal identity. If you support defunding NPR because you want to facilitate competition for the high-brow listenership, many potential friends will assume you support that defunding because you resent urban progressives — because they assume you use the same kinds of heuristics for policy as they do — and will be committed to disliking you as a result. People’s tribal political sensitivities can make life really unnecessarily and unfairly difficult for intellectually idiosyncratic people, those who will make themselves ideological minorities wherever they go. And the only cure to this, it seems to me, is to expect people to be able to converse rationally and dispassionately.

Barry Eichengreen’s Exorbitant Privilege

My apologies, loyal readers and devoted followers, for the sparse blogging of late. I’ve been out and about — some relaxing here, some other projects there.

Anyways, the latest econ read was Barry Eichengreen’s Exorbitant Privilege. Three stars. Monetary economics is among the hardest slices of the field for laypeople to understand. If you’re intermediate — you’ve taken, and learned from, a macro course and kept up with the Financial Times a bit since — you’ll find this book moderately challenging. If you’re advanced, it’s all old news. If you’re a novice, I’ll try to help (which might make this blog post a bit tiresome for the others).

The basic question the book seeks to answer is: “Will the U.S. dollar maintain its ‘exorbitant privilege’ as the international reserve currency?” And the way the book answers the question is: “Here’s the history of the international monetary system. Oh, and here’s a chapter of analysis, too.”

What is the ‘exorbitant privilege’ that the U.S. dollar and consequently the U.S. economy enjoy?  Well, first, here’s what it looks like:

The dollar remains far and away the most important currency for invoicing and settling international transactions, including even imports and exports that do not touch US shores.  South Korea and Thailand set the prices of more than 80 percent of their trade in dollars despite the fact that only 20 percent of their exports go to American buyers.  Fully 70 percent of Australia’s exports are invoiced in dollars despite the fact that fewer than 6 percent are destined for the United States.  The principal commodity exchanges quote prices in dollars.  Oil is priced in dollars.  The dollar is used in 85 percent of all foreign exchange transactions worldwide.  It accounts for nearly half of the global stock of international debt securities.  It is the form in which central banks hold more than 60 percent of their foreign currency reserves.

That last fact — the share of central banks’ foreign currency reserves — is the most well-known; it is what we mean when we say that the U.S. dollar is the ‘international reserve currency.’ Why do central banks around the world — the equivalents of the Federal Reserve, such as the E.C.B., the Bank of Japan, and the Bank of England — hold U.S. dollars? They do it partly in order to stabilize their exchange rates — and they do that in order to prevent, e.g., a domestic industry that primarily exports to Sweden from being devastated by a depreciation of the krona with respect to the domestic currency (which would make the domestic product more expensive to Swedes). But why do they mostly reserve dollars to stabilize their exchange rates? The answer is sort of recursive and in that sense quintessentially economic — they use dollars because all the other central banks are using dollars to stabilize their exchange rates, so by stabilizing its own currency relative to the dollar, each individual central bank can simultaneously stabilize its currency with respect to all the others, using just one metric and one kind of intervention. By buying dollars on the foreign-exchange market, foreign central banks can depreciate their own currency, and build the foreign reserves necessary to, at other times, sell dollars in order to appreciate their own currency. E.g.: When the Japanese yen was appreciating too quickly, in a way that hurt the competitiveness of its exports to the U.S. and elsewhere, the Bank of Japan would buy up U.S. dollars and dollar-denominated assets. After the bubble burst, the Japanese yen faced devastating depreciation as foreign investors fled from Japanese assets — and the Bank of Japan was equipped with dollar reserves that it could use to appreciate and stabilize the exchange rate in the event of a panic. (Am I belaboring these explanations? Apologies — mixed readership, I assume.)

But anyways: Another reason each central bank holds dollars is that it functions as a ‘lender of last resort.’ When everybody freaks out in a financial panic, the central bank, with its access to the public fisc, is supposed to be there to lend to banks and (sometimes) companies facing temporary liquidity problems, to help them meet their obligations without going under. And since (as above) most international business is conducted in dollars, a typical foreign business and the banks it deals with have liabilities in (1) their own domestic currency and (2) dollars. So, if the central bank is to be able to lend so that its banks and their business customers can meet those liabilities in a liquidity crisis, it needs to hold its own currency (which is trivial) and U.S. dollars — and not really any others. And, beyond that, foreign multinationals, as a matter of prudence, want to hold a lot of dollars because they mostly do their international trade in dollars.

So we’re slowly getting the picture of ‘exorbitant privilege’ — every other country in the world has to hold a lot of dollars and worry about its exchange rates with respect to the dollar, but the U.S. doesn’t have to requite these monetary affections and anxieties. There are a lot of reasons why this is a pretty sweet deal for the U.S. Just think about the most basic one: If another country’s central bank (or its private companies) wants more U.S. dollars to hold and we in the U.S. don’t want as much of their domestic currency, then they will trade real goods for our dollars — which are printed by the Treasury by fiat for a negligible cost. In a very real sense, the dollars that are simply held abroad — including, which Eichengreen didn’t mention, those held by international criminals, who trust the full faith and credit of the U.S. government beyond any other — represent the value of goods and services that we in the U.S. got basically for free from other countries. This is what economists call seignorage. There are other benefits, too: Our companies don’t need to expend as much intellectual effort or derivatives-brokerage fees on hedging the risks of currency movements, because they (1) raise money, (2) do international transactions, and (3) report to their shareholders all in the same currency. Next (and this gets a bit arcane) this demand for dollars also drives up the demand for highly liquid, dollar-denominated assets, such as U.S. Treasuries and mortgage-backed securities. This means that U.S. borrowers — including the federal government, state governments, government agencies, corporations issuing bonds, and individuals seeking mortgages — enjoy artificially lower interest rates on their loans (as we’ll see later, this had a pretty salient downside, as well). That is, we in the U.S. get to borrow extra cheap, because international investors want our assets not just on the basis of our ability to repay principal and interest, but because they use our assets for transactions with each other — if you imagine the supply and demand curves for loans in the U.S., the reserve value of dollars amounts to just a boost in supply.

Finally, in the now extinct Bretton Woods system of fixed exchange rates, the dollar had a different ‘exorbitant privilege’ (one which led to the coinage of that phrase in the first place): Because the dollar was used for international transactions, the U.S. could buy other countries’ exports in dollars. This meant that the U.S. as a whole could consistently run a large trade deficit with another country without depleting its reserves of that country’s currency. Other countries couldn’t do that. Suppose that Bretton Woods set the dollar-franc exchange rate at 1 to 1; suppose also that France constantly bought a lot more from the U.S. than vice versa. In that case, France would slowly run out of dollars, which would lead speculators to suspect that France’s central bank would no longer be able to maintain the 1 to 1 exchange rate — they would engage in a ‘speculative attack’ against the franc by simply selling off their francs in exchange for the dollar (at the 1 to 1 rate that the Bank of France was obligated to honor) in anticipation of an eventual forced depreciation of the franc. To defend the franc and honor its fixed-exchange-rate commitment, the Bank of France would have to keep selling off its dollars to these speculators — if the speculators won in this game of chicken, the Bank of France would eventually run out of dollars and be forced to stop honoring its commitment to exchange 1 franc for 1 dollar, which would effectively amount to a depreciation of the franc. To avoid this, in advance, the French government would have to make onerous efforts, accompanied by all kinds of economic distortions and inefficiencies, to prevent itself from running chronic trade deficits with the U.S. (All this is no longer a problem, in the exact same form, with our current system of a floating exchange rate between the dollar and the euro.) The U.S. never had to think about any of that, because it purchased its imports in dollars. Exorbitant privilege.
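The reserve-depletion dynamic in the franc example can be sketched as a toy simulation. Every number here (the starting reserves, the deficit, the speculators’ behavior) is invented purely for illustration:

```python
# Toy Bretton Woods dynamic: a central bank defending a fixed
# exchange rate bleeds dollar reserves through a chronic trade
# deficit; once reserves look thin, a speculative attack drains
# them faster, until the peg collapses. All numbers are invented.

reserves = 100.0         # the central bank's dollar reserves
trade_deficit = 5.0      # dollars lost per month to the chronic deficit
attack_threshold = 40.0  # below this level, speculators smell blood
attack_drain = 25.0      # dollars sold to speculators per month of attack

months = 0
while reserves > 0:
    reserves -= trade_deficit
    if reserves < attack_threshold:
        # Speculators sell francs for dollars at the fixed rate,
        # which the central bank is obligated to honor.
        reserves -= attack_drain
    months += 1

# When reserves hit zero, the bank can no longer honor the peg and
# the franc is forced to depreciate. Note how the attack phase ends
# the game far faster than the deficit alone would.
```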

So the bottom line is: The U.S. dollar is the international currency, and that brings us all kinds of awesome/unfair benefits.

So why do the other countries put up with their decided lack of exorbitant privilege? And how did we get here? The short answer is: World War I, and then World War II. WWI, happening soon after the establishment of the Federal Reserve, made the U.S. into a major international lender — got our assets circulating abroad, etc., which helped New York begin to compete with London as the financial capital of the world and the dollar to compete with the pound as the international reserve currency. After World War II, the U.S. emerged as a single economy producing more than 50% of global GDP. It was an oasis of stability compared to Europe, and Asia was still too poor (its political instability aside) to be a player. And, essentially, lots of countries wanted to curry American favor. So there were really no alternatives to the dollar as an international currency — not sterling because Britain was too sick, and not the Deutsche Mark because West Germany was too eager to win American favor given the looming threat of Soviet troops to its east.

But why still? The U.S. is much less than 50% of global GDP now, and will almost certainly continue to sink lower. Europe is safe and stable and it now has the euro (whose birth Eichengreen details). The answer, largely, is that the dollar has remained the international currency by default. That is, (1) it has simply enjoyed the advantages of incumbency itself and (2) credible alternatives have not yet arisen. Saying that the dollar enjoys the advantages of incumbency is another way of saying that “everyone uses the dollar as international currency because everybody else uses the dollar as international currency.” Even if you personally believe that the dollar isn’t the ideal currency for international transactions, you’ll be forced to use it insofar as your counterparties are using it — so everyone can get locked into using the dollar even when every individual would like a different system (a classic “prisoner’s dilemma”). Regarding the lack of alternatives: The euro has partly displaced the dollar in international reserves in recent years, but (recent events would seem to suggest with good reason) not very much — partly due to the inherent instability in having a monetary union without a fiscal union and partly due to the fact that Europe for demographic reasons is likely on the decline in the long term. The IMF’s Special Drawing Rights (SDRs) seem theoretically sound, but just haven’t caught on in practice. China is (have you heard?) really big and rising fast — but markets regard it as inherently unstable as a largely poor non-democracy, and its financial assets don’t trade internationally very much, so the RMB seems unlikely to replace the dollar anytime soon.
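The incumbency logic (“everyone uses the dollar because everyone else does”) can be written down as a tiny two-player game. The payoffs are invented, and, strictly speaking, this is a coordination game with a bad equilibrium rather than a textbook prisoner’s dilemma, since nobody profits by defecting unilaterally from the better outcome:

```python
# Toy coordination game for reserve-currency lock-in. Each of two
# countries picks a currency for its international transactions.
# Payoffs (invented) reward matching your counterparty; the
# hypothetical "alt" system is assumed better if both adopt it.

payoff = {
    ("dollar", "dollar"): (3, 3),
    ("dollar", "alt"):    (1, 1),   # mismatched: costly frictions
    ("alt",    "dollar"): (1, 1),
    ("alt",    "alt"):    (4, 4),   # better for everyone, in principle
}

def best_response(other_choice):
    """My best pick, given what my counterparty picks."""
    return max(["dollar", "alt"],
               key=lambda me: payoff[(me, other_choice)][0])

# If everyone else is on the dollar, my best response is the dollar
# too, so (dollar, dollar) is self-sustaining even though (alt, alt)
# would leave everyone better off.
```

The lock-in is visible in `best_response`: the dollar stays on top not because it is best, but because unilaterally switching away from it is costly.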

So all this leads to the big question that all of this history and theory is here to answer: What next? Could anything change all this? Yes: A U.S. fiscal crisis. As the U.S. debt increases, foreign investors may become increasingly nervous that the U.S. will attempt to ‘monetize the debt,’ inflating and depreciating the dollar in a way that reduces the real value of foreigners’ holdings of U.S. assets. This could lead to a sudden flight from U.S. assets, which would itself depreciate the U.S. dollar, which itself would fulfill those very fears. Such dollar instability would drive global investors to demand an alternative.

More generally, the system could shift just a little bit at a slower pace. Indeed, Eichengreen’s prediction is that the dollar will maintain its leadership while losing some of its dominance. The continuing decline of the U.S. as a share of global GDP, and the concomitant rise of Asia and Brazil, will create a more multipolar global economy, which will invite a more multipolar global monetary system. Combine those with more sophisticated financial markets that could make currency exchanges more efficient, and we can expect central banks and businesses to work with more diversified currencies and things like Special Drawing Rights over the long term. Will the loss of its privilege have some negative effects on the U.S.? Yes. Will those effects be disastrous? No.

So that’s the summary. Here are a few questions and Interesting Things:

Something I didn’t get: Eichengreen seems to imply, insofar as I understand him, that the U.S.’s artificially appreciated currency is a kind of privilege. But Econ 101 says, very plainly, that an appreciated currency is a boon for importers and a burden for exporters. Surely, Americans who have suffered with declining export-oriented manufacturing industries don’t consider themselves privileged. I don’t see how an artificially appreciated currency can be called a privilege for the economy as a whole — it would seem, rather, to just trade off between one group and another.

This being a book written within the past 4 years, the author is compelled to give His Angle on the Financial Crisis. In brief: the recent financial crisis was largely driven by the bursting of a real-estate and asset bubble that was, in turn, inflated by exceptionally low interest rates in the U.S. And to a certain extent, the usual explanation for these low interest rates — the Asian ‘savings glut’ — doesn’t quite explain it in full. After all, with interest rates in the U.S. so low, why wouldn’t the capital from gluttonous Asian savings go elsewhere, where it could earn a higher return? And the answer is that, as per above, this gluttonous capital wasn’t seeking out returns — in a way that would have led to its more efficient allocation — but was just looking for American dollar-denominated assets per se. Without the dollar’s status, we in the U.S. might not have enjoyed the privileges of cheap capital through the early 2000s and the exorbitances of 2008.

Final stylistic judgment: A lot of the writing — especially on the more arcane aspects of various failed attempts to establish monetary union in Europe before the final, successful effort — is, frankly, boring to the nonexpert who isn’t deeply interested in the question. (This is not a criticism of Eichengreen’s work — just a warning to the non-academic intellectual taste-bud.) Throughout, though, Eichengreen is a surprisingly good prose stylist for a monetary economist. And he has a sense of irony that made me LOL a bit. He quotes a British traveler who, upon arriving in the U.S. and trying to make some purchases, discovered that sterling had sharply depreciated over the course of his long voyage across the Atlantic: “Bit of a hold up, what?”

Banerjee and Duflo’s Poor Economics

First things first: Two thumbs up for Esther Duflo and Abhijit Banerjee’s Poor Economics. Read it. It’s a quick 260 pages that quite inexplicably manage to contain (1) the basics of the theory of development economics, (2) interesting recent research that complicates the basic theories, and (3) compelling and memorable stories taken from their actual interviews with actual very-poor people around the world.

The way the book proceeds, more or less, is to look at a problem of poverty — disease, malnutrition, under-education, lack of saving, and underinvestment — and ask three kinds of theoretical questions about it: (1) is there a ‘poverty trap’? (2) is there a demand problem or a supply problem? (3) why are the poor making the choices they are making with respect to this problem? Answering those three questions is essential to determining the right policy response to the problem. If we start with this theoretical approach, we can understand the rest a lot better.

(1) Poverty traps: Under ‘normal’ circumstances, economic theory suggests that the poorer you are, the easier it should be for you to get relatively richer. If wages are very low in your hometown, capital should flow in, driven by owners and investors who want to take advantage of cheap labor. And the poorer you are, the more ‘low-hanging fruit’ is available to you in terms of using extra wages to invest in extra education that can raise your productivity, etc. This is the basis for the belief that ‘economic convergence’ should happen pretty quickly and smoothly in an open global economy. But obviously it hasn’t always happened smoothly everywhere, most notably for the ‘bottom billion.’ Why not? There could be a ‘poverty trap.’ That is, perhaps a lower-middle income country should be able to ‘catch up’ to a rich country very quickly, but a very poor country below a very low level of income could get stuck. Think about how this could work on a micro-level: If your income is so low that you can afford little food and are very small and weak, then you may not be capable even of a basic factory job. But getting the factory job may be the only way to raise your income so you can afford more food. So you’re stuck. Determining where and when ‘poverty traps’ exist is really important because it has implications for proper development policy: If there is a poverty trap, a large initial, charitable investment to get the poor beyond the trap point is essential; but if there isn’t actually a poverty trap then the right response is probably just to put in place the right institutions and then get out of the way of markets. (The belief that the bottom billion are in a ‘poverty trap’ is generally associated with Jeff Sachs; the alternative view is generally associated with William Easterly.)
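The micro-level story above (too underfed to work, too workless to afford food) can be captured in a toy model of income dynamics with a threshold: below it income decays, above it income converges upward. The threshold, rates, and ceiling are all invented for illustration:

```python
# Toy poverty-trap dynamics, in the spirit of the nutrition example.
# All numbers (threshold, decay/growth rates, ceiling) are invented.

def tomorrows_income(income):
    if income < 10:                # too underfed to work productively
        return income * 0.9        # income slowly decays: the trap
    return min(income * 1.2, 100)  # otherwise, converge toward a ceiling

def income_after(years, income):
    for _ in range(years):
        income = tomorrows_income(income)
    return income

trapped = income_after(30, 8)    # starts below the threshold: sinks
escaped = income_after(30, 12)   # starts above it: converges upward

# A one-time transfer that pushes a trapped person past the threshold
# changes the long-run outcome entirely -- the "big push" argument:
with_transfer = income_after(30, 8 + 4)
```

The same starting point with and without a small transfer ends up in completely different places, which is exactly why diagnosing the existence of a trap matters so much for policy.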

(2) Demand or Supply: The next theoretical problem to consider when approaching a poverty problem is “is there a supply problem, or a demand problem?” For example, we know that there’s a high correlation between a country’s income and its citizens’ educational attainment. It’s easy, therefore, to assume that low educational attainment causes poverty, and so we can fix the problem by building more schools and paying more teachers — i.e., the assumption is that there is a ‘supply’ problem, namely an undersupply of educational institutions. But this might be wrong: It could be that there are too few opportunities for educated workers in the country to make investing in education an attractive option for parents and students. If your country doesn’t have the ability to employ lawyers and computer scientists, then it might actually make sense to skip school and just get to work from an early age — i.e., there could be a lack of ‘demand’ for education in the country’s labor markets. Obviously, the question of whether it is better policy to (i.) airlift teachers and schools into a poor country, or (ii.) focus on improving other institutions so that, one day, the country could employ computer scientists, hinges on whether its low levels of education stem from undersupply of education or low demand for educated workers.

(3) The Rational Poor: The above feeds into the most novel aspect of Banerjee and Duflo’s work — their efforts to speak to actual poor people and understand their reasoning for the decisions that economists, several levels of abstraction away, would normally consider plainly irrational. For example: It’s clearly irrational to get pregnant and drop out of school, yes? And so, if someone does get pregnant as a teenager, we must attribute it to ignorance or lack of access to proper contraception, both of which could be fixed by more and better aid workers, yes? Well, yes and no. Certainly there are areas in which contraception and knowledge are undersupplied. But Banerjee and Duflo, in typically provocative fashion, isolate a few Kenyan towns in which this just can’t account for the teenage pregnancy rates. They conclude that what is actually happening is that the teenagers there are actively choosing to get pregnant because, as they would themselves claim in interviews, the prospect of “getting a man to take care of them” is more attractive than the prospect of “continuing to burden their families” for the sake of a very uncertain education payoff. Does this mean that it is in vain to supply more contraception or information? Certainly not. It just means that undersupply of aid is not the only problem, and we need to take the poor’s choices and decisions more seriously and think about how those are affected by the broader institutional structure and opportunities that await them. It means we need to stop conceiving of the poor simply as hapless, uninformed victims.

Beyond those overarching theoretical questions and approaches, the book is largely just an assembly of a lot of the coolest latest research — including some amazingly clever experiments — in development economics. And so your correspondent presents various appreciations of the insights that stood out, in no particular order:

1. Banerjee and Duflo open the book with the provocative argument that the global poor generally aren’t actually desperate for more food. This sounds horribly cruel. But Banerjee and Duflo propose a test for this hypothesis: If the global poor were very hungry, they should generally use windfalls in their income to consume more calories. But this just isn’t the case. In fact, most such windfalls are used to consume tastier goodies, which, counterintuitively, in many of the studies they highlight, leads to reduced caloric consumption. The takeaway of this is not that the poor are okay, or dumb, or anything like that. The takeaway is that we need to get beyond the mindset of just expressing sympathy through familiar phrases, and actually try to inhabit the desires and motivations of the poor. And many of these poor would actually prefer a $100 T.V. set that could help distract from the boredom of life in an economically depressed region, where work opportunities are few and far between, to $100 of extra food. This all ties back into theme (2) above.

2. A lot of the ‘wrong’ choices the poor make have to do with the fact that we humans are social animals. The poor don’t actually spend their days thinking about how to maximize their productivity and effect economic convergence. They care about status within their community, saving face, doing what their neighbors do, observing conventions.

3. (Extremely broad point): Trust is an extremely, extremely valuable economic asset. Basic, simple immunizations for basic diseases, available at charitable health clinics, could be an enormous boon to poor people. But very many of the global poor refuse to take advantage of these immunizations, because (i) they do not have the rudimentary scientific understanding to get how they work, and (ii) they do not — often due to a history of political action in bad faith to, e.g., sterilize the poor — trust the public authorities to give them accurate information about the value of immunizations. We in the developed West sort of take for granted how much trust we have placed in public knowledge that we simply accept.

4. A major problem in educating the poor comes from a basic human cognitive flaw: availability bias. That is, the very poor are surrounded by very poor people like themselves, and they see images of very rich people in magazines and on posters and T.V. As a result, they are inclined to look at their children as a kind of lottery. Each child is likely to end up very poor, but just might be extravagantly successful. This inclines them to “pick a winner” among their children, and devote all their resources to him. This belief is, of course, wrong: There are very high returns to marginal investments in education even at a low level, and so poor families as a whole would be better off if they invested in their children more evenly. Schools for the poor have this same false belief, and do the same wrong thing: focusing on only the very best students in the class. Ironically, a false belief like “education is a lottery” can itself create a poverty trap.
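The arithmetic behind that claim is worth making concrete. Here is a minimal sketch, with an invented concave payoff function and made-up numbers (nothing below is from the book): if returns to education diminish with investment, splitting a fixed budget evenly across children yields more total payoff than betting everything on one “winner.”

```python
import math

def education_payoff(investment):
    # Hypothetical concave returns: big marginal gains at low levels
    # of investment, diminishing as investment grows.
    return math.sqrt(investment)

budget = 16.0   # total the family can spend (illustrative units)
children = 4

# "Pick a winner": spend the whole budget on one child.
winner_total = education_payoff(budget) + (children - 1) * education_payoff(0.0)

# Even investment: split the budget equally among the children.
even_total = children * education_payoff(budget / children)

print(winner_total)  # 4.0
print(even_total)    # 8.0
```

By Jensen’s inequality, the even split wins for any concave payoff function, not just the square root chosen here for illustration.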

5. Another problem for global education today and a generically Interesting Thing about the Modern World is that, for the first time in history, we have a lot of professions other than teaching that are demanding the skills of smart, bookish, analytic types who are not well-connected (especially women). There is a very real possibility that the global pool of teachers may be getting worse on average, just in terms of raw intelligence, as a result.

6. Basic feminism is a really big deal. Your correspondent is always self-conscious about saying this, because he fears coming off as one of those guys who, ironically, tries to advertise his feminism in order to seduce women. But seriously: It’s just a plain statistical truth. Women are at least 50% of the basic human capital and capability of any society, and a society that dehumanizes women or delegitimizes their talents is not only being intrinsically horrible and cruel, it is also depriving itself of 50% of the human resources that can contribute to its flourishing. A particularly memorable example from the book showed how, in a natural random experiment, some poor Indian households had both the husband’s and wife’s names on the title, while others had only the husband’s name. The former households did better on all kinds of measures of flourishing for generations down the line, just because that one little paper gave the wife dignity that improved the household’s life in myriad ways.

7. The book has (inevitably) a long section on micro-finance. The debate over microfinance is so tired by now that there’s nothing to add here, other than to say that Banerjee and Duflo’s take is really excellent and measured and balanced.

8. This book, like so many others, highlights some of the huge potential of better communications technologies. Just one example: One of the main reasons that there is very little ‘microsavings’ (i.e., the other side of banking for the poor) is the fixed transaction costs involved in managing very small savings accounts. So the very poor often can’t find anywhere to save, or must often pay to save, which, naturally, makes them less inclined to save at all. But with better communications technologies, we could drastically reduce these costs (e.g., if you can deposit on your mobile phone, the bank doesn’t need to put a branch and a teller near your rural village). Some smart policy changes, such as letting local shopkeepers take deposits on behalf of banks, if the banks trust them and delegate that responsibility, could give the poor a safe place to save.
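The fixed-cost arithmetic can be shown with a toy calculation (the fees, deposit sizes, and interest rate below are invented for illustration, not drawn from the book): when each deposit carries a fixed fee, tiny deposits get eaten alive, and cutting the per-transaction cost changes the picture dramatically.

```python
def year_end_balance(deposit, n_deposits, fee_per_deposit, annual_rate):
    """Balance after a year of small deposits, each charged a fixed fee."""
    principal = n_deposits * (deposit - fee_per_deposit)
    return principal * (1 + annual_rate)

# A saver making 50 deposits of $2 each, earning 5% interest:
teller_channel = year_end_balance(2.00, 50, 0.50, 0.05)  # costly branch/teller
mobile_channel = year_end_balance(2.00, 50, 0.02, 0.05)  # cheap mobile deposit

print(round(teller_channel, 2))  # 78.75 -- a quarter of the savings lost to fees
print(round(mobile_channel, 2))  # 103.95
```

The same $100 of deposits nets wildly different outcomes purely because of the fixed per-transaction cost, which is exactly why cheap mobile channels (or deputized shopkeepers) matter so much at small scale.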

9. A lot of corruption, the authors argue, can be fought relatively easily, just by making available more information about, say, how much money has been given to a local government to build a road. A lot of the corruption we have in the world exists now because really nobody seems to care — like, care at all. This ties into the big takeaway of the final section of the book, with their political economy prescriptions: This section differentiates INSTITUTIONS and institutions. INSTITUTIONS are big abstract things like “the rule of law,” which, they admit, it’s really hard for external forces to quickly change. But we can change ‘institutions’ on a more micro, piecemeal level, such as forcing local governments to actually spend money on what it was tendered for by publishing that tender. And this is the right place to start.

10. In the final chapter, they offer 5 modest takeaways “in place of a sweeping conclusion”: (i.) The poor “lack critical pieces of information and believe things that are not true.” Accordingly, dispersion of basic medical knowledge is definitely some good low-hanging fruit. (ii.) The poor “bear responsibility for too many aspects of their lives.” I.e., we in the West don’t need to remind ourselves to splash chlorine in our water every morning, and the poor shouldn’t either — policies should make the chlorination automatic. (iii.) Sometimes there are “good reasons certain markets do not exist for the poor” — they just can’t be made profitable at such a low scale, including, arguably, health insurance. And that is where, very simply, we need continued charity and public intervention. (iv.) There are many really good, ground-level, very doable changes we can make by changing the micro-institutions within the broader institutional structure of underdeveloped societies — and that’s the place to start with fixing the political economy problems. (v.) Expectations become self-fulfilling prophecies.

Metacognition Changed Our Lives — For Good and Bad

On Friday, your correspondent hosted a party and, typically, began to inflict his ideas on unsuspecting strangers. One guest — a Muscovite cognitive scientist researching early childhood learning at the Harvard Ed school — turned out to be a very willing victim. A long conversation about cognitive science ensued, centering on the value of “metacognition” — that is, thinking about thinking.

A little background: One of the most important Important Facts about the Modern World is the rise and dispersion of psychology. Throughout our history, we humans have always talked about our selves. But before the 20th century, our major tool for doing so was literature. When we sought to understand our lives and feelings, we looked for comparisons in myths and stories, we borrowed metaphors and constructed our own, we compared and shared our experiences with our acquaintances through conversation in an utterly unscientific manner; and with the rise of novels, we began to compare our inner experiences to those imputed to the characters within. But, post-Freud, educated people everywhere have instead begun to appropriate the vocabulary of psychology to understand themselves. We started with pseudo-scientific Freudian and Jungian lexica, and have since progressed to the language of brain chemistry. E.g., in the olden days, when we had a question like “why do couples cuddle?” we would maybe use a saccharine metaphor about a vine and a tree or borrow the myth of Aristophanes; from the 1920s through the 1970s or so we would talk about our relationship with our mothers or archetypes or something; today we tell a story about oxytocin and pair bonding and evolutionary psychology.

Is this revolutionary change in how we think about ourselves — and our new fixation on thinking about our brains — a good thing? Yes and no.

Your correspondent and interlocutor agreed on the main benefits of metacognition. The single biggest, most important takeaway from cognitive science (though many earlier philosophers, most notably Nietzsche, happened upon the same insight without scientific pretensions) is that the human mind does not reflect reality, but rather, reconstructs it from a number of different sources — perception, ideas, biases, social cues, and all kinds of motivations. That is, our mind is less like a mirror and more like a painter: A typical painting obviously in some ways corresponds to the subject on which it is based, but it differs from a direct reflection (or re-presentation) according to, for example, the aspects of the scene the painter selected to bring within his frame, the aspects he has chosen to focus on, his attitude toward the scene and the subjects within, the other painters and schools of painting he identifies with and has learned from, the painters whom he hates and wishes to dissociate from, his own idiosyncrasies, the flaws of his own hand coordination and brushes and palette, and his own desire to win fame by cultivating a unique style. To get a little cheesy about it, we can follow Oscar Wilde in saying that a portrait reveals more of the painter than the sitter. (The key problem with this metaphor is that people can easily see how a painting differs from the actual object it represents, whereas we are, tautologically, incapable of directly seeing how our mind’s reconstruction of reality is different from real reality.) This goes even to the very basic level of how, for example, basic visual perception works: We naively assume that we are ‘taking in’ our entire field of vision all of the time, but in fact our brain really only directly visually perceives the objects of our most intense focus and unexpected changes in the field of vision, while it continually reconstructs the rest based off of memories and expectations and other senses.

When you start to think about your mind this way — and combine that basic model of the brain with more specific examples and insights from other parts of the brain sciences, such as social psychology — it really does change your life. Before you write someone off as a jerk, you start to ask questions like: “Why am I perceiving this person as a jerk? Well, look at that jerkish behavior! But, then again, ‘jerkish’ is a matter of interpretation. If he had a different facial structure would I feel quite so sure? If my best friend did the exact same thing, would I consider it jerkish behavior? Or would I, feeling a little warmer toward the subject, be more likely to write it off as playfulness? Or might I attribute the behavior to a rare product of particularly bad, uncontrollable circumstances?” And then, you start to think things like this: “I think this group is wicked: Are they actually wicked? Yes, of course! I read all kinds of news reports about the wicked things they do! But wait. Could it be that, for whatever reason, I am disproportionately attentive to news stories about the wicked things this group does? Or, beyond that, could people in the news media be disproportionately attentive to, hence disproportionately inclined to report on, the wicked things this group does? In short, what do the actual rigorous statistics say?” Where your correspondent has written ‘this group’ the reader should insert, according to her own animosities, words like ‘people from neighborhood X’ or ‘Democrats’ or ‘Republicans’ or ‘evangelicals’ or ‘Jewish financiers’ or ‘working class toughs’ or ‘rich people’ or ‘football players’ or ‘lesbians’ or ‘XYZ activists’ or whatever.

And it also affects every interaction in social life: “That person at this party does not like me — Execute avoidance maneuvers. But am I sure he does not like me? Well, his body language is standoffish. Then again, so is mine. But that’s just because I know he does not like me! Wait: Maybe the problem is he thinks I do not like him? How to fix that?” In short, when you fully internalize the idea that your mind is a painter and not a mirror, it makes you much, much more skeptical about yourself and all of your ideas, and, accordingly, less assertive in your animosities, more willing to give people the benefit of the doubt, and, where applicable, more interested in actual scientific methods for figuring things out. This kind of mindset is really important for basic social life, but it could also have hugely useful social consequences. It could do everything from, e.g., (1) getting more people to take advantage of cheap housing in putatively ‘bad’ but statistically safe neighborhoods (which, in practice, would also lead to greater integration), to (2) reducing international conflict, as people become more cognizant of how xenophobia could be biasing their perceptions of other countries’ intentions, etc.

Does your correspondent sound pretty rapturous about the potential of metacognition for human comity and happiness, etc.? He is. Your correspondent and interlocutor also agreed that, in the digital present and the digital future, in which it is safe to assume that a piece of information that cannot be found via Google within 10 minutes probably does not exist, metacognition will become an increasingly essential intellectual tool. Since in a few years we all will have Google in our eyeglasses, we’re not going to need to be trained to get or retain information, but we will need to be very good at culling it, interpreting it, and preventing our own brains from distorting it.

***

But as the euphoria of the celebration subsided, your correspondent, the next day, considered three downsides to our society’s shift from literature to psychology as a way of understanding ourselves:

1. The more we conceive of all aspects of our personality and behaviors as brain-chemistry based, the easier it is to talk ourselves into feeling that we have no control over our flaws. And that makes it easier for us to excuse, hence less likely to change, those flaws. Your correspondent has actually read an interesting study (momentarily unfindable) in which researchers had test subjects read a paper that powerfully argued that humans have no free will: afterwards, the test subjects exhibited less motivation on tasks of self-control, ethical restraint, delayed gratification, etc. The belief that we are at the whims of our brain chemistry can be a self-fulfilling prophecy. There’s an obvious trade-off here: Insofar as brain chemistry actually does cause some people’s problems, it’s cruel and unfair to hold them accountable for those problems. But if we’re too lax, we harm people by giving them too-easy excuses.

2. The popularization of psychology has been infuriating in its invasion of political discourse. The conversations we really should be having in politics are about policies themselves. “Is policy X good or bad?” should be the question that constrains every political conversation. But, partly as a result of the popularization of psychology (though this may be just a result of nasty partisanship more generally) we are spending much more time talking about people and their presumed motivations. Using the ostensibly scientific language of psychology can be a neat way to veil the most bitter demonizations of your political opponents. Are Obama’s economic policies motivated by a professorial resentment of the rich? Are sanctions against Iran motivated in part by anti-Muslim animus? It doesn’t really matter for policy itself. What matters is whether the policies are just, legal, and likely to have better effects than their alternatives. And the more we pathologize our opponents’ psychology, the less time we have for discussion of those things. Does this sound like a pet peeve? It is.

3. The brain sciences broadly are displacing the humanities, reducing the time the educated public devotes to the latter. Taken to its extreme, this could be bad for at least three reasons: (i) The literary imagination is an intrinsically pleasurable and worthwhile thing. The story about oxytocin and the evolution of pairbonding is scientifically true, but Aristophanes’ myth is a bit more touching, and may be worth revisiting and passing down to the generations just for that. (ii) Given that our brains have evolved to learn from narratives, stories may inevitably be more resonant and memorable for us and therefore better learning devices as well; to get really good at thinking about the inner lives of humans, we probably still need to supplement our psychology textbook with some Jane Austen. (iii) Since psychology is a relatively new field, there are almost certainly insights about our selves contained in the “embodied wisdom” of literary traditions that psychology has not yet re-discovered. Given all that, it’s probably a bad idea to throw out the humanities just yet.

***

On the whole, metacognition is a hugely good thing for mankind. Part of the solution to the problems above is just to do metacognition more deeply and intelligently — that is, to be smarter and more knowledgeable about brain science. For example, the problem in #1 could be mitigated if we were more cognizant of and guarded against our own susceptibility to suggestion. Problem #2 can be alleviated precisely by recognizing Problem #2 — that is, by admitting that we often use ‘psychology’ as a handy veil for plain hostile demonizations of others. Brain science itself provides the best argument to prevent Problem #3, by illuminating how we learn from stories.

Toward a More Fully Darwinian Economics

(Reposted from the Agenda. I haven’t found time to write the past couple of days.)

Robert Frank’s latest book, The Darwin Economy (compressed in the National Interest here) is getting a lot of buzz. I found it, like all of Frank’s work, very illuminating. But while Frank is a sharp, deep and serious economist, not all of his readers are. I want to push back not so much against Frank’s carefully chosen words as against the less-careful interpretations I’ve found on my progressive friends’ Facebooks and blogs, where The Darwin Economy is held up as a revolutionary and devastating riposte to free-market economics. I’m skeptical.

Frank frames his argument around a “prediction: One century hence, if a roster of professional economists is asked to identify the intellectual father of their discipline, a majority will name Charles Darwin,” rather than Adam Smith. Darwin, he says, better understood competition, and how “positional goods” pit the interests of the individual against those of the species. (Consider the example of the ill-fated Irish elk: Each incremental advantage in antler size helped each elk’s chances of mating and reproduction. But over time this produced a very unwieldy, and very extinct, species.) This, Frank says, refutes the Smithian belief that perfect competition channels private interests into public good.

Fair enough, but how serious is Frank’s prediction? Even if Darwin grasped aspects of competition that Smith missed, why would this make him economics’ founder in place of the man who (a) predated him, (b) still gave a pretty good take on competition, and (c) studied the economy itself, as opposed to giving us conceptual tools that could be applied later? If Darwin replaces Smith on that basis, then shouldn’t mathematical game theorists replace Darwin next?

But Frank’s prescriptions are more important than his predictions. Mainly, he argues for a more progressive tax schedule, based on his claim that high income is, like a great pair of antlers, largely a positional good. If the wealthy are mainly motivated by relative position (that is, status) rather than absolute income, raising taxes on all of the wealthy should leave their incentives to work, and even their happiness, unharmed. This should soothe the concern that tax hikes on the wealthy will sap productivity. And so, my progressive friends conclude, we can soak the rich without feeling guilty or hurting GDP. Maybe. But this ignores a much simpler and more persuasive argument against soaking the rich: If we don’t think the government is likely to use extra revenues wisely, we shouldn’t raise taxes even if the wealthy don’t really “need” their absolute levels of income.
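Frank’s positional-goods point has a simple formal core: an across-the-board tax changes everyone’s absolute income but nobody’s relative standing. A few lines make this concrete (the incomes and tax rate below are hypothetical, chosen purely for illustration):

```python
incomes = [100, 250, 400, 900]  # hypothetical pre-tax incomes (any units)

def ranks(xs):
    # Rank of each entry within the group (0 = lowest income).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    result = [0] * len(xs)
    for rank, i in enumerate(order):
        result[i] = rank
    return result

after_tax = [x * (1 - 0.40) for x in incomes]  # a uniform 40% tax on everyone

# Absolute incomes all fall, but relative position is untouched:
print(ranks(incomes) == ranks(after_tax))  # True
```

If status depends only on rank, as Frank’s argument assumes for the wealthy, this is why a uniform hike should leave positional incentives intact — though it says nothing about whether the government will spend the extra revenue wisely.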

And this is a pattern — Frank and his readers do a great job finding interesting ways to think about market failures, without thinking equally hard about the kinds of government failures involved in their alternatives.

To see what I mean, consider one of Frank’s own examples: In order to give their kids the advantage of going through a good school system, parents work longer hours to make higher incomes to afford the property needed to live in the best districts they can. But when all parents do this, no one’s kid gains an advantage, and all have greater stress and debts. I think this story has some truth, but — at the risk of coming off as jeering — it is strange indeed to point to public education as a paradigm for market failure. We jostle for high-price properties precisely because our choice of schools is constrained by law and determined by residence — that is, because education is not a fully competitive, open market. Frank’s market failure is at least in part a government failure.

And the fact that Frank didn’t put it that way may hint that he’s asymmetrically focused on the former at the expense of the latter. I can imagine a fuller Darwinian economics in which insights from evolutionary psychology could shed light on why legislators and regulators are rarely selfless and rational, or on how positional jostling might predictably bias and pervert the functioning of government agencies. That, more complete, Darwinian economics might not look so anti-market and pro-regulation.

A final note: It should be remembered that Frank’s focus is a Darwinian exception rather than the rule. The case of the gazelle — in which each gazelle’s evolutionary reward for outrunning a cheetah improves the fitness of the species as a whole — is more common than the case of the Irish elk. And Darwin’s most basic insight was that species survive and flourish through variation and natural selection — in other words, innovation and the destruction of outdated models. An appreciation for that, I submit, hardly bolsters the case for an expanded public sector.