What do business-school accounting professors do?

A little over a year ago, I decided that my career goal was to become a business-school professor. I’ve been lucky enough to be admitted to several great business-school doctoral programs, and, this past week, I’ve been traveling to visit some of the schools. One common theme in my conversations with my fellow admits and the current grad students is that we have trouble explaining to outsiders what our research interests are and why they count as part of a business-school PhD in subfields like marketing, or accounting, or management. The problem is that academe, as usual, has its own private semantics and, as usual, is not good at translating it for outsiders. This is hardly unique to business schools: these days, many dissertations in economics are what regular people would identify as mathematics, and many dissertations in political science are what regular people would identify as economics or game theory. So I thought I’d use what I learned and write a quick blog post about what business-school professors do, with a particular focus on what business-school accounting professors do, for the sake of curious and confused Googlers.

***

Most business school faculty and their associated doctoral programs are organized into several units, which usually include Management, Organizational Behavior, Accounting, Finance, Marketing, and maybe some others. But these names can miscommunicate what the faculty inside these units actually do. ‘Marketing’ professors and ‘marketing’ journals almost never actually study, say, whether 30-second or 60-second TV spots are more effective. Instead, ‘marketing’ journals publish rigorous social science about human behavior and psychology that is at best tenuously attached to the things that firms’ marketing employees actually do. ‘Marketing’ professors may study things like the following: the microeconomic theory of ‘optimal auctions’ (how to design auctions such that bidders reveal their true valuations—one application would be stopping oligopolistic government contractors from implicitly colluding); the psychological bases of trust and attachment (one application of which might be learning how preferences and ideas ‘diffuse’ through populations); or even the neurological bases of attentiveness (which might have applications for education as much as for TV advertisements). So, long story short, when your friend tells you that they are becoming a marketing professor, you shouldn’t make fun of them for doing ‘soft’ cultural studies until you’ve actually read their specific research.

Similarly, ‘management’ professors never study how to schedule staff hours at a local retailer, but might study regional economics and why firms tend to cluster in certain geographies. And ‘organizational behavior’ professors don’t just study social psychology within the firm, but might also look at families, the military, or digital communities like online forums, or even computer models of evolving group behaviors. Even ‘finance’ tends to be broader and more abstract than outsiders expect. In other words, most business-school professors really just study subfields of economics (or in the case of organizational behavior, subfields of psychology), including topics that sometimes riff off of the associated business functions. But they do not, as their faculty unit names suggest, focus on very applied, vocational topics.

Now, some may criticize business academe for being so impractical, but the counterargument is that the private sector already gives its own researchers very strong incentives to tackle the more immediately practical questions, and so academe can add value by stepping back to more abstract questions, shoring up the clarity, precision, and theoretical basis of our ideas in ways that may yield indirect benefits. The continued, outsized demand for expensive executive-education courses, and business-school professors’ excellent consulting fees, are evidence that business practitioners see some benefit in the perspective that b-school theorists bring.

***

Let me go into more detail about the unit that I know the most about: What are the things that people in the ‘accounting’ units at business schools do? Well, to adapt a joke about economics, accounting research is what accounting researchers do, and accounting researchers are people who do accounting research. But one way of thinking about what counts as ‘accounting research’ in academe is that ‘accounting research’ is the subfield of economics that obliquely riffs off of the ‘accounting’ and ‘accountability’ functions within firms, just as marketing research obliquely riffs off of, but does not directly study, marketing functions.

‘Accounting’ refers to the financial information that firms produce (the legally required financial accounting in quarterly reports, the managerial accounting used to make decisions internally, and more informal disclosures such as investor presentations and earnings calls). As such, it’s common to describe accounting research as a subfield of information economics. Research in this area includes the following: (A) The game theory of how firms decide what information to disclose given their conflicting goals of (i.) giving investors enough credible information so that they can get financing on good terms, (ii.) keeping trade and strategic secrets away from prospective competitors, and (iii.) not losing access to capital due to overreactions to short-term negative news—a complicated optimization problem; (B) The relationship between firms’ earnings numbers and their asset (stock and debt) valuations; (C) Assessing the performance of doctors in hospitals (given that it would be naïve to simply measure their patients’ outcomes, as this would reward doctors for choosing patients in less dire straits); (D) How analysts assess municipalities’ finances and, thus, how city governments become cash constrained; (E) Corporate finance, e.g., how firms can issue more equity to raise cash for investments, without thereby signaling to investors that their shares are overvalued.

‘Accountability’ refers to the allocation of decision and control rights within the firm, as well as how the individuals/groups who have been allocated those responsibilities are subsequently assessed and rewarded (‘held accountable’). Research in this area includes the following: (A) Corporate governance—everything about how the owners (shareholders) of firms control the management—including the role and effects of activist investors and the characteristics of successful directors; (B) Executive compensation and pay/incentive packages more broadly; (C) Mergers and acquisitions, which are fundamentally just changes in control/accountability; (D) White collar crime and corporate scandals; and (E) Corporate finance, e.g., how debt holders and equity owners fight over the riskiness of the firm’s capital structure, or how private-equity owners use debt-financing as a way to ‘discipline’ their businesses into running tight ships.

At business schools, courses on valuation, mergers and acquisitions, ethics, and corporate governance are taught by ‘accounting’ faculty. For better or worse, ‘accounting’ professors and research have nothing to do with what ordinary people think of when they hear the word ‘accounting.’ And most accounting professors have never done a single debit/credit T-account in their entire lives, nor does the PhD curriculum include a single such class—just as most marketing professors have never made a TV commercial. Some might criticize business academe for being so impractical, but there’s (again) an argument that people can become CPAs through vocational classes and textbooks and taking the CPA exam (usually well before they go to business school), and b-school professors can add more value by fleshing out the broader economic theory of corporate information and controls.

***

So, what is the value-add of business-schools’ accounting research? What does it have that isn’t already being done in regular college economics departments? Well, I actually think there’s a different answer for every individual topic I listed above, but let me focus on the biggest question in accounting research, and the topic that launched the field—financial valuation. Now, according to basic financial theory, a financial asset (like a stock) is simply a vehicle for transforming cash today into cash tomorrow. Therefore, valuing a financial asset (that is, deciding how much cash today it is worth) is simply a question of projecting how much cash it will return to you in the future, and discounting each future cash flow by a number that ‘translates’ tomorrow’s money into today’s terms—that is, the interest rate—and adding them all up. You can think of the interest rate as the extent to which we humans prefer money today to money a year from now, or, alternatively, as the rate of return of completely risk-free alternative investments such as Treasuries. Additionally, given that the world is uncertain, you also have to think about the probability distributions of those future cash flows, as well as how worried you are about losing how much of your wealth—that is, risk and risk preferences. So financial theory says that only three things matter in valuing assets—and thus, by extension, in setting prices, which are the basis of everything else in the economy—(i.) cash, (ii.) interest, and (iii.) risk.
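
As a minimal sketch of that discounted-cash-flow logic (the notation here is mine, not from any particular textbook):

\[ V_0 \;=\; \sum_{t=1}^{T} \frac{\mathbb{E}[CF_t]}{(1+r)^t}, \qquad r \approx r_f + \text{risk premium}, \]

where \(V_0\) is the asset’s value today, \(\mathbb{E}[CF_t]\) is the expected cash flow in year \(t\), \(r_f\) is the risk-free rate (the ‘interest’ ingredient), and the risk premium compensates the holder for the uncertainty of those cash flows (the ‘risk’ ingredient).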

Given this, firms’ accounting numbers, as opposed to simple cash receipts, ought to be irrelevant. This is because audited financial statements (such as those that you see in companies’ annual reports) hinge on artificial, man-made, abstract concepts that have nothing to do with underlying cash flows, including accruals, amortization, and depreciation. Under U.S. GAAP, if you acquired another company at a premium, you had to amortize this ‘goodwill’ over the following years, recognizing a portion of that goodwill as an ‘expense’ in each year, even if the acquisition had gone well and the acquired firm had actually increased in its true value. Under U.S. GAAP, you may have had to depreciate the cost of a building even if it was in fine condition and located in an area that had become more popular, such that its true value had only increased. GAAP earnings incorporate these fake expenses that don’t actually exist; thus, they are deceptive about what actually matters, namely cash.
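
To make the ‘fake expenses’ point concrete, here is a minimal sketch with invented numbers (the building, its cost, and its rents are all hypothetical), showing how straight-line depreciation drives a wedge between reported GAAP earnings and the cash actually received:

```python
# Hypothetical example: a firm pays $1,000,000 for a building (cash out in year 0)
# and collects $200,000 of rent in cash each year afterward. Straight-line
# depreciation over a 10-year useful life books a $100,000 "expense" every year,
# even if the building's market value is actually rising.
building_cost = 1_000_000
useful_life_years = 10
annual_cash_inflow = 200_000
annual_depreciation = building_cost / useful_life_years  # $100,000 per year

for year in range(1, 4):
    cash_flow = annual_cash_inflow                             # what the firm actually receives
    gaap_earnings = annual_cash_inflow - annual_depreciation   # what the income statement reports
    print(f"Year {year}: cash in = ${cash_flow:,.0f}, GAAP earnings = ${gaap_earnings:,.0f}")
```

Every year the income statement reports $100,000 less than the cash that actually came in, which is exactly why, on the pure cash-flow view sketched above, these accrual numbers look like noise.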

As such, the thinking by the 1960s was that accounting information was just a kind of weird relic of a past when people weren’t thinking clearly about economics and finance. While accounting information might have a legal function, in preventing managers from deceiving the public, it did not have an economic/financial function in setting prices and valuations and efficiently allocating capital—in the modern world, it should be useless to sophisticated practitioners.

The paper that launched the field of academic accounting research, An Empirical Evaluation of Accounting Income Numbers (Ball and Brown 1968), found, however, that in practice what theory said should be true was pretty much the opposite of what actually held. Ball and Brown found—and it has remained the case since—that the best predictors of the value of stocks, and of their future returns and performance, were accounting earnings numbers.

Now, nobody doubts that what fundamentally matters is future cash flows—it remains the case that financial assets exist to turn cash today into cash tomorrow. The thing is, it’s really hard to project cash flows, so any cash-flow-based model will be “garbage in, garbage out,” and it turns out that accounting numbers, including their fake accrual expenses, actually do surprisingly well at valuing companies. In other words, the major insights that launched the field were (1) the structure and presentation of information matter (not just cash flows, interest rates, and risk) in setting prices, even in the most sophisticated markets, and (2) the artifices and the heuristics of the traditional practice of accounting embody a practical intelligence that modern, sophisticated analysts can rarely improve upon using pure cash-flow projections. Sophisticated analysts and investors still overwhelmingly rely on accounting numbers such as net income and free cash flow in pricing companies’ shares today.

Why might this be the case? Let me draw a parallel to heuristics in the evolutionary use of the term. We could metaphorically think of our genes as fundamentally ‘wanting’ to survive and reproduce, given that survival and reproduction are what have selected and passed down these genes over billions of years. But that doesn’t mean that the genes that influence our brains tell us “survive and reproduce,” and then let us figure out the rest. Instead, our genes give us a set of desires which happen to have maximized our chances of survival and reproduction in our evolutionary history, but which we do not read as such. In other words, they achieve their goal obliquely by giving us desires such as, “pursue status, don’t make other people angry, eat and stay warm, win the favor of attractive healthy people of reproductive age, and beware sudden movements at night.” These desires are heuristic goals in that they are not the goal itself (from our genes’ perspectives), but rather are rules-of-thumb that tended to produce evolutionary ‘success’ better than orienting us toward the goal itself would.

In the same vein, every investor’s goal in buying a financial asset is to transform cash today into cash tomorrow, but s/he’s more likely to achieve that goal by focusing on accounting-numbers heuristics, rather than by trying to make an incredibly complicated and uncertain cash-flow projection. This is arguably because accounting standards have been produced by a historical, evolutionary process of small adjustments and changes made as needed, at the suggestion of experienced practitioners. As such, accounting standards embody a lot of historical intelligence that’s hard to reproduce in a single quantitative model. In this vein, accounting numbers do not represent the truth about a firm, but, rather, the best simplification of the truth that we imperfect humans can work with.

And that, in short, is why valuation has come to be part of what business-school academe calls ‘accounting,’ while most people would call it finance.

***

While empirical asset pricing has been the single biggest strand of accounting research over the past forty years, there are many others. And all of the topics of research I listed above have been improved by accounting academics’ focus on the economics of information and control.

Academic accounting research is usually seen as a subfield of economics, but I would argue that business-school accounting professors have some advantages over economists in many of the areas they study. First, accounting academics tend to have more ‘institutional knowledge’ about corporate law, about contracts, about the internal mechanics of firms and their transactions, and about the things that go into the numbers that we statistically analyze. A pure economist and an accounting academic could both prove the Modigliani-Miller theorem (that, in equilibrium and absent tax biases and other frictions, a firm’s value does not depend on its mix of equity and debt financing), but the accounting academic is more likely to know what debt covenants are, or how interest payments are taxed, and how these things affect financing decisions in reality. Second, business-school professors tend to engage with experienced students and real-world practitioners, through MBA teaching, exec-ed, and consulting. As such, they tend to be a little bit closer to the ‘practice’ end, along the spectrum from pure theory to pure practice. They’re at a point in that spectrum that I like. (Personally, I think that ‘pure’ economics has already given us enough models of consumer utility-maximization in worlds with preferences defined by set theory and perfect information, etc.—and that there’s more value-add in using the major insights of economic theory in more realistic settings. But I certainly could be wrong and I do respect my pure-economist friends who have a preference for more abstraction.) That may be why, in the New York Times, you’re more likely to read about the kind of research that comes out of business schools than about the kind that comes out of pure economics departments.
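
For reference, a minimal rendering of the Modigliani-Miller claim mentioned above, in its simplest no-tax, no-frictions, perpetuity form (my paraphrase, not a quotation from any particular text):

\[ V_{\text{levered}} \;=\; V_{\text{unlevered}} \;=\; \frac{\mathbb{E}[\text{operating cash flow}]}{r_{\text{assets}}}, \]

that is, the firm’s value depends only on the cash its assets generate and on the riskiness of those assets, not on how claims to that cash are split between debt and equity. Tax deductibility of interest, debt covenants, and bankruptcy costs are exactly the real-world frictions that break this equivalence, which is where the institutional knowledge comes in.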

So the TL;DR version of this blog post is that business-school professors do some of the most exciting and interesting social-science research around today, while still being able to engage with the real world and enjoy very strong career options. Pretty much the only downside is that it sounds less sexy, and more vocational and applied, to say at cocktail parties that you are a PhD in Management or Marketing, instead of Economics or Neuroscience. It’s a smart career move for young researchers to get over that social anxiety quickly!

The proper scope of the firm

One of the most interesting topics in econ/business/finance is the question of the proper scope of the firm. That is, what functions (like IT or human resources or logistics or industrial design), stages in the value chain (like retail, manufacturing, or raw materials sourcing), and businesses/product lines (like a textile manufacturer considering shoe-making or a grocer considering an in-store medical clinic) does it make sense for a firm to own as part of its own internal corporate structure, and which ones should it contract out to the rest of the market? I find this question so fascinating for three main reasons: (1) It takes us back to really fundamental, basic economic questions like, ‘When are markets efficient, vs. when are there market failures such that a hierarchical command structure is better?’ (2) It’s relevant to every M&A deal we read about in the news and to the value of the companies we all invest in, and managers sometimes use clearly fallacious reasoning to justify costly decisions to expand their companies’ scope; and (3) The business/econ theory of the scope of the firm yields a lot of insights on the proper scope of other institutions, including those for which profit is not the main desideratum. So in this post, I’ll talk through my understanding of the proper scope of the firm, in the hopes that some readers will learn from me and others will improve my understanding in the comments.

***

It’s often said, particularly by those with a capitalist disposition, that the major insight of economics is that decentralized, competitive markets yield better outcomes than centralized controls. Markets, this line of reasoning goes, aggregate the distributed knowledge and comparative advantages of widely dispersed individuals and give each actor strong, direct incentives to meet the needs (as signaled through prices) of the others. Centralized command economies fail because human beings (who give the commands to the command economy) cannot aggregate information as well as a market-clearing price could, and, in a large hierarchical system like a bureaucracy, every individual is distant from the consequences of his/her own actions and so lacks urgency and strong incentives. That’s the argument. And while on a political/social level this may be true, it clearly cannot be true on the microeconomic level of the firm. After all, a corporation is, internally, a centralized command economy that does many different things that theoretically could be contracted via an open, competitive market. The continued existence of actual firms — as opposed to flexible market arrangements among contract CEOs and freelance legal and HR departments, with intellectual properties and factory facilities constantly trading to the highest bidder like a stock — is evidence that command structures are better at this level.

What do I mean? Well, let’s start with an example that might seem silly: Why did Henry Ford own a factory with an assembly line full of workers who had long-term contracts to build Model T’s? Why didn’t Ford instead just go into a public square with his patented Model T design and blueprints and announce that he would buy 2,000 Model T’s at the end of the month from whoever offered the lowest price? After all, classical economic theory sees all markets as working like this — constant auctions between individuals seeking the best deal at that moment. Since Ford owned the rare, unique asset, the intellectual property on the design of the Model T, and since unskilled labor is a ‘commodity,’ economics would suggest that Ford would still earn all the profits from the sale of the Model T through this competitive auction process. So why did Ford Motors own a factory with regular workers (as opposed to just earning rents on its design and contracting the assembly out to craftsmen)? Well, an obvious reason is that the most efficient way to build a Model T was along an assembly line, which required an enormous capital investment and cooperation among a large number of individuals. Thirty people working one day on this assembly line could produce many more Model T’s than one worker could in 30 days. That is, a Model T factory had enormous economies of scale as compared to individual craftsmen building Model T’s. Clearly, the benefits to be gained from this economy of scale were greater than the costs of substituting a command hierarchy for a market (costs like workers having weaker incentives and less sensitivity to price changes). So given that, why didn’t Ford outsource the manufacture of Model T’s to another company with a factory? (This is a less ridiculous question — today, after all, many major industrials outsource their manufacturing to companies like Flextronics.) There are a couple of justifications I can think of: Since automobiles were a new, nascent technology at the time, there probably wasn’t a competitive market of potential outsourcers; thus, if Ford had hired an outsourcer, that outsourcer could have exploited its position, claiming that new complications and cost overruns justified ever-higher prices, and Ford would have had no alternative but to accept. Also, since automobiles were a nascent industry, there was probably still a lot of “learning by doing,” and Ford could use its experience assembling the Model T in-house to develop its next profitable innovation.

If I’ve belabored this, it’s to illustrate that often some firm scope seems laughably, obviously necessary, but when we dig deeper it’s actually challenging to articulate why. In the perfectly competitive market of Econ 101, in which every individual is constantly auctioning her skills and assets to the highest bidder, there’s zero advantage to owning an asset or hiring an employee long-term per se, as opposed to licensing and renting them. The price of any asset or resource, in this competitive market, is equal to the time-discounted value of the profits it would generate. So by buying an asset (and asset here is used to include resources like mines, entire companies and business units, and intellectual properties), a firm isn’t doing itself any favors unless it can add value to that asset to make it worth more internally than it is in the rest of the market. Fundamentally, firm scope has to be justified by market failure, or, to be more precise, a market failure that has costs that are greater than the costs associated with a control hierarchy. So I ask again, why do we see the level of integration and firm scope that we do see in the world? Why don’t firms hire temp CEOs for specific tasks on short-term bases? Why did Facebook buy WhatsApp instead of doing a joint venture? Why do firms have R&D divisions — why don’t they just buy or license intellectual properties as needed directly from external labs and research scientists? Why would a retailer clean or own its own building? Why are universities in the real-estate business, owning their own student housing? Etc.

Well, obviously there are many reasons. Here’s my preliminary list of some market failures that explain the firm scope we typically see in the real world:

  1. Negotiated, written contracts are costly and time-consuming and cannot fully capture everything a firm needs: This is the most basic reason for corporate scope. For example, imagine you are a firm that sells hand-woven baskets made in India. Hand-weaving baskets is not capital intensive, so theoretically, you could just continuously buy from independent basket-weavers on an as-needed basis. But calling up the weavers for each new project and writing a new contract would be costly; and, particularly for a firm hoping to develop a brand, ensuring that the independent weavers all meet quality standards, or fighting over payments on baskets that do not meet standards, etc., would also be costly. In this situation, it could be more efficient to take the weavers in-house, to a single factory floor, where the weavers are managed continuously, where hours can be planned well in advance so that inventory matches demand, and where consistent quality can be ensured in the process. These benefits would outweigh the probable costs of paying the weavers for downtime, renting space, and reducing the weavers’ own individual incentives. A lot of firm employment relationships are arguably analogous to this. Firms don’t temporarily hire their CEOs for particular tasks or services, giving the job to whoever asks the lowest salary, because no contract could specify everything a CEO is supposed to do (a CEO’s most valuable actions are largely unobservable); instead, CEOs are given long-term contracts and lots of stock ownership of the firm and its profits.
  2. Economies of scale: This is another pretty basic reason for business scope, already discussed in the Ford example above. (Some would make a distinction between ‘scale’, as size specifically, and ‘scope’ as the range of businesses a corporate entity operates in, but I’m including scale as a subset of scope.) Economies of scale can explain why company-owned factories beat market arrangements among artisanal craftsmen. And they can also, perhaps, give advantages to conglomerates like Maersk (a single corporate entity that owns many relatively distinct lines of business that operate separately for the most part). Maersk “shares” a few key functions across its businesses, including bulk purchases. Because suppliers will offer discounts on higher-volume orders, Maersk can make its otherwise pretty distinct business units better off by making single bulk purchases of oil and other commodities as a single unit, on behalf of the whole.
  3. Monopoly/oligopoly exploitation: A classic example of this is the steel-making industry, which requires two distinct processes: metal heating and then setting the hot metal to make steel. Could these two processes be done by separate firms? Probably not. Given that transporting hot metal is costly and dangerous, the two firms would have to co-locate to make this arrangement efficient. And once they had co-located, each would effectively be a monopolist of the other’s business. The metal-heater could demand higher prices whenever the steel-maker faced new demand, and the steel-maker would have no realistic alternative (and vice versa). As such, separate ownership, and the chance to opportunistically renegotiate contracts, could lead to market failure here. And this fear has meant that there has usually been integration in this business and similar spaces (e.g., coal mines and co-located power plants). (As a side note, interestingly, it’s often claimed that concern over oligopolistic exploitation varies across cultures and that this can explain some differences in industrial structure. For example, it’s believed that Japan’s business world is characterized by more cooperative, long-term arrangements among firms and their suppliers, and so this particular kind of integration isn’t as ubiquitous in Japan.)
  4. Capital markets failures: This is a broad term that can encompass a number of different things. (a) Managers may believe, correctly or incorrectly, that external investors cannot identify good investment opportunities for the firm as well as they can and that, as a result, the capital markets will not always give them the financing they need for their growth strategies. In this case, a firm could add value by expanding in scope to be able to achieve an ‘internal capital market’ — i.e., getting to the point where it can plow cash from one business into investments in another, instead of relying on the external capital markets to finance its internal investments. Conglomerates often justify their existence using this ‘internal capital market’ argument, but many investors and academics are skeptical. (b) A business can increase its value by buying another business (or another asset more generally) that is very simply underpriced. For example, a company in an esoteric, niche market may be able to identify the value of a new competitor before the capital markets see its value and price it correctly; a company that buys up this new, underpriced firm adds to its value, but essentially does so as a stock picker. (c) Investors may have too-limited time horizons: For example, an independent research lab that yielded important new insights every two decades or so might not be sufficiently supported as a standalone corporate entity by “short-term focused” capital markets. Thus, firms with long horizons (such as drug makers) tend to bring R&D in-house, and standalone, publicly-traded research labs are not (to my knowledge) common.
  5. “Synergy”: This widely-ridiculed term just means that the value of several distinct things together is greater than the sum of the values of those things separately. In business terms, this would have to mean that Company A is worth $100 million and Company B is worth $100 million, but if they were packaged together under one ownership, A-B-Corp would be worth $220 million or so. How can business entities be worth more together than they are separately? A pretty mundane example of synergy would be this: Each individual corporation has to file a number of mundane legal disclosures every year, so when two companies combine they halve the total number of these legal documents and the associated legal costs. A more interesting example of synergy is this: Disney can use the movie studio that it owns to feature Disney characters, which then gets more kids hooked on the Disney universe and drives up demand for other Disney products. A more abstract example of synergy would be this: Since the process of manufacturing new technologies — the trials and errors and failures, etc. — often yields new insights, creators and owners of intellectual properties who are “forward-integrated” into the manufacturing of the associated technologies are more likely to generate new and better intellectual properties. Thus, in this case, keeping the creative R&D work and the manufacturing/implementation work packaged together in one firm (and even one physical location) can be more valuable than compartmentalizing the creatives/innovators vs. the manufacturers in separate firms and stages. A marginal example of synergy is this: Merging companies will often say things like, ‘we have a great distribution network and they have an innovative new product, so we can use our distribution to market their product.’ It’s not clear that’s really synergy, since the one company could have just used the other’s network via a joint venture or something.
  6. Monopoly power/customer exploitation: This one probably needs the least explanation. If I’m one of fifty lemonade stands on the block, I’m a “price taker.” If I buy out all the other lemonade stands, I can raise prices above the competitive market rate and claw back some of the value that consumers would have gotten in a competitive market.

So I hope this outline helps explain the most ubiquitous examples of firm scope. But clearly there is a limit here. There must be a reason why conglomerates have gone out of favor, why the Soviet Union failed — there must be a point at which marginal increases in the scope of a command economy do worse than a competitive market. Indeed, in actual practice, most researchers and savvy investors think that firms have a bias toward going too far — on average, acquiring firms’ stock prices decline immediately after an acquisition is announced. Investors think that managers tend to overpay for acquisitions and cannot realize the value they expect from them. Why is this? Well, partly it must reflect the fact that our capital markets are working decently well and not obviously mispricing too many firms. (Indeed, if acquisitions usually clearly, unambiguously improved the value of firms, then that would mean that acquiring firms were getting a steal, which would reflect poorly on our capital markets.) And partly it reflects the fact that increases in scope bring their own costs, as referenced above. In a conglomerate-style corporation, the Vice President of one of the internal businesses will be partly compensated with options that depend on the performance of the conglomerate stock as a whole; thus she will have weaker incentives than she would as CEO of a standalone company. Vertically integrated firms in which, e.g., the manufacturing division has to buy from the raw-materials division can miss out on the information that is communicated by price changes in a competitive market. On the whole, economic theory would predict that rational firms would tend to grow right up to the size where the marginal costs of increased scope begin to surpass the marginal benefits.

***

So given all that theory, what are some applications? What most interests me are the fallacious arguments that managers often give to justify acquisitions. I’ll give a couple of examples:

  1. “Diversification and risk”: Managers (of, say, acquiring Company A) sometimes claim that by acquiring unrelated businesses (of, say, acquired Company E) with uncorrelated earnings, they can smooth their own earnings, and thus reduce their risk. Doing so should indeed reduce stock-price volatility, but it doesn’t actually add value for shareholders, for a very simple reason: Shareholders of Company A could just as easily buy shares of Company E themselves and achieve the same risk reduction through diversification on their own (see the sketch after this list).
  2. “We’re moving to the higher-margin stage of the value chain”: Hardware firms sometimes justify their acquisitions of software firms by noting that software is now the higher-margin stage of the technology business. But this justification is fallacious, or at least incomplete, for a simple reason: The owners of software firms know that they have high margins (and profits) and thus, like anyone else, shouldn’t sell out for a price that doesn’t reflect the time-discounted profits they expect to earn. So high-margin businesses have high value, they accordingly have high prices, and so there’s no free lunch in buying into a high-margin industry. Now, there could be other good reasons for acquiring these higher-margin businesses; for example, if the capital markets are undervaluing them or there is some ‘synergy’. But buying a high-margin business isn’t a free lunch per se.
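
As a minimal numerical sketch of the diversification point in item 1 (all numbers are invented for illustration), note that the merged firm’s per-share earnings and a shareholder’s homemade 50/50 portfolio are, by construction, the same random variable, so the merger delivers no volatility reduction that the shareholder could not have achieved alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up, uncorrelated earnings draws for acquirer A and target E (same size, same risk).
a = rng.normal(100, 20, size=100_000)
e = rng.normal(100, 20, size=100_000)

merged_per_share = (a + e) / 2           # earnings of the combined A-E Corp, spread over a doubled share base
homemade_portfolio = 0.5 * a + 0.5 * e   # a shareholder simply holding half A, half E

print(f"std of A alone:       {a.std():.2f}")
print(f"std of merged A-E:    {merged_per_share.std():.2f}")
print(f"std of DIY portfolio: {homemade_portfolio.std():.2f}")
# The merger does cut volatility (by roughly 1/sqrt(2) here), but the do-it-yourself
# portfolio achieves exactly the same reduction, so no value is created for shareholders.
```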

Rather, to justify an acquisition, a firm has to pass at least these four tests (some of these “tests” are borrowed from Prof. David Collis of Harvard Business School):

  1. The acquirer has to be able to add value to the acquired entity, to make it better off. This added value has to be greater than the acquisition premium, obviously.
  2. The acquired business unit should be worth more inside the firm than in any other possible ownership structure. Otherwise, the acquiring firm would profit more by selling the unit to whichever owner values it most.
  3. There is no market-based way to realize the value of the acquisition — i.e., flexible market contracts and joint ventures will not realize the same value.
  4. The benefits to the acquisition have to outweigh the costs of expanded scope, in terms of internal coordination and information problems and individual incentives/motivation.

***

Ever since I got interested in corporate scope, I’ve been trying to apply the theory to understanding all of the institutions around me. It’s fun, and sometimes I feel like I see a lot of surprising logic and illogic. One institution I keep going back and forth on is the University. For example, why do universities own housing and cool facilities like rock-climbing gyms, and heavily subsidize those facilities? Why can’t college students just rent their own apartments like other twenty-somethings, and pay for time at private rock-climbing gyms? Why do some business schools, after charging admits very high up-front tuition, then pay to send those MBAs abroad for ‘immersive experiences’? Why can’t the MBAs pay lower tuition and buy their own flights abroad? What is the ‘synergy’ that justifies packaging all these things together?

One hypothesis I have is that the University is sort of like the managers who think they’re smarter than the capital market and can improve on it with an ‘internal capital market’: The University is an institution that has a particularly paternalistic (and I really don’t necessarily mean that in a bad way) attitude towards its ‘customers’. The University may believe that the external capital market tends to underinvest in human capital and that it can add value through an internal capital allocation process that nudges students to try new things like rock-climbing, going abroad, living in comfortable housing in close proximity with peers, etc. That is, the University charges a high upfront fee to let you in in the first place, but, once you’re inside, all these cool experiences are heavily subsidized so that even the financially-anxious student will try things that the University thinks are worth spending money on, but that cash-constrained young people might normally eschew.

This is abstract, though, and in practice I think universities should consider unbundling many of their products. I’m curious if readers have other examples of surprisingly intelligent or stupid scope in firms or non-profit institutions.

The Innovative University, by Clayton Christensen and Henry Eyring

As part of a broader research project, I recently read Clayton Christensen and Henry Eyring’s The Innovative University: Changing the DNA of Higher Education from the Inside Out. There is a temptation to be snobbish about books that have the overused word ‘innovative’ in the title, so I started reading with a skeptical mindset. But I found the book to be immensely informative and thoughtful, and so I wanted to share what I learned. I’m going to summarize the three main strands of thought that I pulled from the book: (1) A history of American universities, particularly how accredited universities today are bound to what the authors call a ‘Harvard model’ of what a university should be, a model that came to be for specific historical reasons and that does not serve all of us well today; (2) An application of the theory of ‘disruptive innovation’ (the term here being properly used by its originator) to the higher education market; and (3) Case studies and ideas on how digital technologies can improve the delivery and price of higher education. Then I’ll close with a small criticism and my heterodox take on the big market failure in higher education today.

***

(1) One major bias or barrier to really innovative higher-education reform is that the people who accredit and run universities, the people who hire college graduates, the intellectuals who shape the conversation around higher education, and our legislators are almost all graduates of relatively elite universities. They have thus been acculturated into thinking that that is the way that a university must be; they have a self-serving bias in thinking that that is what equips you to be a good employee/leader/citizen; and they have no exposure to people who have different needs and expectations for higher education. The authors of, and the leaders profiled in, The Innovative University are mostly people who have at one time been Harvard faculty. But they make great efforts to control for this bias by (i.) revisiting in detail the history of American universities to understand why they came to be that way and (ii.) studying today’s ‘down-market’, non-selective universities and their students.

What are some things we learn when we study universities this way? First, you learn that a lot of things that we take for granted have really arbitrary historical origins. For example, why do we have summer break at four-year colleges in the U.S.? Is it because students had to go work on the farm in the summer? Not so. In fact, summer breaks can be traced to the first couple decades of Harvard’s existence, when instructors found that the couple-dozen students at the college at the time, some as young as 14 years old, were more likely to break out into fist-fights while studying ancient Greek during the hot, malarial months. Are there nonetheless good reasons for a long summer break today?  Maybe. Arguably this model works very well for Harvard, whose professors need dedicated time for valuable independent research, whose students can secure stimulating summer internships and research opportunities from their freshman years, and whose brand name allows it to rent out its dorms and lecture halls for lucrative summer schools and summer camps. But summer breaks may be a huge waste at other, ‘down-market’ universities, whose students’ feasible internships will not offer as much stimulation and advancement as classes would, whose professors’ research is arguably less valuable than their instruction, whose students are far more likely to get frustrated with the time required to get a degree and drop out, and whose empty facilities are a significant strain on their operating budgets.

Or take, for another example, the assumption that university professors must serve dual roles as teachers and researchers. This can be traced to the early 1700s, and the influence of Isaac Greenwood, Harvard’s first chair of mathematics and natural philosophy, who learned of Newton’s new laws on a trip to London and led a push to get Harvard faculty to use the lab equipment necessary to demonstrate these laws to Harvard undergraduates. At the time, it made sense to have instructors doing basic research, to ensure that they didn’t get basic things like ‘does a moving object lose speed if no counteracting force is applied to it?’ wrong, and because scientific knowledge was at a point where undergraduates could be taught the latest insights. But now that cutting-edge research in physics is not accessible to freshmen, it makes less sense to allocate the tasks of cutting-edge research and freshman instruction to the same group of people; in fact, it’s conceivable that today’s experts may be immersed in their specialties to an extent that keeps them from communicating the basics to outsiders. Again, the dual role of the professor-as-researcher-and-instructor may still be defensible at Harvard, where professors can pull their own weight in research grants and where many students want to move on to graduate-level original research themselves. But it doesn’t make much sense for down-market universities to be inflexibly committed to this duality. Also, the early research focus ultimately evolved into today’s ‘publish or perish’ — universities are now in the puzzling situation of simultaneously (a) claiming that they are doing their best to improve undergraduate instruction and (b) making frequent publication their professors’ near-exclusive career incentive, giving almost no incentive for good instruction.

A large number of university features that we take for granted trace to the Harvard presidency of Charles Eliot in the late 19th and early 20th century. Eliot gave the Harvard Faculty of Arts and Sciences “responsibility for all college-level instruction.” Prior to this, high-school students could apply directly to Harvard College, Harvard Law School, or Harvard Medical School; now, you need a bachelor’s to apply to the latter two. Eliot coupled this move with an attempt to reduce the undergraduate curriculum from four to three years — but this was scuttled by the 1907 financial crisis, which made the Harvard FAS unwilling to forgo 25% of its undergraduate tuition. Requiring seven years of very expensive education to become a general practitioner makes little sense today and is a serious financial burden on our health-care system. Eliot initiated a move toward a system of lifetime tenure and great faculty autonomy, partly to attract scholars in a time of under-supply and low social tolerance for many ideas. Again (sorry for the repetition), the situation may be different today, particularly at down-market universities. Eliot emphasized breadth, aiming to attract the world’s leading scholars in all subjects, and requiring Harvard undergraduates to begin their studies by fulfilling broad distributional requirements. Today’s down-market universities, though, could benefit from product differentiation (“we don’t serve Slavic studies here”; “we do chemical engineering, they do mechanical; apply/transfer accordingly”) and putting technical specialization up front in the curriculum (i.e., arranging the curriculum so that if a student drops out after year 1, s/he already has a few technical certificates, rather than putting the technical certificates in year 4 and the distribution requirements in year 1). Eliot found (to his disappointment) that the college’s football team was a key to raising donations from alumni, and invested in its development and league memberships accordingly; today’s universities would do better with less emphasis on athletics. Eliot placed a ‘German-style’ research university (PhD programs) ‘on top’ of the ‘English-style’ liberal arts college; he and his successor began the dual requirement of broad distribution courses and a specialization in a major. While this gave students the advantage of taking graduate-level courses when they preferred and of combining breadth with specialization, it also pulled the undergraduate curriculum toward preparation for PhD-level research and increased the number of courses required for graduation.

All these taken-for-granted characteristics of the university, and more, came to be for historical reasons. They’ve since been solidified by replication (i.e., people who found new universities model them on the ones they graduated from) and by accreditation and college-rankings standards. After WWII, as veterans began using their GI Bill benefits to go to college, the U.S. government empowered established universities to run a sort of ‘peer review’ process in accrediting the new universities that sprang up to serve them. Today, regional accreditors are frequently criticized on the grounds that they emphasize colleges’ inputs over their outputs (i.e., research facilities and faculty credentials over student employment and measurable learning progress). In 1967, the Carnegie Foundation created a simple taxonomy of different types of higher-ed institutions according to their emphasis on research and doctoral programs, intending the classification system to be used for its own charitable purposes. But the Carnegie Commission’s taxonomy quickly became seen as a normative, hierarchical ranking system, with colleges desperately seeking to “climb the Carnegie ladder” and proudly announcing each new step. We all know about the problems with the U.S. News and World Report ranking system, which incentivizes schools to compete with luxurious student amenities and otherwise game the system. The U.S. News rankings don’t just put pressure on colleges to make irrational decisions; they also put pressure on students and parents — a student who would prefer to attend a college with lower tuition and fewer luxurious amenities (and hence a lower ranking) will know that his/her prospective employers will rate the value of that university’s degree according to the U.S. News rankings.

So, you get the gist here: There are a lot of historical features of the ‘Harvard model’ that are not serving today’s down-market institutions and students, but which are not frequently questioned and are actively solidified by the Carnegie ladder, regional accreditors, and the college-rankings system. The book brings home the contrast between the needs of more typical college students and the bounds of the ‘Harvard model’ by following Kim Clark, who unexpectedly stepped down as Dean of Harvard Business School in 2005 to move to the unheard-of BYU-Idaho (formerly Ricks College). There, Clark has made humanitarian efforts to reach out to all high-school graduates, to women (often Mormons) who felt they should drop out of college to become mothers, and to ‘at-risk’ students who might not be able to complete a full bachelor’s curriculum, but whom Clark hoped to equip with technical certificates and employability along the way.

***

(2) The term ‘disruptive innovation’ gets ridiculed a lot for its overuse, but the original meaning of the term actually captures a really important phenomenon. The idea is like this: One day, an innovative new product, like the computer, arrives on the scene. Because it’s new and innovative, it’s expensive, and it gets sold to businesses, governments, and the wealthy. A lot of different businesses compete to sell computers to these clients. They try to differentiate themselves and outdo their competitors by offering ever-faster speeds and ever more widgets and functionalities, what Christensen calls “sustaining innovations.” The computer (the product) gets better and better and higher-functioning and more expensive. But then one day, someone thinks, “Who actually needs all this speed and all these widgets? Why don’t we just offer a super-stripped-down computer with the bare minimum functions and sell it to regular people on the cheap?” The big established players don’t like the idea of making a low-quality product and don’t think it could ever work. So this “disruptive innovation” usually comes from a new company, not one of the established ones; the disruptive innovation first wins consumers at the low end of the market with simplicity and low prices, but then also eventually wins the high end of the market, once the simplicity and low cost of the innovation are enough to compensate high-end users for slightly fewer widgets and whistles. The disruption of the mainframe and minicomputer industries by Macs and PCs is a classic example, but Christensen says this is a common product life-cycle.

Does this sound like something that could happen to higher education? Christensen and Eyring answer with a qualified ‘sort of.’ The evolution of the university over the past century is certainly an example of sustaining innovations going beyond consumers’ needs: Too many universities have too many departments, too many indoor rock gyms and athletic teams, and too few student-oriented faculty. There’s now widespread attention to the problem of high costs. So, Christensen and Eyring think there’s a real opportunity for universities to do well by serving ordinary students in a more cost-effective and stripped-down manner. But they think that traditional universities’ abilities to pass on “wisdom” from established scholars, to facilitate face-to-face interaction among peers, and to produce original research — what Christensen and Eyring call “discovery, meaning, and mentoring” — are unique. These are things that cannot be replicated, they argue, outside the traditional university, and so we won’t see any massive disruption by for-profit and online providers (more on that later).

Instead, they hope for incremental, cost-lowering changes within universities. This depends on a conceptual rethinking and a bunch of specific changes. The conceptual rethinking is that we should stop seeing the higher-ed space as a ladder, with every university competing to climb to the top rung with Harvard. Instead, we should think of the higher education space as a ‘landscape,’ with universities differentiating on their core advantages, attracting particular kinds of students with niche offerings, and competing on price as well as on rankings. Christensen and Eyring highlight some colleges that are trying to become cheaper and more stripped-down, praising BYU-Idaho, under Kim Clark, for cutting athletic teams, reducing the number of majors, and optimizing the logistics of building use in order to move from a two-year community college to a four-year bachelor’s-granting college without increasing annual costs. They also note that if we use cost-per-degree-granted as our primary metric, the number one way most colleges could improve would be to increase their graduation rates and decrease the number of students who stay on for a fifth or sixth year. To this end, they advocate some basic structural changes like a ‘modularized’ curriculum. The idea is that, right now, a lot of students who take a fifth or sixth year to graduate do so because they switched majors at some point and were unable to apply their former major’s classes for credit in their new major. Universities could instead start grouping classes into modules, any of which could be considered a component of a couple of different majors. So, for example, a ‘quantitative methods’ module, including calculus, linear algebra, statistics and/or computational statistics, could be ‘stuck on’ to any social science, science, or engineering BA. A ‘business basics’ module that included accounting and finance could be ‘stuck on’ to both an economics and a healthcare management BA, etc. If students’ interests or career goals change, they could then switch through a variety of majors without losing too much progress toward graduation.

They also suggest that there’s an opportunity for universities to change their professors’ career incentives. For example, a university might offer multiple tenure tracks — professors would be rewarded not just for their esoteric original research, but also for outstanding teaching, course development, textbook writing, integration of insights from others’ original research, and even publications for general audiences that improve non-expert access to fields. They also suggest rethinking tenure. Now, to be in favor of ‘rethinking tenure’ is not to be in favor of ‘firing professors at whim, particularly for having unpopular opinions.’ (If anything, up-or-out tenure may only increase political censorship in the academy, as faculty committees will vote down professors with unpopular views, while more practical-minded administrators would have been happy to have the professor stay on and continue teaching.) Most professions and institutions build up long-term relationships of mutual respect with their employees that prevent abusive and unfair dismissals, and there’s no reason academe can’t be the same way. Rethinking tenure could improve the teaching productivity of junior faculty, who would feel less anxiety and pressure to publish prolifically, and also improve the productivity of senior faculty, who would feel less apathetic.

Finally, they suggest that brick-and-mortar universities can survive in the digital future by differentiating themselves from low-cost, for-profit online alternatives through an emphasis on moral instruction and mentoring (more on this below). Altogether, then, they list twelve “recommended alterations” for universities. They present these as a table, with a column of “Traditional University Traits” and a column of “Recommended Alterations.” Since it’s the main takeaway of the book, I’ll reproduce them here as a list, with each “Traditional University Trait” separated by an arrow (“—>”) from its “Recommended Alteration”:

  • Face-to-face instruction –> Mix of face-to-face and online learning;
  • Rational/secular orientation –> Increased attention to values;
  • Comprehensive specialization, departmentalization, and faculty self-governance –> Interdepartmental faculty collaboration and heavyweight innovation teams;
  • Long summer recess –> Year-round operation;
  • Graduate schools atop the college –> Strong graduate programs only, and institutional focus on mentoring students, especially undergraduates;
  • Private fundraising –> Funds used primarily in support of students, especially need-based aid;
  • Competitive athletics –> Greater relative emphasis on student activities;
  • Curricular distribution (general education) and concentration (major) –> Cross-disciplinary, integrated GE and modular, customizable majors, with technical certificates and associate’s degrees nested within bachelor’s degrees;
  • Academic honors –> Increased emphasis on student competence vis-à-vis learning outcomes;
  • Externally funded research –> Undergraduate student involvement in research;
  • Up-or-out tenure, with faculty rank and salary distinctions –> Hiring with intent to train or retain; customized scholarships and employment contracts; minimized rank and salary distinctions consistent with a student-mentoring emphasis;
  • Admissions selectivity –> Expansion of capacity (for example, via online learning and year-round operation) to limit the need for selectivity.

***

(3) The really big talked-about development in higher education today is the rise of MOOCs. There’s an argument to be made that all this excitement is just noise: Universities have made efforts to provide low-cost distance education online in the past, and it didn’t upend the higher-education market then; completion rates have always been very low in distance-education courses. But in recent years there have been major improvements in internet connectivity, download times, and online course platforms, which could provide the basis for modestly effective but super-low-cost delivery of higher education, through MOOCs and similar platforms. Christensen and Eyring are cautiously optimistic about these changes and the rise of for-profit online universities as well, but they stop short of all-out disruptive-tech-boosterism. They do not expect that students will soon take classes from Coursera for free and get their BAs and MBAs for a few hundred dollars in course registration and testing fees plus the cost of rent and an internet connection. Instead, top-tier universities will continue to provide in-person instruction (and people will continue to compete desperately and pay anything to access them), while second-tier universities will incorporate MOOCs for ‘flipped classrooms’ and similar uses. As they write,

The most powerful mechanism of cost reduction is online learning. All but the most prestigious institutions will effectively have to create a second, virtual university within the traditional university, as BYU-Idaho and SNHU (Southern New Hampshire University) have done. The online courses, as well as the adjunct faculty who teach them, should be tightly integrated with their on-campus counterparts; this is an important point of potential differentiation from fully online degree programs. To ensure quality, universities may also decide to limit online class sizes or pay instructors more than the market rate. Even with such quality enhancements, online courses will allow traditional universities not only to save instructional costs but also to admit more students without increasing their investment in physical facilities and full-time faculties.

It’s hard to predict how online courses will be used in a decade. But the authors highlight some incremental changes being made now, which they recommend to other universities. At BYU-Idaho, for example, online courses have been particularly useful in allowing women who dropped out when they became mothers to finish their degrees. The school actually made it a requirement that all students take at least one online course, as a way of proactively developing their and the university’s comfort with the medium. They’ve found that online instruction is not best used to replace in-person classes, but, rather, for blended and ‘flipped’ courses. In flipped courses, students watch lectures online (sometimes lectures that are specifically tailored for the medium, e.g., including small computer-graded quizzes throughout); then they come to class in person to work on problem sets, to talk over more advanced applications with the instructor, and so on. And BYU-Idaho has also used online courses toward its humanitarian goal of giving people in developing countries access to its instruction and technical certificates.

***

Here are some more thoughts and criticisms: First, I think most of my friends reading this blog post are already prepared to criticize me and these authors for advocating a pre-professional/vocational vision of the university. But we’re not advocating that. There’s zero inconsistency in maintaining two positions simultaneously: (1) that universities should pass on ethical and aesthetic learning, and facilitate students’ philosophical expansion, exploration, asking of intrinsically important questions, etc.; and (2) that universities should find ways to do so in a financially sustainable and reasonable way, and that mid- and lower-market universities in particular should not produce 50% dropout rates, 6.5-year average graduation times among the graduating half, heavy student debt, and un- and under-employment of their graduates.

In fact, my main concern with The Innovative University is that, if anything, it puts too much faith in universities as providers of the ‘soft’ goods of mentoring and moral-character formation. At one point in the book, the authors observe that what they call “cognitive outcomes” (that is, measurable learning) for one particular online course are as good as cognitive outcomes for a similar course offered at a traditional university, but students pay more for the course at the traditional university. They therefore infer that this disparity proves that students are paying for value, and must be receiving moral instruction, wisdom, and mentoring in return. But there is of course a simpler, more pessimistic interpretation of this data: Employers value traditional universities more because they’re familiar, and employers are skeptical of unfamiliar routes; so the students are paying a premium just to reassure prospective employers, not necessarily to get some real pedagogical value for themselves. In other words, the evidence is consistent with higher-ed consumers being stuck in a “prisoner’s dilemma”: we all might prefer some low-cost, no-frills education, but as long as employers will give even a slight nod to students from traditional prestigious universities, we’ll all face immense pressure to choose the traditional one. I’m not prepared to say that Christensen and Eyring are incorrect that professors provide valuable mentoring and moral education. I’m more concerned that their assertion could be used to bolster beliefs in the irreplaceability of in-person instruction and to resist calls for experimentation and change. For example, suppose there were a person who was very good at self-directed learning and got a broad liberal arts education in college, in a curriculum that covered nearly all of the most significant works in moral philosophy and social thought. Should this person be required to spend two years and $140,000 moving away and receiving “meaning and mentoring” from business-school professors in order to, say, get an MBA to get promoted up from the analyst level in a consultancy, rather than spending a year or two of self-directed learning tearing through some edX and Coursera courses and textbooks to get really hard technical skills in computational statistics, financial valuation, accounting, optimal pricing theory, etc.? While many students will still be happy to go the traditional MBA route, I think we should also find a way to properly credential self-directed learners, particularly in an era in which so much work is self-directed and unstructured. I hope education entrepreneurs will seize the opportunity to develop “competency-based” credentialing for these self-directed learners. And faculty members should recognize they have a bias when they tell accreditors, legislators, and prospective students, “No, really, you need to separate from your spouse, move cities, and receive our wisdom in person”; those of us who benefit from academe as it currently works should listen closely to those who currently see it as a barrier to their goals.

So I wish that the book had done more to highlight and promote bigger, more fundamental changes in higher education that could be facilitated by new digital technologies — particularly, any of the nascent efforts to establish certification for self-directed learners who are taking advantage of online courses, both in the U.S. and abroad. But my sense is that Christensen and Eyring did not focus on these because the evidence suggests that the users of these open-access online courses have so far been a relatively privileged, college-educated set. Christensen and Eyring think it’s a humanitarian priority to focus on less privileged students, at universities that graduate only fractions of their entering classes. The problems and frustrations of those students, and the benefits that would accrue to society if we solved them, make mine seem utterly trivial.

Finally, a theoretical question and an attendant concern: We often hear that higher education today is a ‘bubble,’ but there’s a problem with this claim. If higher education is a bubble, what’s the market failure that’s to blame? After all, when people choose to pay lots of money for iPhones, we generally assume they’re rational, informed consumers paying more to get more; when some price is irrationally high, we can usually blame some monopoly or cartel or psychological bias. So what could the market failure be in higher education? This is a big question, but I’ll offer one comparison that occurred to me: Emergency rooms are a well-known market failure. The problem in an emergency room is that your life, which is the condition of your enjoyment of everything that is valuable to you, is immediately at risk, and so you can’t really shop around and ask all the emergency rooms in town to quote you a price. The hospital knows it can charge any price to the emergency-room patient, and this is why we rely on a mix of (i) medical professional ethics, (ii) government price controls, and (iii) collective bargaining via insurers to control costs here. In a similar way, in a country that aspires to meritocracy, we have made university affiliations one of the main determinants of social status, and we do not shop for a bargain when it comes to social status — humans will pay almost anything to move up the hierarchy. That’s why there’s little real competition on price in higher education: there’s no university that advertises itself as “85% as good as Harvard, at 60% of the cost,” and even if there were, few who had the chance to go to Harvard would go to it. Harvard could cut out most of its in-person instruction, tell its undergraduates to take many of their courses through Coursera, and triple its tuition, and I predict that demand for entrance to the university would barely fall, because one of the college’s biggest sources of value to students is as a gate-keeper of social status. Status is virtually priceless; like your life in the emergency room, it’s the condition of so much else you hope to enjoy in life, and that’s the market failure here. There’s no competitive pressure on elite universities to control costs because status is priceless. Elite universities can charge anything they like, and as long as elite universities set the pace for the universities that imitate them, the market won’t control tuition costs. The market won’t, on its own, value self-directed learning through online classes as much as it values degrees from status-conferring, gate-keeping institutions; the market won’t pressure law schools to adopt a two-year curriculum, as President Obama has advocated; the market will put zero pressure on Yale and Harvard to lower their tuition prices. The market won’t protect us consumers here, because we’ll keep paying anything to place our kids a little higher in the social hierarchy. So, instead, it’s incumbent upon university leaders to make an ethical choice to control their costs and restrain the university arms race, even when it is against the interests of their faculty and employees.

Some notes on executive compensation

Over at PolicyMic, my friend Dana Teppert has an excellent post about executive compensation. She notes that executive compensation appears to be ‘spiraling out of control,’ and that this is a problem. And I think most of us would intuitively agree, both that executive compensation has become untethered from performance and that this is a problem. CEOs are enjoying consistent wage hikes that do not appear to be driven by any improvement in their own productivity or their companies’ performance; U.S. CEOs are consistently paid much more than their European counterparts, and there’s no obvious reason why this would be the case if the ‘market’ for CEO-level talent were truly competitive. Morally, we are horrified by a distribution in which our society devotes 1,000 times as much in resources to compensating the efforts of a $25-million-a-year CEO as to providing for the family of a $12.50-an-hour laborer who works full time. Many of us are also concerned that growing income inequality could tear at the social fabric.

How should we think about executive compensation, and what, if anything, should we do about it? As is my wont, I want to step away from the outrageous facts and approach this question with basic theory. If we think we need ‘solutions’ to ever-higher executive compensation, we need to understand the underlying dynamics of the system that is generating it.

***

In the perfectly competitive labor market of Econ 101, everyone gets paid approximately her/his ‘marginal product.’ The reason is simple: If your efforts can add $800,000 to a company’s revenues, then some company should be more than happy to pay $600,000 a year to retain you, thereby adding $200,000 to its bottom line; then, in turn, one of its competitors should instead offer you $700,000 a year, thereby adding $100,000 to its bottom line; then another competitor should be willing to offer you $750,000, because that still adds $50,000 to its bottom line, etc. No company should be willing to offer you $800,500 a year, or even $800,001, because this would hurt its bottom line. So in the ideal market of Econ 101, compensation should never ‘spiral out of control’—any firm that overpaid for employees, including CEOs, would hurt its profits and get beaten by its competitors until it ‘exited’ the market.
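
If it helps to see the logic mechanically, here is a toy simulation of that bidding war in Python; the $800,000 figure comes from the example above, and the starting offer and bid increment are just illustrative assumptions:

    # Toy simulation of the Econ 101 bidding logic: rival firms keep outbidding
    # one another for a worker so long as hiring at the higher wage still adds
    # something to their bottom line. All numbers are illustrative.

    MARGINAL_PRODUCT = 800_000  # extra revenue the worker adds to any firm
    BID_INCREMENT = 10_000      # how much a rival raises the previous offer

    def competitive_wage(marginal_product, increment, starting_offer=600_000):
        wage = starting_offer
        # A rival outbids only if the higher wage still leaves it a profit.
        while wage + increment < marginal_product:
            wage += increment
        return wage

    wage = competitive_wage(MARGINAL_PRODUCT, BID_INCREMENT)
    print(f"Wage settles near ${wage:,}, just under the ${MARGINAL_PRODUCT:,} marginal product")
    # No firm ever offers more than $800,000, because that would hurt its own profits.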

(I should note right now, up front, that there’s no necessary equivalence between the positive economic concept of ‘marginal product’ and the moral concept of desert. For example, when people suffer terrible accidents, their economic ‘marginal products’ can be reduced to zero, but we all agree we should provide for them because, well, they’re human. If I were unlucky enough to be born a slow learner with low motivation, my ‘marginal product’ might be $15,000 a year, but it still might be the right thing for society to redistribute some income to me to boost my happiness and security anyway. Similarly, if a CEO can add $20 million to her company’s revenues, that doesn’t mean it is ethically right for her to take all $20 million home untaxed.  If artificial intelligence advances so much that robots can replace almost all human labor, then almost all of us will have a near-zero marginal product; but I would hope we will find ways to redistribute income such that everyone will nonetheless be better off than today. A person’s ‘marginal product’ is influenced by luck and broad, unpredictable social and technological changes over which we have no control, so it does not amount to moral desert. This is worth noting up front, because it generates a lot of confusion when economists accidentally write as if asking ‘what is this person’s marginal product?’ amounts to asking ‘what does this person deserve?’)

But despite all that, ‘marginal product’ is useful for thinking about labor markets and CEO compensation. If we knew that a CEO’s ‘marginal product’ was $20 million a year, that wouldn’t prove that it was ethically ideal for him to take home all of that money untaxed, but it would mean that paying him some $19 million was at least instrumentally rational toward the goal of increasing the company’s profits. It would mean that shareholders were increasing their earnings by hiring him and he was not actively destroying value for anyone. And it would mean that his pay was tethered to some actual metric of the value he was producing and so wouldn’t arbitrarily spiral out of control.

So are CEOs getting paid their marginal products? I have no idea; I don’t think anybody knows. But let’s assume that CEOs are getting paid much more than their marginal products. Why would this be the case? How could reality differ from the ideal Econ 101 market described above? I see three main ways:

(1) Imperfect information and risk-aversion: I have no idea what a CEO’s marginal product is, and neither do the boards of directors and compensation committees who sign off on CEO pay. I also have no idea what skills are actually needed to, say, develop strategies for and run a snack-brands company, or how we would even begin to develop metrics for any of these things. In the academic literature, they say that senior executive talent is “unobservable.” Given all this uncertainty, what do boards of directors focus on when ‘hiring’ CEOs? My sense is that they, like most of us, first and foremost want to avoid royally screwing up. They don’t really shop around aggressively for the ‘best deal’ on a prospective CEO. Instead, they’ll want to get somebody who was already CEO of another, similar company and who didn’t make too much of a mess of it. And since lots of boards want this relatively small set of people (those who have already done okay jobs as CEOs of other companies), the people in that set can demand very high pay. Since boards are nervous about rattling investors with public personality clashes, they won’t protest too much when these CEOs ask for higher and higher pay. Hiring expensive CEOs can thus be seen as a sort of conservative CYA strategy, rather than as a way of maximizing net present value. Hiring an unproven wunderkind 29-year-old CEO on the cheap might be the highest-NPV decision; but if the decision happens to go wrong, the board will be publicly embarrassed. If you instead hire someone with CEO experience and pay her a salary comparable to her peers’, then who could blame you? Note also that there’s a self-reinforcing feedback loop here: Insofar as boards mostly hire only people who have already been CEOs or senior executives, fewer people gain CEO experience, which further strengthens the market position of sitting CEOs.

(2) Social incentives of board members: Directors’ incentives are imperfectly aligned with the interests of shareholders. That’s because board members have social relationships with managers, and so there are personal ‘costs’ associated with pushing back at managers, even when this would be in the best interest of shareholders. For example, board members like it when company managers offer them cushy consulting contracts; these contracts also conveniently make them less inclined to alienate management. Board members often run their own companies, with their own boards of directors, which include friends and family of the CEOs whose boards they sit on. If you’re a board member, clawing back senior management’s pay would only increase your dividends a little bit, realistically, but it could seriously damage connections and networks that will be essential to the rest of your career. So: I scratch your back, and you scratch mine, obviously.

(3) The classic problem of distributed costs (to shareholders) vs. concentrated benefits (to managers): Finally, each individual shareholder has both limited ability and limited incentive to push back against the back-scratching collusion between directors and managers, because executive pay isn’t an overwhelming component of shares’ earnings. The highest-paid CEO of a public company in 2012 was Larry Ellison, who took home $96.2 million. Oracle’s net income in the same year was approximately $10 billion. So even if shareholders were to cut Ellison’s pay in half, they would only see a 0.5% increase in their earnings for the year. That isn’t much money, and staging a shareholder rebellion is costly in terms of time and legal fees, so no shareholder has an incentive to be the ‘first mover’ to roll back Ellison’s pay. Meanwhile, since Ellison himself takes home all of his compensation, he has a highly concentrated interest in developing good rhetorical justifications for, and legal defenses of, his pay.
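
Here is the arithmetic behind that roughly 0.5% figure, using the numbers cited above:

    # The distributed-costs arithmetic from the Oracle example above.
    ceo_pay = 96_200_000          # Ellison's 2012 compensation
    net_income = 10_000_000_000   # Oracle's approximate 2012 net income

    savings = ceo_pay / 2                    # what halving his pay would free up
    boost_to_earnings = savings / net_income
    print(f"Halving the CEO's pay raises shareholder earnings by about {boost_to_earnings:.1%}")
    # Roughly 0.5%: a rounding error for any individual shareholder, while the
    # CEO has $48 million of reasons to defend the status quo.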

***

The three factors above help explain the spiraling cost of CEO pay. In sum: Because it’s really hard to know what qualities and skills a CEO needs and what a CEO is really worth (the value of long-term strategic decisions does not become apparent until much later), boards of directors play it safe by hiring experienced and costly CEOs and paying them at the high end of their ‘peer group’; CEOs keep asking for more pay and, because there are social ‘costs’ to the board of alienating management and the costs of over-compensation are spread out among highly distributed shareholders, the directors and owners of the company don’t have a strong incentive to say no, and so the ‘peer group’ compensation metrics rise in a positive feedback loop.

If this diagnosis is correct, it suggests some preliminary prescriptions, beyond just wage caps and higher taxes (which managers are adept at avoiding through loopholes, alternative forms of compensation, etc.). First, we need better research on what CEOs are actually worth, how they add value to a company, and how we can evaluate and account for the impact of their decisions, skills, and expertise on firm value. Business academics have lots of great work to do here!

Second, the problem of back-scratching between managers and directors and the problem of distributed costs to shareholders vs. concentrated benefits to management could only be solved with some truly creative corporate governance innovation. Here’s a very preliminary sketch of an idea I had: Firms could create a special class of ‘C-shares,’ which would total, say, 10% of all company shares and would be identical to regular common stock except that all executive-compensation costs would have to be directly expensed from the earnings to these shares. The C-shareholders would then be entitled to elect a compensation committee, whose members’ identities would not be disclosed to management. Thus, C-shareholders would constitute a much more concentrated constituency for reasonable executive pay and compensation committees would not need to worry about the ‘social costs’ of pushing back at big-eyed managers. And since executive pay is not in fact an overwhelming component of total company expenses, the C-shareholders’ incentives would still be fairly well aligned with those of other shareholders—the C-shareholders, for example, would not want to vote compensation so low that management would quit in protest, or lack incentives to work hard, or have trouble attracting talent, or do anything else that would detract strongly from company profits, because profits would still be the major component of C-shares’ values. Such C-shares would even create arbitrage opportunities for activist investors, who could accumulate shares in companies with overpaid CEOs with the specific goal of voting down the next pay package.
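
To make the incentive effect concrete, here is a back-of-the-envelope sketch, with made-up round numbers, of how much executive pay would weigh on an ordinary share versus on one of these hypothetical C-shares:

    # Illustrative comparison of an ordinary share vs. a hypothetical 'C-share'
    # that absorbs all executive-compensation expense. Figures are made up.
    total_earnings = 10_000_000_000   # company earnings before executive pay
    executive_pay = 100_000_000       # total senior-executive compensation
    c_share_fraction = 0.10           # C-shares are 10% of all shares

    # Ordinary structure: the pay is spread across every share.
    ordinary_drag = executive_pay / total_earnings

    # C-share structure: the same pay comes entirely out of the 10% slice.
    c_share_drag = executive_pay / (total_earnings * c_share_fraction)

    print(f"Drag on an ordinary share's earnings: {ordinary_drag:.0%}")  # about 1%
    print(f"Drag on a C-share's earnings: {c_share_drag:.0%}")           # about 10%
    # Concentrating the expense gives C-shareholders a real stake in policing pay,
    # while company profits still dominate the value of their shares.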

***

I think the idea above is fun and worth thinking about and developing. And shareholders have every right, and are well-advised, to do what they can to make managers act in their interests. But I’ll close this post by noting that it’s not clear to me that executive compensation is actually a pressing moral public-policy concern. On an emotional level, of course, I resent CEOs who make thousands of times more money than I do. But morally and rationally, my goal shouldn’t be to restrain the rich per se; it should be to help ordinary people, particularly the poorest. And it’s not clear that CEO pay can realistically be called a major cause of poverty and want in the U.S., nor would cutting back CEO pay be an actual solution to the problems of the poor. By my calculations, based on the data here, the top-100 highest-paid CEOs of public companies in 2012 took home a combined $2.2 billion. Let’s assume that the universe of domestic exorbitant CEO pay that we would like to roll back is equal to 100 times this sum, or $220 billion. The U.S. population is 315 million. So even if we were to cut out all of this exorbitant pay and give it directly to the ‘bottom half’ of people in the U.S. without destroying any value (which is all, of course, unrealistic), this would amount to about $1,400 per person ($220,000,000,000/157,000,000). That’s not nothing, but it’s not very much either. We could generate the same kind of relief for ordinary people with a lot of other smart policies that would actually have a chance of being implemented: for example, destroying licensing cartels that increase the costs of taxi rides home and all kinds of other ordinary goods; reforming health care to make everyone more cost-sensitive and to reduce doctors’ liability expenses; imposing higher equity-capital requirements on banks that could prevent future financial crises and recessions; allowing denser development at urban cores to decrease the cost of housing, etc.
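
Here is the arithmetic, for anyone who wants to check it:

    # The redistribution arithmetic from the paragraph above.
    exorbitant_ceo_pay = 220_000_000_000    # assumed universe of 'exorbitant' pay
    bottom_half_population = 157_000_000    # roughly half of 315 million

    print(f"About ${exorbitant_ceo_pay / bottom_half_population:,.0f} per person")  # ~$1,401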

Are the social costs of high CEO pay enough to justify my baroque corporate-governance idea, which would be a pain to implement, particularly given that there is so much ‘low-hanging fruit’ in the policy domains I’ve just mentioned? I’m not so sure.

Intellectual Property

One major policy domain of which I have a limited understanding is intellectual property law and policy. So I wanted to write a post to talk through my understanding of intellectual property and invite you, readers, to correct and improve me in the comments. In the first part of the post, I’ll try to lay out broad ideas about intellectual property; in the second, I’ll try to apply those ideas to some current controversies.

***

Property: In the modern world, at least since the fall of the Berlin Wall, most of us believe that private property is important and deserving of protection. There’s both a deontological, ethical argument and a consequentialist, economic argument for why. The deontological argument hearkens back to John Locke, who famously argued in his Second Treatise that people initially assumed property rights to the land by ‘mixing their labor’ with it; the original landlords, he imagined, were those who put their selves into their land by founding, taming, and tilling it, making that land a kind of extension of their selves; the right to private property was therefore an extension of self-ownership. Modern intellectual heirs to Locke (including those who identify as Nozickians) would argue by extension that when I freely contract with others, offering them my talents and hard work in return for cash compensation which I use to purchase assets or commodities, then I have earned and deserved those assets, and so taking those things from me would be a violation of my self and a denial of my desert. The economic argument for property rights is that people will not build and invest in valuable assets unless they feel assured that they will continue to control those assets and, hence, be able to use them to their profit. Countries that don’t credibly guarantee to protect private property discourage investment and scare their own citizens into investing all their assets abroad, hurting their growth and prosperity. (See Argentina.)

Property, Intellectual: Intellectual property — typically defined as property that is a work or creation ‘of the mind’ or the ‘result of creativity’ — is similar to, but different from, regular physical property. It’s arguably similar in that (1) the things I create with my mind are a kind of extension of my self, and so it would be a violation for someone to claim my work as their own or appropriate my work for profit without my consent, and (2) people and companies will not invest in new ideas, research, creations, and brands unless they can be assured that they will gain compensating benefits for those investments. Since in a competitive market you cannot gain any profit from a thing that everybody else has access to, recognizing exclusive intellectual property rights is thought to be an ideal way to incentivize research and innovation. But intellectual property is different, crucially, from physical property in that it is abstract and hence non-rivalrous — that is, someone can copy my algorithm or song or blog post without taking it from me. If someone takes my physical asset, like my land, I can no longer enjoy and use it; if they copy my song or algorithm or blog post, I still can.

What principles should we use in granting intellectual property rights? I would argue that it’s better to think about property rights primarily through the consequentialist, economic lens, rather than the deontological, natural-rights-based lens. Philosophically, it’s hard to parse the boundaries of individual desert: I largely ‘owe’ my ability to produce creative work to the parents who fed me and read to me as a child, to the public institutions that educated me, and to the political system and culture that were the basis for it all. More practically, even the most Lockean stalwarts would caveat their understanding of property-as-natural right when the consequences are great enough: Suppose a brilliant scientist had discovered and patented a cure for all cancers, but refused to sell or license the patent out of a Kaczynski-esque hatred of technological modernity; in the face of the potential to save millions of lives, would we really have an obligation to ‘respect’ this scientist’s ‘natural right’ to his discovery? For another thought experiment: If our ideas are extensions of ourselves, and thus our inviolable natural rights, then shouldn’t, e.g., a policy wonk or mayor who comes up with an innovative policy solution for managing mass-transit or Medicaid logistics be able to patent that method and prevent other municipalities from adopting it? If the creative inventions of our minds are our natural rights, why would we grant patents for, e.g., efficient computer algorithms for managing data, but not for efficient ‘social algorithms’ like those imagined policies? Finally, if intellectual property is a natural right, then how would we justify ever letting patents expire? In short, I think the property-as-natural-right argument doesn’t withstand philosophical scrutiny; we should instead think of intellectual property rights as constructed social tools, artificial legal rights that we as a society assign in the interest of promoting our shared prosperity and felicity.

The basic economics of intellectual property: If we accept the argument above, then we should think about intellectual property rights as economists do, as a tool to maximize social utility. To that end, we want to both (1) give people incentives to produce innovative and creative works in the first place and (2) maximize ordinary consumers’ ability to access and enjoy those goods. These two goals are obviously in tension: If you increase patent protections from 10 years to 15 years, then you give firms even stronger incentives to invest aggressively in research and development (because they’ll be able to command monopoly prices on the innovation for longer), but you’ll also increase by 5 years the length of time that consumers have to wait to enjoy the good at cheap, competitive prices. If you decrease patent protections from 10 years to 5 years, you’ll halve the time consumers have to wait for competitive prices on goods, but you might decrease risky and innovative R&D, as companies fear that they’ll find it hard to make a killing in that time. The economic debate centers on finding the social optimum, given this tradeoff.
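
To see the tradeoff in miniature, here is a toy model of my own devising (not drawn from the literature): each candidate invention has an R&D cost, a firm pursues it only if monopoly profits over the patent term would cover that cost, and every funded invention then stays at monopoly prices for the full term:

    # Toy model of the patent-length tradeoff. Each candidate invention has an
    # R&D cost; a firm pursues it only if monopoly profits over the patent term
    # cover that cost. Longer terms fund more inventions but delay cheap access
    # to every one of them. All numbers are arbitrary illustrative units.
    ANNUAL_MONOPOLY_PROFIT = 10
    RND_COSTS = [20, 50, 90, 140, 200]   # candidate inventions, cheap to expensive

    def inventions_funded(patent_years):
        expected_profit = ANNUAL_MONOPOLY_PROFIT * patent_years
        return sum(1 for cost in RND_COSTS if expected_profit >= cost)

    for term in (5, 10, 15, 20):
        funded = inventions_funded(term)
        print(f"{term:>2}-year patents: {funded} of {len(RND_COSTS)} inventions funded, "
              f"each at monopoly prices for {term} years")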

As I’ve read up on intellectual property I’ve found that there’s basically a consensus among intellectuals and experts I respect about three very broad things: (1) The basic economic theory — that we generally need some IP protections to give incentives to creators, and the debatable question is how to optimize the tradeoff between giving creators these incentives and giving consumers earlier and more access — is sound; (2) But in its actual legal implementation, there are a lot of problems and abuses in our current IP-law system — our IP system is subject to abuse by extortionate ‘patent trolls,’ we grant patents for small, incremental changes to technologies that may not constitute truly creative breakthroughs, etc.; and (3) Our intellectual property legal protections are probably too strong overall. Item number three here is actually what one would predict given the theory of public choice. Intellectual property law lobbying is a classic example of “distributed costs and concentrated benefits.” Individual firms and patent owners get very big, very obvious benefits from legislative extensions of the protections on their intellectual properties and they lobby accordingly; we individual consumers get hurt by these in delayed access, higher product costs and health-insurance premiums, etc., but because these costs are diffuse and sometimes invisible, we don’t put appropriate pressure on our legislators to stop them. Thus our democracy produces laws whose aggregate costs outweigh their benefits.

There’s also a compelling heterodox viewpoint that our IP laws are radically too strong, and that we should sharply weaken all of our IP protections and completely eliminate many of them. But in the rest of this post, I want to first work through a potpourri of IP issues, using the consensus ideas, and then finish up by touching on the heterodox idea a little more.

***

Types of IP: First, let’s distinguish among types of intellectual property. The least controversial type is trademarks. Protecting trademarks simply amounts to preventing vendors from lying about who they are to consumers; this makes it easier for us to find what we want from sources we trust. Very few people are against the indefinite extension and protection of trademarks. Industrial design rights basically amount to the same thing. Copyrights protect creative, artistic works, allowing their authors to control their use, replication, and distribution. Individual copyrights extend to 70 years after their authors’ deaths; corporate copyrights last up to 120 years after creation. It seems very hard to justify these extremely long copyright terms. Personally, I find it hard to motivate myself with the prospect of the financial returns my blog posts will generate one year after my death, much less 70 years thereafter. Patents protect inventions and discoveries and allow their holders to control their use and sale for 20 years after the initial filing date (in the U.S.). While 20 years of patent protection doesn’t seem outrageous when compared to the length of copyright protection, it certainly seems like a long time for those hoping to access life-saving drugs at a competitive cost, or an energy company hoping to use some patented chemical in prototyping a new super-efficient battery, or non-Amazon e-commerce sites hoping to implement one-click shopping. In other words, it’s still an urgent question whether we are offering too-strong patent protections.

Industry variance: In thinking about patents, we need to distinguish among industries. There’s no reason in principle to think that the ideal patent regime would be the same across all lines of business. Judge Richard A. Posner has argued that the pharmaceutical industry is a classic example of one that does require patent protection, due to its high, up-front R&D costs and uncertain payoffs, but that most other industries “would do fine” without patents. The I.T. industry in particular seems to be characterized by lots of lawsuits over patents held on ergonomic quick-fixes that seem more like part of the companies’ marketing than their R&D. Do we really think that Apple would not have developed the swipe function on the iPhone without the promise of patent protection? Or Amazon and its one-click-shopping option, for that matter? According to this table, firms in all industries outside of pharmaceuticals and chemicals report that the overwhelming majority of their patented inventions would have been developed and implemented even absent patent protection.

Patent trolls: These companies, which allegedly buy up dubious and less-than-innovative patents in order to shake down unsuspecting businesses with legal threats (usually issued formally by shell companies), are back in the news. In recent years, companies that use certain scanners or produce podcasts, for example, have received demands for cash from companies holding patents on ideas that only vaguely prefigured podcasts and contemporary scanners. People in the tech industry overwhelmingly say that they feel that innovation is being stalled by tech firms’ constant legal anxiety that they’ll be found in violation of some esoteric, vague patent. Part of the problem is that the overstretched and understaffed U.S. patent office has granted a lot of vague patents that it probably should not have. The President is currently proposing new rules that would require that patent-holders be disclosed in patent arbitration cases; this would, at least, expose and hopefully shame the most blatant patent trolls. A more general idea for mitigating patent trolling is that we should be able to patent only implementations, not purely abstract ideas.

Music today (back to copyrights): The music industry today is an instructive case. As we all know, it’s very easy to download and torrent music for free online, and so young people generally do not pay for the copyrighted music they listen to. And yet it’s commonly observed that musicians are doing better today than ever before. How’s that? The radically expanded access to music that we consumers are all now enjoying, and the ease with which we can share music with and recommend it to our friends, have whetted our collective interest in musicians, and we now pay to see more live shows than ever before. What musicians have lost in CD sales they’ve largely made up in ticket revenues. I suspect that in the future, the authorities will largely grow to accept a world characterized by (1) not-for-profit illegal downloading of media; (2) for-profit ventures like Spotify that stream music for users for small fees or advertisements and pay relatively small per-play fees to creators; and (3) pop music and movie producers that know they have to be extra spectacular to draw people into concert venues and theaters, and indie bands that learn how to cultivate voluntarily supportive cult followings. More generally, the fact that musicians have flourished despite the effective erosion of their copyrights, thanks to second-order effects of music’s increasing availability, strengthens the case for reconsidering intellectual property rights in other domains as well.

Financing medical innovation through public prizes: Pharmaceutical patents are possibly the most controversial domain of patents, first for the obvious reason that denying people necessary medicine, or charging prohibitively high prices for it, horrifies us, and second because many people see pharmaceutical companies as patenting a lot of not-so-innovative incremental changes to extant drugs, and then pushing these patented drugs, which they can sell at monopoly prices, on insurers, doctors, and consumers, driving up costs for all of us, without providing true innovation or benefits. (Now, notably, I think it’s silly to blame patent rights per se or corporate greed here — the root problem (if I may pause to grind my ax) is the total lack of individual incentives in our insane health care system. If the medical market were more cost-sensitive at every level, and we consumers were rewarded for our choices to reduce our costs, then we would simply choose to use less expensive, unpatented drugs, unless the more expensive, patented ones offered compensating benefits.) But given that my preferred healthcare policies are not likely to be implemented, how else could we mitigate this problem of wasteful pharmaceutical investment and innovation? One clever idea would be to stop granting new pharmaceutical patents and instead begin offering public prizes. I.e., the government would offer $5 billion to whichever company could first produce a drug that met some well-defined criteria in improving our treatment of AIDS/Alzheimer’s in XYZ ways. Theoretically, this would stop pharmaceutical companies from overinvesting in small, incremental pharmaceutical innovations and encourage them to focus on our most pressing health needs, as defined by smart public authorities. Once the government had awarded the prize to the victorious pharmaceutical company, any company in the world would have the right to manufacture and sell the drug, and so its price would quickly be driven down to its marginal cost of production, immediately widening its availability. This is a clever idea, but it certainly has some problems: As a public-sector entity, such a prize-granting agency would face political pressures to focus on politically popular cures, and underinvest in less salient ones; with no ‘bottom line,’ its revolving-door bureaucrats might overpay pharmaceutical companies generally, just as, today, government contractors are seen as overpaid; there would be huge liability issues and public outrage when prize-winning drugs turned out to have mild side effects or tradeoffs, or to cause 3 people 5 sleepless nights, which might also drive such a public entity to be way too conservative in awarding prizes and publicly offering drugs. I’m not sure whether our current system or this proposal is more imperfect.

Intellectual property abroad: Enforcing U.S. patents and trademarks abroad, particularly in China, India, and Africa, is a legally and morally tricky issue. Legally, sovereign nations are sovereign within their boundaries, and so U.S. patents qua U.S. patents simply don’t apply outside of the U.S. — we can only persuade and, sometimes, pressure these governments through trade retaliation to adopt and enforce their own laws protecting U.S.-based IP. Morally, there’s an argument to be made that developed-world creators’ primary markets will always be in rich, developed nations. If developed-world consumers produce strong enough incentives for innovation, then there’s a strong humanitarian argument for letting it slide when poor countries violate IP laws and use our innovations very cheaply to save lives and develop their lagging domestic economies. At the same time, we can also understand the distress American consumers feel when they find that drugs that were developed in U.S. labs and are prohibitively expensive in the U.S. are cheap and over the counter in India. And since China, India, and Africa contain most of the people in the world, as they develop economically, they’ll become increasingly important factors in firms’ incentives to innovate, and so at some point it’ll be key for them to get more serious about intellectual property.

***

The radical argument: Some smart, legit folks argue that we should go a lot further than I’ve advocated here, radically weakening all intellectual property protection and completely abolishing much of it. Proponents of this viewpoint point to the fashion industry: There, new clothing designs enjoy essentially no IP protection (clothing, due to its utilitarian function, does not count as art that can be copyrighted), and yet we still see plenty of innovation. Perhaps we would see the same in other industries if we abolished their IP protections: firms would arguably continue to invest in R&D and innovation, even absent patent protections, seeking the financial rewards of ‘brand value’ in being the first and the best to implement their innovation. In addition, absent protections, there would be more technologies and ideas in the public domain, which would give us all more resources to draw upon in producing new innovations. A software engineer would have a somewhat smaller financial incentive to make any new software innovation, but she would have a lot more ideas and others’ work to draw on in producing new ideas. So it’s plausible that abolishing patents here could produce more innovation in the aggregate.

On the whole, then, these heterodox thinkers argue, we’d be better off with much, much weaker IP protection. I’m not sure this argument is right, particularly for firms in industries with very large up-front R&D costs, like pharmaceuticals and alternative energy. I also worry that once firms couldn’t enjoy monopoly rights to their inventions, they would respond by aggressively and permanently guarding the secrecy of their innovations, which wouldn’t be great for human progress in the long run. But I think the examples of the still-innovative fashion industry and the surprisingly still-successful music industry should both push us to think very boldly about more narrowly circumscribing intellectual-property rights.

Some thoughts on higher education

After healthcare, the biggest growing expense dragging on every middle-class American’s well-being right now is probably the cost of higher education. Full tuition and expenses at top colleges in the U.S. are famously surpassing $60,000 a year. Newly minted college graduates are taking five to six figures of debt into an economy with extremely high youth unemployment, in which a college degree is no longer a guarantee of a stable middle-class existence. New J.D.s are famously graduating from law school with six-figure debt loads and declining job prospects. American medicine is facing a shortage of general practitioners, at least partly because a lot of young M.D.s can’t bear the expense and work of medical school and internships if they’ll be condemned to a life making only (!) $300,000 a year, as non-specialists. These trends have a lot of serious, second-order, distributed costs that we don’t always think about: indebted young people are more risk-averse, less confident, and more prone to depression and anxiety; the ever-growing costs of labor in services that employ credentialed professionals get passed on to all of us when we use their services; less savings ends up invested in other useful places in the economy; a kid whose parents grew up working class, but who are now middle-class and ineligible for financial aid, might choose to attend a state school instead of a prestigious Ivy League university, meaning that high college costs drag on social mobility even given generous financial-aid packages. But one cost sticks out to me: I think parents should be allowed to have a little fun and live large once their kids have graduated from high school. And many parents who fund their children’s educations are spending all of their savings — money which they could have put to a lot of other fun and worthy uses.

So it’s a big deal. What’s driving these rising costs? Economists who research this talk about a bunch of different things. First, since the 1970s, the “skills premium” in American wages has increased — that is, the differential between college graduates’ and high-school graduates’ wages has grown. This, in turn, is explained by the fact that the U.S. has continued transitioning from a manufacturing-based economy to a services-based economy driven by information and knowledge. So as the financial returns to college education have increased, the purchase price that colleges can demand has naturally increased as well (particularly given that available spots at elite colleges have not kept pace with population growth). But colleges are non-profit — so where has all this extra money gone? One major rising cost is faculty salaries, and this has to do with a nifty economic concept called Baumol’s cost disease — since technology and globalization have increased the productivity of highly educated professionals in other fields, such as law and finance, academe has had to raise its faculty salaries in order to compete with those industries for the highly educated, even though faculty productivity has not increased. Then, there are a lot of other assorted sources of growing costs: increases in administrative and non-faculty university staff (including yours truly!); all the indoor rock-climbing gyms and exorbitant athletic facilities and other frivolities designed to lure high-school seniors who do not know what money is.
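
A stylized two-sector sketch of the Baumol mechanism just described, with made-up numbers: productivity in the ‘market’ sectors grows, professional wages rise to match, and so the cost of a professor-hour of teaching rises even though teaching productivity stays flat.

    # Stylized two-sector illustration of Baumol's cost disease; numbers are made up.
    YEARS = 30
    MARKET_PRODUCTIVITY_GROWTH = 0.02   # annual productivity growth in law/finance

    wage_index = 1.0                    # professional wage, indexed to 1.0 today
    for _ in range(YEARS):
        wage_index *= 1 + MARKET_PRODUCTIVITY_GROWTH   # wages track the growing sector

    # A professor teaches the same load as before, so the cost of that teaching
    # rises one-for-one with the wage index.
    print(f"After {YEARS} years, the same teaching costs about {wage_index - 1:.0%} more")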

These high costs and frivolities may be tolerable in a time of affluence. But since the recent recession, people have become increasingly upset. A spate of books has been written questioning whether college is still worth it. (For the record, in terms of financial returns, strictly, there’s no question that college is still ‘worth it,’ in that the college wage premium easily repays the cost of college, though there is a legitimate debate about the source of this advantage, i.e., whether it comes from real ‘value-add’ to graduates or mere signaling.) In the startup community, it’s increasingly fashionable to advocate “hacking” your education, outside of prestigious brick-and-mortar universities.

Normatively, it’s very important to look for policies and innovations that can decrease the cost of providing higher education. Descriptively, colleges may face a much less compliant clientele unless they lower their prices (already, law-school applications have fallen off sharply). How could this happen? As in any other industry, decreases in costs will have to come from competition and technological advance. The major technological change that could impact higher education is the internet in general, and especially Massive Open Online Courses (MOOCs), such as those being offered by edX and Coursera. The best argument for universities experimenting with MOOCs is that college costs are currently so unacceptably high that we should be open to almost any experiments that could help control them. But in the rest of this post, I want to consider MOOCs and argue that they’re not just a valid experiment, but likely to be a part of the right answer as well.

***

What is the value of higher education? It might be most helpful to partition higher education into two parts. Part 1 is higher education’s instrumental value — i.e., it’s practical, it’s skill acquisition, it’s relevant to jobs, it’s giving people abilities that will match them up with what the market is demanding. Part 2 is about education’s intrinsic value — i.e., finding yourself, inhabiting unusual and novel perspectives on life, learning to better understand and empathize with others, asking questions that are just worth asking for their own sake, etc. We probably get more of Part 1 in STEM classes and lab work. We probably get more of Part 2 in English and philosophy classes and in the conversations we have with our fellow bright young collegians. Now, this taxonomy is imperfect. It’s likely that things like “communication skills” and “teamwork” and “leadership” — all skills that employers look for — are things that we develop in late-night conversations and philosophy papers and extracurriculars. It’s also the case that computer science, cognitive science, and physics are all intrinsically meaningful and beautiful as well, and can even expand our curiosity and empathy. But this imperfect schema might help our thinking a bit as we move forward.

In particular, I think there’s little controversy that MOOCs could be extremely useful in at least contributing to the provision of Part 1 of higher education. Indeed, MOOCs might be able to take over the majority of the work for many classes in this category. This past year, I took an introductory computer science course in my free time and never once attended the lecture in person. I sometimes watched a live feed of the lectures from my office; usually I watched them after the fact. Either way, it wasn’t clear to me why the professor was still lecturing in person — few people attended class in person anyway, and he’s been giving the same intro course for many years now. Tellingly, when a Monday class was cancelled due to the Boston Marathon bombing, the professor simply had us watch his lecture from the previous year. In this course, I also didn’t get much individual attention from my overworked, grad-student Teaching Assistants. I benefited more from online fora where I could exchange questions and tips with other students. And my problem sets probably could have been graded by a computer instead of these TAs — professors can easily write programs that, in a few seconds, throw thousands of different potential inputs into a program to make sure that it outputs the correct answers.
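
For what it’s worth, here is a minimal sketch of that kind of auto-grader; the function names are hypothetical stand-ins, and I’ve used a sorting exercise as the example assignment:

    # Minimal sketch of an auto-grader: throw many random inputs at the student's
    # function and compare its output against a reference implementation.
    # Function names are hypothetical stand-ins.
    import random

    def reference_sort(xs):
        return sorted(xs)

    def student_sort(xs):
        return sorted(xs)   # stand-in for the student's submission

    def autograde(student_fn, reference_fn, trials=10_000):
        for _ in range(trials):
            xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
            if student_fn(list(xs)) != reference_fn(list(xs)):
                return False, xs        # hand back a failing case as feedback
        return True, None

    passed, counterexample = autograde(student_sort, reference_sort)
    print("PASS" if passed else f"FAIL on input {counterexample}")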

So I think it’s a no-brainer that universities should broadcast and offer credit for MOOC-based intro CS courses and other similar introductory STEM courses. For intro chem and bio classes, universities would likely employ a mixed model, where students would watch lectures online but attend lab in person. This would free professors up from intro teaching duties that they generally don’t enjoy. And by allowing students to choose from a variety of MOOC courses to use toward their college credit, universities could match students with professors whose teaching styles fit them best, whatever university those professors happen to be at. This choice could also (once professors receive compensation for the MOOC use of their courses) bring competitive pressures to bear on professors’ teaching efforts.

For more advanced classes across the STEM category, I imagine mixed models would prevail. For a higher-level course on, e.g., the theory of efficient algorithms, a professor might want students to watch some lectures, write some programs, and master some content via MOOC-style recordings and automatically graded problem sets, but then have them attend seminar discussions in person for the much trickier theoretical material. Or a university might offer calculus, linear algebra, differential equations, real analysis, and mathematical logic as MOOCs, while expecting math majors to attend the later, pure-math theory courses in the curriculum in person.

But the coolest thing about MOOCs is how they might provide more freedom and flexibility to people in seeking necessary job credentials. If you were, say, a successful engineer in Pakistan, but you moved your family to the U.S. (out of fear of religious persecution, or a desire to provide a brighter future for your grandkids), your lack of U.S.-based academic credentials might prevent you from landing a job in the U.S. that could fully employ your talents. Or if you were a 25-year-old mother stuck in a mid-level job, you might feel that your kids’ existence will preclude you from going back to school for a law degree or a CS master’s degree. If these people could attend classes online at night, get credit for their actual knowledge, however they attained it, and then get matched to jobs that are appropriate for their talent levels, that would make our economy a lot fairer and more efficient.

***

Can MOOCs also change the provision of Part 2 of higher education? I have a couple of thoughts about this. First, “asking deep, meaningful, philosophical questions for their own sake” sounds really nice — and it’s something I sincerely believe in, in the abstract — but it’s probably not of much interest to the vast majority of people and is probably, in fact, a luxury that disproportionately appeals to that class of people who write about ideas and run universities for a living. The idea that a good education should not concern itself with utility is a luxury of those who will never need to really worry about unemployment.

Second, it’s hard to say how MOOCs will contribute to Part 2 of higher education, because it’s really hard to define what exactly Part 2 is, and how we measure it in the first place. We say that the liberal arts should make us more empathetic people and open our minds. So what do we make of the value of the liberal arts education of a recent cultural studies BA who gives no money to charity, spends no time interacting with people who lack his cultural markers and affiliations, and is completely intellectually incurious about non-Marxist veins of economic thought and aggressive towards those who are? Did this person fail at his liberal-arts education in the same way that, say, a computer science major who couldn’t build an app did?

Third, every other Yale College graduate I talk to says the same thing: that the most meaningful aspect of their time at Yale was their constant conversations with each other — i.e., the philosophy and political theory that happened outside of the classroom. Right now, people tend to have these excellent transformative experiences at college. But in principle, it’s not clear why that has to be the case — and it’s also not clear how much of Yale’s $200,000 in tuition expenses is necessary to facilitate those experiences.

So how will MOOCs transform Part 2 of education? The conventional wisdom is that they'll only slightly change it, as part of a blended model — i.e., that students may watch recorded lectures from great teachers while still attending seminars in person and having their essays graded by people. But I think it would be interesting to see how far we could push digital technologies for Part 2. Gregory Nagy, a professor of Greek at Harvard, has made a compelling case that automated multiple-choice grading in humanities courses can be useful, when well designed:

A little later, Nagy read me some questions that the team had devised for CB22x’s first multiple-choice test: “ ‘What is the will of Zeus?’ It says, ‘a) To send the souls of heroes to Hades’ ”—Nagy rippled into laughter—“ ‘b) To cause the Iliad,’ and ‘c) To cause the Trojan War.’ I love this. The best answer is ‘b) To cause the Iliad’—Zeus’ will encompasses the whole of the poem through to its end, or telos.”

He went on, “And then—this is where people really read into the text!—‘Why will Achilles sit the war out in his shelter?’ Because ‘a) He has hurt feelings,’ ‘b) He is angry at Agamemnon,’ and ‘c) A goddess advised him to do so.’ No one will get this.”

The answer is c). In Nagy's "brick-and-mortar" class, students write essays. But multiple-choice questions are almost as good as essays, Nagy said, because they spot-check participants' deeper comprehension of the text. The online testing mechanism explains the right response when students miss an answer. And it lets them see the reasoning behind the correct choice when they're right. "Even in a multiple-choice or a yes-and-no situation, you can actually induce learners to read out of the text, not into the text," Nagy explained.

But there's another possibility. Everything I've discussed so far has centered on complementing or replacing some features of current universities, within the structure of universities as they exist today. But the most truly "disruptive" proposal for online education is currently coming from the Minerva Project. The Minerva Project intends to have a highly selective admissions process (it aims to get 'Ivy-League-quality students') and then house those students in different dormitories, on a rotating basis, over the four years of their education. Meanwhile, they'll watch recorded lectures from top scholars online (meaning the top scholars need to be involved in producing the course material only once), while they'll interact with, and be graded by, newly minted PhDs who are currently out of jobs. Minerva claims that by cutting out the expenses of university infrastructure, athletic fields, etc., it will be able to charge half the tuition of most top-tier universities today. By housing elite students together, it will maintain the benefits of late-night dorm-room conversations. And by moving them around the world, from Paris to Sao Paulo, every few months, it will make them more cosmopolitan citizens of the world.

Will it work? It’s not clear. But we need to try.

Inflation basics [Econ for poets]

I recently had a conversation with a smart acquaintance about monetary policy, and we discussed the Bank of Japan's new governor's promises to push for higher inflation in the country. I tried to argue that we had good reasons to believe such an inflationary policy could boost the real economy, while my friend argued against me. But eventually, I realized that my friend and I were doing a bad job articulating what, exactly, drives inflation, and this was a drag on our conversation. I suspect there are a lot of us who know how to use all the words we see associated with inflation in magazines ("money supply," "loose monetary policy," "inflation expectations," etc.), who may even remember a mathematical formula from Intro Macro (MV = PQ), but who, when we dig a little deeper, have to admit we don't have a clear grasp on what's going on. So I thought I could do the blog world a favor by writing a very back-to-basics post (in English words) on what inflation is exactly and how it happens.

***

What is inflation? It is a rise in the prices of goods and services. What causes inflation? Most people would say that  inflation is driven by an increase in the amount of currency or money in the economy — the “money supply.” The intuition here is that if an economy produces the exact same amount of goods in year 1 as in year 2, but there is twice as much money in circulation in year 2, then prices will have to double in order to sort of “soak up” the extra money. I think that’s the implicit metaphor most of us have for how it works: The monetary price of real goods is determined by the amount of money in circulation relative to the amount of real goods; and inflation (and deflation) is driven by increases (and decreases) in the money supply. Now, the interesting thing about this is that it is mostly true in practice but not entirely true in theory. To get a much better grasp  on this, we need to go back to very basic theory, to make sure we’re clear on things, and then we need to clarify exactly what we mean by the “money supply.”

Who sets prices? Theory: In a market economy, everybody sets prices. That is, the price of anything in a market economy is the price at which sellers choose to sell their goods, provided that they can find buyers. So any full explanation of inflation has to answer the question: Why, exactly, did sellers choose to raise their prices and why did buyers go along with it? So let’s start with an incredibly simple model: Adam and Barbara are stranded on a desert island and they have their own economy. Adam grows peaches on his peach tree; every day, he harvests a bushel, eats a peach for himself, and sells the rest to Barbara; Barbara then eats a peach, turns the rest into peach juice, drinks some of it, and sells the rest back to Adam; Adam drinks some of the peach juice and uses the rest to water/fertilize the soil of his peach tree. One day, a $10 bill falls from the sky. Adam and Barbara decide to use this for their transactions: First, Barbara gives Adam the $10 bill in exchange for his peaches; then Adam gives Barbara the $10 back for her peach juice.

Now, suppose that two more $10 bills fall from the sky, one into Adam's hand and another into Barbara's. What will happen? Will prices triple? Well, that's up to Adam and Barbara. They might just decide to save their new $10 bills and continue trading one day's worth of peaches and one day's worth of juice for $10, every single day — the only thing that would have changed from before would be their "savings." But it also is possible that prices could increase. Maybe one day Adam gets greedy for dollar bills and decides to demand $20 from Barbara for his peaches — he knows she has the money, and since he's her only supplier, she has to consent. At that point, since Barbara now expects she'll have to pay $20 for future supplies of peaches, she'll start charging $20 for a day's worth of peach juice in order to maintain her living standard. So suddenly prices double, just like that. And it's also possible — this is the really interesting part — that prices could more than triple. Perhaps Adam gets really greedy and starts to charge $40 for his peaches — more than all the currency in the economy — and Barbara responds by charging $40 for her peach juice as well. One way this could work is that Barbara first buys half a day's supply of peaches for $20, makes half a day's supply of peach juice and sells it for $20, and then uses that $20 to buy the next half-day's supply, and so on. Another way they could do this would be to use the magic of credit: Adam or Barbara hands over $20 for the full amount of peaches or peach juice along with a promise to pay another $20 that night. At the end of the day, after their two transactions, each is $20 in debt to the other, but each also earned $20 in cash from that day's sale, so they simply swap $20 to settle up.

Now, notably, this simple model is not a good one, because it leaves out (1) the reason money is useful and influences our behavior in the first place, namely that it is completely fungible and usable across a broad array of transactions that would otherwise be complicated by barter, and (2) competition, which is the major thing that stabilizes prices in the first place. But the point of this model has been to get us beyond our implicit metaphor that prices have to "soak up" the supply of money. Adam and Barbara — the market — are in charge of the prices they set, and they do so according to their own purposes. They could randomly double or halve their prices at their whims. And what's true for Adam and Barbara is also theoretically true for all of us. If every single person in the world were to wake up in the morning and decide to double the prices they pay and charge for absolutely everything (including doubling, e.g., the amount of credit they demand from and extend to others), then this could work without a hitch — every numerical representation of the value of every good would change, and nothing else would.

The above is just a verbal expression of the familiar "Equation of Exchange" that we see in Econ 101, MV = PQ. In this equation, P represents the price level and Q represents the total quantity of real goods sold — multiplied together, PQ thus simply represents the nominal value of all real goods sold in a given time period. So in the second iteration of our fictional desert-island economy above (where Adam and Barbara were each charging $20), PQ = $40 per day. What about the other side of the equation? M represents the supply of money (a total of $30 in that part of the thought experiment). And V stands for the velocity of money, or the number of times any given unit of that money changes hands in a transaction, per time period; in our thought experiment, since $40 worth of goods changed hands a day, and the amount of money was only $30, the velocity of money was 1.333 transactions per day (($40 of transactions/day) / $30). If you think carefully about this, you can see that MV = PQ is an axiomatic mathematical identity: the total monetary value of all transactions taking place in a given period of time must necessarily equal the amount of money there is, times the number of times the average unit of money changes hands in a transaction. If prices suddenly double, while everything else stays the same, it must necessarily be the case that money is changing hands twice as fast, doubling V.

So let's now think about some of the things that happened in our thought experiment, in terms of this identity, PQ = MV. At first, there was $10 in the economy, and $20 worth of purchases, because the $10 bill changed hands twice a day. So PQ = $20 and MV = 2 * $10. It balances! Then $20 fell from the sky. In one scenario, Adam and Barbara didn't change their prices, so PQ was still equal to $20. Since M was now equal to $30, V must have fallen to 2/3. In other words, since they were still just doing the same transactions, at the same dollar value, even though there were two new $10 bills hanging around, the 'velocity' of any given $10 bill was now a third of what it had previously been — only two $10 bills changed hands per day, even though there were three of them in the economy. In the scenario after that, both Adam and Barbara raised prices to $40, meaning that PQ was now equal to $80. Because M was equal to $30, V was necessarily 8/3 transactions per day — that is, the average $10 bill changed hands more than twice, because Adam and Barbara transacted four times per day.
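For readers who like to see the arithmetic spelled out, here is a quick illustrative sketch in Python (my choice of language, purely for illustration). It just recomputes the velocity implied by each scenario above, with V taken to be whatever makes the identity balance:

```python
# Quick check of the exchange identity MV = PQ, using the desert-island numbers above.
# V is not observed directly; it is whatever makes the identity balance: V = PQ / M.

def implied_velocity(nominal_spending_per_day, money_supply):
    """Velocity implied by the identity MV = PQ."""
    return nominal_spending_per_day / money_supply

scenarios = [
    ("one $10 bill, $10 prices",          20.0, 10.0),  # the bill changes hands twice a day
    ("three $10 bills, unchanged prices", 20.0, 30.0),
    ("three $10 bills, $20 prices",       40.0, 30.0),
    ("three $10 bills, $40 prices",       80.0, 30.0),  # four $20 hand-offs per day
]

for name, pq, m in scenarios:
    v = implied_velocity(pq, m)
    print(f"{name}: M = ${m:.0f}, PQ = ${pq:.0f}/day, implied V = {v:.2f} turnovers/day")
```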

So going forward, let’s keep in mind this main theoretical takeaway: The only fundamental constraint on prices is the mathematical identity that PQ = MV. So, if the money supply, M, say doubles, that could cause prices to double, but it’s also possible that the extra money could get “soaked up” by a lower velocity of money, i.e., people choosing, for whatever reason, to hold on to any given dollar in their hands for longer before spending it (and it’s also possible that we could see a little bit of each, or that velocity could surprisingly increase, leading to more than double inflation, etc., etc., etc.)

What influences prices? Practice: In theory, the only certainty about the price level is the identity that MV = PQ — the velocity of money could double one day, and halve the next, making prices double and halve in turn. But in practice, things are much different. First, we don’t, in practice, all just wake up in the morning and all collectively decide to double or halve the velocity of money. If I own a shop and I double my prices one day, my competitors probably won’t, and so all my customers will leave me and buy from them. If I suddenly halve my prices, I’ll run out of goods real quick and won’t make a profit. So, because most firms (hopefully!) face real and prospective competitors and don’t like selling things at a loss, the velocity of money, V, doesn’t just randomly, wildly oscillate on its own. This means that if both the quantity of real goods an economy is producing, Q, and the money supply, M, are held relatively constant, then we won’t usually see wild fluctuations in the price level, P.

And second, in practice, changes in the supply of money do not usually get entirely absorbed or cancelled out by changes in the velocity of money. Just think about it: if you suddenly had an extra $100,000, would you hide it all under your mattress? Maybe you would hide some of it (you would probably save much of it — but these savings would be someone else's credit, which we'll get to later), but probably you would increase your spending at least somewhat. And if all of us suddenly got an extra $100,000, we would all probably start to spend a bit more. Since our increased spending would amount to an increase in nominal demand for goods, we would expect prices to rise. So the Econ 101 explanation here is that an increase in money leads to an increase in nominal demand, which causes nominal prices to rise. If you prefer narrative to graphical-style thinking, think of it this way: if we helicopter-dropped an extra $100,000 into everyone's bedroom, workers would demand higher pay to work overtime (since they already have such great savings), people would take vacations and bid up the price of spots at restaurants and on airplanes, everyone would be willing to pay more for houses, bidding up prices, etc. But people also would hold onto or save much of that $100,000, meaning that the velocity of any given dollar would slow down at first, and so the extra money supply wouldn't be immediately ploughed into higher prices. So usually the price level should correlate and move with the money supply, but not immediately, and not in a perfect, linear 1-to-1 relationship.

What is money? In the first few iterations of the desert-island thought experiment, "money" basically means "paper currency." But in the modern world, most of what we call "money" is actually just debits and credits in bank accounts. For example, if you have accumulated $10,000 in cash from work, and you put that into a checking account, you still have $10,000 in "money" (because you can withdraw it at any time) even though your bank is not keeping those $10,000 locked away in a vault. Your bank likely lent most of those $10,000 in cash out to somebody else, and so now there is $19,000 or so in "money" resulting from your deposit, even though there was only $10,000 in cash. Indeed, if the person who got that loan from the bank spends her $9,000 to hire somebody, and that hiree then saves his $9,000, and the bank then loans out those $9,000 in cash to somebody else, then there is now $28,000 in money. As we can see, in the modern world, "money" is very different from "currency," and so economists have various categories for measuring the money supply. "M0" refers to actual physical currency in circulation; "MB" (the monetary base) refers to currency in circulation, currency stored in bank vaults, and Federal Reserve credits to banks (see below); "M1" refers to currency, bank deposits, and traveler's checks; "M2" includes savings accounts and money-market accounts as well; "M3" includes all those and a few other savings/investment vehicles. As you can see, M0 through M3 are ordered according to their relative liquidity — M0 is just actual cash, which is completely liquid, and M3 includes things that might take a bit more time for you to withdraw — savings accounts and money-market funds. Money, in the modern world, exists on a spectrum of liquidity. Indeed, it's arguable that 'money' in these traditional categories is too conservatively defined. If you have $10,000 invested in an index ETF, and you can exit the ETF at any moment, you might think of those $10,000 as your money, but the Federal Reserve, at least when it pays attention only to M0 through M3, would not.

So how does the Federal Reserve control the money supply? It doesn't do so by "printing money," as Fed skeptics often put it — it's even more ethereal than that! The Fed mostly influences the money supply just by entering credits and debits into its own and other banks' digital balance sheets. Suppose a bank has $100 in deposits from savers like you and me, and it has loaned those $100 to General Electric. At this point, there is $200 of money ($100 in deposits, and $100 in cash on hand for GE). But now the Federal Reserve can buy GE's debt obligation from the bank; the bank thus gets $100 (or whatever the market purchase price of the loan was) in cash credit from the Federal Reserve, which it can then loan out to another company, like Ford. So now there's $300 of money in the economy ($100 each for GE and Ford, and $100 for the bank's original depositors), with the extra $100 having been created simply by the Fed crediting the bank's account.

In reality, due to 'fractional reserve banking,' each purchase the Federal Reserve makes creates much more new money than the purchase amount itself, because banks often lend to other banks, or banks' borrowers deposit some of their loans in other banks, etc. So the Federal Reserve can have a large impact on the money supply simply by purchasing banks' assets — by giving these banks fresh money, it allows them to lend more money to other people and banks, who will lend to other people and banks, who will lend again, creating new money at each iteration.
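Here is a minimal sketch of that lend-and-redeposit process. The 10% reserve ratio is an assumption I am adding for illustration (the examples above implicitly re-lend the full amount), but it shows how a fixed amount of currency can support a much larger stock of 'money':

```python
# Illustrative deposit re-lending, assuming each loan is fully redeposited and the
# bank keeps a fixed fraction (the reserve ratio) of every deposit on hand.
# The 10% reserve ratio is an assumption for illustration, not a figure from the post.

def broad_money(initial_deposit, reserve_ratio, rounds):
    """Total deposits ('money') created after a number of lend-and-redeposit rounds."""
    total, new_deposit = 0.0, float(initial_deposit)
    for _ in range(rounds):
        total += new_deposit                  # every new deposit counts as money
        new_deposit *= (1 - reserve_ratio)    # the rest is lent out and redeposited
    return total

for rounds in (1, 2, 3, 10, 50):
    print(rounds, "rounds:", round(broad_money(10_000, 0.10, rounds)))

# In the limit, total money approaches initial_deposit / reserve_ratio:
print("limit:", 10_000 / 0.10)   # $100,000 of deposits from $10,000 of currency
```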

***

I hope this is all the basic background one needs to understand the talk about inflation that we see in the business press. But I want to quickly touch on some implications:

1. The reason all this theory is important is that it explains why Federal Reserve policy is controversial and debatable. If there were a simple, linear relationship between the money supply and the price level, there would be no controversy — we could easily and uncontroversially predict inflation by quantifying the money supply. But Fed policy right now is controversial, for some, because we can't actually be sure how changes in the money supply will affect inflation over the long run. It's theoretically conceivable that a central bank could increase the money supply while observing very little inflation, because people largely hide their new money under their mattresses, only to see that five years later, everyone suddenly starts spending their mattress savings, sending prices skyrocketing. The complex psychological factors that influence the velocity of money, including self-fulfilling expectations about inflation (see below), mean that there is always some uncertainty about what the consequences of the Fed's actions will be. For the record, I'm not very worried about the prospect of very high inflation. The market's expectations for future inflation are priced into the difference between yields on TIPS (Treasury Inflation-Protected Securities) and on regular, non-inflation-protected Treasuries. And TIPS continue to show low inflation expectations. If I were smarter than the market, I should probably be a billionaire right now. People who are very certain that high inflation is coming should put their money where their mouths are, by putting most of their savings in inflation-protected securities.

2. Expectations for inflation are largely self-fulfilling: If you expect wage rates to rise 10% next year, you might try to lure a new hire with a contract at an 8% premium (relative to current wages), to lock her in at a price that will be a 2% savings relative to what you expect for the future. If you expect prices for your supplies to rise next year, you might raise prices on your merchandise right now, in order to earn enough cash to afford those higher-priced supplies. If you think your competitors are raising their prices right now, then you know you can raise your prices without losing customers. Etc., etc. The fact that inflation is a sort of self-creating phenomenon, ultimately based on everyone's best guess about what everyone else thinks about what everyone else thinks about how much prices will rise in the future, is one thing that sometimes makes it hard to control. Most episodes of hyperinflation ultimately originate from governments printing massive amounts of new money — but from there, inflation radically outpaces the printing presses, as everyone keeps raising prices in response to everyone else's price hikes in a self-reinforcing spiral. Moreover, one of the most effective ways for the Fed to control inflation is for the Fed chairman literally to make statements — in words — about future inflation. If the Fed says, "we are committed to ensuring that inflation is 3% next year," the average company will have a good reason to raise prices by 3%.

3. Most mainstream economists believe that moderately higher-than-usual inflation can help boost an economy out of a recession. There are at least four mechanisms through which inflation can benefit a recessionary economy:

          (i) If you own a company and you expect prices to be 8% higher next year, all else equal that fact will make you more inclined to purchase more merchandise now, while prices are still lower. You also might ramp up your production and investment right now, so you'll be well-positioned to meet that high nominal demand. This boost can help an economy get out of the recessionary downward spiral in which low demand and low production beget more low demand and low production.

          (ii) Most of us think about our salaries in nominal terms. Most of us do not like to take pay cuts. However, during a recession, individual workers' productivity decreases (if I'm a car salesman, I'm worth more to my company when lots of people want to buy cars, and less when they don't). The problem is that if workers' contribution to companies' bottom lines decreases, but workers' salaries stay the same, then firms will hire less and fire more, and/or become less competitive. Inflation allows firms to lower their employees' real wages without needing to lower their nominal wages. Economists think this is a good thing — the alternative to lower real wages during a recession is mass unemployment and bankruptcy.

          (iii) Inflating a currency typically devalues it relative to other world currencies. If we make the dollar worth less relative to the Brazilian real, then Brazilians will be able to more easily afford to buy American goods. This should help America’s exporters, which is another thing that can help drag a country out of a recessionary downward spiral. (The flip side of this, of course, is that it will be more expensive for Americans to import things from Brazil — so policymakers have to think carefully through the full industrial implications of a devalued currency).

          (iv) Inflating a currency benefits debtors (at the expense of creditors). If I owe my very wealthy landlord $1 million next year, but prices rise 15% in the interim, then the "real" value of my obligation to my landlord will only be roughly $870,000. If I as a middle-class consumer am more likely to spend extra money than my ultra-wealthy landlord, then this inflation-driven decrease in my debt/increase in my wealth (and decrease in my landlord's wealth) will mean greater net demand in the economy. Again, this short-term boost to demand can help jolt an economy out of a downward spiral. You often hear that the problem we're facing in the U.S. is that, after the financial crisis, everybody tried to "de-leverage" (that is, reduce their debt obligations) at the same time, which led to a "demand shortfall." (This is often called the "paradox of thrift" — saving more money is good for any individual, but when everybody does it at the same time, it can cause a recession.) Inflation can make it easier to reduce our debt obligations, thus easing the demand-shortfall problem that comes with deleveraging.
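If you want the arithmetic behind point (iv) spelled out, here is a tiny illustrative sketch; the $1 million and 15% figures are just the ones from the example above:

```python
# How inflation erodes the real value of a fixed nominal debt: deflate the
# future obligation back into today's dollars.

def real_value(nominal_amount, inflation_rate, years=1):
    """Value of a future nominal amount in today's dollars, given steady inflation."""
    return nominal_amount / (1 + inflation_rate) ** years

# A $1,000,000 obligation due in one year, with 15% inflation in the interim:
print(round(real_value(1_000_000, 0.15)))   # ~869,565, i.e. roughly $870,000
```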

On the flip side, most mainstream economists believe that in non-recession times, relatively low, stable inflation is good. This is because it’s easier for people to enter into short-term and long-term economic contracts when they can have relatively certain expectation about what things will cost and be worth in the future.

The weird and awful/wonderful economics of taste and contemporary artisanship

This is a post about a weird and interesting space in economic theory, but it starts with a short anecdote.

Today, I went to my local barbershop and sat for an extra half hour browsing terrible magazines so that I could get my hair cut, specifically, by the owner of the place, an older man with blazing white hair and a thick Greek accent that he still retains from his boyhood in Samos. I feel, subjectively, that I look better when my hair is cut by the owner, as compared to the other barbers. But as a good junior social scientist, I always try to be skeptical of subjective impressions. Objective social science has been very good at obliterating a lot of our pious impressions about the superior quality of goods produced by lofty artisans and craftsmen — in blind taste tests, connoisseurs can't distinguish a fine wine or cheese from an ordinary one, etc. So what about my haircut? Is there any objective basis for my belief that the owner gives me a better one? What could explain my impression?

I have a couple of hypotheses:

#1: My first hypothesis is that it's totally an illusion and I've just been primed by the owner's foreign accent, old age, etc., to trust him as a craftsman. I.e., perhaps when the other barbers, with thick south Boston accents, cut my hair, prejudice leads me to watch their work with an overly critical eye. I look in the mirror afterwards seeking to identify their mistakes and misjudgments, and I find them for this very reason. The owner's wise-old-man aesthetic primes me to discover evidence of his excellent taste when I look in the mirror, and I see it for this very reason.

Is hypothesis #1 correct? Maybe — perhaps even probably. But my girlfriend, who is a fairly unbiased intellect and never present when I get haircuts, has agreed that my haircuts with the old man have been better. So I want to investigate the possibility that his haircuts really do look better. What in turn could explain this?

#2: The main objective difference I can observe in the barbershop is that the old owner of the place uses only scissors, while the other barbers use the modern electric tools. Could this explain the difference? It’d be really easy to say something like this: “Modern electric scissors save time but sacrifice quality. They impose uniform lengths and increments on men’s hair, while a truly good look depends on the layered textures, and smooth, non-discrete cuts that come only from scissors and the experienced judgments of a craftsman.”

This story could be true, but I'm skeptical. The reason I'm skeptical is that in an alternative universe people might be telling the exact opposite story just as plausibly. Suppose we lived in a world in which fine electric-mechanical devices were prohibitively expensive and rare. Scissors were abundant, but electric scissors were a luxury that only elites could afford. In this world, I'd bet that the electric look would be vaunted as desirable and superior. People in this world would probably say things like: "The electric haircut is a huge improvement over its pre-industrial equivalents. It allows the highly trained, electric-scissor-certified barber to cut the hair in fine and exact geometries, as opposed to the rough, shabby, hastily layered looks of the past. A buzz cut is chic, crisp art deco on your head. Such a pity that only a few can afford it…"

See the problem? Our story about how the truly authentic scissored haircuts are better sounds nice; but there's no way to objectively confirm it, so a person who is a critical outsider to our culture would argue that we're just reverse-engineering a rationalization for our prejudices. If this is true, my impression makes a lot of sense: I don't like the look my head gets when electric-scissored because of the cultural/affiliational/class-based reactions that have been ingrained into all of us. In my city, the buzzed, electric-scissored look is associated with the military, chains, Budget Cuts, etc. The look of hair cut by scissors, by contrast, is associated with people and places that are willing to pay and wait extra to achieve a more fashionable appearance. And so the old-fashioned-scissored look seems more attractive not because of anything inhering in its geometry, but because of associations inhering in our culture and affiliations.

***

So the theory here is that there's a kind of circular process going on: (1) Aesthetics and taste are not objective. (2) Electric scissors take less time and training to operate properly, so haircuts done with them are cheaper. (3) Therefore, aesthetics aside, income-constrained people will be more disposed to get electric-scissor haircuts; the hairstyles of elite people and elite urban areas will disproportionately be crafted with real scissors. (4) Therefore, the culture will come to associate electric-scissor haircuts with low social standing and regular-scissor haircuts with high social standing. (5) Therefore, the old-fashioned scissor haircuts will be upheld as "objective good taste" and self-conscious elites will be willing to pay more and wait longer for them, which will reinforce the distinction.
It is the superior price efficiency of the electric scissors that causes the look they produce to be associated with low social standing, which causes it to be devalued. A generalization of this insight is that in matters of 'taste' (which is to say, in markers of social distinction), democratizing, price-lowering innovations are at least partly self-defeating.

***

This basic idea is key to understanding a lot of markets based around taste, cultural affiliations, etc., and is also troubling to the general optimistic picture of how markets work. Normally, we hope markets work something like this: When we all really want and/or need something, we bid up the price of it; the high price attracts entrepreneurs who want to make a lot of money meeting this demand; entrepreneurs uncover new technologies and production processes to make the thing more cheaply; the entrepreneurs compete with each other to market the good, driving their prices down; and so now everyone can get the thing they want on the cheap. See, e.g., automobiles, computers, etc. But for goods whose value comes at least partially from social distinction (i.e., “positional goods”), entrepreneurs can’t do quite so much good for us, because the technology and production processes that broaden access to the good will, ipso facto, reduce the value of the good (and be panned by cultural arbiters as ‘bad taste’). The value that electric scissors could provide to the world has been partially limited by the fact that their efficiency created a new distinction.

I find this interesting purely as a theoretical contrast to classical economic theory: In these domains, technology improves the objective features of a good, but in doing so detracts from its value as a token in human social hierarchies. In the supply-and-demand curves we saw in Econ 101, the demand for a good increases as its price declines; for these positional goods, the relationship is more ambiguous. But beyond theory, there are a couple interesting implications:

(1) Right now, Apple enjoys famously high margins on and earnings from its products. As Apple faces increasing competition and loses market share, it might be tempted to lower its prices, the natural response for any company fighting off competitors. As an economist, I should love this decision — more individuals could buy more great Apple products more easily. But if I were a consultant to the company, I might be hesitant: It seems to me that a large part of Apple’s brand value comes from the price distinction itself. Today, buying a non-iPhone smartphone labels you as someone who’s too eager to save a couple hundred bucks, a gaffe among yuppies. So Apple lowering its prices might not unambiguously raise its sales. What can Apple do? Personally, I think there’s just realistically no way Apple can keep up its current earnings and margins and so the company warrants its very low PE ratio. But this is not what consultants are hired to say.

(2) This theory provides some hope for an “artisanal economy” in the future. The basic idea, which I first heard proposed by Adam Davidson, is this: Throughout human history, improvements in technology have improved human welfare overall, even though technological disruptions caused short-term harm to the workers whom they made obsolete. But now some really smart people are starting to worry that this time is different. Once artificial intelligence advances sufficiently that robots can do literally anything that humans can do, there will be no way that we humans can complement technology and we’ll all start to be replaced by it instead. So who will have jobs in the future? Well, people who are part of protected licensing cartels might: As long as the government says you need to see a human doctor to get XYZ prescription, doctors will still have jobs. The people who own the capital and intellectual property used to make the robots will also still have plenty of income. But what about the rest of us?
Davidson has proposed that the future looks like Brooklyn, NY, in whose hip neighborhoods you can find artisanal offerings of just about anything. How is this economy supported? Mostly by people across the river, in Manhattan, whose incomes are either directly or indirectly tied to financial services. Are artisanal versions of goods better than their mass-produced industrial counterparts? A lot of artisanal foods probably wouldn't come out ahead in a blind taste test, but artisanal goods in general are useful to us for expressing cultural affiliations and in-the-know-ness, or adding a unique quality to a dinner party or a unique aesthetic to an interior design. Artisanal goods are mostly useful as social tokens. And that's a good thing. As such, they're largely protected from competition from technology, because getting them cheaply and efficiently is not the point — the point is having the experience of visiting the artisan's boutique shop in a hip neighborhood, and telling the story of the good when you bring it home. I wonder if the economy of the future will look a bit like the economy that currently crosses the East River: technology does all the real work in satisfying our objective basic needs; the owners of capital and intellectual property earn huge profits as a result; and the rest of us are employed in vaguely creative professions, doing things that robots could objectively do, but which some rich capitalists want a unique human fingerprint on. I will let the reader decide whether that is utopia or dystopia.

Singapore’s Healthcare System

I’m going to caveat this whole post by saying that health policy is not my expertise. (I spend a lot of time reading about economics and policy, and still would have trouble fully explaining the structure of health-care provision in the U.S. — but maybe this is part of the problem with U.S. healthcare?) But I’ve read a number of attractive things about Singapore’s healthcare system, and so I wanted to share my understanding of, and takeaways from, its Platonic ideal.

The basics, as I understand them, are these:

First, everyone in Singapore has health-savings mandatorily and automatically deducted from their paychecks and placed into high-interest accounts. Since most people’s health expenses are low when they’re young, most people quickly accumulate a substantial buffer of health savings, which continue to compound over time.

Second, when it comes time to go to the doctor, you can pay for many, but not all, things out of this 'Medisave' account. Most medically necessary interventions and prescriptions qualify. Checkups for minor and non-life-threatening ailments, or prescriptions for drugs that are helpful but not actually cures for dangers-to-be-insured-against (e.g., an Ambien to help with jet lag on international travel), might not be. This ensures that people don't burden their health savings too much with their neuroses and sniffles, but also ensures that, when medical interventions *are* necessary, the money is there. It also requires medical providers to lower their costs to a point where they can actually attract demand in a free market — e.g., if people have to pay the full cost of Ambien, rather than a meaningless copay, the seller has to lower the price to a point where it's worth it from an individual's perspective.

Third, very interestingly, you can ‘donate’ some of your accumulated medi-savings to your family members. This increases your incentive to keep saving more and more and not overspend even if you are precociously and congenitally healthy, and provides an extra line of support to those who are congenitally and precociously unhealthy, provided that they have loving families with some healthy members. (It’s also interesting and heart-warming to me, because in economics we usually think of incentives as working on individuals, but this is an example of incentives working on the ‘extended self’ of a family. It also provides an extra level of ‘socialization of risk’ at the extended family level.)

Fourth, the government offers very low-cost and subsidized catastrophe insurance. This catastrophe insurance is ‘means-tested,’ meaning that if you have a million dollars of wealth lying around, the catastrophe insurance might not pay out even if you get in a car accident that runs up to $40,000 of medical expenses — because while your accident was tragic, you can plainly pay for it yourself. But if you’re middle class and that same accident would bankrupt you and your lifetime Medisavings, the catastrophe insurance would cover it. Catastrophe insurance represents the most basic, important function of insurance — to socialize the risks of unpredictable, rare, and extremely costly events, so that people don’t have their lives ruined by events over which they have no control.

Fifth, there are basic subsidies for the very poor. For some people, the regular required Medisave and catastrophe-insurance contributions are quite costly, and they, and they alone, receive subsidies. This means that the most vulnerable members of society are supported in procuring healthcare, but the median consumer of medical services has no incentive to consume more than is rational from his own cost/benefit analysis. By targeting subsidies at the very poor, Singapore’s health-care system provides universal access without (as we do here in the U.S.) incentivizing the over-consumption of medical resources.

Sixth, the government makes the market for medical services more competitive by enforcing radical transparency. Healthcare providers are required by law to publish their prices for services, in order to enable and encourage people to shop around for bargains. The U.S. system, by contrast, is radically untransparent. If your child has an ear infection in the middle of the night, and you go to an overnight emergency room to pick up a basic antibiotic (which must be a highly dangerous and addictive drug, given that only AMA-certified mandarins with seven years of education are allowed to dispense it!), the doctor who scribbles her signature on the prescription may charge $500. But you never see that cost — it is absorbed by your insurer, who incorporates it into the annual costs paid by your employer, whose medical costs are in turn subsidized by the government. We are five or six levels removed from the actual costs of our medical decisions, and so it's no wonder at all that our expenses are so irrationally high.

Seventh, at a certain age, Singaporean citizens can pass on what they have remaining in their Medisave accounts into their savings or retirement accounts. That is, once they’ve made it through most of their lives, they are rewarded for their efforts to control costs and allowed to spend the cash on other needs and wants. This simply closes the circle of giving people incentives to keep their costs low and allowing them to make their own tradeoffs about medical vs. other goods.

***

This system seems pretty theoretically ideal. It guarantees universal access via subsidies for the very poor and a mandate to 'Medisave' on everyone else. It achieves the most basic, fundamental function of insurance via cheap catastrophe insurance. And it keeps the costs to the public very low by relying on strong incentives at the individual and family levels, price transparency, competition, means-testing, and the general principle that individuals ought to bear their own costs for most things. (Ideal theory suggests that it might also be optimal to provide extra incentives for preventive steps — e.g., subsidizing gym memberships to nudge us to be healthier, and less costly, later on. But given that real-world governments are imperfect and subject to corruption and capture, Singapore's more basic, keep-it-simple-and-stick-to-the-fundamentals approach is probably a better template for real governments.)

Singapore’s system is based around recognizing realities and trade-offs which are unfortunately a “third rail” for politicians to speak of in the U.S. Namely, medical resources are scarce, and health is one good among many that we want to enjoy in life. So, yes, sometimes it is rational to not get this checkup and not to get that prescription. If people knew and felt the costs of their medical services, they would be able to make these trade-offs more rationally. More, insurance adds value when it actually insures — socializing the risks of the irregular, the unpredictable, and the unavoidable. (Auto insurance does not cover the cost of refilling our gas tanks, because that is not what insurance is for.) And the Singaporean healthcare system exemplifies this. I would like an Ambien the next time I travel to Asia, but ‘I would like x’ is not tantamount to ‘it is rational for x to be fully covered by insurance.’ It would be better for society as a whole if I would bear the full cost of my non-clinical sleep aid and if the company that makes the drug were forced to meet me at a price which I myself would be willing to pay.

***

One thing that struck me about Singapore’s healthcare system is that in popular political cosmogonies, we posit the ideals of ‘strong active government’ and ‘individual choice and competition’ in opposition to each other. But Singapore’s system could be seen as both more government-driven and more market-driven than its Western equivalents. It begins with a universal government mandate in order to provide a well-defined public good — but then relies on intense competition, individual choice, transparency, simple and understandable rules, and strong incentives, to keep costs low.

This is my way of saying that I think the popular political cosmogony is misleading, and we should have fewer conversations about ‘big government’ in the abstract versus ‘free-markets’ in the abstract, and keep our eyes on the goal of ‘efficient provision of public services’ while being open to intelligent combinations of government mandates and market incentives/competition in achieving that goal. It’s not useful to say that Singapore’s system is characterized by ‘bigger’ or ‘smaller’ government than the U.S.’s — it’s just smarter government.

Anat Admati’s simple proposal for stopping financial crises: Target debt, not assets

Obviously, the financial crisis of 2008, and the subsequent recession and anemic recovery, was a really big deal. Even if we bounce back to 3% GDP growth rates in this year and the next, the second-order and aggregate effects of the financial crisis will continue to drag on American and global economic growth for literally decades to come. Probably the biggest cost of the recession has been high unemployment among the young, which has prevented members of my generation from accumulating skills, experiences, and savings that they otherwise could have — skills, experiences, and savings that could have done much to contribute to our economic future.

So how can we stop a financial crisis like our last one from happening again? Well, to massively oversimplify: the last financial crisis happened because banks had taken out huge amounts of debt to buy assets whose values were tied to the housing market. When the housing market faltered, the value of those assets declined, which left some financial institutions insolvent, fundamentally unable to meet their obligations to others, and all the panic and uncertainty meant that even fundamentally sound banks lost access to the credit they needed to hold them over through the crisis. So how do we stop this from happening again? Well, most of the discussion has centered around regulating banks' assets. Most people want more regulations on, and stronger regulators of, banks' asset purchases — passing regulations to require banks to take on less risk, and giving regulators more authority to look at their balance sheets and make them change their asset allocations if they're being too risky.

But there’s a theoretical problem with this line of thinking: Financial institutions really don’t like going bankrupt (though, notably, the policy of Too Big to Fail can cause a problem of “moral hazard” here). They really do their best to find assets that will increase in value over time. Plus, banks these days — for better or worse — employ a lot of the smartest people in the world — economists, physics and math PhDs, etc. — to model what’s happening in the economy, figure out the probable impacts on their assets, and use that to figure out how to help their bank prosper. And this means that it’s not realistic to expect that the next financial crisis will be averted because a few government regulators getting paid $120,000 a year go up to a few Goldman Sachs economists making $5 million a year, and say, “Hey, look, your assets are going to decline in value, and you’re going to go bankrupt,” and the Goldman Sachs economists will say, “Oh, crap, we hadn’t thought of that.”

If that sounds snarky, let me put it more formally: The value of an asset represents the market’s best assessment of the total discounted future expected returns of that asset. To say that “the value of these assets will decline in the future” is an inherently counter-cultural, quixotic, non-consensus prediction, because the market incorporates its predictions for the future into the current market value of assets. If regulators are smarter than the market and can predict the future better than the market can, then they all should have already made billions and billions of dollars doing their own trading by now. (They generally have not.) In other words, declines in the value of assets are by definition unpredictable — so giving regulators power to stop banks from buying assets that they (the regulators) think are unwise purchases will almost certainly not work. To illustrate this basic theory with actual history: In the mid 2000s through 2007, the Fed assured us over and over again that the housing market was no cause for concern — in late 2007, most economists did not think that the U.S. would enter a recession in 2008 (we were already in one at the time). Regulators will not predict the next financial crisis in advance, because financial crises are by their nature unpredictable and unpredicted.

So what else can we do? Instead of giving more power to regulators, could we give more power to formal, unbiased, conservative regulations about the kinds of assets banks can hold, i.e., requiring that they buy relatively higher amounts of very safe assets, like U.S. Treasuries? This is, in my view, a better line of thinking, but not the ideal primary policy approach. Indeed, one could argue that one contributor to the last financial crisis was, e.g., the requirement that banks hold a certain portion of AAA-rated assets, and the ratings agencies stupidly giving mortgage-backed securities AAA ratings. Ironically, the fact that banks could formally meet some of their requirements for AAA assets by buying these MBS actually helped drive up the demand for, hence the price of, MBS, which could have occluded and distorted price signals about their riskiness. In other words, ultimately the "more regulation of asset purchases" idea falls to the same argument as the "stronger regulator power over asset purchases" argument — if we knew which assets were risky in advance, they wouldn't be so risky. Another objection is that we as a society actually do want banks to do plenty of risky investing in, e.g., innovative but young companies with uncertain but potentially awesome futures. The tech bubble of the late '90s eventually got overheated, but it's basically a pretty great thing that American capitalism could hook up a lot of brilliant entrepreneurs in California with the money they needed to implement their crazy ideas to change the world. It's not clear that we'd be better off as a society if more of that money had gone into pushing yields on U.S. Treasuries even lower.

So what do we do instead? The big idea that's catching on in the econ blogosphere, and which I've been persuaded by, is that we ought to stop focusing on banks' assets per se, and instead focus on how they finance those assets. One way to think about this is that, as I wrote above, we'll never see the next big decline in asset values in advance — it will always, by its nature, be unpredictable — but we can increase the chances that the financial system will be robust through such a period. How could we do this? It's simple: if banks financed more of their assets with equity, and less with debt, they would be able to suffer a greater decrease in the value of their assets without becoming insolvent. So we simply force banks to have more equity relative to their debts; we could do this by simply making them reinvest all their earnings (i.e., not pay out any dividends) until they met the desired ratio. This idea is being advocated most powerfully and vociferously by Professor Anat Admati, as in her new book, The Bankers' New Clothes.

Let's step back to make sure we're all absolutely clear on the terminology here: If I'm a business, every purchase I make is formally financed by either equity or debt. When I first start my business, I invest $10,000 — that's equity; when I get a $10,000 loan from a bank, that's debt. When I spend that money to buy up office space and inventory, I have $20,000 of assets, financed equally by debt and equity (meaning I have a 'capital structure' of 1 to 1). If I make $5,000 right away, then those profits count as new equity immediately, and so I have $15,000 of equity against $10,000 of debt. If I pay those $5,000 out to the owner (myself) as dividends, then those $5,000 are in my personal bank account, and no longer on the company's balance sheet, so the company is back to the 1-to-1 capital structure ($10,000 of debt and $10,000 of equity). If my office catches on fire and my assets are now worth only $10,000, then I have $0 in equity, because I still owe $10,000 to my creditors. If I invite a partner to come share ownership of the company with me, his or her investment is new equity.
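Here is a minimal bookkeeping sketch of that walkthrough, using the same numbers; the only "formula" involved is that equity equals assets minus debt:

```python
# Bookkeeping sketch of the example above: equity is just assets minus debt,
# so anything that changes assets or debt changes equity mechanically.

assets, debt = 20_000, 10_000      # $10k owner investment plus a $10k loan, spent on assets
print("equity:", assets - debt)    # 10,000 -> a 1-to-1 capital structure

assets += 5_000                    # $5k of profit shows up as new assets...
print("equity:", assets - debt)    # ...and therefore as new equity: 15,000

assets -= 5_000                    # the $5k is paid out as a dividend
print("equity:", assets - debt)    # back to 10,000

assets = 10_000                    # office fire: assets fall to $10k
print("equity:", assets - debt)    # 0 -- creditors are still owed the full $10k
```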

In the run-up to the financial crisis (and still today), banks were famously highly ‘levered’; Lehman Brothers’ assets were financed by some 30 times as much debt as equity. This is sort of like buying a house for $300,000, while making only a $10,000 down payment. What’s so bad about taking out all this debt? The problem is that, the more debt/less equity you have, the greater are your chances of bankruptcy. You legally have to pay off your debts regardless of circumstances (your debt does not decrease because you had a bad year) but your equity just goes with the flow of your assets. If my company has $100,000 in assets, with a capital structure of 1 to 1, and our assets then decline in value to $80,000, then that sucks for me and my fellow owners — our equity just fell from $50,000 to $30,000 — but we can still pay off all our debts and remain a going concern. But if we had financed our $100,000 in assets with a leverage ratio of 9 to 1 ($90,000 in debt and $10,000 in equity), then the same decline in the value of our assets would leave us completely insolvent.

When banks are levered up 30 to 1, just a 3% decline in the value of their assets can leave them insolvent, unable to meet their obligations. When lots of banks are levered up this much, even smaller declines in the value of their assets can put them at risk of insolvency, which can, in turn, force them all to sell off assets in fire-sales, pushing down the value of financial assets even further, or cause them to lose access to credit, leading to a self-fulfilling prophecy, financial contagion, and a credit crisis necessitating bailouts, etc. In other words, each bank’s leverage has negative “externalities” on society as a whole.
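To put numbers on that, here is a small illustrative sketch of the cushion that equity provides at different leverage ratios; the figures mirror the examples in the last two paragraphs:

```python
# Sketch: the cushion a bank's equity provides -- the largest percentage decline in
# asset values it can absorb before assets no longer cover debts (insolvency).

def cushion(assets, debt):
    """Fraction by which assets can fall before the bank is insolvent."""
    equity = assets - debt
    return equity / assets

print(cushion(100_000, 50_000))   # 1-to-1 funding: a 50% cushion
print(cushion(100_000, 90_000))   # 9-to-1 leverage: a 10% cushion
print(cushion(100_000, 96_774))   # ~30-to-1 leverage: only about a 3% cushion
```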

***

Why do banks take out all of this debt? There’s one fact everyone agrees on: One major contributor is the debt bias in the U.S. tax code. Corporations can deduct the interest they pay on their debt for tax purposes, while they cannot deduct the dividends they pay out to shareholders — indeed, dividends get taxed twice, first as corporate profits and then as income for the owners who get them. This debt bias gives banks a relatively greater incentive to take out more debt. It also means, unfortunately, that if we did undertake Admati’s proposed reform without getting rid of the biased tax incentives against equity, banks would see their costs of funding rise, which could increase the cost of credit throughout the economy. (N.B.: She does want us to get rid of the debt bias as a part of her proposed package of reforms.)

But what if we could get rid of the debt bias? Then could we all agree to increasing banks' equity-ratio requirements? This is where the discussion gets tricky and contentious. A lot of bankers argue that even if we could get rid of the debt bias, higher equity-ratio requirements would be a bad idea, because they would decrease banks' return on investment (ROI), and hence their value. Think of it this way: Suppose I invest $50 million in a bank, and the bank gets another $50 million in loans, and buys $100 million in assets, which appreciate, over the year, to become worth $120 million. The bank needs to pay back $55 million to its creditors ($50 million plus 10% interest), but the other $65 million is all mine. I make a 30% ROI, even though the bank made only a 20% return on its investments, because the bank was levered up. If it weren't so levered up, I wouldn't make as much. If the bank had funded all of its assets with a $100 million investment from me, then I would only get a 20% ROI.

And this is definitely, obviously true — when a company is doing well, leverage multiplies the amount it can return to its shareholders, particularly when interest rates are low. The problem is, when the company is not doing well, leverage multiplies how much the shareholders get hurt. There's a formal mathematical result (the Modigliani-Miller theorem) which shows that, in the absence of tax biases, the capital structure of a company is irrelevant to its value. The math is hard to express, but here's an easy way to think about it: Suppose a company has a very reliable business model, and so it's thinking about levering itself up an extra two times, in order to increase the take-home ROI of its owners. This isn't a horrible idea, but it's also not necessary, for a simple reason: if investors have faith in the company's reliability, then they could just lever up their own investments in the company, taking out debt to increase their equity stakes, which would have the exact same effect on their take-home ROIs. So the debt-equity capital structure/ratio is irrelevant to the company's value to its shareholders — it just shifts around the risk.
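For the arithmetic-minded, here is a small sketch of that ROI example, including the downside case that the pro-leverage argument tends to gloss over:

```python
# Sketch of the ROI example above: the same 20% return on assets looks very
# different to shareholders depending on how the purchase was financed.

def shareholder_roi(equity, debt, asset_return, interest_rate):
    """Return on the shareholders' equity after repaying debt plus interest."""
    assets_end = (equity + debt) * (1 + asset_return)
    owed = debt * (1 + interest_rate)
    return (assets_end - owed - equity) / equity

# $50M equity + $50M debt at 10% interest; assets grow 20%: shareholders earn 30%.
print(shareholder_roi(50e6, 50e6, 0.20, 0.10))    # 0.30
# The same $100M of assets funded entirely with equity: shareholders earn 20%.
print(shareholder_roi(100e6, 0.0, 0.20, 0.10))    # 0.20
# The flip side: if the assets fall 20%, leverage multiplies the shareholders' loss.
print(shareholder_roi(50e6, 50e6, -0.20, 0.10))   # -0.50
```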

***

One last quick note: A bedeviling misconception is the language suggesting that higher equity-ratio requirements mean that banks will have to 'hold' more equity, which would decrease their ability to lend, and hence the supply of credit in the economy. This is totally insipid and false. Banks' loans are assets — equity and debt are ways of financing those assets. Banks do not 'hold' equity. As soon as I invest in a bank, it can lend that money out. Banks do 'hold' reserves at the Federal Reserve — but this is not at all affected by, and has nothing to do with, their equity. Admati's proposals have nothing to do with how much cash banks have to keep in the bank.

***

So here’s a three-step process to make our financial system ten times as safe as it is right now:

(1) Get rid of the debt bias in the U.S. tax code.

(2) Require banks to have equity ratio requirements of 20%. An easy and orderly process for getting banks to reach this level would be to forbid them all from paying out dividends (i.e., requiring them to reinvest all of their earnings) until they reach that level.

(3) Let banks make all the risky investments and chase all the profits they want — and next time their bets don't work out, let their shareholders, and not U.S. taxpayers or the financial system as a whole, bear the cost.