[Two caveats before beginning: First, I have no familiarity with, and nothing to say about, Nash’s major intellectual contributions outside of game theory. Second, this post is a verbal reflection on the big ideas that a smart, non-technical person could take away from Nash—so there will be many elisions, glosses, and small technical abuses in this piece.]
John Nash made extraordinary contributions to game theory, which has, in turn, become the foundational toolkit for understanding human social behavior. My hope in this post is to convey an appreciation for the depth and importance of these contributions and their utility in our intellectual toolkits.
First, what do we mean by game theory? The most down-to-earth way to define what social scientists mean by a ‘game’ is that a game is a situation where several people have goals and want to take actions to achieve those goals, but each individual’s ability to achieve their goals depends on the actions the other people take. (In the preferred jargon, a game is a situation in which ‘players’ choose ‘strategies’ to maximize their ‘payoffs,’ and each player’s payoff is in turn determined by the ‘profile’ of all players’ strategies.) As you can see, this is an extremely general concept that encompasses basically all of human social interaction, and that’s why game theory is a foundational concept for most social sciences now.
So, to make the definition more concrete: If I’m, say, playing a fully mechanical, random slot machine at a casino, and I know the probability of winning the jackpot, then my decision about whether and how to play is not (under this definition) a game—instead, I’m just optimizing against a purely random, mechanical process that takes no account of my thinking at all. But if I’m, say, hoping to negotiate with a prospective employer for the right to take unpaid vacation time, then I’m playing a game: I have to think about her goals and likely actions, which depend on the assumptions she’ll make about my productivity and fit with the firm, which in turn depend on what I ask for during the interview and negotiation process, and so on.
What do we expect to happen in these situations? Very simply, we expect that people will do the best they can conditional on the choices the others are making. And that, in short, is a Nash equilibrium, which is arguably the most important concept in game theory. The concept is actually very simple: A particular outcome in a game is a Nash equilibrium if every individual is doing the best they can to achieve their goals given what everybody else is doing. (In the preferred jargon I referenced above: A strategy profile is a Nash equilibrium if each player’s strategy is his own best response to the others’ strategies.)
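To make the best-response definition concrete, here is a minimal sketch in Python. The game and its payoffs are entirely made up for illustration: two players each pick action 0 or 1, and we check whether a given pair of choices is a Nash equilibrium by asking whether either player could do better by unilaterally switching.

```python
# Hypothetical two-player game with made-up payoffs.
# payoffs[(a, b)] = (payoff to player 1, payoff to player 2)
# when player 1 plays a and player 2 plays b.
payoffs = {
    (0, 0): (3, 3),
    (0, 1): (0, 4),
    (1, 0): (4, 0),
    (1, 1): (1, 1),
}

def is_nash(a, b):
    """True if (a, b) is a pure-strategy Nash equilibrium:
    neither player can gain by unilaterally switching actions."""
    p1, p2 = payoffs[(a, b)]
    best_for_1 = all(payoffs[(alt, b)][0] <= p1 for alt in (0, 1))
    best_for_2 = all(payoffs[(a, alt)][1] <= p2 for alt in (0, 1))
    return best_for_1 and best_for_2

equilibria = [(a, b) for a in (0, 1) for b in (0, 1) if is_nash(a, b)]
```

With these particular payoffs, action 1 is each player’s best response no matter what the other does, so (1, 1) is the unique Nash equilibrium, even though both players would be better off at (0, 0).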
In general, we would expect that in ‘games’ (or, more broadly, strategic situations) that we observe in real life, the agents are probably in a Nash equilibrium because, if they weren’t, then, by definition, at least one person would do well to change their behavior. The way I think about this is that any possible outcome of the game that is not a Nash equilibrium is self-refuting or at least self-correcting: We assumed that people were trying to achieve some goals, but if the outcome of some social situation is not an NE, that means that at least one person was not making the choice they wanted to make.
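The “self-correcting” idea can be sketched in code: start from a profile that is not an equilibrium and let players take turns switching to their best responses. The process can come to rest only at a Nash equilibrium. (The payoffs below are made up; in general such adjustment dynamics aren’t guaranteed to converge, but wherever they stop, they stop at an equilibrium.)

```python
# Hypothetical game where both players want to match actions,
# and matching on 0 is slightly better for both than matching on 1.
payoffs = {
    (0, 0): (2, 2), (0, 1): (0, 0),
    (1, 0): (0, 0), (1, 1): (1, 1),
}

def best_response_1(b):
    """Player 1's best action given player 2's action b."""
    return max((0, 1), key=lambda a: payoffs[(a, b)][0])

def best_response_2(a):
    """Player 2's best action given player 1's action a."""
    return max((0, 1), key=lambda b: payoffs[(a, b)][1])

# Start from a mismatched (non-equilibrium) profile and let the
# players revise in turn until their choices stop changing.
a, b = 0, 1
for _ in range(10):
    a = best_response_1(b)   # player 1 adjusts first
    b = best_response_2(a)   # then player 2 responds
```

Starting from the mismatched profile (0, 1), the players settle at (1, 1), from which neither wants to deviate unilaterally. Note that the other equilibrium, (0, 0), is better for both, but the adjustment path didn’t lead there: equilibrium reasoning tells you where a process can come to rest, not which resting point is best.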
This may seem like a fairly trivial, obvious idea, and so alert readers may be wondering why Nash equilibrium is considered such an important, foundational concept. The way I think about this is that the concept of a NE allows us to understand human social interactions by immediately characterizing the outcome of a social process, without having to worry about the extremely messy and complicated path that the social system will take to get to that outcome. An easy way to think about this is with a sort of pedantic example from high-school physics:
Suppose that we had a solid glass tube with two halves separated by a glass panel; and suppose that one half of the tube were filled with air at atmospheric pressure and the other half were a vacuum. What would happen if the glass panel were suddenly removed?
Hopefully, the answer to this question is obvious: Air molecules would rush into the half that had been a vacuum, and pressure would become evenly distributed throughout the tube. If you were so inclined, you could make even further predictions, such as that at any given time there should be approximately as many molecules on the left side of the tube as on the right, and that the two halves should be at approximately equal temperatures. In short, the new, conjoined tube will induce a new equilibrium.
The interesting question to ask yourself is why you feel confident making this prediction. After all, when I was a child, I thought that we could only answer this question by appealing to more foundational mechanics, i.e., starting with the physics of individual air molecules: estimating their individual positions and trajectories before the glass panel was removed, then modeling their trajectories after its removal, then writing down their resulting positions and velocities, and then (finally!) using those positions and velocities to calculate the pressure and temperature throughout….
And while in theory this might be a rigorous way to truly understand this physics problem, in practice it is just far, far too complex—we can’t observe the positions of so many molecules with enough precision, and we don’t have the computational power to model each one’s trajectory—and we already know we can skip over the precise mechanics of how individual molecules bounce around and still know what the end result will be. We know that the contained chamber must be in equilibrium for the tautological reason that if it were not in equilibrium, it would keep changing until it were. If the air in one section of the chamber were at higher pressure than the air in another, then the high-pressure air would expand into the low-pressure section, decreasing its own pressure and increasing the other’s until equilibrium was reached. And so on.
Moreover, the fact that the outcome will be an equilibrium is probably most of what we need to know in practice, and allows us to say many more things to characterize and describe what’s in the chamber.
So, in short, there are lots of questions in hard sciences that we don’t answer by going, as we might expect, causally ‘upstream’, to some more basic science, in order to trace the path of how some process will unfold—quite the contrary, we ignore the path and the upstream sciences and their implied mechanics altogether, and immediately leap straight to the outcome knowing simply that it must be an equilibrium.
The thing that a lot of people don’t realize is that this is exactly how economics proceeds in understanding social processes. (People don’t realize this—I didn’t realize this at first—because the typical econ class introduces you to such a dizzying array of new notation, jargon, and unrealistic modeling assumptions that the student typically doesn’t have the mental space to ask the fundamental question of why we are doing things this way.)
When I took my high-school science sequence, I saw biology being built on biochemistry, biochemistry built on chemistry, and chemistry built on physics. My assumption was that when I got to college and started studying “social sciences,” this construction project would continue, and neuroscience would be built on biology, psychology built on neuroscience, and economics built on psychology. So I was pretty surprised when I walked into my first economics class and saw, instead, a completely self-contained lecture, with the professor simply starting with the abstract construct of a ‘utility function’ and then building to an ‘equilibrium solution,’ with none of the upstream sciences involved at all.
But as fashionable as it is to beat up on economics, there’s a great deal of—sometimes subtle, usually not explicitly argued—reason in this “axiomatic” approach, in which we devote our attention to things like giving very abstract, general descriptions of what people’s goals might be (as in their utility functions) and then characterizing what kind of equilibria could result among such people depending on what kind of game they’re playing, rather than tracing out individuals’ “paths.” The reason is that we humans are even more complicated than the air molecules in the physics problem above, and so the task of describing our social behavior via a mechanistic approach—of using ‘upstream’ sciences to describe the path each and every one of us would take—is completely hopeless.
(Some readers might imagine that “behavioral economics” is taking this approach, of rebuilding economics on models of actual human behaviors, rather than via the axiomatic approach above. But my understanding is that behavioral economics has mostly added asterisks to particular applied economic models and some of their restrictive assumptions, but has not replaced or supplanted equilibrium as the overarching approach to characterizing the outcomes of social processes.)
Instead, just as in the physics question above, economics can give us a great deal of insight by ignoring the individual molecules/people and their current positions, going up a level of abstraction, and noting that the final outcome must be an equilibrium—and there are many important and useful things we can know and say about the equilibrium per se. For example, in macroeconomics, we can’t really keep track of what all of America’s 300 million citizens are up to and actually feeling on a daily basis, but we can use an equilibrium notion and some general assumptions about people’s preferences to figure out some of what must be true.
A concrete example of this is to think about the question: How should I invest my money? A lot of people seem to think that in order to invest your money, you should read up on firms and pay attention to their debt levels and stuff and make up your mind about whether it’s a good or bad company. But if we simply assume that the world is roughly in equilibrium, we have a different way of looking at this problem: If everybody is doing the best they can do, no investor would want to sell me a stock if (s)he could get a better price by selling it to someone else, and so my counterparty’s willingness to sell to me at price $x means that nobody in the whole rest of the market thinks that stock is an incredible bargain at price $x. In other words, I should never, ever expect to be able to buy a stock at a bargain. A world in which I could buy a stock at a big bargain would be self-refuting, self-correcting, and not in equilibrium. As such, unless I have some private information advantage, I should just hold (levered) index funds.
That paragraph may seem pedantic to people who are already well-versed in finance, but this is often a very surprising gestalt shift for those who are not, and so I’ll emphasize it again. Brilliant and well-educated professionals outside of the social sciences—doctors, engineers, computer scientists—in my experience often have trouble wrapping their minds around this, because they are used to working with objects that they can objectively evaluate on the merits of the object per se. But finance is different, because the critical property of interest—the price of the asset relative to its future returns—is not a fixed object, but is itself the product of a social game. In an engineering context, a car’s specifications are objective features, and if I want to know how fast the car can go, I should look at them. But in a financial market, a stock’s price is itself the equilibrium output of a social process, and so I already know the answer to the question ‘what is this stock likely to return?’ without knowing any objective facts about the underlying company at all. The answer is: ‘the market rate, adjusted for its risk.’ I can answer this question without knowing anything about the stock’s debt, market share, CEO, etc., just by knowing the nature of the game I’m playing and short-cutting directly to its equilibrium.
So that’s how I think about equilibrium. The major contributions of John Nash to this, in my understanding, were that, first, he developed the notion of Nash equilibrium, which is the appropriate equilibrium concept for human social systems (since it’s just about individuals doing what they most prefer—whatever that is—conditional on others’ actions) and showed its depth and generality. And second, he did a lot of the important technical groundwork in proving some surprising mathematical things about Nash equilibria in games. Specifically (and most famously), he showed that there is at least one NE (possibly in randomized, or ‘mixed,’ strategies) for any finite game, that is, any game with a finite number of players and a finite number of possible choices.
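To see why this existence result is not obvious, here is a sketch, with hypothetical payoffs, of a two-action, zero-sum game in which no pure profile is an equilibrium: whatever both players are doing, one of them wants to switch. An equilibrium still exists, as Nash proved; it just lives in mixed strategies, which here can be found by choosing each player’s randomization so that the opponent is indifferent between her two actions.

```python
# Hypothetical zero-sum game: u1[a][b] is player 1's payoff when
# player 1 plays a and player 2 plays b; player 2's payoff is -u1[a][b].
u1 = [[1, -1],
      [-1, 1]]
u2 = [[-u1[a][b] for b in (0, 1)] for a in (0, 1)]

def is_pure_nash(a, b):
    """Neither player gains by unilaterally switching to the other action."""
    return u1[a][b] >= u1[1 - a][b] and u2[a][b] >= u2[a][1 - b]

pure = [(a, b) for a in (0, 1) for b in (0, 1) if is_pure_nash(a, b)]
# pure comes out empty: every pure profile has a profitable deviation.

# Mixed equilibrium: player 1 plays action 0 with probability p chosen
# to make player 2 indifferent between her actions, and symmetrically
# player 2 plays action 0 with probability q to make player 1 indifferent.
p = (u2[1][1] - u2[1][0]) / (u2[0][0] - u2[0][1] - u2[1][0] + u2[1][1])
q = (u1[1][1] - u1[0][1]) / (u1[0][0] - u1[1][0] - u1[0][1] + u1[1][1])
```

Here p = q = 1/2: each player randomizes fifty-fifty, and neither can do better against the other’s coin flip. That randomized profile is the equilibrium whose existence Nash’s theorem guarantees.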
One example of a finite game—a finite number of players, each with a finite number of choices, whose decisions will then determine how well everyone does—is human life. Like—sorry to be cheesy—there are only 7 billion of us, each with only so many options, and so it’s mathematically demonstrable that the game we’re all playing has at least one Nash equilibrium. (That doesn’t necessarily mean we can know enough about individuals’ preferences to accurately model this game in practice.) More generally, the fact that Nash equilibria exist for any realistic social setting means that the concept can reasonably be called the unified framework for understanding human social behavior and its consequences.
So on a deep level, that’s why Nash and Nash equilibria matter, from my perspective. This blog post does not describe the fun little games—the prisoner’s dilemma, matching pennies, etc.—because doing so requires spelling out technical assumptions about game structure that would confuse matters. Instead, I want to keep this post on the level of appreciating the generality and flexibility of game theory.
Game theory provides a general, unified framework for thinking about how individuals’ actions aggregate into social outcomes. The framework is also surprisingly flexible: A lot of people think that game theory’s language of ‘payoffs’ and ‘utility functions,’ etc., entails an assumption that people don’t care about others, are materialistic, or have unrealistically ‘rational’ preferences, but this is just not true. The game theory and economic equilibrium frameworks are certainly general and flexible enough to accommodate actual human beings’ preferences in these regards. The machinery of game theory can show some surprising results: how individually rational behavior can aggregate into collectively horrible outcomes (e.g., prisoner’s dilemmas/externalities); how individually selfish behavior can aggregate into collectively wonderful outcomes (general equilibrium with complete information); how we humans can end up in situations where we sink enormous amounts of money, time, etc., into sending signals about the kind of people we are, via education, luxury-good purchases, etc., to differentiate ourselves from others, and how everyone can find this wasteful state of affairs to be their own best response (e.g., information economics, separating equilibria, signaling models, screening models); and how we humans often don’t think about public policy and finance in the right way, because we’re used to engineering-style thinking (e.g., most of us instinctively love the idea of legally requiring firms to do some Unambiguously Good thing for their employees, without considering how that requirement will affect their proclivity to hire new employees in the first place).
There’s a good argument to be made that Nash equilibrium should not be the exclusive basis of theorizing in the social sciences. But there is a good reason it’s become the main one.