Extremism, the Imagination, and Socially Free Speech

Back in my college days I had ornery libertarian inclinations. So, I did what ornery college libertarians do, and applied for a Koch fellowship to help fund some work at a think tank over the summer. At the Koch fellowship, I met some really ueber-libertarian college students. Not libertarian in the sense of “fiscally moderate/conservative and culturally cosmopolitan,” but libertarian in the sense of supporting Ron Paul, railing against monetary authoritarianism, and generally talking about politics in the strongest, most Manichean language. Libertarian in the sense that they thought that Congress was divided into two parties: the socialists and the fascists.

Today, I inhabit a more mainstream world – researching for a university and ghost-writing for a moderate. But I’m still Facebook friends with a lot of those old college libertarians, many of whom haven’t left that world behind and are still going on like they did three years ago. On my Facebook newsfeed I get regular glimpses into an ideological world in which the Federal Reserve is always pushing us ever closer toward hyperinflation, Iran only poses a threat insofar as we aggress against it, and the modern welfare state was built with the cynical goal of enriching the political class.

Needless to say, I think these beliefs, and the people who hold them, are wrong. But while I don’t think these people should be in positions of power, I’m glad they are on my Facebook newsfeeds. Ideational extremists—people who are radically outside of the mainstream—are extremely intellectually valuable, because they expand our imaginations. By challenging our sources of intellectual authority, they help us understand them better. In my college economics classes, it was taken as a given that transportation infrastructure was a natural monopoly, hence a market failure calling for public provision. But at the Koch fellowship I encountered people who were convinced that there were wonderful possibilities for a free-market system of transportation infrastructure—competing toll roads, etc. Though I’ve ultimately decided that the mainstream economic opinion is probably correct, I went along with the libertarian ideas for a bit – and I’m glad I did. The mainstream ideas were kind of stale in my mind before they had been challenged. Only when I encountered ueber-libertarians who would actually say things like, “There is no such thing as a market failure, properly understood,” was I forced to think really hard, critically, and independently about what market failure was.

The same goes for my friends on the far Left. I learn a lot more about the world, and get more intellectual excitement and imaginative gestalt shifts, from my barely-reconstructed Marxist-Leninist labor-activist friends than I do from the mainstream, ever-friendly, ever-palatable-to-professional-class-tastes New Democrats. And I predict that as the pace of social and technological change continues to accelerate, it will be my peers who spent their younger years as ideational outcasts who will be best positioned to find imaginative solutions to our future problems – adjusting public bureaucracies to the modern age, finding an intelligent approach to the growing income inequality driven by technology and globalization, etc. People who grew up in the technocratic mainstream may only have the mindset for, and the solutions to, today’s problems.

Second, beyond expanding our imaginations, extremist libertarians in particular are well positioned to point out hypocrisy. Because they have such strong, unyielding, ideological objections to foreign military interventions—and don’t frame their objections in pragmatic terms—they’re better than most of us at being equally critical of Republicans’ and Democrats’ chosen wars. With the exception of Glenn Greenwald and a few others, the biggest critics of President Obama’s detention and Guantanamo policies have been writers of the libertarian “Right,” rather than the liberals who were so critical of the same policies under W. Again, I don’t agree with libertarian foreign policy, but I’m glad they’re there to point out our hypocrisies, as they will be in the future.

So this was my attempt to reconstruct a cliche that is often repeated but less often spelled out—that extremists have an important role to play in our discourse.

***

There’s a ritual that takes place in the middle-brow media every couple of weeks. Some outrageous figure will say some outrageous thing. Then, outrageous media outlets will whip up an outrageous controversy. Other outrageous figures will attack the original outrageous figure in the most vicious terms, demand his resignation or defunding, or whatever. Then, defenders of the original outrageous figure will decry these attacks, demanding that “free speech” be protected. In turn, their opponents will point out that “free speech” is a constraint on government—it prevents the state from punishing the speaker—and that the same principle guarantees critics the right to say whatever they want about him.

That final position—the last sentence—is clearly true. The First Amendment gives us protections against political reprisals, not social reprisals, for our speech. So it guarantees our enemies the right to hate and exclude us for our speech as much as it gives us the right to speak. Back in my college days, there was a consensus among my smart Yale Political Union friends that we not only (rather obviously) had a right to criticize and stigmatize people who have said wrong things, but also (more controversially) an obligation to do so. This was, we thought, the only way to maintain a free society. If we didn’t punish objectionable or potentially harmful speech at all, we would allow harmful ideas to reproduce. But if we used government to clamp down on those ideas, we would be giving away our freedoms up front. So a free society, our thinking went, must offer political protections for free speech, but must also rely on a culture that regulates such speech through social pressures and stigma.

I think there’s a lot of truth to this argument, and we definitely should stigmatize overt bigotry, group hatreds, etc. Stigma can be the most powerful way to advance social change.

But otherwise, partly because of what I wrote above about the social functions of extremism, I worry that our society has taken social censoriousness about people with “bad ideas” too far. I’m disturbed by the media’s practice of ideational gotcha politics—finding acquaintances of political figures who are outside the mainstream, and guilting the political figures by association. This is common among media outlets of both the Right and the Left. My fear is that these practices will scare my generation and future ones out of indulging their curiosity and seeking out ideologically idiosyncratic groups, in a way that will reduce the imaginativeness and openness of our political discourse in the future.

There’s a sweet spot here—a place where bigotry is stigmatized out of existence and power, but students don’t need to feel nervous about indulging their curiosity about Students for a Democratic Society or the Intercollegiate Studies Institute, or whatever.

The Environmental Economics of Locavorism

I hope my readers will accept this question in good faith, and not think that I am just rationalizing a swipe at hipster trendiness. I am curious about the full logic of the argument that buying food from local sources is good for the environment. As my earliest readers know, I think there is a logical argument to be made that local food could have superior robustness and healthfulness that we are as yet unable to detect. I also think that if people get some sort of intangible, aesthetic pleasure from the idea of supporting the farms near their homes, that’s great — whatever floats your boat. But I don’t yet fully accept the argument that local foods are good for the environment.

The basic argument that buying local foods helps the environment is very simple: Buying local reduces the total amount of trucking that goes on in the world. Fewer trucks driving from mega-farms in Indiana to Cambridge, Mass., mean reduced carbon emissions, which means less global warming and lung cancer, etc., etc.

Pretty simple, right? Perhaps deceptively so.

Let’s think beyond stage one about the full economic consequences. Suppose I, an aspiring-to-be-responsible consumer, have a choice: I can buy an ear of corn that was delivered from an industrial farm in Pennsylvania to my local Foodmaster, or I can buy an ear of corn from a new local farm just around the corner from my apartment in Cambridge, Mass. Which should I buy? Put differently, should I resist the temptation to buy the corn from Pennsylvania, which is cheaper? The corn from Cambridge must be more expensive for some pretty obvious reasons: Cambridge is home to Harvard and MIT, and it sits in the dense metropolitan core of one of the most well-educated parts of the country. So the labor here is very productive, and the land is highly sought after, because employers want access to those productive workers, and productive workers want access to those clusters of firms. That means rents and property values are high relative to the rest of the country.

The reason I mention this is that thinking about why the local food is more expensive points to a whole bunch of ripple effects that complicate the picture of “local = good for the environment.” The fact that there is now a new farm near my apartment in Cambridge means that something else is no longer there. The farm has displaced some other business or residence by outbidding it for the property. When I pay a premium for corn from Cambridge, a lot of that premium essentially goes to helping the farm recoup the cost of buying expensive property in an expensive part of the country. So that extra money goes toward driving up the demand for property in the Boston area, pushing up prices here and absorbing slack supply. The local farm has displaced something else — whether a business or residences — which now must be farther from me, and from my desirable, sought-after urban cluster. And that means this something else now has to travel a longer distance, presumably by carbon-emitting vehicles as well. So have I really reduced my carbon footprint?

It’s unclear. I can even see an argument that buying local food in a high-productivity area is seriously problematic, since agriculture is typically done on a single floor, whereas service professions can be done in skyscrapers. I.e., by paying a premium for local Cambridge food, I could be displacing ten floors’ worth of health-care consultants for one floor’s worth of agriculture.

Is this an unrealistic thought experiment? Definitely. Local food advocates don’t actually suggest we get our food from farms in Kendall Square. They’d just, in this case, advocate getting it from somewhere in eastern Massachusetts rather than somewhere in Indiana. But if we follow the basic logic of the thought experiment above, we’ll see that this is still problematic.

In brief: If local food is less expensive than non-local food, it will naturally prevail in market competition, because both ordinary savers and conscientious yuppies will prefer it. Local food only becomes a debatable topic when it is more expensive than non-local food. And when local food is more expensive because of higher land and labor costs, the price difference indicates that the local area is a more productive region than the alternative. So buying local food is, in a sense, a demand that more agriculture be done in the high-productivity area and less in the low-productivity area. And that is not clearly an environmental good. Putting more agriculture in high-productivity areas displaces other industries, and residences, in ways that force those industries and residents to travel farther to their workplaces and delivery points. By supporting a farm in the suburbs of Boston, you push more people to the exurbs; by supporting a farm in the exurbs, you send little ripples through the whole Massachusetts labor market that may push more people out to Indiana, a less productive, less dense, less walkable, less green state.

***

What shall we do about this? Well, first, we should tear our hair and despair that it’s impossible to be really sure which of our decisions are actually environmentally responsible. And second, we should support revenue-neutral moves toward greater reliance on emissions taxes. If the environmental harms of emissions were fully priced into the cost of everything we buy, we individual consumers wouldn’t need to worry about any of these calculations. We would just need to buy whatever was cheapest. And in that world, if local farms were less costly to society as a whole — in terms of both environmental and non-environmental costs — their produce would prevail in markets. If the produce from Indiana were still worth it, it would win in the market.
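To make the mechanism concrete, here is a minimal sketch in Python of how an emissions tax would fold the carbon cost into the shelf price. Every number in it (the tax rate, the prices, the emissions per ear of corn) is invented purely for illustration, not an estimate of anything real.

```python
# A minimal sketch of the emissions-tax logic above. All numbers are
# hypothetical, chosen only to illustrate the mechanism, not real data.

CARBON_TAX = 0.20  # assumed tax, in dollars per kg of CO2 emitted

def taxed_price(sticker_price, kg_co2):
    """Shelf price once the emissions tax is folded into the good's cost."""
    return sticker_price + CARBON_TAX * kg_co2

# Hypothetical ears of corn: the local one costs more at the register but
# embodies fewer emissions; the Indiana one is the reverse.
local = taxed_price(sticker_price=1.00, kg_co2=0.2)    # -> 1.04
indiana = taxed_price(sticker_price=0.60, kg_co2=0.8)  # -> 0.76

# With the externality already in the price, the shopper simply buys the
# cheaper ear; no private carbon accounting is required.
print(f"local: ${local:.2f}, Indiana: ${indiana:.2f}")
```

Under these made-up numbers the Indiana corn still wins, which is the point: once the tax internalizes the harm, whichever option wins the simple price comparison is also the cheaper option for society.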

Why America Will Win the 21st Century

[File this under “ideas written with an extraordinarily broad brush that will be useful as a base, a jumping-off point for later, more specific posts.”]

For the first decade of the millennium, I thought that America was in decline. The tech bubble had burst. The economic headlines that had once focused on ever-rising NASDAQ prices now focused on China’s spectacular economic growth in the wake of its WTO accession. September 11th made us fear that the 21st century could be one in which new, non-uniformed, non-traditional enemies could constantly disrupt and damage American life and business. It then dragged us into two wars that became expensive quagmires, and cost the U.S. enormous amounts of money and international political capital and esteem. W. struck so many of us as an incompetent embarrassment; his presidency (though he was not solely to blame for this) oversaw growing, toxic political and cultural division in the U.S. And at the end of his term, America suffered a financial crisis that brought its economy into the worst recession since the Great Depression. We spent every new quarter after the crisis debating why the recovery was not as strong as we thought it should be.

But through all of that, the news from Asia remained promising. America’s hopes for the 21st century looked dim. Was America, like every great empire before it, now entering its own twilight?

Over the past year, I’ve become increasingly optimistic about America’s future, and I think the 21st century will be another American century. It’s not that I don’t think America has huge problems and challenges. It does. But every prospective alternative global leader has even bigger problems. More, America is “locked in” to several key advantages, and it has features that make it particularly well suited to enjoy the benefits of the 21st-century economy. Let me lay these out in brief:


Alternatives:

Who could possibly replace the U.S. as the most economically and geopolitically significant country in the world in the 21st century? The most obvious option is the most populous country in the world, with the second-largest and fastest-growing economy in the world—China. And there’s absolutely no doubting that China will continue to become more and more of a force in the future. Its sheer population size means that it must, so long as it does even the bare minimum to drag its median citizen out of poverty. The spectacular and consistent economic growth it has enjoyed since its late-70s, early-80s market reforms has shown some small cracks in recent months. But those cracks are indeed small relative to its long, sustained trend. Its technocratic government appears to have done well in managing its economy through several stages of development. China even has a number of apparent cultural advantages. For example, both Chinese citizens and Chinese diasporas appear to be very successful in educating their children, which will help them succeed enormously in the future knowledge-based economy. These are all serious advantages.

But I think we tend to go overboard with these. For one, China’s spectacular economic growth since the 1970s really shouldn’t be too much of a mystery. What we really ought to marvel at are the countries that haven’t achieved it. There’s a very simple reason why China has grown so quickly, and it is this: it had been so poor. Because China was and is poor, labor is very cheap. Because its labor was so cheap, Chinese manufacturers could easily undercut the prices of developed-world manufacturers in international markets. Lots of foreign multinationals consequently wanted, and still want, to locate and source their manufacturing there. That brought in a lot of technological know-how and competition for labor, which has allowed the country to modernize, industrialize, and develop much more quickly than was possible for today’s developed countries when they were at that stage. China’s growth is what we should expect from a country that is poor but meets basic standards of rule of law, protection of property, citizens who want to work to get richer, etc. The fact that other poor countries haven’t kept up with China is a testament to their most basic dysfunctions.

But the thing to keep in mind is that eventually you run out of this kind of growth. I.e., as China gets richer, its wages go up, thereby undermining its entire export-based growth model. Already, we are reading more and more stories about American companies that are “insourcing”—once again relying on domestic manufacturers, now that China’s wages have risen to the point where they no longer compensate for the country’s disadvantages. This leads to what economists call a “middle-income trap”—a point at which once-fast-growing countries stall because their export-oriented formula has been exhausted.
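The arithmetic behind that erosion is simple enough to sketch. Here is a toy calculation in Python, not a model of China specifically; the costs, the overhead penalty, and the 10% growth rate are all invented for illustration.

```python
# A toy illustration (not a model of China specifically) of how rising wages
# erode an export cost advantage. All figures are made up for illustration.

domestic_unit_cost = 1.00   # assumed rich-country cost to make one widget
exporter_unit_cost = 0.40   # assumed poor-country cost, initially much lower
overhead = 0.35             # assumed shipping, logistics, and quality penalty

years = 0
while exporter_unit_cost + overhead < domestic_unit_cost:
    years += 1
    exporter_unit_cost *= 1.10  # wages (and thus costs) rising ~10% a year

# With these made-up numbers, the advantage is gone after 6 years.
print(f"After {years} years of 10% cost growth, exporting no longer pays.")
```

The exact horizon is meaningless; the shape of the logic is what matters: fast catch-up growth is also the mechanism that eventually exhausts itself.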

In order to avoid the middle-income trap, China needs to find really smart ways to let its old, export-oriented manufacturers die off, and to develop a comparative advantage in more technologically advanced, higher-margin industries—communication technologies, financial services, computers, biotech, etc. Will China manage this transition well? Maybe. Few have. But there are a couple of reasons to be doubtful. First, China has a number of obvious political problems (to put it more plainly: it is not a democracy) that could come to the fore just as it is managing this difficult transition. It’s likely the Chinese government will come under significant popular pressure for democratic reform. Why? As countries grow wealthier and better educated, their citizens come to demand more liberalism (in the broad, classical sense of the word). The CCP propaganda directives that have helped dampen public demands for democracy will become less effective as social media and digital communication technologies become ever more ubiquitous.

So, I see a critical inflection point sometime in the next two decades. (1) Young Chinese will have enjoyed lives of sufficient wealth and education and media exposure to make political demands—pro-democracy, anti-government protests will become a major force. (2) Meanwhile, the economy will inevitably slow, as export-oriented growth opportunities evaporate. The slowing growth will further undermine the CCP’s claims to political legitimacy, and could result in widespread youth unemployment. (3) This will, in turn, make it extraordinarily difficult for the government to manage the middle-to-high-income transition. There will be enormous political pressure to use government subsidies and currency manipulation to help the then-aging (currently dominant) manufacturing industries survive. This will limit political leaders’ ability to facilitate—and business leaders’ ability to undertake—a transition to a higher-tech, higher-margin industrial structure. More, it’s possible that as China makes a messy transition to more democratic government, several of its political problems—its dubious rule of law, which discourages investment, and the ethnic conflicts in its interior—could be exacerbated, while its political advantages—the technocratic, unified, and hence fast-acting nature of its government—could be eroded. Even if China gets through that tricky passage, it will eventually face unavoidable demographic problems. China’s population will eventually start to decline (a product of its one-child policy), which will further raise its labor costs, discourage investment in the country, and burden its government. China might not only get ‘trapped’—it could then slide into steady decline.

If China had transitioned to a stable democracy well in advance of reaching middle-income levels, I think it would have a better chance at becoming the major power of the 21st century. As it is, it still seems likely that, simply thanks to its enormous population, China will surpass the U.S. in terms of total GDP sometime in the next two decades. But because most of its population will still be mired in poverty, and because of the questionable legitimacy of its government, this will not be enough to make it the major geopolitical power.

So what are some other alternatives? There’s Russia, the successor state to America’s former superpower rival. It has grown quickly over the past decade. But this growth is almost entirely attributable to rising global energy prices and the extraction of its domestic oil riches. So Russia has the problems a lot of oil-rich countries have: its oil wealth has pushed up the value of the ruble in ways that may well have hurt its other industries, leaving the country without much industrial diversity, or a clear economic future that isn’t solely dependent upon oil. And the fact that “control of the pumps” is preeminent has left the country with enormous inequality and political corruption (a “resource curse”). Russia’s population is also in steep decline, and it lacks international weight and legitimacy, so I doubt it can challenge the U.S. in the 21st century.

What about the EU? This seemed plausible until recently. Europe’s current financial crisis, political disorder, and long-term demographic problem (precipitous population decline), however, make it seem ever less likely. As we read over and over again: you can’t have a monetary union without a fiscal and political union. And full political union looks increasingly unlikely—so the EU’s basic model seems flawed.

What about India? I actually think India has the best chance, but the past year has witnessed unexpected cracks in its economy. It remains to be seen.


Advantages:

Anyway: let me move on from just being critical of the alternatives. What does America’s future look like? Well, one of the big problems for a lot of potential challengers is demographic decline. Simply put, it’s hard to carry much geopolitical weight when you are not very many people. It’s hard to carry much economic weight when your not-many people are consequently producing not-many goods. More, population loss over time brings a lot of problems and dysfunctions. As people retire, they spend less (all things considered) and start drawing down savings. This means that “gray tsunamis” amount to negative demand shocks on an economy, which risk tipping it into deflation (which is devastating). The collective drawing-down of savings deprives the country of loanable funds for investment. Modern welfare states largely rely on pay-as-you-go social security systems, which were premised on continuing population growth. But once a state’s population starts to decline, it becomes extremely difficult to pay out all of its pension obligations without drowning in debt.
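The pay-as-you-go arithmetic is worth spelling out. Here is a minimal sketch with an invented benefit level and invented worker-to-retiree ratios, purely to illustrate why a shrinking workforce squeezes these systems.

```python
# A back-of-the-envelope sketch of why pay-as-you-go pensions strain under
# population decline. All numbers are hypothetical, chosen for illustration.

def required_payroll_tax(workers_per_retiree, benefit_share_of_wage=0.40):
    """Payroll tax rate needed for current workers to fund current benefits,
    assuming each retiree draws 40% of the average wage (an assumption)."""
    return benefit_share_of_wage / workers_per_retiree

for ratio in (4.0, 3.0, 2.0, 1.5):
    print(f"{ratio:.1f} workers per retiree -> "
          f"{required_payroll_tax(ratio):.0%} payroll tax needed")
# 4 workers per retiree -> 10%; 1.5 workers per retiree -> 27%.
```

Nothing about the specific numbers matters; the point is that the required tax rate scales inversely with the worker-to-retiree ratio, which is exactly what population decline pushes down.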

So one of the U.S.’s biggest advantages is decidedly unglamorous—it’s just that we’re still growing, population-wise, and are on pace to do so for a long time. That means we’ll continue to be a place where everybody will want to market their goods. That means we have good long-term growth prospects, such that everyone will still want to invest here. That means we’ll have a bigger talent pool in the next generation from which the next Steve Jobs might emerge—more kids mean more prospective geniuses. That means that—for all the talk of our growing national debt—we may have an easier time meeting our obligations than other nations.

Why is the U.S. still growing? It’s a mix of two factors: (1) immigration—people love to come to the U.S. (land of opportunity and all that, didja know?); and (2) fertility rates that are somewhat higher than in other developed countries, for somewhat mysterious cultural reasons.

But America’s advantages go beyond that. The main one is that we are enjoying enormous success in “frontier industries”—i.e., those high-tech, high-margin firms that are building the world’s technological future. As I write this blog post from Chicago, I do not have a single item of clothing or bag made in the U.S.A., but every glowing gadget I am using, every window I have open in my browser, etc., is all American. Apple sends all of its low-skill, low-cost manufacturing overseas—but its best-paid employment, in research and software engineering and systems management, stays here in the U.S. And these well-paid, intellectually challenging jobs attract immigrants to the U.S.—an ironic contrast to its manufacturing outsourcing.

Why is that the case? Well, first, people want to live here, because life is safe, reliable, and pleasant—and the taxes on the wealthy aren’t reputed to be that bad. But, second, it’s because we have lots of high-tech computing “clusters,” which give us big advantages from “network effects.” That is to say, Apple started in California. It got tech-savvy workers to come work for it, and helped train them. Those workers sometimes founded their own tech firms, which stayed nearby and attracted new, tech-savvy employees. And so on and so forth, until California and Silicon Valley had such a critical mass of tech-savvy workers and know-how and tech-business infrastructure (i.e., the bankers and consultants with tech expertise) that nobody could doubt that this was the place to start a tech firm. (This is, of course, a highly stylized narrative.) If you want to do high-tech well, the place to go is Silicon Valley, because, well, that’s where all the high-tech is. It pains my Northeastern heart to say it, but it’s not so much the case that America is the future as it is more plainly the case that California is the future. America is very lucky to have this important cluster at home—and it seems we’ll be locked into the advantages it will accrue over time.

More generally, the fact that America’s universities are so extravagantly funded, and are the envy of the world, means that America will always have cutting-edge research, super-smart PhDs, and the high-margin industries and jobs that come with them. Again, remember that being good at lower-tech, lower-margin industries, such as manufacturing or basic services (like call centers), can only get you so far. Once your wages catch up to those of the countries you export those goods or services to, you’re out of luck. To be the world economic leader, then, you need to be higher up the technological ladder than others—producing goods, programs, websites, apps, nano-bots, etc., whose production processes are so technologically and socially complex that they can only be made in your clusters of super-skilled workers, so that countries with the competitive advantage of lower wages can’t undercut you: they simply can’t produce what you’re producing. And the fact that Facebook, Google, Apple, etc., are all here, in close proximity, bodes well for our ability to maintain that advantage.

Does America face a lot of problems in the future? Definitely. The most obvious one is this: not everyone can be a computer engineer or nano-technician. We have extraordinarily high levels of income and social inequality. We have clusters of brilliant computer engineers in California, and we also have cities in which the public schools graduate fewer than half of their students. I think it’s almost certain that inequality will become even greater in the U.S. in the future, due to technological change and globalization. The historical legacy of slavery has been a vicious cycle of racial inequality and alienation that yields still more inequality, and that racial inequality has proved stubbornly persistent. It seems unlikely the U.S. will overcome it soon, as it must if all of America is to join in on the 21st century’s prosperity.

But here are two things to keep in mind. First, the wages for low-skilled service jobs get pulled along with those of the dominant advanced industries. A barber in Manhattan or Silicon Valley gets paid more than a barber in Cairo or Slater, Missouri, because the wealthy, high-tech industries bring with them more dollars demanding labor from the service industries that support them. So a growing and prosperous high-tech industry in the U.S. will benefit the rest of us too, even if not as directly as it will benefit brilliant computer programmers. Second, inequality isn’t always, necessarily, the worst thing from the perspective of the country as a whole. When I was in Japan, I had a fascinating conversation with a major hedge fundie, who argued that the source of the American economy’s dynamism was that it combined first-world and third-world features (or developed- and developing-country features). That is, we obviously have clusters and firms at the technological frontier, but we also have an influx of low-skilled, low-wage immigrant labor from south of the border that allows our most primitive industry, agriculture, to remain competitive. If America’s growing inequality in the future is also accompanied by growing opportunities for low-skill immigrants, and even opportunities for social mobility, then it will be worth it.

Does language reflect or reconstruct reality?

Recently, I had a Facebook dialog with an old Yale friend, who just graduated this year with high honors. In my senior year, I had written an exposition of Nietzsche’s philosophy of language, which my friend had asked to read. A little background: In my essay, I wrote that Nietzsche made persuasive arguments that language does not actually reflect nature as it is. Rather, all linguistic conventions are arbitrary, all of the words we have chosen to use are grounded in metaphor, and so our linguistic world is ‘anthropomorphic’ in that it organizes and taxonomizes the world according to human needs and wants, rather than objective reality. Finally, I argued that Nietzsche’s philosophy of language explained his aphoristic, literary style, because it suggested that a scientific, analytic representation of the truth about the world in language was impossible. So Nietzsche urged his “new philosophers” to speak like him — using aphorisms, ironies, puzzles, declarations, and stories to deconstruct old, conventional, hardened ways of seeing and speaking about the world, in order to force us off our conventional taxonomies and cliches and to explore new, more imaginative and original metaphors and ways of talking about the world. So I concluded that my exposition of Nietzsche’s philosophy of language argued against the very form in which I presented it (an analytical, academic essay). I thought my friend’s questions were interesting enough that I might publish our dialog.

***

J: I’m writing a paper on Nietzsche, and I just read over your essay for some guidance on his philosophy of language. A few thoughts: if your non-ironical, clarity-aspiring paper recommends its own destruction, why is it worth reading? Is clarity a ladder to be kicked away? And why does the conventionality of language render it arbitrary?

…..

MS: I’m glad you’ve found my essay useful. I, in contrast (I’m sure you know what I mean), cringe to read anything I wrote more than a few minutes ago, it included, and am blushing at the idea that you have a digital copy. But regardless… Toward a response to your questions: (1) There’s only an awkward half-defense. We really should just get what Nietzsche is doing in his later work, understand his implicit critique of language, and learn to speak in his style. But since we don’t all get that, my clarity-aspiring approach, complete with its appeals to our human weakness for taxonomies and structures, is needed to make the point clear, at which point we can finally move on. (2) So yes. A ladder to be kicked away, in your fine metaphor. (3) In my use of the words, it is almost tautological that the ‘conventionality’ of language renders it ‘arbitrary.’ A ‘conventional’ thing is, etymologically, something we humans have just ‘come together’ around—it is justified by broad agreement and choice rather than by its status in nature itself. And since language is not tied to anything in nature, i.e., outside of convention, how language develops and evolves is necessarily the product of human arbitration. But I use ‘arbitrary’ in a non-normative, certainly non-pejorative, sense. Indeed, language is a useful metaphor for all other social conventions—they’re all arbitrary, and yet absolutely essential for our sanity and for society’s functioning.

…..

J: Another thought on Nietzsche: are all linguistic “conventions” equally arbitrary? Suppose I make up a word – “grark” – to refer to my left toe, the moon, and the set of prime numbers under 30. Isn’t there a sense in which this doesn’t “fit” nature in the same way the word “leaf” does? I’m not convinced that every abstraction is equally violating of the natural order…

…..

MS: Tough, good, pressing question. And you’re surely, unavoidably correct—our linguistic taxonomies and categories definitely do fit the real, natural world better than a randomly assigned lexicon would. But let’s work with your own example, the word ‘leaf’ — we use that noise to refer both to the photosynthetic organs of flora and to sheets of paper. This makes sense to us, because both are thin and flat and light. But I can imagine an intelligent extraterrestrial for whom that pairing wouldn’t make sense. Maybe, in her world, flat and thin things are trivially common, but each flat and thin thing has a radically different function, and these differences are essential to survival. Her eyes and brain would not have evolved to taxonomize things according to a flat and thin shape as ours do. So our pairing of the paper with the photosynthetic organ just might not register with her. Or maybe this extraterrestrial’s civilization has been technologically advanced for so long that their language has evolved to make no distinction between natural and artificial technologies. They refer to their solar panels as ‘leaves’ because both turn sunlight into usable energy — and they would think us curiously backward for not doing so.

So I guess our taxonomies mostly have some grounding in nature, but always have an anthropomorphic inflection as well.

…..

J: I’m on board with what you say here about language being grounded but anthropomorphic. It’s not clear, though, that you’re still being a Nietzschean. Awfully realist about properties, nature, some things “fitting” the world better than others, etc. Let’s talk more this summer.