How I use large language models in my writing process

Ruminathans are pretty rare. I publish one about every two months on average, with high variance. I could probably increase my output substantially if I used generative AI (gAI) to the fullest. The reasons I don’t are:

  • The output of gAI is persistently not up to my quality standards.
  • I want to exercise my writing and thinking ability, so outsourcing that work to gAI would be like asking a robot to go jogging on my behalf.
  • I genuinely enjoy writing and thinking, so outsourcing that work to gAI would be like asking a robot to make love on my behalf.

That said, here are some ways I use gAI in the writing process: What it does and doesn’t do for me, and what I think that reveals about the future of essay-writing as a human endeavor.

Research

As the quality of traditional search declines in the face of the SEO arms race and the explosion of (gAI-produced) slop, it’s easier to find empirical data by posing research questions to ChatGPT (I’m using a paid version). In my recent posts about macroeconomics, it helped me find the right databases. You have to verify that the links exist, that they are reputable, and that they say what ChatGPT says they say.

It’s far too easy, though, to use this “research” to cherry-pick data. You can ask ChatGPT to “Find data that supports [INSERT PET THESIS HERE]!” and it will oblige. It won’t push back and warn you that the balance of the evidence undermines your pet thesis.

You can, of course, prompt it to evaluate the available data. But that reveals the limitation: You can prompt it to produce anything you want. Asking it to evaluate the available data implies that you are looking for a balanced assessment. It will oblige, leaving you wondering if it just mirrors the biases embedded in your own word choice.

An easily confused sparring partner

Exploring a single topic in a longer conversation with ChatGPT requires a lot of care. The fact that it does not get meaning becomes apparent when you whiplash it around. Ask it to defend a thesis, then defend the antithesis, and then ask it to revert to arguing the original thesis: After that back and forth, it tends to get very muddled about which argument supports which position. It neither notices inconsistency nor is it embarrassed by it.

Say what you will about humans and their ornery unwillingness to update their priors: You can count on them to stick to their guns, and that’s what makes a good sparring partner.

ChatGPT is not a real sparring partner: Dance around it for a minute or two, and it will start punching itself in its tattooed-grin face.

Flattery will get you everywhere

Speaking of its ever-chipper attitude: Once I write a finished draft of an essay, I ask ChatGPT to evaluate it for clarity, elegance, and originality. Even 5.0, which is supposed to have overcome the obsequiousness of prior versions, is still an inveterate flatterer. And hey, it’s nice to hear some positive feedback.

To its credit, if I push a half-written fragment through, it identifies that it’s flawed and provides some feedback about how to improve it. But its concrete suggestions are largely unusable: It will never sit you down and say: “I don’t understand your point” or “This is too clunky to be salvageable” or “Stop beating this dead horse.” And that’s feedback that you need to hear sometimes.

Bottom line: This step adds very little value to the writing process. I still do it because, yes, the charitable feedback provides a little pick-me-up after I’ve reached a draft. And I’m still hoping that someday it will sound a big alarm… but I’m not holding my breath.

My sworn enemy

You can, of course, prompt it to take a critical view. My preferred approach is to ask it to write a 300-word review of the article from the perspective of the author’s sworn foe.

The result is a cold slap in the face. There’s little of substance in the critique: usually it does not identify anything – neither stylistic nor content-related – that I did not anticipate and consciously choose. I find this step helpful, though, insofar as it shows me how someone might react emotionally.

To some extent, I can anticipate the perspective. There are people who are, shall we say, uncharitably disposed towards me. I can – equally uncharitably – simulate their voices in my head without ChatGPT’s help. But after the initial flattery, my bot-enemy’s perspective is a salutary palate-cleanser.

After my foe has weighed in, I usually ask ChatGPT which of his critiques are fair. If I agree and can make some changes without too much effort, I do so, but usually not the changes ChatGPT suggests.

The “best available editor”

Ethan Mollick, one of gAI’s biggest boosters, introduced the concept of the “best available human” to argue in favor of widespread use of gAI. “Sure,” says Mollick, “the most skilled humans can still do many tasks better than AI. But the issue is not what the most skilled humans can do but what the best available human can do. And much of the time, the skilled humans aren’t available.”

Frankly, I’ve been underwhelmed by Mollick’s writing. In this case it seems like he’s unfamiliar with the concept of negative numbers and the value “zero” as a valid solution to many problems. If the best available human would do a billion dollars of damage, and an AI agent only does a million, the correct choice is to abandon whatever the f#*k you’re doing, not outsource the problem to AI.  

Still, in Mollick’s spirit: I’ve tried to prompt ChatGPT to be an editor at a national magazine who is willing to read my article and ask guiding questions and then give me a thumbs up or down for publishing it. An actual national-league editor is not available, but her simulacrum is.

The editor succeeds at asking a set of interesting questions, and it can be prompted to stop asking and reach a verdict. As interesting as some of the questions are, the problem remains: It will not shoot me down. And sometimes, an idea has to be shot down.

Helpful, question-led suggestions – while perhaps yielding incremental improvements – serve only to lead me further down a rabbit hole, when what I really need to do is cut my losses and move on to the next idea.

The value of blind spots

We all have our blind spots. They come naturally from being an embodied soul with a particular life history. What we think and what we write is just as much a product of what we don’t see as what we do see.

That’s what makes conversation so compelling – and reading and writing is a form of conversation. Through others’ eyes we can peer into our own blind spots. But it’s necessarily their own constrained perspectives that give them that vantage point. An outsider’s perspective is not valuable only for the unfamiliar ideas she brings; it’s equally valuable – sometimes more valuable – for being unburdened by the ideas the insiders take for granted.

In a good conversation – including one with an editor – your partner will point out your blind spots unprompted. I’ve not seen ChatGPT do this. I’ve tried. This essay went through the process I described above, intentionally without this section on blind spots, which I had mentally sketched out but not committed to pixels. The idea of “blind spots” lay in the incomplete essay’s blind spot. ChatGPT did not point it out (nor did it point out any other noteworthy blind spots).  

You can, of course, prompt the bot to look for blind spots. An explicit prompt (“Look for blind spots”) does not work: You get no fresh insights. You can ask it to write a comment from a particular writer’s perspective (“Imagine you’re George Orwell”) and it does better, but mostly it delivers a superficial emulation of the style, not a fresh perspective. It’s nowhere near the experience of being confronted with someone who makes completely different assumptions from your own.

Large language models seem to encompass every perspective, and the perspective from everywhere is a perspective from nowhere.

I could easily use gAI to create more output, more quickly. But I struggle to see what service I’d be doing others or myself.

Making machines smarter or ourselves dumber? Why I’m betting on model collapse.

If the emergence of artificial intelligence is as profound a development as both its boosters and its doomsayers predict, then we all need to make some pretty important decisions: bets about where we invest our savings, our time, and the time of our children.

In decision theory, you choose a course of action from several options based on the expected value: You look at representative scenarios, value them in whatever terms you want (monetarily or in terms of your personal utility), and weight the scenarios by their respective probabilities of occurrence. Then you choose the option with the highest expected value.
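
In symbols, a minimal sketch of that rule (this is standard decision theory, nothing specific to this essay):

\[
\mathrm{EV}(a) = \sum_{i} p_i \, v_i(a), \qquad a^{*} = \arg\max_{a} \mathrm{EV}(a)
\]

where \(p_i\) is the probability you assign to scenario \(i\), \(v_i(a)\) is the value of choosing option \(a\) if that scenario occurs, and \(a^{*}\) is the option you end up picking.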

I’m acutely aware of the limitations of this approach to making decisions. But let’s take it as a given for the moment: What are the representative scenarios for how the world evolves with AI? I’ve been thinking about three:

AI becomes smarter than us: We train models that outperform us humans at every conceivable task. The emperor really has clothes.

We become dumber than AI: We train ourselves to no longer appreciate the difference between “intelligence” and its parody. The emperor has no clothes, but nobody is capable of calling him out.

AI fizzles: Whatever mechanisms have made AI appear smarter from iteration to iteration cease to deliver even marginal improvements and may instead lead to deteriorating performance (aka “model collapse”). The emperor has no clothes, and we’re willing and able to call him naked.

Our next step would normally be to come up with probabilities of each of these scenarios occurring and then devise some strategies with respective payoffs for the three scenarios.

But this approach only makes sense when there are positive outcomes. It doesn’t matter what the probabilities are: if all but one of the payouts are zero (or less) under any conceivable strategy, then those scenarios aren’t even worth planning for.

I’m struggling with imagining any positive payoffs in the first two scenarios.

Society works because we have a stake in each other’s existence. If we become unnecessary to each other – if we no longer need to tie ourselves into a web of reciprocity – then we will become charity cases either for whoever controls the “Artificial General Intelligence” or for the AGI itself. Being interdependent, as we are now, is risky; being just plain dependent is a blind alley. I cannot see this as a positive outcome.

I don’t believe this scenario is very probable. But it doesn’t matter. No matter what its probability, it has 0 (or even negative) outcome value. Unless all scenarios have non-positive outcomes, there’s no sense in planning for it.

If you put a gun to my head and asked which of the three scenarios was the most probable, I’d have to go for scenario two. If the overwhelming majority of people do not see a difference between real intelligence and its parody, they cannot value that difference. And so in that scenario, I will also become a charity case because I cannot sell anything anyone wants to buy. Unfortunately, that feels like the direction we’re heading in. Nor am I alone in thinking that.

That is why I don’t think the elaborate game of decision theory makes sense here. There is only one scenario that has positive outcomes. I’m placing my bets on AI fizzling and the strategies that optimize for that scenario: Maintaining my skill set without AI assistance. Investing in a broad range of asset classes. And helping my children learn critical thinking and, hopefully, the courage to call emperors naked. And maybe if we all take that approach, we’ll make Scenario 2 less likely.

We cannot afford what we do not produce: social security, demographics, and the hidden cost of innovation

Across the developed world, there is vigorous hand-wringing about the sustainability of public pension schemes. Due to changing demographics, fewer and fewer workers will have to support more and more retirees.

What gets lost in the debates about the solvency of Social Security or similar programs is that the problem is not caused by the public nature of the scheme. Any conceivable private alternative would face the same underlying challenges. And those underlying challenges lie at a deeper level than demographic change.

Misfortune, senescence, and our long childhood

More than any other creature, we humans face a unique set of survival challenges:

  • Misfortune: Our own productivity is subject to variation for reasons beyond our control, whether from illness, climate variation, or monetary policy
  • Senescence: We are likely to live beyond our ability to provide for ourselves
  • Long childhood: We arrive in this world unable to provide for ourselves, and we have to learn survival skills through a long phase of cultural transmission.

To address these challenges, working adults have to produce surpluses. We have to produce more goods and services than we can consume, and we have to find ways to store the surplus against both rainy days and old age. But no matter how productive we are, and no matter how ascetically we live, we cannot individually store enough food, water, shelter, medical supplies, etc. to get by for more than a few months, maybe a year at best.

At its simplest, we solve the problem with the intergenerational compact in which working adults provision their elderly parents while raising their own children. But we’ve always mutualized that compact beyond our immediate bloodline. Whether as hunter-gatherers, pastoralists, agriculturalists, industrial workers, or knowledge workers: We have always had to throw ourselves at each other’s mercy, paying forward some of what we don’t consume today. In turn, we look to others to provision us in our times of need. “What goes around comes around” is our great hope.

Project Civilization has been about creating institutions to buttress that hope. Central distribution hubs to store food (aka cities). Credit contracts and written records thereof. Private property. Insurance. The thing we call “money.” All those institutions are grounded in our collective will to allow a state to enforce these arrangements. With rule-bound violence if necessary.

Because it’s ultimately the state’s enforcement capacity that keeps our hopes credible, we’ve even short-circuited these institutions and have asked the state itself to collect and redistribute surpluses. That’s what public pensions like Social Security do. And that’s equally the premise of publicly funded education for our children and income security programs such as unemployment insurance to handle misfortune.

It doesn’t matter which mechanism we use: Whether collected by the state, saved in money, or invested in private enterprise, without surpluses generated by others there is nothing to store and nothing to distribute.

Demographic change: cause or effect?

The problem with our pension systems is not in how they are financed. The problem is that the working age population may not produce enough surpluses to simultaneously provision the retired and provision and invest in the next generation. It’s not about surpluses of money. It’s about surpluses of goods and services.

Economists speak of the age dependency ratio. There are different formulations thereof, but in its simplest form, it’s the ratio of the population under 15 or over 65 to the working age population between 15 and 65.
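
In formula form, the simple version just described:

\[
\text{age dependency ratio} = \frac{\text{population under 15} + \text{population over 65}}{\text{population aged 15 to 65}}
\]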

The naïve narrative about the looming crises in public social security systems is about demographic change. On the one hand, extended longevity has meant that people spend longer in the “retirement” phase where they cease to be net surplus generators. On the other hand, we are having fewer children, which means that there are fewer surplus-generating adults.

In the naïve telling, when we set up our public pension systems, we assumed our population would stay pyramid-shaped forever, with a few surviving retirees supported by an ever-widening base of new workers. In the US, the ratio of workers to beneficiaries went from 3.3:1 in 1985 to 2.8:1 in 2021. I’m using 1985 as the baseline because that is 45 years after the first Social Security payment, so roughly when the system should have hit a steady state.

According to the mid-point projections, the US is headed towards a ratio of 2.1:1 in the next decades. In countries like Germany, with lower birth rates and lower immigration, the ratio is already 2:1, with projections taking the number of workers per beneficiary even lower.  

The naïve story comes with naïve policy recommendations: People should work longer (US, Germany). We need “pro-natalist” policies incentivizing women to bear more children (US, UK). We should privatize social security (US, Germany).

And the ultimate policy recommendation is always: We have to increase productivity with our tried-and-true approach of greater labor specialization enabled by more technology.

The fact that the explanation of the problem and the recommended solutions are naïve (at best) or proposed in bad faith (at worst) is illustrated by two simple observations:

  1. The problem exists whether the mutualized surplus-storage solution is organized privately or publicly.
  2. Any system – public or private – built on indefinite exponential growth of the number of human beings on the planet is absurd from the get-go.

I would like to take the analysis of the underlying problem a step deeper than demographic change and the dependency ratio. I will show that causes and effects are intertwined, the policy measures miss the point, and “more technology” cannot always be the answer because it’s often the problem.

The intensification of child-rearing

Let’s start with another naïve observation. Fewer children should also mean fewer “unproductive” mouths to feed and more time available to generate surpluses. So on the face of it, having fewer children could also alleviate the problems facing pension schemes. In fact, economists speak of a demographic dividend that accrues when societies reduce their birth rates. This is a point we’ll return to a little later.

But having fewer children doesn’t necessarily mean we need fewer surpluses if the amount of resources dedicated to child-rearing intensifies. It’s one thing to have seven children when you live on a farm in a low-density settlement and can count on them to start contributing to farm production from a very young age. Or to have five children in a cramped one-room apartment in a high-density city if you can send them to shovel coal into a factory’s furnace, and that’s the only job that will ever be available to them.

If, however, becoming productive in a hyper-specialized, knowledge-based economy entails 16-30 years of dependency, 10-24 years of which are devoted to resource-intensive education, then you may not have the resources to afford two children, let alone five or seven. The US Department of Agriculture estimates the cost of raising a child born in 2015 at $233,610. Crucially, that is the cost of raising a child to age 17 and does not include college education and job training, nor the provisioning of young adults until they become net surplus generators. Nor does it include the opportunity cost – borne mostly by women – of parents who take time off during pregnancy and earliest infancy.

The high cost per child in places like the US might partly reflect overly generous consumption. There’s some truth to that. But the fact remains that when labor becomes hyper-specialized and knowledge-based it requires more resources to get to the point at which you become productive. It’s not just the basic needs and education. The more specialized and the more knowledge-based the economy, the harder it becomes to objectively measure whether someone has acquired knowledge and skills. Effort and resources go into signaling: prestigious degrees, resumé-padding activities, exclusive networks, fine clothes, all the things that might be subsumed under that horrid but real concept of a “personal brand.”

Raising the next farm hand or coal miner simply did not take the same amount of resources as raising the next oncologist, robot-plant operator, or social media influencer.

Pro-natalist economic policies like tax breaks and subsidies have been tried in many countries. They rarely and barely move the needle. And it’s not hard to see why: having one or two kids is, on average, as much as we can afford.

The root cause is ultimately not that people don’t want to have children. The root cause is the amount of investment it takes to get children to be productive in the high-technology, hyper-specialized economy.

Insofar as “more technology” increases specialization and increases the need for learning, it may not contribute to the solution of inadequate surpluses. It cannot be a default answer to the problem.

Age or skill obsolescence?

The adult worker gets squeezed at both ends. With a bit of poetic license: She has to work and scrimp for 50 years to raise her child through 25 years of education to become the geriatrician she then has to pay to keep her parents alive during 25 years of retirement.

When our pension schemes were devised, they assumed people would be net surplus providers until around 65, then live off the next generations’ surpluses before shuffling off their mortal coils at around the biblical three-score and ten years.

Yes, people now live longer. But they haven’t magically started to live longer: They live longer because we’ve unlocked ways to keep people alive longer. Those medical technologies are miraculous. But they also consume massive amounts of resources and labor. My father had a medical procedure that could have extended his lifespan by a decade or two but that cost somewhere close to the median after-tax annual salary. In his case, it failed, sadly.

Rather than retirement being a phase of reduced consumption, it often involves an intensification of consumption. Peering beyond the veil of money to the underlying real economy of goods and services: We train and provision geriatricians and oncologists who might otherwise have become schoolteachers and carpenters.

Historically, however, old age did not mean you stopped contributing. You shifted your contributions from, say, hunting and gathering, to childcare and cooking. That’s what freed others to do the surplus-generating hunting and gathering. And childcare meant imparting the skills and knowledge required for children to become net surplus producers. Or knowledge about how to survive once-in-a-lifetime ecological crises.

Age is not the problem. The problem is living beyond the obsolescence of your skills. As technology changes, the skills and knowledge you acquired when you were young – the ones that enabled you to generate net surpluses – are now likely to become obsolete in the span of your lifetime. Sometimes more than once.

In some professions, it is possible to retain a high enough level of productivity to be a net surplus generator past age sixty-something. But probably not in the majority of professions. Whether you “retire-in-place” and still pull a salary, or spend time retraining to become productive in another domain, you are de facto benefiting from others’ surpluses in that moment. Through no fault of your own.

In a world in which technological obsolescence and global labor specialization can destroy your community’s economic foundation overnight, working adults have to be mobile. So the option of shifting from direct surplus generation to indirect support by providing childcare is not open to every retiree. And even if it is, many of the life skills he has to impart might as well be from the Stone Age.

Demanding that people work past 67 is likely to be as ineffective as tax breaks for having more children. As with lower fertility rates, the real root cause is the rapid pace of technology-enabled labor specialization.

Misled by nostalgia

We face interlocking constraints that are both causes and effects.

  1. The resources and time required to become and stay productive are increasing.
  2. People live longer past the obsolescence of their skills.
  3. The horizon over which our skills are valuable is decreasing.

The dependency ratio – even in its elaborations – does not capture the reality of our predicament if it is not weighted by the intensification of child-rearing and eldercare, and the risk of obsolescence.

It’s a largely unchallenged dogma that technological and organizational innovation solve more problems than they create. However, we may have passed a point where that’s true.

Much of what we believe is rooted in nostalgia for the post-war boom, known in France as the Trente Glorieuses (“The Thirty Glorious [Years]”) and in Germany as the Wirtschaftswunder (“Economic Miracle”). Whether we experienced it directly or not, that period has shaped our collective consciousness. It birthed youth culture and rock’n’roll, and all the values, beliefs, and expectations we hold onto, no matter where we fall on the political spectrum.

But the Trente Glorieuses were an anomalous happy optimum where:

  1. The investment in skills required to become productive was not too onerous, meaning children became net surplus generators more quickly.
  2. Because children became net surplus generators more quickly, people could also afford to have more of them.
  3. You could count on your stock of skills to see you through 40-50 good years of surplus generation without becoming obsolete.
  4. People did not live for decades beyond the obsolescence or deterioration of their skills.

In contrast, now we face a world in which technological innovation extends our lives at great material cost, while rapidly rendering obsolete the skills we so heavily invested in. Both factors impose a set of constraints such that we can invest only in one or two children, if any at all.

We face this predicament globally, though its urgency will hit different countries at different speeds for historically contingent reasons.

“More technology!” is the answer for many: Make the fewer and fewer people more and more productive thanks to machines and AI. But if AI actually undermines skill development faster or more deeply than it augments human ability, the calculation will not work out.

But that is another essay and will be expounded another time.

The “inevitable” march of technological progress

My daughter and I got a kick out of this relic on a visit to Munich’s Deutsches Museum of science and technology.

It’s basically a chemistry set for hands-on learning about nuclear power. Including radioactive material.  

According to the description, this 1950s “toy” was created to spark interest in science generally and specifically in nuclear power, the promising new technology that would unleash our potential thanks to unlimited cheap energy. The box advertises the US government’s $10,000 reward for the discovery of new uranium deposits and includes instructions on how to prospect for uranium ore.

The Atomic Energy Lab was not commercially successful for a variety of reasons, safety concerns not being high on that list. Still, we did not all grow up tinkering with uranium, de-ionizers, and cloud chambers as we prepared to become nuclear engineers. Safety concerns eventually limited how far we developed nuclear technology and how widely we used it. We certainly didn’t put it under every Christmas tree or advertise it during Saturday morning cartoons.

We make choices about what technology to develop. We make choices about if, when, and how to make technology available to our children. There is nothing inevitable about the march of technological “progress.” There is no destiny to the direction it marches in. When we’re told technological progress is inevitable, that’s an act of persuasion. Believing it is an act of surrender.

Nuclear power is particularly instructive because it has not been an either/or choice. Different countries have taken different approaches. In some, development has accelerated; in others, it has slowed or reversed. There has been enormous internal political pressure to develop it both for peaceful and for military purposes. There has been enormous external political pressure to prevent rivals, even allies, from developing it for either purpose.

It’s also not a given that we made the “right” set of choices about nuclear energy. There is no obviously correct choice. Maybe if we had all grown up playing with the Atomic Energy Lab, and more resources had flowed to nuclear technology – as they did, for example, in France for many years – we’d have had a few more incidents and moderately higher rates of cancer for a few decades. But with all that tinkering and fired-up imagination we’d already be using near limitless clean fusion energy and wouldn’t be facing a fossil fuel-driven climate crisis. We placed bets. We mitigated some risks and embraced others.

We’re in the process of making a choice to develop generative AI with few restrictions on who uses it, how, when, and for what purposes. We’re putting these little virtual labs into children’s hands. As with the Atomic Energy Lab, we’re not terribly worried about what long-range impacts they might have on our budding scientists.

Somehow, we’re finding the will to invest in a massive reconfiguration of our electrical power generation infrastructure in the service of generative AI, with funds that seemed impossible to find when it came to transforming that same infrastructure with the goal of sustainability and energy independence. In fact, generative AI’s voracious need for electrical power is forcing us to reconsider the choices we made around nuclear power.

The choices we’re making are political in the best sense of politics: The mechanisms of collective decision-making we turn to when the win-win solutions of economics aren’t on offer. In the rollout of generative AI, there will be winners and losers. Whether we’re conscious of it or not, we’re in the midst of the negotiation about who reaps rewards, who bears the burdens.

As the nuclear power example illustrates, we have the power to choose. The notion that “technological progress is inevitable” is neither a logical nor empirical truth. It’s a strategic negotiating bid in the political battle about what role generative AI will play, and who will benefit from it and who will lose.  

Was the grass greener on the other side of the pond?

“The more troubling aspect is not that Germany is currently about 30% behind the United States in GDP per capita terms. What concerns me more is the relative deterioration over just one generation.”

That is an excellent point raised by my friend Detlef in a comment on my previous post (thank you, Detlef!), and one I should have addressed. It captures where a lot of the Sturm und Drang comes from among politicians and pundits in Europe in the face of US economic “outperformance” as measured in per capita GDP. It’s not the absolute difference in terms of GDP per capita, it’s the trend that counts, and that trend is deterioration.

Detlef’s concern motivates an overall narrative (which I won’t ascribe to Detlef) that European countries like Germany are in decline relative to the US because they are over-regulated and over-taxed, and because the US works harder, with more creativity and risk-taking. While Germany tied itself up in regulatory knots to prevent climate change and taxed its Macher (“movers and shakers”) to death (or into exile), the US unleashed Steve Jobs, Elon Musk, and Sam Altman to bring about the 4th Industrial Revolution.

Or so the story goes.

It’s a satisfying narrative for those in the US who like to self-congratulate and for those in Germany who’ve been brought up to self-flagellate. There may be a kernel of truth to it, as there must always be to any narrative that resonates. But the data don’t support it. Let’s look at the trend.

I’m going to use 1995 as a baseline for the trend comparison. Partly because it’s a neat 30 years. Partly because it corresponds nicely to the dawn of my own economic awareness. And partly because it corresponds roughly to the “information age.”

When Detlef wrote that “Germany is currently about 30% behind the US in GDP per capita terms,” he was – like Yascha Mounk – referring to raw GDP per capita numbers: $55,800 versus $85,800 (2024; all figures again sourced from the World Bank database unless otherwise indicated). And indeed, if you look at 1995’s raw numbers, Germany had a higher per capita GDP than the US: $31,700 versus $28,700.

But these figures ignore relative price levels. When you take price levels into account, you see that Germany’s per capita GDP was actually lower than the US’s: $23,700 versus $28,700 (the US’s price level is used as the reference, so the US’s PPP-adjusted per capita GDP is identical to the unadjusted value). That’s a difference of 21%: the US’s PPP-adjusted GDP per capita was 21% higher than Germany’s in 1995.

Today that difference in PPP-adjusted GDP is 19%: a PPP-adjusted GDP per capita of $72,300 versus $85,800. So while the average American is “richer” than her German counterpart, that was also the case 30 years ago, and if anything, Germany has “caught up” slightly.
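
For concreteness, here is the arithmetic behind those two percentages, using the PPP-adjusted figures above:

\[
\frac{28{,}700 - 23{,}700}{23{,}700} \approx 21\% \;\; (1995), \qquad \frac{85{,}800 - 72{,}300}{72{,}300} \approx 19\% \;\; (2024)
\]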

Poof goes the narrative.

But how can this be? Didn’t Silicon Valley change the world while Germans took six-week vacations and called in sick whenever they sneezed more than twice in two minutes? Didn’t tech superheroes enrich our lives with free maps and on-demand streaming while German engineers dithered around with obsolete internal combustion engines?

Part of the narrative is that American job-creating maverick entrepreneurs broke all the rules and created a crazy new world in which high-paid coding wizards cruise to work in their self-driving Teslas and then return to the kinds of glam downtown apartments you see on Friends and Sex and the City. But those rule-breaking mavericks may have destroyed a bunch of industries and jobs as well. It is called creative destruction, after all. It’s not a given that technology makes people richer in aggregate, though it may make individual people rich.

But what’s important to understand about the information technology sector is that – whatever impacts on GDP per capita it might have – those impacts have not been disproportionately recorded in US GDP, or not wildly so. The consumption of those goods and services, the labor used to produce them, the supply chains of the IT industry, and even the financing are all globalized. At the end of the day, America can claim some bragging rights for birthing brands like Apple, Google, Facebook, and Microsoft, but in terms of GDP contribution, the value creation is spread all around the world. So while the tech sector may have generated a lot of GDP growth, it will have done so globally.

The healthcare sector is a different beast. Its value chain is much more domestically based. Both the US and Germany have experienced disproportionate growth in their healthcare sectors – annual growth higher than GDP growth – since 1995. The sources I found indicate growth in health spending of 268% in Germany and 477% in the US (caveat: I cannot tell from either source whether these numbers are adjusted for the respective countries’ inflation rates).

Both countries have seen their populations age over those three decades, though Germany’s more so. So given that Germany is older, and given that the US’s health outcomes are worse, it is strange that the US’s economy has become so much more dominated by the healthcare sector than Germany’s has. I can’t help but see this as a literally “unhealthy” development.

It certainly invites a different narrative than the one about the job-creating innovators unfettered by regulations and taxes. Instead, it invites the story of a nation of nursing home attendants working themselves sick in order to pay the medical bills for their sickness. Not to mention their legal bills for suits against the insurance companies who denied them coverage, and the interest on their student loans for a degree in communications from the University of Phoenix.

There’s a kernel of truth to that narrative, too. Perhaps more than a kernel as my previous post tried to argue.

In any case, there is no good case to be made that Germany’s per capita economic position has deteriorated relative to the US’s over the last 30 years.

That does not mean that Germany is not at an inflection point now, and that it may need to do something – maybe even dramatic things – to adapt to an aging population and the decline of the internal combustion engine. But there’s no need for self-pity. And no need for Elon-envy.

Is the grass greener on the other side of the pond?

Jane works three jobs and generates an after-tax income of $78,000. This income suffices to make ends meet. Beyond food, utilities, and shelter, her ends that need meeting include:

  • Treatment of various medical conditions created or exacerbated by her inability to find time or energy for rest, recreation, and exercise.
  • Legal bills due to lawsuits with a former employer, a medical service provider, and a restaurant whose undercooked pork dish resulted in one of her chronic illnesses.
  • Servicing a loan for an education at a private college that has no bearing on any of her three jobs, but whose credential she needed to differentiate herself in the job market.

Johann works a single job and generates an after-tax income of $50,000. This income suffices to make his ends meet:

  • He has ample time for rest, recreation, and exercise, and so he has no chronic illnesses.
  • The regulatory environment he inhabits has established clear rules for employers, healthcare providers, and restaurants so that individual cases do not need to be adjudicated in courts.
  • The educational institutions in his country are standardized and do not compete with each other for the signaling value of their credential.

Who is “doing better” – Jane with her $78,000 income or Johann with his $50,000 income? And what will per capita GDP tell us about the relative economic success of a nation composed of Janes and a nation composed of Johanns?

I am an American living in Germany, and I read media from both sides of the pond. And on both sides, a narrative has taken hold that the US economy has been more “dynamic” or achieved better outcomes in the last two decades, based on the observation that GDP has grown more rapidly in the US. (The most recent example I can recall is by Yascha Mounk, but it’s just one of many of its ilk.)

Unambiguously, Germany has a lower GDP per capita than the US, and the gap has widened since 2000. The raw data from 2022 shows a huge gap: $49,700 versus $77,900, i.e., the US GDP per capita is nominally about 55% higher (unless otherwise stated, the data come from the World Bank’s Data Catalog, which can be downloaded as a spreadsheet). The gap has widened further since then (Mounk uses an even wider gap to make his point; nonetheless I will use 2022 numbers because I found World Bank data for the main variables I wanted to investigate only up to 2022).

However, the raw numbers do not take into account different pricing levels. I’m not sure why Mounk does not acknowledge that, especially because the concept of “purchasing power parity” (PPP) is well-known and the data are readily accessible. When you use those figures, the gap shrinks to $5,200, or 10%.

The question is: Does that mean Janes are better off than Johanns? And should Germany (and other European nations) emulate US policy to “catch up?”

GDP tracks economic activity insofar as it can be observed through monetary transactions. Not all measured economic activity is a sign of human flourishing. And not all flourishing-enabling activities are measured. Hire a chauffeur and his income contributes to GDP. Marry him and it drops back out.

GDP is an imperfect measure, and all economists know this. The question is whether even developed economies like the US and Germany differ systematically in how poorly they capture actual value-adding economic activity.

In the discussion to follow, I will look at how the US records more economic activity in three crucial areas that collectively account for much (possibly all) of the PPP-adjusted GDP per capita gap, and I will argue that these are areas that can profoundly mislead about welfare and prosperity.

Healthcare

In the argument that actual American Janes are not really better off than actual German Johanns, whatever per capita GDP might say, healthcare weighs the heaviest. Healthcare obviously contributes to human flourishing. But if Johann spends €0 on healthcare in a given year, there might be two reasons. One would be that healthcare was unaffordable for Johann. But the other would be that Johann just didn’t get sick. We can agree that a world in which Johann didn’t spend because he didn’t get sick is better than one in which he didn’t spend because he couldn’t afford treatment. But it’s also better than a world in which he was sick and generated €1,000 of economic activity in healthcare services and products.

In a meaningful sense, the “best” world would be one in which no healthcare spending took place because diseases and accidents didn’t happen.

So one thing to investigate would be whether the US experiences more illness than Germany, generating more healthcare-related economic activity than Germany does. If so, then that’s economic activity Germany can happily do without. What are the numbers?

In 2022, the US spent $12,434 per person on healthcare versus $8,453 per capita in Germany. Importantly, these figures are already adjusted for purchasing power parity (the gap is wider if the raw numbers are used). That means the average Jane spends $4,000 more on healthcare than the average Johann, and not just through higher prices. She truly consumes more healthcare services.

Now it would be one thing if Jane could point to better health outcomes: She can afford an additional $4,000 of healthcare spending compared to Johann, and has better health to show for it. But it is hardly news that US health outcomes, measured along metrics such as life expectancy and infant mortality, are inferior (easily confirmable in the World Bank data). So the US pays more for less. As unnewsworthy as that is, what hasn’t been highlighted as much is how much of the US’s outperformance is due to its sicker state. Out of the $5,200 per capita GDP gap, $4,000 of economic activity is due to being sicker!

The remaining gap, $1,200, is 2.5%.

Are there other economic activities Americans spend money on that might not really be signs of human flourishing?

Legal services

A world in which people never have a need to sue each other is better than one in which they can’t afford to get justice. But it’s also better than one in which they can afford to get justice and have to avail themselves of the courts.

The US and Germany have different legal traditions and approaches to regulation. Loosely speaking, Germany makes rules in advance: rules designed to prevent undesirable outcomes, e.g. pollution, medical malpractice, etc. There’s no shortage of discussion about how burdensome Germany’s regulation can be. But having rules devised in advance gives economic actors some planning certainty and levels playing fields for competitors.

The US leaves much up to the courts to sort out after something bad has happened. The “freedom” from regulation means that you have to be worried that someone will sue you for something later on, possibly spuriously. And the battle in the courts is an arms race, in which the side that can afford to “lawyer up” the most may well win the day, regardless of the merits.

How much measured economic activity originates in the legal profession in the US and in Germany, respectively? This is a bit harder to tease out because it’s not in the World Bank data. But the direction is clear. In 2023, American Janes spent a total of $363 billion on legal services. I failed to find exact numbers for 2023 in Germany, but in 2024 it appears to have been around $36 billion. So the US has to spend around ten times as much on legal services in an economy that is only six times the size.

The comparison of economic activity due to the legal profession is less clear cut than the healthcare situation, and it’s also a smaller factor. But I wanted to highlight it for three reasons:

  1. Americans are famously litigious, but I hadn’t seen the measurable economic impacts.
  2. One of the policy comparisons people like to make between the US and European nations like Germany is the supposedly lower level of regulation with supposedly beneficial impacts on economic activity; this is, however, never juxtaposed with the costs of the American approach of case law and the threat of expensive civil liability.
  3. If you run the legal expenditure numbers on a per capita basis, you get to a difference on the order of around $600 (rough arithmetic below), which would make up half of that remaining gap of $1,200 in relative per capita GDP.
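
A back-of-the-envelope version of that per capita comparison, assuming populations of roughly 335 million (US) and 84 million (Germany):

\[
\frac{\$363\text{ billion}}{335\text{ million}} \approx \$1{,}080 \text{ per person (US)}, \qquad \frac{\$36\text{ billion}}{84\text{ million}} \approx \$430 \text{ per person (Germany)}
\]

a difference on the order of $600 per person.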

Legal expenses can have an arms race character: You spend not to get quality per se, but to get a quality edge over your rivals. Everyone (except the lawyers) would be better off in a legal dispute if they all simultaneously agreed to “disarm” and spend less. But as in a military arms race, it can be hard to reach such an agreement, especially with a rival or foe.

A society that is spending on competitive arming for military and legal battles is not spending on things that improve the lives of its citizens. Are there other sectors with such an arms race character?

Education

Learning unambiguously delivers social and private benefits. But a lot of education spending is not about the capabilities you are building. It’s about the signaling value of your credential. In-state tuition at the University of Texas (UT) is currently around $12,000 annually. Annual tuition at Harvard is around $60,000. I could be persuaded that there are some differences in quality in the learning experiences between the two. But it’s hardly controversial that you’re paying five times as much to be able to drop the H-bomb in the dating and job search games.

According to OECD data, the US spends about $20,400 per student on education (all levels including tertiary and R&D, and including public and private spending). The comparable figure for Germany is $17,200. These are 2024 numbers, as they were the most readily available, so it wouldn’t be 100% clean to compare them to the 2022 numbers I used for the most important factor, healthcare. Still, this shows the directional difference. These are PPP-adjusted numbers, so the US is spending more on education even after accounting for price levels.

Crucially, if you look under the hood, the US spends slightly more on primary and slightly less on secondary education than Germany does, and in both cases, the vast majority of the spending is public. But in tertiary education, where a greater proportion goes to private institutions like Harvard (and where even “public” institutions like UT are partly funded by individual tuition), the difference is enormous: $22,000 for Germany vs. $36,200 in the US.

Whereas stats like life expectancy and infant mortality demonstrate pretty clearly that the US derives some of its measured economic activity from unwanted illness, it’s harder to pinpoint whether the US might actually be getting a better or worse educational outcome for its higher spending on education. Meanwhile, Germany has some of its own arms race dynamics when it comes to academic credentialing, as demonstrated by the many political scandals of politicians resorting to plagiarism to attain PhDs.

But at least some of the US’s excess spending on education must be attributable to the signaling arms race.

I won’t elaborate on financial services at length. But it’s worth observing that the higher spending on tertiary education is financed predominantly through loans rather than through taxes. That means the raw spending on tertiary education understates the amount of actual spending by the amount of interest students pay on their debt. Those interest payments contribute positively to GDP and are one – of very many – reasons that FIRE (financial services, insurance, and real estate) makes up a bigger proportion of GDP in the US than in Germany. (The US spends around $6.3 trillion on FIRE, and Germany spends around $550 billion. So more than ten times as much spending on FIRE in an economy only six times larger.)

I question whether the interest paid on the excess cost of education caused by the credentialing arms race contributes to human flourishing.

What about housing?

Arguments like Mounk’s recognize the need not just to compare raw numbers – even when, like Mounk’s, they fail to take pricing levels into account – but also to identify ways in which the higher incomes translate into observable human flourishing. Mounk’s main case relates to housing: Specifically, the relative sizes of average US and German homes: 2200 versus 1200 square feet (here’s his source).

He correctly points out that more “space” may or may not be intrinsically good, but that it affords the option to have more things, including very attractive time-saving things like dishwashers and dryers.  

Although this is true, he fails to take into account that bigger houses are not really a product of policy differences or economic dynamism. House size is substantially a matter of density and available land.

And while it’s nice to talk about the positive aspects of big houses, especially when things like dishwasher and dryer purchases contribute to GDP, it’s also important to highlight the downsides.

Greater density means that I have option value when it comes to choosing transportation in Germany: I can get from A to B in daily life by car, by public transport, on foot, or by bike. All four are realistic, safe alternatives. When I lived in the US, many trips were not viable by anything but car. Biking and walking contribute much less to GDP than driving, but may have the same impact on my flourishing (or arguably, much more).

All this by way of saying that “Americans live in bigger houses” is weak evidence for their greater prosperity, and even weaker evidence for the superiority of American economic policy or business culture.

Conclusion

The subtext of these articles comparing per capita GDP is always “Europe has been doing something wrong in its economic policy,” and “the US model is worth emulating.” I can understand why the raw numbers suggest that. And there may be interesting questions to ask about why the US’s information technology sector is so strong compared to Europe’s.

But to jump from raw GDP numbers to the conclusion that European economies are underperforming where it counts – creating value for citizens – is unwarranted. If there’s more measurable economic activity going on in the US because Americans are sicker and competing in more arms race-type games, then I don’t think Europeans should believe the grass is greener on the other side of the big pond.

Europe should look at itself and say “we could be doing better (and we’ll have to because of an aging society).” But it need not look to the US for a model to emulate.

Why I’ve invested in stocks – and why I’m reconsidering

A friend of mine has maintained for two decades that the US stock market is substantially overvalued and that stocks are not a good investment. As I understand it, he subscribes to the simple argument that you pay much more for a dollar of corporate earnings than you used to: The price-to-earnings ratio is high compared to the long-term average (roughly, you used to pay $10-20 for a dollar of profit; in recent years you pay more like $20-30).
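
To pin the terms down: the price-to-earnings ratio is simply the price you pay per dollar of annual earnings, and its inverse is the earnings yield.

\[
\text{P/E} = \frac{\text{price per share}}{\text{earnings per share}}, \qquad \text{earnings yield} = \frac{1}{\text{P/E}}
\]

So a P/E of 10 corresponds to a 10% earnings yield, while a P/E of 30 corresponds to roughly 3.3%: the same dollar of profit costs three times as much.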

After two decades, the claim that “stocks will eventually have to crash” is no longer an interesting prediction. Across two decades, stocks will experience major reversals, and they have done so in the last 20 years. The largest drawdowns – provoked by the credit-fueled Global Financial Crisis and the Covid pandemic, respectively – are not events whose timings are predictable. But they aren’t really exotic occurrences, as any student of history knows.

Both times, my friend believed the overdue crash had finally arrived; both times, the markets recovered their prior values and went on to new highs.

Through all of this – and against my friend’s warnings – I have invested my savings predominantly in stocks. Early on, I did so before I really understood stocks’ value proposition. I invested in stocks – naively – because I had understood that stocks have historically offered better returns than the other (liquid) asset classes (cash and bonds).

As I learned more, I came to understand my friend’s argument. It’s solid. Yes, stocks may have delivered good returns in the past. But if everyone believes that naively – as I did – then they will buy stocks and that will drive up the price to a level such that stocks will not be able to deliver as good a return in the future.

People in the 1950s had less collective experience in stock markets to look back on, hence their confidence in the stock market was lower. They were willing to pay less for a dollar of corporate earnings, keeping stock prices low. Later, as people gained confidence in the stock market, their buying behavior bid up the price of stocks, giving investors from the 1950s great returns from price appreciation. Historical stock performance since the 1950s looks good because of the earlier lower confidence in the stock market. Ironically, greater confidence leads to higher prices, leading to future performance that is lower than the historical average. From my friend’s point of view, this realization just hasn’t set in yet.

This dynamic – where beliefs about probabilities in a system actually shape the probabilities in the system – lies at the heart of the Ruminathans. But contrary to my contrarian friend, I have so far continued to invest in stocks. Not because I disagree with his analysis. I’m expecting lower returns. But I don’t see great alternative ways to save.

Instead, my continued willingness to invest in stocks rather than the alternatives (credit, cash, and real estate) rests on three main pillars.

  1. People will tinker to find ways of creating more with less. We will find new, surprising ways to delight each other and we will find ways to do so while consuming fewer resources (including our own time).
  2. You can’t bet against the institution of private property. It’s not that I believe private property is intrinsically sacrosanct and that humanity won’t reconsider it. It’s just that if we jettison the institution, then no asset class is a safe store of value.
  3. Markets will continue to be organized within a rules-based institutional framework in which people by-and-large speak the truth to each other because they believe everyone else is doing so, too.

In recent years, I have invested in stocks not because I hope to earn their historical rates of return. I have invested in stocks because I’ve been betting on human ingenuity and willingness to find cooperative solutions in the many strategic games we find ourselves locked into playing.

I have not invested in stocks because I expect to get 6% average annual returns after inflation. I have invested in stocks because people will continue to find new ways to create value, and in a rule-based political economy with private property, stocks are the best way to stake a claim on that newly created value.

There are no iron-clad laws of nature that guarantee these conditions. It’s just that a world in which they no longer obtain is so different that it’s not clear what remaining ways there might be to store value.

However, recently I’ve been thinking about whether generative artificial intelligence is chiseling away at all three pillars.

  • If generative AI does much of the work, will we still have the desire to tinker towards innovation? Will we have the skills to do so? Will we even develop the values that silently underwrite the very notion of “improvement?” Will an alien intelligence share those values and create things that we recognize as improvement?
  • The great leaps in productivity and innovation came from extending the old – and not always progress-oriented – notions of private property in land to property in ideas. What happens when ideas are not generated by people, and when people’s property in ideas is cavalierly exploited without compensation by algorithms?
  • If the vast majority of information is no longer generated by people, how well can our existing punishment and reward frameworks for maintaining trust between strangers still work?

I hope to explore each of these threats to the pillars of stock investment in coming Ruminathans.

The real political divide

In the early 2000s, I used to read the conservative political journal National Review. I wanted to follow what I believed was a good-faith presentation of the conservative perspective, an outlook I fundamentally respect intellectually. It’s also hard not to empathize with conservative political philosophy if you were partly raised by Gandalf.

Not all change is for the better. Not all progress – no matter how well-intentioned – improves the lives of the disadvantaged. Babies do get thrown out with the bathwater. I get it. And there are positions that I think are logically coherent even if I have reached different conclusions. I can understand how someone can support the death penalty while defending the life of the unborn.

I gave up on National Review because – between the defenses of lower taxes and the raids on progressive sacred cows – there was always something else, like an intermittent odor that vanishes every time you try to sniff it out, only to return just when you thought you were imagining it.

I once ordered an intriguing-sounding dish off a menu in Colombia. When it arrived it smelled elusively pungent, reminding me of something I couldn’t put my finger on. Then – bang – the word “cowshit” came to me, and I realized I had ordered tripe stew. In the same way, I finally recognized the odor haunting the pixels of the National Review when, as a young professional, I encountered the word “leadership.”

The German word for “leader” is “Führer.” Yes, as in “Der Führer.” And as someone with complex ties to Germany, I still get shivers down my spine at the idea of “leadership,” to this day. “Führer” was not just a title in Nazi Germany; the “leadership principle” (Führerprinzip) was its ideological foundation: the will of the supreme leader has overriding force over people and laws.

The belief that we need a strong leader to keep the trains running on time is the basic ingredient for the catastrophes of not just the 20th century, but of most of the centuries of the written word. Hearing the word “leadership” – or worse, “strong leadership” – used positively, not just at National Review but all over US political and corporate discourse, rang alarm bells. “Any society that idealizes ‘leadership’ is headed towards authoritarianism” was my exact thought at the time, twenty years ago.

Of course it occurred to me that my reaction was overblown. So said everyone I mentioned it to at the time, too.

But “leadership” gave a name to the ephemeral stench I had sensed at National Review and other “conservative” media. The smell gradually grew stronger and stronger. And when an odor grows stronger gradually, the risk is that you no longer notice it.

It took a particularly egregious fart of fascism for me to abandon National Review entirely. In a 2007 article, contributor Mona Charen presented as fact a completely unsubstantiated claim that former Prime Minister Benazir Bhutto had survived an assassination attempt via a bomb strapped to a baby (just weeks before her successful assassination by adults). In her article, Charen wanted to illustrate the utter depravity of “the terrorists,” implicitly letting the US off the hook for a “clumsy” war in Iraq and the use of torture in Guantanamo.

Disregarding facts in the service of demonizing an opponent and justifying heinous acts – that’s on page two of the authoritarian’s playbook. But it would be unfair to pin the “authoritarian” label on Charen. Instead, I see her article as an eruption of what lies beneath in us all but has, in the last two or three decades, grown, surfaced, and found a home in the US political party nominally representing “conservatism.”

I can empathize with the tug towards authoritarianism. At its root lies the disillusionment that comes when – in the process of becoming adults – we first confront the fact that we don’t all agree on what’s true and good. It’s a nauseating experience. The realization that there may be no such thing as the absolute truth, and that even if there is, your own view will never encompass the whole of it, is like an abyss opening at your feet.

How you react to that abyss determines what basic political ideology you adopt. Some of us search for a strong hand that holds us back from falling, even if that hand extends from the sleeve of a brown shirt.

Some of us see the abyss as a void that we can fill through our collective efforts. By communicating – talking, writing, arguing – we can, like the five blind men touching the elephant, make overall progress on a working theory of reality. The project is never complete. And that means that everybody, regardless of the gifts they have or haven’t been born with, or whether they have been born at all yet, can contribute pieces to the puzzle.

2007, the year of Charen’s article, also witnessed the birth of a blog by a writer recently interviewed in the New York Times, Curtis Yarvin. Yarvin is explicitly in the camp of those who reach for the authoritarian’s hand at the edge of the abyss. He argues that a strong executive, a “monarch,” can more efficiently lead the way to the “common good.”

If the “common good” were obvious, then maybe an efficiency argument would have some foundation. But the common good is elusive. It’s one of those things – maybe the most important thing – we’re collectively striving to discover and negotiate about.

Deliberative democracy is the set of rules and institutions we have adopted to ensure that as many people as possible can participate in our conversations about what is good and true. The voting part of democracy is simply a mechanism to halt the discussion periodically so that we can choose a course of action – a policy – that constitutes a test of our hypotheses about the good and true. Other decision-making mechanisms besides voting are conceivable. We can even include coin-flipping when we reach an impasse. Just as important as the decision-making mechanisms are the rules we adopt to structure the deliberation that informs the decision.

Deliberation is what helps us discover which decisions are important and which aren’t, what the options are, and most importantly, what criteria – what values – should inform whether we prefer one course of action over another.

The Supreme Leader does not achieve efficiency by cutting through deliberation and red tape to reach a known goal. The Supreme Leader’s “efficiency” comes from ending the search for the good and true, offering a plastic substitute in its place.

Say what you will, Yarvin should be commended for surfacing what many nominal “conservatives” like Charen have been keeping submerged, consciously or unconsciously, for the last two decades. And by all accounts, Yarvin’s views are resonating among those with the money and access required to change the rules and institutions through which we order our affairs.

The true political frontier doesn’t divide conservatives from progressives. The true dividing line is between those who embrace the collective and unending quest for the good and the true and those who want to end it by the most expedient means: submitting to the will of a supreme leader.

We have been born into the interesting times in which that line is being drawn more clearly once again.

The power of ideas and the power of people

I recently finished Matthew Stewart’s An Emancipation of the Mind: Radical Philosophy, the War over Slavery, and the Refounding of America. An Emancipation traces the role played by early 19th-century German philosophical currents in reinvigorating the abolitionist movement that culminated in the end of slavery in the US.

It was a fascinating read for someone like me, whose knowledge of US history is a bit patchy. I’ve looked deeply and critically at some eras, like the Civil War itself, but for many others I have only spotty impressions informed by conventional narratives. Take the abolitionist movement: I had understood it as a steadily growing movement in the Northern states that led to the formation of the Republican party, which eventually captured the presidency, prompting the South’s secession.

But Stewart makes the case that abolitionism, rather than steadily growing in power, had pretty much hit a wall in the early 1800s. An Emancipation tells a story of a successful subversion of the ideals of the American Revolution by the slave-holding class in the South. This anti-democratic counter-revolution was successful not only by being better organized and by skillfully exploiting the minority rights embedded in the US Constitution. It was successful because most of the North was perfectly willing to adopt the South’s values.

It took the influx of ideas bubbling up in Europe – above all in Germany from the likes of Hegel, Marx, and Feuerbach – to galvanize and revive the faltering abolitionist movement. Stewart traces the direct and indirect influence of the German thinkers on figures including Frederick Douglass, Theodore Parker, John Brown, William Herndon, and ultimately on Lincoln himself. An Emancipation is a compelling testimony to the power of ideas to shape the world.

And at the same time, it testifies to the need for ideas to be transmitted by living, breathing people: particular quirky people with their particular quirky talents and passions. It’s not just the radical German ideas that made it across the Atlantic. In the wake of the failed revolutions of 1848 – especially in all the German statelets – it was radical political refugees who crossed the waters, carrying the ideas, yes, but equally importantly, carrying their passion.

Stewart places the – certainly intimate and most likely romantic – relationship between Frederick Douglass and German journalist Ottilie Assing at the narrative center of his book. But it’s a stand-in for all the human-to-human collisions that changed hearts and minds.

Stewart doesn’t speculate on it, but for me the book raises the question of what role the movement of people played in the history of ideas in Europe. Could the emptying of Germany of its passionate radicals during the second half of the 19th century have contributed to the terrible direction Germany took in the first half of the 20th?

I picked up An Emancipation for unrelated reasons, but it turned out to be a serendipitous choice in November 2024.

Why I stopped caring about intelligence

Given our world’s current obsession with “artificial intelligence,” I’ve been thinking a lot about the natural kind. And what I’ve concluded is nicely summarized in a book, above all in its title: You’re Not That Great (but neither is anyone else).

You’re Not That Great’s author urges us to stop worrying about what innate qualities we have or don’t have, or how much more or less we have than others, and focus only on pursuing improvement from where we are at the moment.

If there is something to “being smart,” then the safest assumption is that I’m probably average. So, yeah, maybe I’m not that great. It’s a tough message, but a liberating one. Accepting that fact is a superpower that I’ve found to be far more practical than any notion of intelligence. It’s allowed me to search much more effectively for truth, beauty, and sound financial decisions.

Part of why our new chatbot friends attract so much of our attention, so many fears and hopes, is that we’ve wrapped so much of our sense of self into the idea of intelligence. We tell ourselves stories about who we are in terms of our intellectual capabilities: Intelligence is what separates us from animals. We spend the better part of childhood striving to achieve grades that attempt to divide us into the smart, the stupid, and the drab mediocre. As organizations, we want to outsmart our competitors. You can’t understand today’s global political discourse without the resentment of “elites,” and we all know that we’re not talking about elite athletes at the Olympics. Nor about the world’s best forklift operators.

We’re obsessed with intelligence, constantly judging others and judging ourselves in reference to a vague scale of smartness. But how many of us have a clue about what it means to be objectively brilliant or clueless?

There are many different definitions of intelligence. There’s a proliferation of types of intelligence. I know a successful business lawyer who can’t tell left from right, and a molecular biologist who’s lousy at personal finance. There seems to be some – but not universal – agreement about whether the different types of intelligence tend to appear together in the same people. There’s controversy about how to measure each type of intelligence and whether they can be measured at all. There’s controversy about intelligence’s causes and how an untold number of factors on the nature and nurture side contribute to it.

Those controversies and complexities alone suggest that worrying about an intelligence scale – and where you or anyone else measures up to it – is a colossal waste of time.

But suppose there were some kind of “general intelligence” that aggregated all the different ways we try to solve the problems the world throws at us. And suppose we could define a score that would allow us to compare each other, such that statements like “Jill is smarter than Phil” are meaningful. Until such a (horrific) time comes that everyone’s general intelligence gets rigorously tested and displayed publicly, judging whether Jill is, in fact, smarter than Phil based on available evidence will be a matter of expertise. Of skill. Something you could be bad at and get better at.

Like any skill, judging other people’s intelligence is probably subject to the Dunning-Kruger-Effect: Most people will significantly overestimate their ability to judge intelligence.

In that case, several principles guide how I think about myself, other people, and the world:

  1. Absent evidence, my first assumption is that I am average on that smartness scale.
  2. Because I’m always going to pay more attention to evidence that suggests I’m above-average smart, and unconsciously repress evidence that I’m below average (a form of Confirmation Bias), I will stick by principle 1 even in the face of evidence.
  3. Thanks to the Dunning-Kruger-Effect, most people are going to overestimate their skills in judging other people’s smartness… and their own.

It’s not that there is no such thing (or really, things) as intelligence. It’s that there are structural reasons we’re going to misjudge ourselves and others. People who worry about smartness will overestimate their ability to judge how smart other people are and systematically overestimate their own intelligence. Both factors will lead them to make poor judgments, especially when their judgments about other people’s intelligence contribute to their decision-making, say in evaluating the quality of information or analysis provided by someone: “Phil’s a sharp tack: His analysis of the cost structure of our product must be sound.”

The overconfident will make mistakes whether or not they score higher than you do on some kind of objective smartness scale.

In combination, these three principles suggest a generalized strategy for making your way through the world, and contributing positively to it while avoiding some big pitfalls:

  • Stop worrying about how smart you are. Or anyone else.
  • Observe carefully what people who do worry about smartness say or do:
    • Second-guess their judgments; they’re vulnerable to the combined impact of Confirmation Bias and the Dunning-Kruger-Effect.
    • Never become economically overexposed to the consequences of their judgments (that’s important when choosing employment and investment opportunities).
    • Look for opportunities to create value for everyone by identifying the mistakes of the overconfident.
  • Keep at it: Be willing to second-guess, but follow through and look for new answers.

“But aren’t you just saying that you’re smarter than the overconfident if you can identify their mistakes and fix them?” Not really. Identifying errors doesn’t require deep insights. It mostly requires the willingness to ask “What if X were wrong?” and the diligence to pursue that question. You can do that at your own pace. You can find people to help you.

If “intelligence” means anything, it’s the ability to discover what’s knowable about the world. If the world is a readable book, and you think you’ve found an undiscovered chapter, you’re probably mistaken. Chances are someone else will have got there before you. Many people will have. You’re more likely to find a unique new way to contribute to the world by questioning the assumptions of the overconfident.