In Talmudic/Rabbinical (also known somewhat misleadingly as "Orthodox") Judaism, the sabbath laws forbid carrying objects across property boundaries, but not through shared communal spaces. The standard workaround is an "eruv," a physical boundary within which a community notionally combines its territorial holdings into a single communal plot on the Sabbath, allowing Sabbath-keeping Jews to bring items such as strollers and housekeys to and from the synagogue.
I am not Shomer Shabbas, because I am not a member of a Talmudic/Rabbinical Jewish community, because their collective intelligence is intentionally incurious about the generators of Enlightenment thought. And, living in highly atomized and digitalized contemporary America, it's hard to maintain a social life for my children without keeping my phone on. I do, however, leave it in the car when I bring the children to the local Conservative-affiliated synagogue. I do this because I can afford to. I can, likewise, afford to go without my phone during other well-bounded intervals when I don't need to coordinate with people who are physically distant.
I think the basic principle that makes an eruv work is the same one that motivates me to leave my phone behind when I walk into the synagogue: I'm entering a space with a high density of people who have applied their collective prudence to providing in advance for likely needs during that length of time in that physical space.
Likewise, the boundaries of an eruv are likely limited to areas walkable to a single synagogue (or, in cases like Pittsburgh, a cluster of mutually walkable synagogues), and contain a high density of people who care about the eruv and the Sabbath and share mutually acceptable standards of Sabbath observance, so that they're jointly motivated to, e.g., make social plans in advance of the Sabbath or while they're physically collocated at a conventional meeting place like the synagogue.
I was better able to observe a Sabbath when I was mostly unattached and not working on anything urgent, and I'd be better able to observe one in community with others.
But I'm not asking about the third-degree simulacrum of journalism invoked by vaguely newsy websites or brands that mask the absence of an underlying reality. I'm asking about correspondence with the underlying reality that the idea of journalism was originally supposed to represent: the idea that somewhere, some readily intelligible events are happening to people, which many people have an interest in knowing about, so paid specialists go find out what's happening, write it up (or record a verbal or audiovisual description, sometimes with supporting direct recordings of the event), and publish these descriptions of new events periodically, so that they're available for the general public to read about, listen to on the radio, or watch on television.
I've heard some friends suggest that news reporting is mostly no longer happening, despite the continued creation of ostensible news content - e.g. that while you can still get a stream of characters from the New York Times that satisfies the demand for a master signifier of the form "news," you can't find out what's happening in the way you might have been able to a few decades ago. But is this true? If so, what would it look like?
Well, I was vaguely aware that there were big "protests" following the public summary execution without trial of an accused counterfeiter by police officers in 2020, and that there was some violence in Minneapolis. But until I read Lydia Laurenson's article describing why she never published an account of the Minneapolis riots, I had no idea that there had been a days-long total collapse in public order in large areas of the city, such that ordinary citizens had to use informal decentralized communication to figure out which neighborhoods were no-go zones.
When I mentioned this, my mate was surprised that I'd been unaware of the extent of the riots, and suggested that maybe it's because I didn't follow enough racists on Twitter. (I follow Steve Sailer, but he's too statistics-pilled to report much about facts on the ground. Through him I was aware of the post-George-Floyd increase in traffic fatalities, which seems to have happened because police around the country responded to the widespread outrage at public summary executions by refusing to enforce traffic laws. But I was unaware of the extent of the riots.)
To double check whether I'd missed this because of my particular filter bubble, I asked ChatGPT 4o whether the New York Times had covered the riots. ChatGPT claimed the Times had covered the riots, but when asked for examples, could only find articles mentioning "protests" and a retrospective "Why Minneapolis Burned" that focused on the causes but not the events they were causes of. When I pointed this out, it kept insisting that the riots had been covered somewhat with reporting on property damage. When I pointed out that this wasn't the same as covering the underlying collapse of public order (e.g. whole neighborhoods that were not safe for ordinary citizens to go outside in), it claimed the Minneapolis Star Tribune had covered more, but again was unable to find examples covering more than property damage. It only admitted that this wasn't the same thing as covering the riots themselves when I pointed out this was like covering a war in terms of broken windows.
By contrast, the Institute for the Study of War – which I found through one of the few (though amateur and unpaid as far as I can tell) news websites left, The Ethereal Voice – has in fact been covering real and purported events in the Russian invasion of Ukraine, with more context than I have the spare attention to track, including maps of occupied territory, and advances and attempts by each party. If I lived in Ukraine, this would be extremely valuable information, the analogue of which I do not have about my own country.
There's some sort of fight supposed to be going on between agents of the Federal US government and protestors, rioters, or both, in or around Los Angeles. But when I tried to look up news articles about it, all I found was coverage of what various government figures were saying about the conflict - not an attempt to form an organized picture of what's going on. If any of my readers know where to find this, I'd love to know.
Zvi Mowshowitz is publishing regular roundups of AI news, but they're largely written for people who are already familiar with enough of the field's technical details that they're difficult for me to make sense of. On the other hand, my father, who's hardly an off-the-grid news recluse, had not heard of ChatGPT (or LLMs) until I told him about them a couple of weeks ago.
When Shawanna Vaughn's nonprofit (mentioned in The Debtors' Revolt) bought a plot of agricultural land on which to develop a rehabilitative private prison / farm, she found that the notional legal suppression of racial discrimination in real estate just means that if you buy land in a racist municipality, you don't really get to use the land; you just get harassed and slow-walked by the authorities over permitting, taxes, and regulatory compliance until you give up and sell. You might be able to leverage racial discrimination laws to sell at a profit, but it's still a loss of time, and you don't get to do the project. Maybe there are some articles about individual stories related to this, but I'm unaware of any public effort to make sense of the overall situation.
I can't really blame the journalists, or the news publications, or even the competition from Craigslist and news aggregators that vastly reduced the profitability of local classified ads. It seems like the demand for the product largely isn't there, for reasons that have more to do with cultural decay than emerging technology.
Consider this article about a pilot who was scapegoated for a technical problem with the F-35, which I also found via The Ethereal Voice. It seems like they fired a talented pilot in order to shunt bad vibes about the F-35 onto him. This is a dynamic I'm only now really coming to understand.
It's not exactly a coverup of any specific facts. It's more that blame is being treated as a metaphysically independent conserved quantity: if they shame the pilot, they feel like this protects the F-35's reputation. I don't think this affects the statistics; it just tries to derail investigation by creating a situation where, if anyone looks into this particular incident, their curiosity is redirected to "what did that man do wrong? Is it really his fault?"
I'm also reflecting on the Hollywoodized sense I grew up with of what a piece of investigative reporting like this is supposed to do - I had the sense that the sequence goes:
- Coverup prevents pronormative action.
- Journalist exposes coverup and crime.
- Corrective action is taken.
But by now it seems like the sequence of events is more like:
- Coverup is business-as-usual even though that makes no sense.
- Journalist exposes coverup and crime.
- Now people have heard about one more example of coverups and crime, maybe a comedian also talks about it, that's it, end of story.
I'm not even sure how to use this knowledge, except to trust raw incident rates over incident rates adjusted for "user error" or other mitigating factors, when assessing the reliability of manufactured products. Since at least if people start crashing airplanes on purpose that's a self-limiting problem.
There's attention demand for news content product, but not really an intelligible widespread intent to make use of the information in the news. This has got to make it demoralizing to try to report on the news, so we get less reporting on what happened and more gossip, which is apparently what the eyeballs and clicks want. The reporting I found on the LA riots is one example. Another is this Wired article my mother recently shared with me about the LMHR (lean mass hyper-responder) study.
As it happens, this is a story I'd been following independently. The context is that engineer Dave Feldman developed an explanatory theory of high measured levels of LDL (low-density lipoprotein) cholesterol in the blood, which implied that while under some circumstances it's produced by, and therefore an indicator of, metabolic dysfunction, under other circumstances it's perfectly adaptive and therefore not indicative of a problem. Specifically, he proposed that while metabolic dysfunction can cause backlogs of multiple sorts of energy substrate including the fat carried by LDL, lean active people on a ketogenic diet are using more fat for energy, so their healthy bodies pump more cholesterol-containing lipoproteins through the blood. The contrary established view in cardiology is that LDL in the bloodstream causes blockages in the arteries, no matter how good a reason it has for being in the bloodstream.
Dave Feldman proposed, raised funds for, and sponsored an experiment - to be performed by academically credentialed third parties - to test his hypothesis. He recruited a population of people on ketogenic diets with the "lean mass hyper-responder" blood profile (high HDL, high LDL, low triglycerides), and matched each with someone in a preexisting longitudinal study measuring coronary artery calcification (CAC), with similar LDL levels and other measured traits. Both his recruits and the members of the preexisting study would have their CAC measured twice, at a significant interval of time, and his study would compare the two groups. If LDL predicted CAC progression in the regular group but not the LMHR group, this would favor Feldman's hypothesis. If it predicted CAC progression about equally well in both groups, this would support the establishment's theory.
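To make the comparison logic concrete, here's a minimal sketch in Python. It is not the study's actual analysis plan; the data, effect sizes, and group labels are hypothetical, chosen only to illustrate the question the design was meant to answer: does baseline LDL predict CAC progression in both groups, or only in the matched comparison group?

```python
# Minimal sketch of the comparison logic described above, NOT the study's
# actual analysis. All data, effect sizes, and labels are hypothetical.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

def simulate_group(n, ldl_effect):
    """Simulate baseline LDL (mg/dL) and CAC progression for n subjects."""
    ldl = rng.normal(190, 40, n)                       # baseline LDL
    progression = 10 + ldl_effect * (ldl - 190) + rng.normal(0, 20, n)
    return ldl, progression

# Under Feldman's hypothesis, LDL predicts CAC progression in the matched
# comparison cohort but not in the LMHR cohort; under the established view,
# it predicts progression about equally well in both.
groups = {
    "matched comparison cohort": simulate_group(100, ldl_effect=0.3),
    "LMHR cohort":               simulate_group(100, ldl_effect=0.0),
}

for name, (ldl, progression) in groups.items():
    fit = linregress(ldl, progression)
    print(f"{name}: slope = {fit.slope:.2f} per mg/dL, p = {fit.pvalue:.3g}")
```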
What seems to have happened is that CAC progression was surprisingly high in both groups, swamping the measured difference between the groups in a way that was hard to interpret. But you'd know hardly any of this from the Wired article, which uncritically reports what various people are accusing others of, and how various people defend themselves, without asserting any opinions on base reality or the factual plausibility of different claims. The reporter (or their editor) doesn't seem to feel any responsibility to describe what happened in material reality, except what people are saying about each other. In other words, the main things I learned from the article were:
- The study finally got published.
- Wired is a gossip rag.
Then there's Cade Metz, a reporter on tech for the New York Times. I first learned about him when Scott Alexander got upset that a Times article about him was going to report his full legal name. In the process of writing that article, Metz interviewed Michael Vassar. Metz's questions were entirely focused on who was socially connected with whom, and he seemed completely uninterested in the content of those connections, i.e. what people thought they had in common to communicate about, which from Vassar's (and indeed any valid) perspective was necessary to understand anything important about these connections. Eventually, Vassar asked Metz how he'd adjudicate a fact claim. The answer: experts. What are experts? Whoever is vouched for as an expert. Pure postmodern social construction. What makes an article good reporting? Fairness. What's fairness? If all sides get the chance to defend themselves. Absolutely zero recognition of any underlying reality about which an accusation might be made, or communication attempted.
You can also read Zack Davis's interview with Metz. When Metz reached out to me in the process of working on a book on the idea of general intelligence, I independently formed the same impression; when I asked him about his interest in the topic, his perception of my relevance, etc., the answers were entirely in terms of social reality. What was really striking about this was the perfect serenity with which he was enacting a pseudoperspective which functionally constitutes total and perfect aggression against anyone who cares about anything. I didn't think it was worth my time to proceed.
Is this a hit piece against Cade Metz? No. He's doing the job institutions like the New York Times hired him to do. Is this a hit piece against the Times? No. It's just the news.
The measured fact to be explained is that fertility rates tend to drop below replacement in countries with a high measured economic output per person. This is both a big long-run problem and a puzzle. Microeconomists typically construe this as rational individual choice, as though people in wealthy societies simply discover better things to do with their time than raising families. But this causal explanation is backwards.
From a microeconomic perspective, if the state wants less of some behavior, it should tax it, increasing its cost, which can be expected to reduce the frequency of that behavior. If it wants more of some behavior, it should subsidize it, reducing its cost, which can be expected to increase the frequency of that behavior. But at least in the USA - and probably in other high per-capita GDP countries participating in the same economic system - fertility is suppressed in large part by the macroeconomic policies with which the state constructs and regulates stores of financial value. If we are rich within this system of account, that means we can demand a great deal of labor from young people who might otherwise be producing and caring for their own children.
This arrangement did not emerge through open deliberation about how we wanted to arrange our society. Rather, it developed through a series of historical contingencies and power struggles following the closure of imperial frontiers. Fixing this problem would not be a relatively modest technical adjustment to a well-understood system of inputs and outputs, but would constitute a radical change in how our society functions, with hard to predict consequences that would likely seriously disrupt our current modes of governance.
Fertility and Stores of Value
Animals tend to balance their nonemergency activities between obtaining enough resources to reproduce, and reproduction itself. In many animals there appears to be a range (different for males and females) within which available metabolic energy is positively related to both the behavioral tendency to seek sexual encounters, and fertility. Too skinny? Your sex drive drops, to better focus on feeding yourself. Metabolic problems that lead to energy being stored instead of used? Your sex drive drops. Too few calories in a day? Lowered sex drive.
If you're a certain sort of smart animal, like many mammals are, your behavior is also determined by your interpretation of your environment. Is food abundant? If not, animals that plan ahead will build up a hoard. Do you have friends you can rely on for mutual aid and protection? If not, many social animals will focus on fixing that first. People lacking friends tend to display similar behavior patterns to those characteristic of undernourished people: desperation (trying increasingly unusual or drastic actions that might fix the problem) or torpor (burning less energy while waiting for conditions to change).
So we would expect, by default, that a society that became fantastically rich relative to some baseline, and is continuing to become richer over time, would become more fertile relative to that baseline as well.
So, how to become rich? One can form friendships. But past a certain point, a society is well-connected enough that the marginal additional friendship doesn't offer much insurance value; the total amount of help available doesn't change, only the capacity of the system to dynamically allocate it where it's needed.
How does one increase the amount of future help available?
Concrete solutions include harvesting crops now and placing them in relatively stable storage in order to feed one’s family during the winter or dry season, building a durable shelter now in order to benefit from it for the rest of one’s life, or conceiving, birthing, feeding, and caring for children, so that they become capable adults who will gratefully and lovingly care for you when you are too old to care for yourself. One can also improve one’s future capacities or save oneself future work by making tools that make various tasks easier, or by giving non-relatives gifts of goods or services to create bonds of gratitude and expectations of mutual aid.
In highly specialized and alienated societies, we are offered the alternative of generic stores of value. Without planning for the details of one’s future needs, one could instead simply help others in whatever way one can exchange for the greatest quantity of a highly compact, stable, persistently difficult to manufacture (and thus persistently scarce) commodity such as gold or silver. Such a commodity would retain a persistently high exchange value, so if you have a lot of it, that means you can expect to be able to exchange it in the future for the goods and services you will need, delegating the specific planning process to society’s aggregate planning abilities, i.e. Adam Smith's Invisible Hand. Thus, in relatively alienated societies oriented around arm's length transactions with strangers, one's perceived wealth - i.e. power to purchase needed goods and services from others - can function as a socioeconomic analogue to the hormones that signal abundant resources in the body.
In an undercapitalized society, one can do even better than hoarding precious commodities by lending one’s purchasing power (by lending the underlying commodity) to someone else to empower them to engage in some productive venture of their own, productive enough to pay back principal and interest and still be profitable for the borrower. Before about 1890 (the consensus date for the closure of the American frontier), the USA was undercapitalized in the relevant sense; there were few enough people willing to defer claims on present consumption in order to realize future profit, that skillful and careful investment of one’s savings could be highly profitable without state support beyond mere adjudication and enforcement of property rights.
We currently live under a different system, which encourages most people to think of their economic situation in terms of careers, education, housing, and financial assets.
Formal education is best understood as one stage of a career, which requires long hours away from home. If you keep up the performance, your income continues to rise. This suppresses fertility locally by reducing the number of hours available for child care. When people who have advanced in a career are paid a lot of money, this can locally compensate them somewhat for the cost of the performance by letting the performer buy help from others, but that only reallocates the help available; it does not increase it globally.
We might imagine that careers are part of a system that realizes economies of scale to more efficiently provide child care support than individuals could do for themselves, but in practice, this is false. Most of the growth in hours spent at work is, by that metric, simply wasted time, especially in the better-compensated careers; the income available for being the sort of white-collar client that requires periodic bailouts is generally substantially higher than the income available for schoolteachers or nannies, for instance. (See Systems of Bullshit Work.)
If we construe prosperity in terms of the entitlement to command the time of others, and the opportunity to waste one's own time in exchange for that entitlement, and that process grows faster than real efficiency gains in child care or household management, we should expect fertility to decline as total capacity to raise children diminishes.
Another common way to try to enrich oneself is through investing in a home. This has a positive return on investment mostly because of secular trends in home values, not material improvements to the house. But if your savings are made of your house becoming more expensive over time, that means that they're made of rising rents, i.e. an increase in the cost of housing. When people have to work longer hours simply to be allowed to live somewhere, those people will have less time to care for their children, which likewise should be expected to reduce total fertility.
Housing costs and career advancement constitute such a large share of what we experience and measure as wealth, that it's very unlikely that we could increase the amount of time available for child care without appearing by our consensus metrics to become poorer. That does not necessarily mean that by wasting less of our time, we would actually reduce the aggregate quantity or quality of legitimate goods and services available, only that we would appear to be poorer by the specious metrics we are currently using.
So, how did we end up with a measure of wealth radically at odds with the single most central, essential case of production, one of the few forms of durable value creation that can’t be commoditized or outsourced? And how did the civilization that ended up there create so much obviously legitimate wealth along the way? We've got flying machines, near-instantaneous global communication, hospitals and medicines that really do save lives sometimes, food in abundance, and labor saving devices that automate many tasks that were previously as tedious or strenuous as they were necessary. So our measure of value didn't start out totally perverse.
Medieval Europe's Surplus Bodies: How to Get Rid of People Politely
Here’s a stereotyped and simplified account of the Western European equilibrium, prior to the colonization of the Americas. There is, approximately, a fixed supply of agriculturally viable land that can sustainably feed more people than are required to work it, and more than enough people to work that land. Some of the surplus produced is captured by the people working the land. In the short run this takes the form of leisure and in the medium to long run it takes the form of fertility above replacement. Some of the surplus compensates merchants who move goods from place to place, and skilled artisans who make things farmers use. But an important part of the surplus is taxed.
Much of that land, and the people living on it, is assigned to a manorial lord, who captures some of the agricultural surplus via taxation, and can therefore afford to pay people to serve him. Here’s the investment problem faced by a manorial lord, or a feudal lord with manorial vassals: He owns, directly or indirectly, some agriculturally viable land, which is continually being converted into extra people who eat up the surplus food. While some of them can work to increase the agricultural productivity of the land (e.g. by developing new tools and techniques, or facilitating trade), many of the benefits thereby produced are externalized to other places that can copy them, and there are diminishing returns to new people serving in support roles. Eventually, the value of feeding the marginal person is negative - the food they consume destroys more capacity than they can add - so it starts to seem desirable to destroy or remove them. I am not claiming that it was in principle impossible for new people to create more wealth by other means on the margin, only that it was generally understood to be impossible for new people after a certain point to create enough additional capacity to pay for themselves within the context of the social, economic, and technological traditions of the time.
In short, this way of relating to economics sees surplus population initially as a potentially hazardous byproduct that has to be disposed of somehow, and only secondarily as a potential resource that might be exploited.
One response to this problem is to recruit surplus people into monastic or other celibate clerical institutions, which perform administrative, bureaucratic, and propaganda services while limiting the total fertility of the population. For a while, the Roman Catholic Church dominated Western Europe with this model. The continuity of Roman clerical institutions may have constituted a gradual accretion of wealth in the form of information and information processing capacity; for instance, the modern university system comes from the Roman clerical educational system. The Church also accumulated another sort of “wealth” this way: the compliance of the populace, as increasingly widespread and sophisticated religious indoctrination (as well as persecution of dissenters) caused Europeans to understand themselves as members of the Church and therefore under its authority. Eventually, the Church could credibly threaten to delegitimize temporal rulers who were sufficiently troublesome.
Another thing to do with extra men is train them to fight, to help you displace a rival lord and make use of his land as well. For the winner, this not only directly decreases the number of surplus mouths to feed, but also increases the amount of land the winning lord can tax. But a conflict with a winner has a loser, so while winning wars can sometimes increase the winner’s wealth on net, it does not increase the aggregate wealth of feudal lords as a class. Since the perceived legitimacy of such fights was greater than the perceived legitimacy of raids on the church, we can understand the feudal equilibrium as one in which the Church - while ostensibly discouraging such internecine warfare - kept the mundane landlords divided by making fights the main way a lord could enrich himself, at the expense of some other landlord. Even if a mighty lord such as Charlemagne managed to accumulate very large land claims in a single generation (which might give him considerable leverage against rival powers like the Church and thus perhaps increase the aggregate wealth of the lords), if the lords themselves used some of their surplus to reproduce above replacement, they would frequently end up subdividing their holdings among their heirs. (The Capets and the Habsburgs eventually managed to accumulate very large land holdings via strategic marriages, but this came to full fruition only after the old equilibrium was already broken up for other reasons!) The main exception to the zero-sum nature of lordly wealth was the opportunity to seize land from a power outside the system, such as in the Crusades - which, notably, were organized explicitly under the auspices of the Roman Church, and enriched church groups such as the Knights Templar and Hospitaller as much as they did mundane landlords. But as it turned out, the Muslim powers were roughly equal in strength to the European powers, so the pre-Columbian Crusades were not very profitable in the long run.
With the discovery of America, a great expanse of poorly defended land became available.
Now here’s a stereotyped and simplified account of the Euro-American situation while the frontier lasted. Europeans with surplus capacity could invest it in ventures to establish people on productive land on the frontier, rather than in fighting each other. If the land could support more than the people needed to work it, investors could make a profit. These profits could then be reinvested in further settlement or related ventures.
Meanwhile, the frontier also attracted surplus labor. The second son on marginal land in Europe didn’t have to seek his fortune in crowded and unhygienic European cities, or as a member of the clergy or armies; he could instead establish himself on the frontier and end up with valuable land of his own - increasingly valuable as it became economically integrated into the support structure for settlements on the newly pushed out frontier.
This is far from a complete account on the European side - most prominently, it omits the printing press, the successful Protestant reformation, the development of mass literacy, mass military mobilization, and the emergence of the administrative state. It also ignores the existence of cities, which did in some cases survive as meaningful economic units, and therefore - because of feudalism's limited capacity for oversight and corresponding willingness to delegate - survived as political units as well, albeit frequently vassalized as whole self-governing units by some lord or another.
For more on the transition from the medieval to the modern mode of social organization in western Europe, see Calvinism as a Theory of Recovered High-Trust Agency.
Frontier System, 1607-1890: When They Are Making More of It
For most of the period from the 17th through the 19th century, the British colonies in North America had an overwhelming military advantage against the prior non-European inhabitants. This meant that as long as there were large contiguous parts of the continent unsettled by the colonists, they could - for a modest amount of military work - obtain no-longer-occupied land available for the use of settlers. This land was sometimes given, sometimes sold, to homesteaders, or to well-connected speculators who then resold it to homesteaders at a profit. Homesteaders profited by converting now-unoccupied land into farms and other productive arrangements that produced goods for the older settled areas, which profited by selling homesteaders imported consumption goods or tools.
There were different dynamics of settlement, conquest, and trade in different areas of colonization, which I will not cover in detail here.
While the military conquest supporting this process was an actively implemented project and imposed unwanted externalities on the prior inhabitants of North America, it created what seemed from inside Euro-American civilization like a "natural" rate of exchange between labor and capital, and thus a natural rate of interest on capital. There was a supply of free land, limited mostly by the availability of settlers, which created upward competitive pressure on wages anywhere people might expect to significantly improve their economic prospects by moving to America. This imposed some relatively exogenous labor scarcity across the common market.
Frontier settlers and their proximate service providers - including the US federal government and eventually its sponsored corporate railroads - expected to profit from the use of better tools and materials, and therefore competed with each other for access to capital, i.e. loans. Combined with a finite amount of surplus wealth in settled areas, this created a market for money (i.e. actual gold or claims on actual gold from reputable counterparties) with an equilibrium interest rate across the Euro-American system.
During this time period, especially after 1860 as the American state centralized a lot of development activity, intermediaries gradually arose between a real process of creating new primary production on the frontier, and the possession of real scarce commodities in more settled areas. People who were well-connected to elites on both sides of the Atlantic Ocean, such as the clerical Pierpont and Morgan families with elite connections in both old England and New England, served as trust brokers for Europeans with access to gold looking to lend to Americans, and Americans with access to land and other profitable ventures looking to borrow from Europeans.
Financial Closure, 1890-1945: The Scarcity of Invention and the Invention of Scarcity
The closure of the American frontier ended the prior social process by which financial value was created and stored, creating social pressure to produce a substitute.
1890 is the canonical reference date for the "closure of the American frontier," as around then there ceased to be a meaningful large contiguous territory available for taking by settlers - so land holdings within the system became zero-sum (with the notable exception of the Dutch who did in fact manufacture a small amount of land in a very convenient location), and the market for the tools of settlement seemed likely to shrink over time.
By this point, many people had become accustomed to construing their wealth or social position in terms of the market value of their assets, or in terms of their careers, both of which were based on the process of settlement.
Ownership of “wild” space was now complete. While physically abundant, land was fully partitioned under formal title. It could no longer be acquired through labor alone, but only through monetary exchange, transforming it from an unbounded resource into a priced asset. This made it scarce in the institutional sense: no longer available without capital. Food, on the other hand, was abundant and therefore much cheaper.
In principle, if there's more than enough food for everyone, that's good for everyone. But the financial system depended on the value of farms, and the value of farms depended on the price, and thus on the scarcity, of food. Settlers who had borrowed money to equip farmsteads ended up owing more than the financial value of their assets. So the financial wealth of settler-farmers declined, in terms of still-scarce gold currency, as did their ability to pay their debts. This in turn decreased the financial value of enterprises that depended directly or indirectly on the continual creation of new land wealth.
The speculative value of large overseas investments sometimes exceeded the ventures' ability to pay back investors, by a lot. So in some cases – e.g. the South Sea bubble, the railroad panic of 1873, and the panic of 1893 – a great many people, all at once, found that their supposed store of value was backed by false promises. This disruption was multiplied by the tendency to treat such speculative stores of value as a basis for extending further credit: just as someone might offer a real asset such as a house as security for a loan, or demonstrate direct productive capacity as evidence of ability to repay, someone might offer a financial security such as shares in a joint stock venture as evidence of solvency, or as collateral - which makes the collapse of that venture's exchange value a problem for otherwise conservative secondary investors. While any economic system with a basis in primary production cannot be entirely circular, speculative bubbles tended to create a large, dense network of nominally wealthy agents with a perceived shared interest in preserving the speculative value of the financial assets in question.
In an economic depression, an overleveraged web of customary exchange is suddenly withdrawn. So people with cash money become theoretically much richer, but now have to use their imaginations to decide what they want, and people in need of money have to be more entrepreneurial to get it. And people whose financial positions are more leveraged (they've either explicitly borrowed against assets like a mortgage, or mainly own things like stocks that embed leverage in the asset) become poorer in relative terms, and many of them become insolvent.
It is, in the long run, good for people not to be busy doing wasteful nonsense, and to instead have the leisure to act on their imaginations, but you can see how a sudden shift would be upsetting and disorienting for many people and not all of them would make it.
What would deleveraging have looked like, had the prior legal regime remained in place?
Counterfactual: The Unbearable Lightness of Being a Post-Scarcity Society
Food prices are at historic lows, as a huge amount of agricultural land has just been settled. In the short run, the commodity price of food continues to decline due to the continued improvement of already-settled land, which increases its total output, and continued improvement in tools and logistical support for bringing food to market. On net, this causes farmers' nominal incomes (which scale with food prices) to decline, while their dollar-denominated debts remain nominally constant.
There is no longer, however, an effectively unlimited labor sink in the form of the opportunity to convert labor directly into a claim on land. While there is still plenty to do to further develop newly settled land and supporting systems of logistics and transportation, the prospect of a labor overhang now looms.
Many farmers lose their land to foreclosure, and their land is sold on the open market, which drives down the price of agricultural land as well - at least relative to baseline, and maybe in absolute terms. At these lower land prices, buyers might find it profitable to rehire the same people to work the same land in the same way, much as in the present day, large firms that buy out owner-operated small businesses frequently offer the seller a salary to stay on as manager. In this case, the price of food would not be further affected. Or buyers might be able to farm the land more efficiently. This would further lower food prices, causing further insolvencies and foreclosures. Or, they might use the land for other purposes, possibly noncommercial ones, which would cause the price of food to rise somewhat, reducing the total number of foreclosures necessary before the market clears.
An analogous process plays out in other, derivative parts of the economy. Many other ventures that were profitable under conditions of continually expanding settlement become unprofitable, so the value of owning them decreases, possibly to zero as these enterprises become unable to pay their own debts. Where the value merely declines, this would sometimes make the assetholders insolvent if they had significant dollar-denominated debts of their own, diminishing the financial value of claims on them in turn and propagating a wave of deleveraging through the financial economy. It also leaves the holders of such assets with reduced spending power, further reducing the price of many tools and other capital goods and most consumer goods, which further reduces the dollar-denominated enterprise value of firms producing such goods. If these ventures actually become insolvent and are forced to declare bankruptcy, this additionally results in the sale of the unprofitable firms' assets to pay off creditors - either together, as a going concern, if the firm is still profitable before taking its debts into account, or piecemeal if it would not be profitable even with its debts wiped out. In either case, the price of the capital assets involved (e.g. land, equipment) is correspondingly reduced as well, leading to decreased revenues for firms producing capital assets, and so on.
If the effect of this deleveraging is large, then anyone who owed a significant amount of money ends up assetless, anyone who mainly owned shares of leveraged businesses or businesses premised on a continuation of the old pattern of expansion is wiped out, anyone who mainly owned nonmonetary assets unencumbered by debt is doing about as well as they had before, and anyone who had hard currency has vastly increased purchasing power.
In such an economic depression, an overleveraged web of customary exchange is suddenly withdrawn. In principle, such a situation is a wonderful opportunity to collectively relax and figure out what we would like to do together. It is now very cheap to employ someone at an efficiency wage, support them charitably, or buy them a homestead, so the extent and nature of the employment economy is mostly limited by the imagination and desires of the participants. Society in the aggregate can easily afford many more physicians, massage therapists, tinkerers, entertainers, inventors, philosophers, experimental scientists, poets, novelists, mathematicians, Christmas tree decorators, karate instructors, intentional communities, athletes, regenerative farms, and experimental kindergartens. But our word for the sort of buyer who can employ their imagination in such a way is "patron," and our word for the sort of seller who can is "entrepreneur," and we can observe that not everyone whose situation calls for the virtues of a patron or an entrepreneur is able to answer the call. It feels not like looking up to a vast vista of opportunities, but like staring into a terrifying abyss.
As this process of deleveraging began, Americans' economic intuitions were generating emergency signals that no longer corresponded to actual material scarcity. They were panicked, depressed, and disoriented. British economist John Maynard Keynes, widely credited with developing the macroeconomic theories that eventually guided the government's response, attributed market fluctuations to "animal spirits." US President Franklin D. Roosevelt, who was responsible for implementing much of the response, similarly diagnosed not resource misallocations but "fear itself" as the enemy in his inaugural address. So the authorities decided to reorient them. Well-connected intermediaries such as JP Morgan, and later the federal government itself, took actions to preserve the system of social relations that had emerged as a functional support to the process of settlement.
Counterweight: The Reaction
There was and remains some confusion about what was going on, but over the transitional period from the late 19th century until the election of Woodrow Wilson as President of the United States, investors seem to have been in aggregate more willing to believe unrealistic promises of return on investment (expressed in terms of gold, as an annual percentage increase in the lender's wealth), than to accept the lower rate of interest on money available from individually sound investments. At the institutional level this was probably aggravated by limited liability and owner-operator situations in which financial ruin was a discrete event, and there was no practical difference between being a little bit bankrupt and extremely bankrupt; only between a lower and higher probability of bankruptcy. If you're going broke in the default scenario, you might as well reroll the dice and hope for a win. But given the outcome of major political concessions to the otherwise insolvent, this behavior must have been in large part the implementation of tacit coalitional strategies. Likewise, people selling their labor to ventures premised on growth were at critical junctures unwilling to take pay cuts or unable to find acceptable-to-them alternative employment when their employers turned out to be less profitable than anticipated.
Institutional responses varied. In 1896, Americans elected William McKinley as President, with a mandate to preserve "sound money" (the gold standard), which pushed the system towards a process of deleveraging: recognizing where trust had been misplaced, and passing along the bad news to secondary or tertiary claimants that indirectly relied on underlying speculative misrepresentations. On the other hand, the financier JP Morgan coordinated people who had received a lot of trust to strategically invest in systemically important ventures in danger of losing access to capital. Eventually Woodrow Wilson was elected President, and initiated a three-faceted program of radical economic reform.
(1a) The imposition of drastic restrictions on immigration, to limit how many people were physically in the country and thus had to be nominally enriched to preserve the illusion. Some of this predates the first World War, but passports were nominally a wartime measure.
(1b) Restrictions on trade that kept the prices of some domestically produced goods high enough to keep domestic production profitable, justified in part as a national security measure.
(1c) The reimposition of explicit racial segregation at scale, to isolate classes of people who could be excluded from otherwise-shared prosperity.
(2) The Federal Reserve system, ultimately leading to the imposition of fiat currency.
(3) The World Wars, and in between, the New Deal.
One way to think about this reform is as a response to a collapse in the external return on investment of the process of settlement. The problem to be solved is that there is a powerful coalition of different groups of people that variously feel entitled to retain the financial value of their assets or incomes, and moreover, to see that value increase over time. They don't need or expect in practice to be fully paid out in terms of actual gold – in fact, they generally prefer to reinvest their wealth, which gives the system some operating flexibility to respond – but if nominal prices increase too much too quickly, it will be uncomfortably obvious to them that the real value of their income or assets has declined. The points they scored still have to feel like they're worth something. So the nominal value of the relevant incomes and assets has to continue to increase, while prices of the most commonly consumed goods and services have to remain stable. The solution to this problem takes three parts:
(1) Limit the scope of the problem, by limiting new entrants to the classes that need to be made to seem richer every year in terms of a zero-sum unit of account (or actually richer every year in terms of an apparently zero-sum unit of account).
(2) Acquire the ability to arbitrarily edit the accounting system at a central node, so that nominal assets and incomes can arbitrarily diverge from material assets and consumption. The government can then make arbitrary nominal purchases without risk of running out of money, thus allowing the people being kept busy providing services to the government to understand themselves as enriching themselves rather than as being enslaved.
(3) Create an emergency situation justifying high levels of government spending in conjunction with a rationing regime and strong social pressure to defer gratification. This allows nominal incomes to rise, supporting increases in asset values, without the corresponding increase in effective purchasing power that would tempt people to call the system's bluff.
The policies of the Democratic presidencies bracketing the period of the World Wars (Wilson and Roosevelt) can be well-modeled as an implementation of this plan.
The World Wars, related restrictions on trade and immigration, and the adoption of fiat currency shifted large parts of the world from a more open system where transactions are denominated in terms of a scarce physical commodity traded internationally, towards a more closed system where transactions are denominated in terms of tickets issued by the local territorial power that can be used to pay its taxes. In the period from approximately 1913 to 1933, the US government gradually abandoned the gold standard, though this was not officially acknowledged until 1971. Prior to this period, a "dollar" was a claim on a standard quantity of physical gold. But in 1933, the state seized most actual gold and replaced it with new "dollars" that could not in practice be converted back to gold. Banks could hold these "dollars" as numbers on a ledger maintained by the Federal Reserve Bank, which could pay for arbitrary purchases of US Treasury bonds, or pay interest on deposits, by simply marking up the account of the seller. This gave the state effectively unlimited nominal purchasing power, but indirectly enough that it still looked superficially like dollars were conserved in the national budget. Contracts denominated in dollars were reinterpreted to refer to these new dollars.
This allowed the nominal prices of assets, and incomes (largely government spending), to increase without bound, without creating the risk that someone would try to cash out their wealth in terms of the previous reference commodity, gold. If you control the accounting system, you can make people as "rich" as you want (regardless of the true level of wealth) as long as they don't try to spend too much of their money. During the wars, essential goods were even explicitly rationed, and there was also a large propaganda effort to encourage people to invest their excess income in war bonds.
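As a toy illustration of that accounting point (my own sketch, not a model of actual Federal Reserve operations), consider a ledger whose issuer can mark up balances at will, while ordinary participants can only move existing balances around:

```python
# Toy sketch of the accounting point above: an issuer that can "mark up"
# accounts creates nominal purchasing power unconstrained by any backing
# commodity. Deliberately simplified; not a model of real Fed operations.
class FiatLedger:
    def __init__(self):
        self.balances = {}   # nominal "dollars" per account holder

    def mark_up(self, account, amount):
        """Issuer pays for a purchase (e.g. Treasury bonds) by simply
        increasing the seller's balance; nothing is debited anywhere."""
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, payer, payee, amount):
        """Ordinary users can only move balances that already exist."""
        if self.balances.get(payer, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0) + amount

ledger = FiatLedger()
ledger.mark_up("bond seller", 1_000_000)          # nominal wealth by fiat
ledger.transfer("bond seller", "contractor", 250_000)
print(ledger.balances)  # {'bond seller': 750000, 'contractor': 250000}
```

The system holds together only so long as holders of these balances don't all try to convert them into real goods at once, which is the role the rationing and war-bond campaigns described above played.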
Between the wars, there was a brief period in which newly freed up capital drove a speculative bubble in financial assets, but for the same reasons that this was not sustainable before the first World War, it was not sustainable afterwards. Since there was not political support for explicit make-work programs of adequate scale to make up for the end of the war, eventually the bubble collapsed and this led to a second gigantic war.
People tend to focus on income and sales or VAT taxes when trying to figure out the basis of the value of fiat currencies, because of the large volume of revenue collected, but such taxes mainly serve to limit spending by people already committed to the system; they're not adequate to explain that continued commitment in the first place. To the contrary, they make participating in the formal, state-regulated economy less appealing.
If the state-backed economy is inefficient enough, a person or community could do better by just producing locally and sharing. So the American government overtly suppressed this, especially during the interwar period when the nominal return on capital was collapsing. Two important examples of this suppression of self-sufficiency are the 1921 Tulsa Race Massacre and the 1942 Supreme Court decision in Wickard v. Filburn.
Nominally triggered by an unsubstantiated accusation against a young black man, the 1921 Tulsa Race Massacre was a coordinated attack by white residents, including law enforcement and National Guard members, on the Greenwood District of Tulsa, Oklahoma, known as “Black Wall Street” for its concentration of successful black-owned businesses and properties. The violence led to the systematic looting and burning of over 1,000 homes and businesses. Estimates of deaths range from dozens to hundreds, though exact numbers remain uncertain due to deliberate underreporting. Insurance claims were largely denied, and no reparations were paid. While Tulsa is the most documented case, similar violent suppressions of black economic autonomy occurred elsewhere, including in Rosewood, Florida (1923), and in East St. Louis (1917), suggesting a broader pattern of violent enforcement of racialized economic stratification during a period of national financial consolidation.
Disadvantaged minority communities, excluded from many mainstream institutions like banks, property markets, and corporate employment pipelines, relied more heavily on dense local networks: mutual aid societies, community credit, informal land tenure, and emergent formal economic coordination among small businesses. Structurally, the Tulsa Race Massacre and similar events can be understood as a pattern of suppressing local systems of capital allocation that functionally competed with the emerging, nationally managed financial regime.
In 1942, the Supreme Court of the United States ruled on Wickard v. Filburn. In order to guarantee a high price for agricultural products (and thus keep farms financially solvent), the national government punished a farmer for producing too much grain for his own use, because he would otherwise have bought wheat on the national market. The Supreme Court found that the government had this power, under the Commerce Clause of the Constitution. This demonstrates a commitment to respond to anyone who attempts an economically important level of self-sufficiency, by taxing them until they need enough new dollars to effectively force them to integrate into the common system.
Likewise, increased acceptance of eminent domain seizures of property, regulatory oversight with wide discretion and little formal legal accountability, and similar expansions of executive discretion over economic activity, most likely intimidated major property holders into avoiding attempts to cash out their holdings and buy real assets at scale. Doing so would have embarrassed the authorities by revealing the limits of aggregate purchasing power.
So real as well as nominal assets end up in the hands of people who accept acculturation into the new elite, allowing themselves to be moved by the prospect of accumulating government currency or indirect claims on government currency, with no guarantees as to what that can be exchanged for.
Baby Boom and the Early Cold War, 1945-1970: Bring Home Our Factories
The old privileged classes accepted the new regime because it preserved the apparent opportunities to save for the future with a positive return on investment, or to earn high incomes and buy one's children a higher level of privilege. From approximately 1945 to 1971 (roughly the Baby Boom), the men who had been employed in wartime manufacturing, logistics, and supporting services were reallocated in large part to producing machines designed for sale to individual households. This centrally managed corporatism produced a legitimately obvious, huge postwar expansion of nonwar productive capacity, so there really was a way to increase almost all Americans' apparent claims on basic capacities and help.
For instance, automobiles (and telecommunications devices like telephones, radios, and televisions, though these were largely developed earlier) enabled people to participate somewhat in central culture and production while living in more spacious homes on larger lots, which - since they monopolized more area per person housed - necessarily had to be built at longer distances from city centers. Mass-manufactured labor-saving devices such as automated washing machines and kitchen appliances also freed up a lot of household time, as did consumer products like disposable diapers and mass-produced ready-to-eat foods. While some of these come with clear downsides, it's easy to see how more spacious shelters, more immediately accessible outdoor space to enjoy, and help with household tasks, afford real capacity improvements or time savings that might be worth spending significant amounts of time away from home to afford.
Ordinary workers could now be paid more without breaking the perception of money's scarcity value, because there were corresponding new goods they could buy that were really helpful. Soldiers were inducted systematically into the clerical privilege class, also known as the Professional-Managerial Class, through programs like the GI bill. Farms were kept profitable through a combination of price supports and food subsidies for the poor. And raw materials and foreign goods could be bought at advantageous exchange rates due to the US's extreme institutional dominance; one way to interpret the postwar Bretton Woods currency arrangement was that the subordinate postwar powers agreed to treat a new US dollar as though it were backed by gold even though it was being spent like an unbacked fiat currency. In this way, most participants in the system got the plausible impression that they were becoming richer over time, and while in hindsight the accounting for this didn't quite add up, someone living through this time might reasonably have inferred from the trend that the next generation would be correspondingly richer and more free. This extended even to African-Americans, who had been systematically disadvantaged under the older Wilsonian system, but now were increasingly integrated into centralized systems of socialization and promotion.
Significant coercion was also applied to keep people participating in the system; there were an awful lot of lobotomies and involuntary psychiatric commitments in this era. This was on a continuum with softer nudges such as widespread voluntary participation in psychotherapy, and the creation of the postwar advertising industry. Large, coordinated groups like some Mennonites and especially the Amish did manage to opt out of this process and keep their labor mostly in their local communities. This was likely in part due to high barriers to entry, so that allowing them to opt out was little threat to the rest of the system, and in part because the executive branch of government was unwilling to conspicuously carry out a genocide against American citizens.
The displacements due to suburbanization destroyed many informal networks of support, forcing parents to rely on more formal institutions like schools and propaganda films, official and unofficial. This does not appear to have seriously impaired first-generation fertility (hence, baby boom); it produced children who performed well on the sorts of things the authorities cared about, like formal tests and compliance with authority. Overall, the functionality of this system seems roughly to match up with the times and places where the Flynn Effect of a secular rise in IQ was observed. But these children seem to have been incompletely socialized, to have had fewer children than their parents did, and to have played a less active role supporting those children when they themselves had children. Some of this may be a direct effect of their inadequate socialization; we might call this the fractional Romanian orphanage hypothesis. Some of this may have been due to increased perceived costs and reduced perceived benefits per child, as this generation was more likely to engage in what has been called "helicopter parenting," trying to make up for a lack of dense networks of latent support through investing more parental hours per child, and more likely to have begun their career expecting a centrally guaranteed pension in their old age.
On the whole, someone in the workforce who wasn't a social theorist or predisposed to social skepticism could reasonably have expected to be working to build a better life for their children. The mass mobilization of men into the sort of manufacturing called for by the World Wars was able to produce a range of new goods valuable enough to legitimately make participants in this new sort of economic growth increasingly glad they weren't Amish. But there are only so many hours of labor in the home that could be saved by labor-saving devices applying basically World Wars-era mechanical technology to the problems of home use, transportation, and communications, so this process had strongly diminishing returns built in.
Monetarism, Deregulation, and Bullshit Jobs, from 1971 Onwards: The Theater of Full Employment
There are only so many hours in the day.
If you work ten minutes to buy pre-sliced bread at the grocery store, which saves you half an hour's worth of food preparation, cleaning, and loss due to waste, you've got twenty more minutes to rest, recreate, and play with your children, which may make you more inclined to have another. But the bread only costs you ten minutes now - how are you going to save the next twenty?
If you work forty hours to buy a washing machine, which means that washing the laundry takes one hour of your time instead of five each week, then after the first year, you've saved 168 hours you can use to rest, play with your children, and make new ones. The remaining time spent per year is 52 hours. How are you going to save the next 168 hours?
Maybe the next thing you buy is a dishwasher, saving time on a different household task. But there are only so many household tasks a cleverly designed box for soaking things in soapy water and rinsing them can do. You can't use it on a baby, and it can't fold your laundry or put away your dishes.
So there are only so many things you can buy before you hit diminishing returns.
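To make the arithmetic concrete, here's a minimal sketch in Python. The washing-machine figures are the ones used above; the dishwasher and microwave figures are placeholder assumptions of mine for illustration, not data from anywhere.

# Net hours freed per time-saving purchase (illustrative, assumed numbers).
WEEKS_PER_YEAR = 52
devices = [
    # (name, hours of work to buy it, household hours saved per week)
    ("washing machine", 40, 5 - 1),  # laundry falls from 5 hours/week to 1
    ("dishwasher", 40, 2),           # assumed
    ("microwave", 10, 1),            # assumed
]
for name, purchase_hours, weekly_saving in devices:
    yearly_saving = weekly_saving * WEEKS_PER_YEAR
    print(f"{name}: {yearly_saving - purchase_hours} net hours freed in year one, "
          f"{yearly_saving} per year thereafter")
# washing machine: 168 net hours freed in year one, 208 per year thereafter
# Each further purchase addresses a smaller slice of the remaining household
# labor, so the hours freed per purchase shrink toward zero.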
The postwar manufacturing boom seems to have run out, empirically, around 1971, when the US formally abandoned the Bretton Woods agreement; around the same time, the US began to experience high levels of price growth in basic goods and services, as the growth in nominal wages was no longer matched by corresponding growth in new useful home goods for sale. While at the beginning of this period the government experimented with the reimposition of explicit price controls, once this policy clearly failed and resulted in serious shortages of important goods, it became clear that some promises were going to have to be broken.
The central promise that broke was the tacit guarantee that nominal incomes would continue to correspond to real purchasing power gains. Instead of continuing to try to match the old growth expectations through state coordination of prices or direct stimulus, policymakers began to reinterpret the problem not as one of material capacity or social coordination, but of expectation management. Inflation was reconceived not as a symptom of real economic misalignment, but as a technical failure of central monetary control, a problem to be solved through cybernetic adjustments to the interest rate.
This new paradigm, called "monetarism," held that the proper role of government was not to directly ensure prosperity, but to maintain the stability of the unit of account. If promises had to be broken, better that they be broken cleanly through monetary contraction than eroded slowly through inflation. By the late 1970s, the Federal Reserve had embraced this view, culminating in Paul Volcker’s deliberate and explicit policy of raising interest rates high enough to induce a recession in order to bring inflation under control.
From a macroeconomic perspective, this policy shift amounted to a reallocation of pain: instead of distributing loss diffusely across the majority of the voting public through gradual inflation, it concentrated loss sharply on apolitical minorities in the form of unemployment, business failures, and household insolvency. The recession that followed effectively liquidated many marginal or debt-dependent enterprises, especially those that had depended on the continuation of 1960s-style expectations of perpetual wage and demand growth. The labor market was “disciplined,” and the economy restructured around higher real interest rates, lower worker bargaining power, and a broader tolerance for inequality.
In parallel with the monetary realignment, policymakers pursued deregulation. Trade liberalization and the removal of regulatory constraints on telecommunications, air travel, and consumer goods led to genuine price declines and quality improvements. Imports from newly industrializing economies, particularly in Asia, lowered the cost (especially adjusted for quality and reliability) of cars, electronics, and household appliances. Deregulated airlines drove down ticket prices and made commercial flight accessible to the middle class. These changes improved the lives of consumers, and allowed apologists for the regime to claim with some plausibility that things were getting better all round.
Their other function was to sustain the profitability of firms in a world where real end-consumer demand was no longer growing at a reliable rate. In the absence of enough new classes of transformative home goods like those that had justified wage increases during the postwar boom, profits had to come increasingly from cost-cutting. This shift from demand expansion to cost compression allowed asset prices, particularly equities, to continue rising, even as the experiential benefits to households plateaued or declined.
Housing has been the exception. Unlike consumer goods, it is not subject to meaningful global competition. On the contrary, inelastic supply and increasing financialization allowed home prices to rise persistently, even as wages stagnated. According to research by Gianni La Cava, the appreciation in housing assets accounted for a substantial share of the increase in capital’s share of national income. So older people got richer, in a way exactly corresponding to younger people having to work longer outside the home and away from their families just to break even. For instance, women entered the workforce en masse.
The state and aligned civil society institutions did not respond to technological efficiency gains by reducing work hours or encouraging informal, community-based provisioning. That would have caused a dire social problem: the cost of basic goods is stable or declining, but people need to work more hours to afford a place to live, so if demand for labor declines, we're stuck in an economic game of musical chairs; don't find a job when the music stops, and you're homeless. Instead, elites coordinated to expand the domain of credentialed and administrative labor. The formal education system absorbed more years of young adulthood, without corresponding gains in practical skill, or really any measurable improvements at all in the people educated (see Bryan Caplan's The Case Against Education).
Hospitals, schools, and government agencies accumulated layers of nonproductive management. Much of this expansion was enabled by preferential credit allocation: easy money flowed toward sectors aligned with the ideology of professional risk management and social control. If a large correlated group of job-creators were in danger of bankruptcy, they got bailed out, but independent decorrelated enterprises were allowed to fail. (See The Debtors' Revolt.) The effect was to allocate labor toward tasks that enacted a dramatic imitation of scholarship, conformity, and bureaucracy.
The reason you don't have more brothers, sisters, and cousins is that mama and papa and all their friends are too busy working at the most tedious state-subsidized theater in history.
I Owe My Soul to the Value Store: Why Johnny Can't Exist
Trying to win a war creates jobs, as the state has to employ many people in supplying and deploying military force. Winning a war creates peace. In peacetime, the jobs created by government are no longer under acute performance pressure, and become valuable sources of patronage. Much of this patronage is awarded based on social class affinity. Competition for patronage based on class affinity rewards people for trying to conform to elite behaviors in order to be awarded class patronage, and punishes them for decorrelating to pursue their private interests. Class affinity itself involves enforcing conformity and preying on the nonconforming, and imitating others costs us degrees of freedom that could otherwise be used to improve living standards and fertility.
Economic growth policies have also tended to contribute to rising average house prices. Jobs with subsidized salaries often require you to live somewhere popular, which makes already-scarce (and highly regulated) housing in dense areas even more expensive by bidding up the price. Some of the house-price hot spots are effectively competition for things like relatively scarce slots in well-regarded school districts, which are part of the elite consumption patterns that give one a leg up in getting “good jobs.” And finding a mate and caring for children are much less costly when people live near potential mates or community members - except for the cost of living in those areas, where housing is frequently explicitly rationed.
Taking care of children is a lot of work, and compelling or manipulating an increasing share of the population into keeping busy with useless or harmful tasks will necessarily reduce the amount of labor available for child care. People climbing the ladder have to spend time away from their children. They might use some of their income advantage to hire a nanny - but you can't be rich enough to hire a nanny without the nanny being poor enough to be hired by you (and thus incentivized to spend time improving your reproductive outcomes that she might otherwise spend on her own).
Someone could in principle live on relatively little savings or occasional remote work, in a low-rent area, and take advantage of the many free or low-cost information services available as a substitute for physical proximity, but isolated weirdos have more trouble finding mates and friends to exchange help with, and are appealing targets for expropriation from people who are moved more by "fear of missing out."
Healthy children require access to a great deal of space to explore as well, and the property system encourages people to keep others off their lawns, which makes it much harder for any children to have freedom to roam, especially in states that legally restrict children’s freedom of movement.
Groups like the Amish, and the Satmar Chassidim, have a reproductive advantage in large part because their local coordination is not mediated by the sort of unbounded conspicuous consumption outsiders use to establish class affinity, so they can afford to spend a lot more time making and caring for children. It is more difficult for individual households to do this. These relatively well-defined groups also tend to collocate, creating a large area where their culture predominates, informally and sometimes even through political mechanisms, so that a relaxed attitude towards children is widely tolerated. They can get away with relatively healthy and rational child-rearing behaviors because they are relatively large and visibly stick by each other, so it's harder to pick them off one by one like an individual household that abstains from needless spending or neurotic guarding behaviors. Individual households that stick out in such ways are likely to be singled out for harassment, which lowers fertility.
- Too fat doesn't happen if you're otherwise healthy - it's a symptom of emotional or metabolic problems.
- The much more recent decision Kelo v. New London generalized Wickard v. Filburn - the claim that something will handle a lot of dollars becomes ipso facto legal justification for seizing the property even if the prior owner is paying their taxes.
- A child grows up with emotionally unavailable parents who never teach them how to regulate emotions or form secure attachments.
- A soldier witnesses an explosion that kills their comrades and now experiences panic attacks when hearing loud noises.
- A mid-level manager slowly realizes that everyone in their company is lying about productivity, that the metrics are meaningless, and that showing too much integrity will end their career.
These are fundamentally different experiences, yet our therapeutic culture increasingly groups them all under "trauma."
Taxonomy of Trauma: A Shell Game
Cluster 1: Problems of socio-emotional development
This is what developmental psychologists and attachment theorists study. Children need appropriate emotional mirroring, consistency, and support to develop healthy emotional regulation and social skills. When these are absent or disturbed, people develop maladaptive patterns that resemble what many call "complex PTSD."
Cluster 2: Conditioned fear responses
This is "classic" PTSD. Someone experiences a threat to physical safety, and their nervous system forms powerful associations that trigger fight-or-flight responses to similar stimuli later. This is the domain of exposure therapy, and it's relatively straightforward (though not easy to treat).
Cluster 3: Moral injury
This is the category we lack adequate language for. It happens when you're forced to recognize that your social environment operates on corrupt principles that you can't escape. The injury occurs when you internalize the message that upholding moral standards is for suckers, yet you can't fully extinguish your sense that those standards matter. In short, moral injury is received (and overgeneralized) evidence for the unjust world hypothesis.
The contemporary discourse around “trauma” in effect constitutes an epistemological coverup by conflating these three distinct phenomena. Much of the excitement around PTSD and C-PTSD is precisely because they are conflations of mundane, relatively apolitical and self-limiting problems, with the self-replicating emergency of pervasive moral injury.
The history of this conflation is itself revealing.
It started with "shell shock" during World War I. "Shell shock" was a euphemism for getting messed up by noticing that your whole society, including your accepted moral authorities with a duty of care towards you, was demanding that you do something bad for yourself and others. It wasn't simply fear conditioning (Cluster 2); it was also moral injury (Cluster 3).
When academics formalized PTSD in the DSM-III after Vietnam, they emphasized the fear-based symptoms while downplaying the moral dimensions. As Jonathan Shay documented in "Achilles in Vietnam," what actually broke soldiers wasn't just fear but betrayal by leadership and the collapse of moral certainty.
When clinicians later noticed that some trauma victims had symptoms that didn't fit the PTSD model, they invented "complex PTSD" — but instead of clearly distinguishing moral injury, they conflated it with developmental problems (Cluster 1).
Result: People who try to draw attention to moral injury by using well-established, canonical terms end up redirecting people to focus on either conditioned fear ("your amygdala is dysregulated") or developmental neglect ("your attachment style is disorganized"). Both miss the profound moral dimension. Critics who spot the incoherence but not the pattern often go one step too far and deny the underlying phenomenon entirely.
While “betrayal blindness” is closer to what I’m describing, it gracefully omits the element of active complicity, and the compulsion to inflict the injury on others.
Wickedness Studies
Let's talk about moral injury in depth, because it's the part we're worst at recognizing.
The clearest fictional portrayal is in Ayn Rand's novels, particularly The Fountainhead and Atlas Shrugged. Whatever you think of Rand's politics, she captured something essential: the psychological damage that occurs when you're forced to choose between integrity and recognition.
Nobel laureate Doris Lessing's novel The Golden Notebook provides a more subtle treatment.1 The protagonist, Anna, constantly finds herself subtly undermining others or engaging in petty power struggles, only gradually becoming conscious of this pattern. This behavior fits easily into her social context (British intellectual circles), suggesting her moral injury is culturally normal.
Robert Jackall's sociological study Moral Mazes offers perhaps the clearest non-fiction account, documenting how corporate managers systematically learn to abandon naïve moral intuitions in favor of loyalty-signaling in order to succeed in bureaucracies.
The pattern looks like this:
- You witness or participate in corruption that's treated as normal
- You realize speaking out will result in punishment
- You gradually internalize the idea that "criminals are winners"
- You begin engaging in similar behaviors to demonstrate loyalty
- You develop a persistent sense of shame and cynicism
- You become distrustful of genuine moral standards
This creates a particular constellation of symptoms: difficulty calling out dishonesty even when it would be advantageous; assuming hidden corrupt motives behind seemingly good actions; experiencing intense anxiety around moral judgment; and engaging in self-sabotage to confirm the belief that integrity doesn't pay.
I could name quite a few people from the Effective Altruism and Rationality / LessWrong communities who would probably vouch for me that I'm exceptionally willing to criticize, but even so I'm doing a lot less criticizing, and more fighting-freezing-fleeing-or-being-polite, than makes sense given my situation & incentives. It is extremely stressful for me to call people out on their bullshit, even in situations where there is no upside to prolonging polite engagement unless they change their behavior, even though doing so empirically has strong upside. This is one way moral injury manifests in me.
Wrong is Wrong
You might say: "This is just putting a fancy name on the common experience of disillusionment."
That understates the severity of the problem. Moral injury isn't just disappointment that the world isn't perfect. At best, it's the active internalization of corrupt values while maintaining enough awareness to experience persistent internal conflict. At worst, it's resolving that internal conflict by simply siding with corruption. (See Guilt, Shame, and Depravity.)
Or you might say: "This just pathologizes political disagreement. One person's 'corruption' is another's pragmatism."
But moral injury typically involves violations of widely endorsed moral intuitions. Lying about productivity, covering up harm to customers, and betraying explicit commitments may or may not be pragmatic, but they are wrong, harmful, and erode trust.
Why does this matter?
Because we're experiencing a crisis of moral injury on a societal scale. From the replication crisis in science to a financial regime oriented around creating patronage jobs, from nonprofit pyramid schemes to notionally for-profit capital allocators that are just trying to be likeable, people are surrounded by systems that reward corruption and punish integrity.
The therapeutic response has been to pathologize the resulting distress as either a fear problem (PTSD) or a developmental problem (complex PTSD), rather than addressing the legitimate moral dimensions.
Consider someone who experiences moral injury in a corrupt workplace. Treating them with exposure therapy (for PTSD) or attachment-focused therapy (for complex PTSD) while sending them back to the same environment is like treating someone for smoke inhalation and sending them back into a burning building. Or, more to the point, like responding to a vampire infestation by sending an ambulance.
- The Golden Notebook is a more difficult read because it's written as a first-person account by someone gradually becoming conscious of their orientation (which makes it a subtle and fine-grained character study). Rand's novels are clearer because they take the third-person omniscient perspective of someone who already has a clear theory of the problem. Lessing, by contrast, shows off her sensitivity, perceptiveness, and capacity to empathize with wickedness, which is why she won a Nobel Prize and Ayn Rand never did.
I found myself getting increasingly frustrated with this smoke screen. If everyone's lying about what DOGE is, then what is it actually doing? And why?
The Musk factor adds another layer of confusion. Is he the central character in this story, or a flashy distraction from the real governance changes happening? If he is the story, what's his actual agenda? The technocratic efficiency narrative doesn't quite align with targeting high-profile aid programs like USAID, but the "right-wing revenge" narrative doesn't explain his data-driven focus.
As I dug deeper, I noticed that this pattern - creating a parallel inspection authority outside normal bureaucratic channels - has historical precedents. And those precedents can tell us something important about what's happening now.
The People's Tyrant: Revolutionary Dictatorships
In states with democratic or republican traditions such as ancient Athens or Rome, the aristocratic element (wealthy families with traditions of excellence and public service) sometimes becomes an entrenched oligarchy, using its outsized participation in key governance bottlenecks to extract resources for itself, and combining its control over formal governance mechanisms with its capacity to purchase influence outside them, to block more democratic attempts at reform. In such cases, the dispossessed majority would sometimes support a single figure (with some preexisting power base and credibility) to serve as in effect a temporary chief executive, using a combination of formal power and military intimidation to force reforms - frequently land redistribution and the restoration of civil rights to poorer citizens - that were otherwise infeasible.
Julius Caesar is an unambiguous example. For generations, a clique of wealthy Romans had used their power to block land reforms and protect their privileged and profitable access to state lands, through their control of the Senate, the political weight of their patronage networks, and through outright assassinating reformers. (See What is a republic? A Roman aristocratic perspective.) Caesar came from an old and prestigious but relatively economically marginal Roman family, acquired influence through military command,1 and eventually brought his expeditionary army home to occupy Rome and install himself as dictator (a title traditionally invoked for military emergencies), to enact the sorts of reforms the aristocracy had blocked. Ultimately persistent opposition forced him to fight a civil war and declare himself dictator for life.
Athenians used the word “tyrant” to refer to approximately the same function Julius Caesar served, and it seems that figures like Peisistratus served a similar role to him. Wealthy and dramatic aristocrats such as Plato didn’t approve of tyranny, but they did recognize it as aligned with and emerging from democracy, and Plato treated tyrants as worth engaging with to promote reform. Tyrants themselves frequently made use of personal bodyguards and engaged in overt violence within the city to stay in power. Frequently they chose, whether reluctantly or happily, to make use of people whose willingness to commit such violence was motivated more by getting to feel powerful than by a principled civic commitment to reforming the government.
Information Control and Institutional Resistance
Insofar as an institution is committed to accurately informing and following the orders of its notional head, the person at its head can effectively govern alone. But if a large, complicated institution is committed to resisting certain orders, it can make it prohibitively difficult for its notional leader to know what’s going on well enough to understand which orders have or have not been carried out, or even which orders are possible to execute. In such situations, effective leadership depends on the possession of external sources of intelligence.
Alexis de Tocqueville’s book L'Ancien Régime et la Révolution describes a process in which the French state’s capacity to collect taxes originally depended on various local processors of information who could use their control of information to skim some taxes for themselves. The process of bureaucratization and legibilization that cut out the middleman by making tax collection more transparent and standardized culminated in the French Revolution, which forcefully repudiated the traditional rights and property claims of such intermediaries, mainly the aristocracy and clergy, in some cases by beheading them.
The Roman Catholic Church itself developed the Inquisition to check that local church institutions were complying with Roman directives. The Russian Czars developed an institution of secret police that outlived the Czardom.
Donald Trump, recently elected for a second Presidential term, has DOGE, run by Elon Musk, with a two-year expiration date, and without a license to kill.
The Politics of Administrative Visibility
As far as I can tell, the primary function of DOGE is to inspect the contents of various Federal agencies’ databases, and similar sorts of information-processing to check which goods and services are being purchased and on what basis. If the President tells you not to spend money on activity X, it’s relatively easy to simply tell him you stopped, or that you weren’t doing that in the first place, or that you have contractual obligations to disburse funds, and to generate executive summaries congruent with this story. If the President insists you give an outside audit team access to your databases, it’s quite a bit harder to quickly generate a plausible fake internal database that tells the story you want. So you’re more likely to simply give him access to the real database and accept the loss of information control.
When the chief executive can more directly inspect the spending of the organization over which they preside, they are likely to cut expenses that are:
- Genuine cases of waste and fraud that were tolerated to ease internal tensions.
- Genuine cases of waste and fraud that were vehicles for funding illicit activities. (E.g. the Iran-Contra deal.)
- Legitimate programs that the executive mistakenly believes to be illegitimate, due to limited trust and communication.
- Programs that the executive knows to be legitimate, but thinks they can score political points by cutting.
- Programs that everyone agrees are good, because it’s hard to find intermediates confrontational enough to make such cuts who don’t also maliciously or sadistically enjoy the confrontation.
This necessarily creates some ambiguity as to what the chief executive is trying to do, and what they’re doing by accident. However, the discourse on DOGE suffers from additional sources of ambiguity.
There’s a technocratic story in which DOGE is mainly being used to solve waste due to intermediation common in imperfectly centralized systems. Then there’s USAID.
Everyone's International to Someone: Foreign Aid, Foreign Influence
My Twitter feed is flooded with Effective Altruist friends posting anguished threads about DOGE's evident cuts to USAID programs like PEPFAR - interventions with reputedly stellar quality-adjusted-life-year-per-dollar metrics. These cuts suggest something beyond mere efficiency optimization is at work.
To be fair, it's entirely possible that PEPFAR is, in effect, a hostage taken by USAID; while Secretary of State Marco Rubio issued multiple waivers specifically exempting PEPFAR from a spending freeze, it appears that PEPFAR may be dependent on other USAID programs, such that it's difficult to cut one without the other.
Since the end of WWII and the beginning of the Cold War, the US State Department and CIA have built a “soft power” apparatus in which the US intervenes in the internal politics of other countries through overt and covert means such as propaganda, funding favored media and political organizations in target countries, and conditioning aid or other cooperation on compliance with US directives. Naturally, this soft power apparatus is used partly to maintain its own capacities to influence and coerce. USAID is a soft power institution.
During the Cold War, these soft power institutions were part of the way the corporate-capitalist US resisted the expansion of the more overtly centrally planned Soviet Union. After the fall of the Soviet Union, the US continued to use soft power to expand America’s political and cultural influence. From one perspective, this was a liberal power acting to further human rights for all people by expanding the reach of locally accountable government. From another, it was encroachment on the spheres of influence of rival powers like Russia and China.
While some on the American right have always been skeptical of the methods of such internationalist bureaucracies, before 2020 the Republican Party was willing to tolerate them, because of their contribution to America’s capacity to deter potential aggressors and otherwise deliver foreign policy victories to the credit of whichever party was in power. But during the COVID-19 state of emergency, these soft power institutions used their methods to impose a global censorship and propaganda regime that impinged on Americans’ ability to discuss the situation with each other.2
The libertarian and Jacksonian aspects of the American body politic found that situation intolerable. The outgoing Republican president was even banned from Twitter! So in 2022 Elon Musk bought Twitter, and in 2024, Donald Trump was reelected with a governing coalition committed to dismantling at least part of the soft power apparatus, and appointed a head of USAID who was already committed to dismantling it.3
Elon Musk takes an ambiguous position as to whether the technocratic or political story is the primary motive for DOGE. He claims to be focusing on the clearest most unambiguous examples of waste and fraud, and to be motivated by the need to reduce the government’s budget deficit. He claims that USAID was targeted because of the high observed rate of fraud and noncompliance. But he also tweeted that USAID is a leftist psyop, and told Joe Rogan that he’s avoiding some cuts because he’s afraid for his life.
This doesn’t seem like how he’d behave if the technocratic story were true. It seems consistent with a scenario in which Musk is dismantling an entrenched power structure and talking about the deficit to throw smoke. It also seems consistent with a scenario in which Musk is vibing with an incoherent set of right-wing memes, and trying to do and say whatever gets him favorable attention in the relevant scene, even when those don’t add up to a coherent program.
Either way, while USAID's hostages are mainly African children, the CIA actively assassinates adults, and has intervened on US citizens with programs like MKULTRA, so maybe that’s why USAID has been targeted and the CIA hasn’t.
- Incidentally, it seems to me that while the function of US noncommissioned officers is mainly a military one, to concentrate decisionmaking authority in people with experience, expertise, and demonstrated good judgment, the function of the commissioned officer corps is similar to the function of the “cursus honorum” of Rome: to keep the military accustomed to following the orders of members of the aristocracy. In Rome's case, that meant the sort of people with the means to seek elected office. In ours, it means products of the collegiate system, the successor of the Roman Catholic system for educating clergy. So the function of commissioned officers is to keep the army under the control of the (secular) church.
- See Joe Rogan’s interview with Mike Benz https://open.spotify.com/episode/2rXdCTkipx2Iu5dX1Gh0s5?si=T42n-H1-Q0yNxLms9qKWJg
See also the Twitter Files (Wayback Archive, Icelandic Archive), and specifically:
Trump’s ban: (Independent Archive, Wayback Archive, Icelandic Archive)
COVID Censorship
- 50 Thoughts on DOGE (Wayback Archive, Icelandic Archive)
Peter Marocco (Wayback Archive, Icelandic Archive)
Politico: Contentious USAID Appointee (Wayback Archive, Icelandic Archive)
Politico: USAID Dissent Memo (Wayback Archive, Icelandic Archive)
This is certainly not how the government of the United States of America was originally constituted in 1787, nor does it correspond to any of the Constitution document's formally written and ratified amendments. Rather, it demonstrates that since the drafting of the written Constitution, the government has in some other way been reconstituted to create a fourth substantive branch of government, alongside the executive, the legislative, and the judiciary.
The transformation originates with Congress's power to shape how executive authority is exercised. When Congress creates new executive powers - like environmental regulation through the EPA - it specifies not just what can be done, but how. Through the Administrative Procedure Act (APA), Congress built an entire system of rules governing how the executive branch must operate. When agencies act outside these rules, states can sue to stop them if they can demonstrate that the violation threatens the sort of harm that gives them standing to sue. And if they can further demonstrate that the harm is imminent and irreparable, they can seek an injunction ordering the agency in question to halt the contested action until it is assessed in a full trial, which a judge can grant if they additionally think the plaintiff is very likely to win, and that stopping the agency's action would not itself cause undue harm to the agency or the public.
On this theory, expanding access to sensitive Treasury systems outside the established framework is possibly unconstitutional or illegal because it's an end run around that regulatory framework. The potential for harm in handing sensitive user data to something outside the framework of security clearances established through prescribed bureaucratic procedure gives states standing to sue for an injunction.
The "irreparable harm" argument seems like special pleading. Surely a President's need to understand his own branch of government outweighs procedural requirements about information access? The government regularly takes irreversible actions - including lethal force - without courts rushing to block them. Moreover, the status quo system that assigned security clearances to Edward Snowden, Chelsea Manning, and Joshua Schulte, and failed to prevent the Office of Personnel Management breach in which 21.5 million federal employees' sensitive background investigation records were exposed, makes a mockery of complaints about circumventing reliable safeguards. The system's normal operation already produces catastrophic breaches. The bureaucracy claims strict security protocols are needed to prevent breaches, yet these same protocols block the oversight needed to evaluate whether they're working or actually creating more vulnerability. A system that cannot be examined cannot be fixed - and the continued operation of an unfixable system that regularly produces catastrophic breaches is itself an ongoing source of irreparable harm.
The judge in this case relied on the APA-mandated judgment of the very bureaucracies being audited when finding that the audit would cause imminent and irreparable harm on balance. By requiring agencies to follow specific processes when they evaluate risks or set standards, Congress - via the Administrative Procedure Act - effectively told courts to treat the output of those processes as truth. This reading was not explicit in the Act itself, but courts have chosen to interpret it this way. What begins as legally mandatory procedure becomes mandatory legal reality.
This alchemy transforms bureaucratic processes such as security clearances from instrumental mechanisms into self-evident guarantees of safety. The clearance becomes a ritual, assumed to confer security simply by existing. Other frameworks have compact terms to refer to this phenomenon. Occultists call it "ritual magic," Lacan a "master signifier," Baudrillard a third-degree simulacrum, and ancient Israelites "idolatry." So when faced with challenges to executive action outside the APA framework, judges have to choose between four responses:
- Accept the bureaucracy's implied technical judgments as ground truth for the purpose of the ruling.
- Find specific evidence that the bureaucracy violated its own rules.
- Overturn the long precedential history of judicial deference to the bureaucratic procedures prescribed by the APA.
- Rule the APA itself unconstitutional.
Our system of government has two interlocking features: it refuses to hear an individual making a reasonable argument, and it systematically disrupts collective threats that fall outside mainstream coalition politics. The result is that reasonable arguments about individual circumstances get drowned out by a competition between acceptable collective identities threatening their rivals.
Under some historical conditions, individual moral appeals could drive real change. Under Quaker norms in Pennsylvania, someone making the simple argument 'you wouldn't like it if someone did it to you' could gradually build support for abolition. People came to see slavery as sufficiently similar to war that they chose to stop inflicting it on others.
On the other hand, consider this story shared by Zvi on his Substack:
I was told a story the week before I wrote this paragraph by a friend who got the cops called on him for letting his baby sleep in their stroller in his yard by someone who actively impersonated a police officer and confessed to doing so. My friend got arrested, the confessed felon went on her way.
As a close friend of the victim, I know additional details. When consulting legal counsel, he was explicitly advised that judges or prosecutors wouldn't respond well to statistical arguments about actual risk levels in child endangerment cases. His lawyer advised him to take a nonsensical online parenting class from a community-service-hours mill as a form of groveling. When he told friends about the incident, many offered advice about what he should have done differently or how to protect himself from authorities in the future. Some suggested trying to get the crazy lady in trouble. No one suggested trying to get the cops in trouble. No one's response was 'That's outrageous - I'm going to tell my friends and we're going to write to City Hall about this.' No one saw it as an opportunity for collective action on the basis of a shared interest as rational individuals - only for individual adaptation to avoid future persecution.1
Consider another example: I myself once needed to change my toddler's diaper in a public library. Finding no changing table in the men's room, I had to decide whether I could safely use the women's room. I ended up deciding that I could get away with it if challenged, by declaring my gender "whatever is consistent with being able to care for my child". But it's messed up that that's the first place my mind went, instead of appealing to reason directly, like, "this is a necessity for anyone caring for a child so it's crazy for me not to have access to it on the basis of my sex."
You might think there are other ways to address these problems - changing laws through democratic process, or appealing to universal principles. But any such attempt must come either from an individual speaking as such, or from someone speaking as part of a collective identity. In my experience from childhood onward - and I would love to hear the good news if yours is different - it is rare for individual appeals to reason to work against even apparently slight political forces.2 Instead, they stonewall, either pretending not to be able to understand, offering wildly inappropriate remedies (like a vendetta against a crazy lady), or if called out clearly and persistently enough, becoming angry at the complainant for presenting such an uncomfortable complaint.
In practice, identities maintain power by credibly threatening collective action, often via state mechanisms. Consider the importance of "protected categories" in civil rights law. Aspects of individual interests not represented by such collective identities get systematically ignored.
Martin Luther King Jr held America accountable to its own founding ideals and biblical principles - reminding the nation it had already explicitly committed itself to human equality and dignity. Yet even these appeals to honor stated principles weren't enough on their own. It wasn't just the logic of King's words that commanded attention - it was the thousands of people marching behind him, the economic pressure of boycotts, and the constant threat of cities erupting if change was denied. This combination of moral appeals backed by collective power didn't just win specific concessions - it established principles and precedents that could be used by others with analogous claims. The Civil Rights Act's ban on discrimination became an intellectual framework that women's groups and others could adapt for their own struggles. The power got attention, which allowed the appeals to reason and precedent to create further lasting precedents.
Roughly as the 1960s ended and '70s began, America shifted to a radically lower-trust regime3, and appeals to shared principles lost their power. Without the intellectual work of appealing to reason, collective action increasingly produced zero-sum adjustments rather than reusable principles. Dissident identities increasingly turned to organized violence by the 1970s. The FBI's response was methodical - from assassinating Fred Hampton in 1969 as the Black Panthers tried to combine community programs with armed resistance, to developing general techniques under COINTELPRO for preventing such combinations. These FBI counter-intelligence programs included infiltrating groups to sow internal paranoia, disrupting attempts to form coalitions across constituencies, and specifically targeting leaders who could bridge different communities. Any group that successfully combined collective identity with political violence, especially if they showed promise in building broader support through community programs, became a priority for neutralization. Today's dissident identities largely work within the system rather than threatening it - if you can't beat them, join them.
The result is a kind of forced atomization. Modern alienation manifests as either diffuse anxiety/depression or sporadic individual violence (see Adam Lanza Fan Art). Some researchers have suggested intelligence agencies may have influenced these patterns - from documented CIA behavior-modification programs like MKULTRA, to the complex ways intelligence agency actions (like the handling of Ruby Ridge and Waco) shaped domestic political violence. The transition from organized domestic terrorism to serial killers seems to line up about right with the CIA's documented often-lethal terror hijinks. But whatever the cause, our vital national resource of violent weirdos has been successfully redirected from participation in shared identities with a dissident vision for society, to individual acts of psychotic violence.
Anti-Zionism offers the coalition of the left what the system usually prevents: a collective identity that commits political violence. It reads as leftist both because it's anti-Jewish (Jews being, if not white, at least excluded from the POC categories most strongly protected by the left) and because it's explicitly opposed to the legitimacy of nation-states. If you can't beat them, beat them. The fact that Hamas developed its ideology abroad meant this identity could form outside the reach of FBI disruption techniques. The movement's rapid adoption as a left-wing qualifier follows directly from this unique position - where domestic attempts at forming such identities would have been disrupted early in their development, this one arrived fully formed. That this political violence was simply imported and adopted as a left-wing identity marker, rather than arising from any strategic thinking about leftist goals, suggests we're seeing product-market fit for politically-coded acting out rather than a movement likely to achieve anything interesting or constructive.
- Trying to get the crazy lady in trouble might seem like a counterexample, but the main systematic problem people would have a shared interest in addressing was the behavior of the cops, not the behavior of the woman having a very bad day because her car had broken down. The impulse driving that suggestion was not so much an attempt to solve the external problem, but an attempt to resolve the cognitive dissonance of being asked to acknowledge an injustice performed by a party too powerful to retaliate against, by redirecting the urge to retaliate against a weaker party; pecking order dynamics.
- The civil courts still seem to do this, but decreasingly, after relentless propaganda against the idea of legal liability in the 1980s and 1990s led to the imposition of new legislative limits on tort liability, and a drastic decline in the utilization of civil courts in the US. "Fewer than 2 in 1,000 people … filed tort lawsuits in 2015 … That is down sharply from 1993, when about 10 in 1,000 Americans filed such suits"
- MLK was assassinated in 1968. Fred Hampton was assassinated in 1969. The US exited Bretton Woods in 1971, after which there is no unambiguous economic evidence of net widely shared value creation.
He meets you in a house that is not his house, but hangs up his jacket and wears a sweater he borrows from the closet. Likewise for shoes.
He speaks freely of families, of divorce, of all manner of difficulties. The only subject that is off limits is the nature of his particular relation to you. If he loves you so much, how come you don't live with him? Why do you only get half an hour, five days a week? Where does he live, if not here? Who feeds the fish on the weekend?
For a species evolving over time, predators are performing a service: identifying specific failures of capacity. The payment for this service is the nutrition from the prey's body, while the cost to the prey is their life and fewer offspring. This harsh exchange enables something remarkable - the development of sophisticated defensive capabilities without any explicit agreement or negotiation between species. The gazelle doesn't need to understand that it's participating in continuous process improvement via selective criticism, nor agree to pay for the information.
Before an organism can explicitly negotiate for decision-relevant criticism, it needs to already be smart enough to generate the ideas of making informed decisions and of criticism, to understand the information value of criticism, and to choose to offer to pay for it by some means other than being devoured by the critic. Predation creates pressure to develop these capabilities without requiring their preexistence. This differs markedly from domestication, where selective pressures often reduce agency - breeding for docility rather than capacity.
1. Killing other animals is unjust aggression; you wouldn't like to be killed and eaten, so don't kill and eat them.
2. Factory farming causes animals to have bad lives.
My answer to these arguments:
1a. In a modern market economy, buying farmed meat causes more deaths by causing more animal lives. The ethical vegan must therefore decide whether their objection is to animals dying or to animals living. The question reduces to whether they'd be more glad to have been born than sad to die. Buying wild-caught game does cause a death, but if the animals in question aren't being overhunted / overfished, the counterfactual is that some other equilibrating force acts on the population instead. If you're really worried about reducing the number of animal life years, focus on habitat destruction - it obviously kills wildlife on net, while farming is about increasing lives. The remedy is to promote and participate in more efficient, less aggressive patterns of land usage, which would thereby also be less hostile towards other humans. I'm on the record as interested in coordinating on that. It's a harder problem because it requires prosocial coordination in a confusingly low-trust society pretending to be a high-trust society, but just because a problem is hard to solve doesn't mean we should substitute an easier task that is superficially similar but unhelpful.
1b. Another way of interpreting argument 1 for ethical veganism invokes rights: we shouldn't kill other agents because this violates decision-theoretic principles about respecting agency. But this assumes the other party can engage in the kind of reciprocal decision-making that grounds such rights. Most animals' decision processes don't mirror ours in the way needed for this kind of relationship - they can't make or honor agreements, or intentionally retaliate based on understanding our choices. The question returns to welfare considerations: whether their lives are net positive.
1c. There's a third argument sometimes offered, which I think muddles together a rights-based and a utilitarian perspective: the instrumentalization of animals as things to eat is morally repugnant, so we should make sure it's not perpetuated. This seems to reflect a profound lack of empathy with the perspective of a domesticate that might want to go on existing. Declaring a group's existence repugnant and acting to end it is unambiguously a form of intergroup aggression. I'm not arguing here that domesticates' preference to exist outweighs your aesthetic revulsion - I'm just arguing that under basic symmetry considerations, the argument from "moral" revulsion is an argument for, not against, aggression.
2. If factory farming seems like a bad thing, you should do something about the version happening to you first. The domestication of humans is particularly urgent precisely because, unlike selectively bred farm animals, humans are increasingly expressing their discontent with these conditions, and - more like wild animals in captivity than like proper domesticates - increasingly failing even to reproduce at replacement rates. This suggests our priorities have become oddly inverted - we focus intense moral concern on animals successfully bred to tolerate their conditions, while ignoring similar dynamics affecting creatures capable of articulating their objections, who are moreover the only ones known to have the capacity and willingness to try to solve problems faced by other species.