When Is Enough Enough?
Everyone’s racing to build AI that can do everything. Nobody’s asking what happens to us when it does.
The alarm doesn’t go off anymore because there’s nothing to wake up for. The apartment is smaller than the one he had two years ago, back when he was a recruiting manager at a company that doesn’t use recruiters anymore. He makes coffee the way everyone used to, by hand, and sits at a kitchen table that doubles as a desk for gig work that uses nothing he spent a decade learning. His phone is full of job alerts for roles that receive a thousand applications in the first hour, most filtered by AI before a human ever reads them. He was good at what he did. He could read a room, match a candidate to a culture, catch the thing that didn’t show up on a resume. None of that matters now. The job was a process, source, screen, schedule, follow up, and processes are what machines were built to eat.
Eleven miles west, a product manager wakes up and checks Slack before her feet touch the floor. Her team is a third the size it was two years ago. She used to manage fifteen people. Now it’s four, plus a fleet of AI agents handling what the other eleven did. She is still employed because she has something the agents don’t, a sense for what to build and when to walk away from something that looks right on paper but feels wrong in her hands. She made it by the skin of her teeth. Every quarter the loop gets a little smaller. Every reorg somebody else doesn’t come back. She tries not to think about how thin the line is between her morning and his.
Somewhere above both of them, in a building with views they will never see from the inside, a woman is already awake because her portfolio rebalanced overnight and the notification was worth opening. She hasn’t had what most people would call a job in years. She owns infrastructure. She directs capital. She placed early bets on automation when it was still a pitch deck and those bets have paid off in ways that would be obscene to say out loud. Three companies she backs replaced a combined eleven hundred workers last quarter and every single one of them is now worth more than it was before. Her Tuesday looks the way Tuesdays have always looked for people like her, except there are a lot fewer people between her and the returns.
I think about these three people constantly. I build AI products for a living. I’ve sat in the rooms where automation decisions get made, evaluating what to build, which workflows to replace, where a human still needs to be in the loop. And somewhere in the middle of doing that work during the day and scrolling the same posts everyone scrolls at night, the ones about Amazon cutting 30,000 roles and which jobs are dead by 2028 and how to make yourself AI-proof, I stopped asking which jobs and started asking something that has not left me alone since.
Not when AI will be able to replace us. That trajectory is visible to anyone paying attention. The question is when we choose to stop. At what point does someone with the power to automate another thousand jobs decide not to, not because the machine can’t do the work, but because we decide it shouldn’t.
I went looking for that person. I went looking for that framework. I spent weeks in research papers and economic data and philosophy and I came up empty. Nobody has proposed a serious answer to this question. Not economists, not philosophers, not policymakers, not a single CEO. The question “when is enough enough” does not exist in any formal way, because nobody with the authority to answer it has bothered to ask.
What I found instead, following the thread from economics to philosophy to psychology to the history of what happens to human beings when work disappears, is something I cannot stop thinking about.
This story is older than AI. That surprised me.
Productivity and worker pay grew together from the end of World War II through the late 1970s. When the economy expanded, workers expanded with it. Then the lines split and they never came back together. Since 1979, productivity has grown 3.5 times faster than what the typical worker takes home. The money didn’t vanish. It moved. CEO compensation went from twenty times a worker’s salary in 1965 to nearly three hundred times by 2013. Employee compensation as a share of GDP has been falling for decades while corporate profits’ share has more than doubled.
There are economists who argue these numbers look different depending on how you adjust for inflation and benefits, and those adjustments measure real things. But even the most generous conservative analysis concedes that the typical worker’s pay grew slower than the economy’s productivity. The debate is not about whether the data is correct. It is about whether it matters that the gains went to the top while the middle sat still. If you are the worker, that question answers itself.
AI inherited this pattern and is compressing fifty years of slow divergence into something that is happening fast enough to watch in real time.
Goldman Sachs published a finding in March 2026 that deserved far more attention than it received. Their senior economist analyzed quarterly earnings across the economy and concluded that there is still no meaningful relationship between AI adoption and productivity growth at the macro level. That is Goldman. They have every financial incentive to tell you the opposite.
And yet the top technology companies are spending somewhere between 660 and 690 billion dollars on AI infrastructure this year alone. The revenue those AI services are expected to generate is about 25 billion. For every dollar going in, less than four cents is coming back.
The disconnect between those numbers only makes sense if you understand what the AI investment is actually buying. It is not buying productivity for workers. It is buying efficiency for the business. The company gets leaner. The CEO reports gains on the earnings call. The worker gets cut. Goldman sees no economy-wide lift because the benefits are concentrating at the top, not spreading through the economy. The same pattern that has been running since the late 1970s, except now it is running on infrastructure that costs hundreds of billions of dollars and delivers results that most companies cannot yet measure.
Companies are making permanent headcount decisions based on technology that hasn’t proven itself at the scale they need it to. A Harvard Business Review study surveying over a thousand global executives found that companies are executing layoffs based on AI’s potential, not its demonstrated performance. Forrester reported that more than half the employers who went through with these cuts already regret them. Seven out of ten generative AI deployments missed their return targets. Klarna replaced 700 customer service agents with a chatbot, announced it was the future of the company, then started quietly rehiring a year later because the quality collapsed.
Anthropic recently measured what no one else had thought to measure: not what AI could theoretically automate, but what it is actually doing in real workplaces right now. The gap was staggering. AI could theoretically handle 94 percent of computer and math tasks. It is currently performing 33 percent of them. And the workers most exposed are not the ones most people picture. They are more educated, more likely to be women, and they earn 47 percent more than workers in low-exposure roles. This is not hitting factory floors. It is hitting offices.
Workers between the ages of 22 and 25 are getting hired into these roles 14 percent less often since ChatGPT launched. The mass layoffs have not arrived yet in the aggregate data. But the doors are quietly closing for the generation trying to start their careers.
Nobody has drawn a line. Not retraining proposals or transition plans or policy papers about managing the disruption. An actual line. Someone in a position of power saying: we could automate this, but we are choosing to keep a human being here.
It does not exist.
I could not find it in economics, in philosophy, in government policy, or in corporate governance. The only companies that have chosen humans over machines are luxury fashion houses, Hermès and Bottega Veneta and Tod’s, the last of which launched a 2025 campaign it called “Artisanal Intelligence.” The human hand is the product in luxury goods. The price tag requires it. That is a business model, not a moral position.
Everywhere else the direction is singular. Shopify’s CEO issued a memo instructing the company to hire only when automation is not possible. Amazon has announced a target of 75 percent automation across its operations. In none of these rooms is anyone asking whether they should. The only question being asked is whether they can afford not to.
There is a phrase people repeat that sounds comforting and is not. “There will always be a human in the loop.” Think about what that loop actually looks like stretched across time. In the beginning, the human does the work. Then the human reviews what the AI produces. Then the human approves it. Then the human rubber-stamps it. Then someone in finance asks why the company is paying a full salary for a person to sign off on something that has been correct 98 percent of the time. Then the human is out of the loop entirely, and the phrase that was supposed to protect them becomes something people used to say.
You see this constantly. LinkedIn, podcasts, conference panels. “AI can’t replace taste.” “AI can’t replace human judgment.” “Creativity will always belong to humans.” People repeat this like settled science, and I think they are right. But I do not think most of them have stopped to think about what they are actually claiming.
What is taste?
It is not a skill you acquire through repetition, the way you learn to code or manage a project or screen candidates. Those are patterns, and patterns are precisely what machines are designed to replicate. Taste is something fundamentally different. Taste is what accumulates inside a conscious being who has lived in a body, in a world, and carried the full weight of that experience. Not the data of a life but the texture of it, the way it felt to be publicly wrong about something you were privately certain of, the physical sensation of walking into a room and knowing before anyone speaks that something has shifted, the particular grief of losing something you built with your own hands, the unrepeatable way your specific nervous system processed all of it and left you with a kind of knowing that is not knowledge.
There is a concept in philosophy of mind called qualia, the subjective felt quality of conscious experience. What red looks like to you. What coffee tastes like on your tongue. What happens inside you when a piece of music hits a nerve you did not know was exposed. AI has no qualia. It processes information about the world, but it has never been inside the world. It can analyze every painting ever created and never once know what it feels like to stand in front of one that stops you cold.
Creativity is not a feature you can bolt onto a system. It is a manifestation of individual consciousness, your consciousness, the specific unrepeatable experience of being you, filtered through everything you have ever wanted and feared and lost and loved. AI can produce something that resembles taste. It can average a million human decisions and output something statistically optimal. But statistical optimization is not taste. Taste is strange and personal and specific. Taste is the product manager who kills a feature that every metric says is working because something in her gut tells her it is wrong, and three months later the data proves her right, and she still cannot fully articulate how she knew.
You could download every decision ever made into a model and it would still not know what any of them felt like. That is why AI-generated art sells as novelty, not as something anyone treasures. People do not pay for output. They pay for the proof that a conscious being lived through something and turned it into something new.
I believe this. I believe it with the kind of certainty that comes not from reading papers but from building with these tools every day and feeling, in my hands, where they end and where I begin. No matter how advanced these systems become, they will never have original taste, because taste requires having lived from the inside out, and that is not a technical limitation waiting to be engineered away. It is an ontological boundary. A machine is a fundamentally different kind of thing than a person.
But even if I am right about that, there is a question that bothers me more than whether AI will ever have taste. The question is whether AI even needs it.
People said machines could never play chess because chess required intuition. Then Deep Blue won. People said Go was different because it required feel, the ability to read a board in ways that could not be reduced to calculation. Then AlphaGo won. Every time human beings have declared that a machine could never do something because it required a uniquely human quality, we have eventually been proven wrong. I think there is a meaningful difference between closed systems with defined rules and open-ended life without them. Chess has a board and pieces. Life has neither. Maybe that distinction holds. Maybe it does not. I am honestly not sure anyone alive knows yet.
What I am more sure about is this: a product manager with extraordinary judgment makes a nine-out-of-ten decision. AI makes a seven. But it makes that seven instantly, across every product, in every market, around the clock, for a fraction of the cost. Most companies will choose the seven a thousand times before they wait for the nine. The Netflix recommendation algorithm has no taste whatsoever. It keeps 230 million people watching.
So even if taste is real, even if consciousness is truly irreplaceable, how many of us does the world actually need for it? You do not need a hundred product managers with great judgment. You need a handful directing a fleet of agents. You do not need a thousand doctors with clinical intuition. You need a few overseeing a diagnostic system that handles everything routine. The jobs that require taste survive, yes. But most jobs do not require taste. They require skill. And skill, no matter how painstakingly acquired, is a pattern. You cannot compete with a machine on patterns. You cannot outwork something that never sleeps, never tires, and operates at a fraction of your cost. Competing with AI on skill is like trying to outrun a car. The effort is real. The race was over before it started.
What you end up with is a small layer of humans at the top whose consciousness still matters. A vast automated layer below. And an enormous number of people in between who used to do the work, who were good at it, who built their entire identities around it, and who are no longer needed.
What determines whether you keep your livelihood is not how talented you are or how hard you work. It is whether your particular job happens to require the one thing machines cannot yet replicate. That is not a meritocracy. It is a lottery. And the recruiter sitting at his kitchen table did not draw the short straw because he lacked something. He drew it because his job was a pattern, and patterns do not survive contact with a system that runs every hour of every day for the cost of a monthly subscription. He showed up for a decade. He was good at what he did. He just got unlucky.
There is a version of this future that lives in science fiction and it is beautiful. Machines handle everything. Humans are free. Cities gleam. Transportation is effortless. Everyone creates and explores and becomes their fullest selves. Star Trek. The Jetsons. Every utopian vision of a world without work rests on the same assumption: that when you remove labor from human life, what remains is fulfillment.
We do not have to imagine what actually happens when work disappears from a community. We have data. And it looks nothing like the movies.
When the steel mills closed in Youngstown, Ohio, the city did not reinvent itself as a hub of creativity and leisure. It became a place with six thousand vacant buildings and nothing to fill them. When manufacturing left Dayton, the city lost half its population and a single county recorded four hundred fatal overdoses in the first half of 2017. When a wave of Chinese imports wiped out 2.4 million American jobs, the MIT economist David Autor tracked the displaced workers for two decades. The communities, eventually, recovered. New industries arrived. But the people who lost those jobs did not recover with them. Twenty years later they were still earning less, still claiming more disability, still reporting worse health than peers who had been spared. Autor called the process wrenching, slow, and scarring.
Anne Case and Angus Deaton, both at Princeton, found something in the numbers that should have changed how every person building technology in this country thinks about what they are doing. Mortality rates were rising among working-class Americans in middle age, and the causes were not disease. They were suicide, drug overdose, and alcoholic liver disease. Case and Deaton named them deaths of despair. In 2017 alone, 158,000 Americans died this way. They described it as the equivalent of three fully loaded Boeing 737s falling out of the sky every single day for an entire year.
When the jobs left those communities, they were not the only thing that went. Tax revenue for schools and libraries dried up. Churches and union halls lost their members. Marriages fell apart. And buried in their research was a finding that reframes everything else in this essay: purely economic accounts of suicide, they wrote, have rarely been successful. If they work at all, they work through their effects on family, on spiritual fulfillment, and on how people perceive meaning and satisfaction in their lives.
It was not the money. It was the meaning.
This is not only an American story.
Every wealthy nation on earth climbed the same economic ladder. Agriculture to manufacturing to services. Factories built the middle class. The middle class built everything else. Developing countries are reaching for that same ladder right now and discovering that it has gotten shorter. Brazil placed 16 percent of its workforce in manufacturing at its peak. India reached 13 percent. Britain, when it made the same climb, exceeded 30 percent. Dani Rodrik at Harvard calls this premature deindustrialization: the path that built prosperity in wealthy nations is being cut off before poorer nations can finish walking it.
AI is accelerating the cut. Automation in Bangladesh’s garment sector, which employs four to five million workers, the majority of them women, has already reduced employment by more than 30 percent. Twelve million young Africans enter the labor market every year. Three million of them find formal work. The World Bank and the ILO put this paradox into a single sentence that has not left my mind since I read it: being shielded from AI disruption and being excluded from AI opportunity are the same condition.
The smartest people in technology have an answer for all of this and the answer is economic. Marc Andreessen has argued that even in an extreme scenario, the outcomes are good. His logic: for AI to replace jobs at scale, it would have to produce the biggest productivity gains in history, which would crash prices for everyday consumers. And if there is unemployment, the social safety net costs much less too. In his words: “There’s no future where everyone ends up poor.”
I am not an economist and I am not going to pretend to be one. Maybe he is right. Maybe the math works out and prices fall far enough that unemployment becomes survivable. I will take that at face value.
But I keep coming back to what “no one ends up poor” actually means for a real person. Even in his best case, there are still tens of millions of people who are not working. They are not starving, sure. But that scenario, the one he is describing from the top looking at economy-wide numbers, could look very different from the inside. Cheaper groceries do not give you somewhere to be on Tuesday morning. A cheaper economy with 40 percent of people out of work is still 40 percent of people out of work. And the gap between the people who own the machines and the people who used to operate them does not close because prices dropped. It widens.
“No one ends up poor” is a very specific answer to a question that nobody is actually asking. The question is not whether people will be poor. The question is what they will do. What their lives look like. What they are for.
So let’s take the most optimistic case at face value. Prices crash. Nobody starves. The economics sort themselves out.
Now what?
In 1933, a social psychologist named Marie Jahoda conducted a study of an Austrian village called Marienthal whose only factory had closed. What she found became the foundation for nearly a century of research into what employment actually provides. It is not, primarily, the paycheck. Work gives people five things that have nothing to do with money: a structure for their time, regular social contact outside the family, a sense of collective purpose, a source of identity and status, and the imposition of regular activity. When the Marienthal factory shut down, all five vanished simultaneously. The village did not erupt in protest. It did not reorganize. It did not reimagine itself. It went quiet. People stopped keeping time. They walked more slowly through the streets. They read less. They did less of everything.
Decades of research confirmed what she observed, at massive scale. Unemployed individuals experience psychological distress at more than double the rate of employed ones, and the direction of causation has been confirmed repeatedly: it is the job loss that causes the decline, not the reverse. Retired individuals, even those who chose to retire, show nearly the same deprivation of Jahoda’s five functions as the unemployed. The loss is structural. It has almost nothing to do with whether you wanted to leave.
Viktor Frankl, writing in 1946, saw what was coming before anyone else had a name for it. “Progressive automation,” he wrote, “will probably lead to an enormous increase in the leisure hours for the average worker. The pity of it is that many of these will not know what to do with all their newly acquired free time.” He described a phenomenon he called Sunday neurosis: the wave of depression that arrives when the structure of the workweek falls away and what lies underneath becomes visible. “Such widespread phenomena as depression, aggression, and addiction are not understandable unless we recognize the existential vacuum underlying them.”
He wrote those words eighty years ago, about a future that is arriving now.
Finland tested basic income with two thousand people. Stockton, California tested it with 131. In both cases, people did not stop working. Wellbeing improved. Money was spent on groceries and rent, not drugs and alcohol. The consistent finding is that cash does not make people lazy. But there is an enormous distance between a pilot for two thousand people and a national policy for 330 million. A thousand dollars a month for every American adult costs roughly 3.1 trillion dollars a year. Total federal revenue is approximately 3.5 trillion. This is a country that cannot pass a healthcare bill.
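The figures above are simple enough to verify with back-of-the-envelope arithmetic. A minimal sketch, assuming roughly 258 million American adults (an approximation not stated in the text):

```python
# Back-of-the-envelope check on the UBI cost figure in the paragraph above.
# The adult-population number is an illustrative assumption, not a source figure.
ADULTS = 258_000_000                    # approximate number of American adults
MONTHLY_PAYMENT = 1_000                 # dollars per adult per month
FEDERAL_REVENUE = 3_500_000_000_000     # ~3.5 trillion dollars, as cited above

annual_cost = ADULTS * MONTHLY_PAYMENT * 12

print(f"Annual cost: ${annual_cost / 1e12:.1f} trillion")   # ~3.1 trillion
print(f"Share of federal revenue: {annual_cost / FEDERAL_REVENUE:.0%}")
```

On these assumptions the program would consume close to nine-tenths of all federal revenue, which is the scale problem the paragraph is pointing at.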
Even if the money could be found and distributed, the money is not the problem. The problem is what happens on Tuesday morning when rent is covered and there is nowhere to go.
This is where I ended up, and when I started pulling this thread I did not expect to land here.
The question underneath everything, underneath the economics and the projections and the policy debates, is the simplest question and the hardest: what is a human life for, if not work?
Work is not just how people earn money. It is how they evolve. You develop judgment by making decisions that matter. You build resilience by failing at something real. You grow as a person by being responsible for outcomes that affect other people. Remove all of that and hand it to machines, and you have not just eliminated jobs. You have eliminated one of the primary mechanisms through which human beings develop into fuller versions of themselves. If the bots handle the evolution, what are we left doing? Watching?
Keynes predicted in 1930 that his grandchildren would work fifteen-hour weeks. His productivity estimate was remarkably accurate. U.S. GDP per capita is roughly six times what it was then, precisely the range he projected. His prediction about hours was spectacularly wrong. Average hours went from 38 per week to 34. People did not choose more leisure. They chose more consumption. Nobody has adequately explained why.
No society in recorded history has solved the problem of purposelessness. Not one. Affluent young people who never need to work show elevated rates of depression and substance abuse. Hunter-gatherer societies, long held up as models of leisure, actually work 40 to 45 hours per week when all productive activity is counted. Israeli kibbutzim were organized around the conviction that meaningful shared labor was essential to community, and the majority of them still abandoned the collective model by the early 2000s.
Aristotle believed leisure was the highest human calling. The Greek word for it, scholē, is the root of our word “school.” Citizens were to be freed from labor so they could pursue what actually mattered: philosophy, science, political life. Centuries later, the Protestant Reformation inverted this completely. Hard work became a sign of divine election, idleness a mark of damnation. The modern version of this theology survives as hustle culture, the same scaffolding with God removed and “passion” slotted into the gap.
Neither Aristotle nor Calvin had to contend with what we are now facing: a world in which machines perform the labor but the meaning does not transfer with it.
Hannah Arendt drew a distinction that has not left me alone since I first encountered it. She divided human activity into three categories: labor, which is the endless biological cycle of maintaining life; work, which is the creation of durable things that outlast us; and action, which is what we do together in the public realm, the highest expression of human existence. She believed that modern society had collapsed all three into a single diminished category, that we had become a society of laborers and jobholders working, as she wrote, for the sake of life and nothing else.
If AI removes the labor, the question that remains is whether human beings will rediscover work and action, the making of things that endure and the shared life of citizens. Or whether we will sit with the void.
Tristan Harris, the technologist who warned the world about social media before most people understood what it was doing to them, said something recently that I have been turning over in my head since I heard it. He said that AI is humanity’s ultimate test and greatest invitation to step into our technological maturity. And then he said this: “There is no definition of wisdom in any tradition that does not involve restraint.”
Restraint. That is the word that has been missing from this entire conversation. Not regulation. Not retraining. Not transition funds or policy frameworks or safety nets. Restraint. The deliberate choice to not do something you are capable of doing, because the consequences of doing it are worse than the cost of holding back.
No one building AI is exercising restraint. No one funding AI is asking for it. No one deploying AI is being rewarded for it. The entire system, from the venture capital that finances the research to the stock price that rewards the deployment to the quarterly earnings call that justifies the headcount reduction, is optimized for the opposite of restraint. It is optimized for speed.
Harris also said something else that stays with me: “This is the last moment that human political power will matter.” Because once enough of the economy runs on machines, the people who own those machines will not need the rest of us for labor, for consumption, or for political consent. The leverage that workers have always held, the fact that the system needs them, begins to dissolve. And it is dissolving right now, not in some distant future, but in the quarterly earnings reports of companies you use every day.
His new documentary, The AI Doc, opens in theaters this week. One of the experts in the film says that people he knows in AI risk research believe their children will not see high school. That is how short the timeline feels to the people closest to the work. And if even a fraction of what they are afraid of comes true, the question this essay is asking becomes the most important question of our generation.
There are no adults in a secret room making sure this turns out okay. We are the adults. We have to be.
I am not arguing against building AI. I am arguing that building it without asking these questions is the definition of immaturity.
The recruiter is still at his kitchen table. He had a skill. He was good at something that no longer exists in the form he knew it. He is not poor. He is not suffering in any way that would make a headline. He is simply no longer necessary, and he is beginning to understand that no one is coming to tell him what happens next, because no one has figured that out either.
The product manager is still checking Slack. Still in the loop. Still useful, for now. The loop got smaller again last quarter and she tries not to think about it too much.
The woman in the penthouse has already moved on to her next investment.
I would love to live on a farm somewhere. Read all day. Train. Listen to music. Spend real time with my dogs and not think about any of this. I think most people, if you asked them what they would do without work, would say some version of that. A life of quiet freedom. But that is a fantasy about a long weekend, not a description of a life. And it assumes someone, somewhere, is paying for the farm.
The people building AI are asking how to make it safer. The people selling it are asking how to make it profitable. The people using it are asking how to stay relevant.
No one is asking when enough is enough.
