When human is the enemy

A first-class MSc graduate is in the news. He's been rejected for 500 jobs, and he suspects AI screening is to blame. His specialism? Machine learning.

A father speaks on a Radio 4 documentary about the suicide of his thirteen-year-old daughter. Social media platforms fed her harmful content algorithmically, because they were optimised to keep her online as long as possible, until she took her life. He tells the reporter: I just want someone (a human, he means) to tell us what they did. The tech firms send no one to the inquest.

Scarlett Johansson turns down the opportunity to voice ChatGPT, so they reproduce her voice without permission or payment.

A waitress is sacked for a 'human error'; her employer brings in at-table ordering by app. Unforgivable, these days, to be human.

The machines, and their controllers (Mark Zuckerberg, Elon Musk, Sam Altman, Sundar Pichai), are taking over.

Being human is a weakness, now; an impediment to revenue growth. Once we were unique: now we are data points: replaceable. Often: inconvenient.

Today I write not just about AI coming for blue-collar jobs, but about how the idea of being human, with human needs like water, food and not being bombed alive, is being unpicked in front of our eyes. How us being human is a problem for the billionaires. But one they're pretty close to having a solution for.

For this looks very close to endgame, rhetorically speaking.

And I'm not talking about systems going rogue; the evil genius robot of the sci-fi films. I'm talking about what happens when AI works exactly as it's designed to.

Using words from those building and controlling the machines, those who've fallen victim, and the mass media who increasingly adopt the values of machines in describing the world we live in, I am asking: how much longer will humanity be tolerated? And are you as confident as you were two years ago, that you can outrun the terminators?

The reporting of war in The Middle East: the strange invisibility of humans

The war in Iraq and the Middle East is perhaps the first war of the robots.

The dehumanisation of enemy populations, remote warfare and the minimisation of the toll of war on civilians is nothing new. War depends on such paradigms as not all lives being of equal value, and murder in the name of state interests being morally neutral.

But I'm struck right now by the headlines in the mainstream media. Human suffering has been replaced in their worldview with what matters to those now running the world: the cost.

Iranian missile strikes are costing big oil billions

Rising cost of living from Iran conflict

Markets react sharply… oil prices surge… FTSE falls

11 million barrels per day lost… global gas supplies impacted

$16.5 billion in arms sales approved

These are headlines I've noted down at random. Unscientific. Here's an unscientific anecdote to go with them: an elderly couple with a van near mine told me they might need to cancel their annual holiday to the South of France because of the cost of diesel. They're hoping the war ends soon, as they'd like to see Nice.

War is not now about life and death: the messy, mundane business of bleeding or losing limbs or bereavement; it is about transactions, supply and technology.

A journalist's piece is headlined: “$12.7bn… how it’s been spent”, as if writing a shopping blog.

The same report notes: “A single Tomahawk missile costs about $3.5m”. As an aside, it is mentioned that one such missile struck a school.

We are told the war cost “$3.7bn in the first 100 hours… nearly $900m a day”.

The war is narrated through prices. And only then: people. No one even asks what the point of it all is. In the post-truth world, causation doesn't seem to exist. Things happen. More things happen. Why is a question with a long answer, and long answers aren't scroll-friendly.

This is semantic ordering: the process by which we assign relevance to things. And over time, that ordering becomes expectation.

You come to understand war first as an economic event. And only afterwards as something that happens to people.

What all this means, outside of the bloody business and rhetoric of war, is that we should be prepared to make sacrifices. Not for other humans, or values and principles, not even for glory: but so the automated money machine can continue to spin.

It's all part of the new world where robots matter more than people. Or, more precisely, where the owners of the robots live like super-humans and everybody else can go to Hell.

Welcome to the robotics era: when the machines come for everybody's jobs

I predicted at the start of January that May would see the launch of advanced robotics through a concerted multi-corporate PR offensive we are meant to get excited about. I predicted too that this would be followed by their very rapid introduction into the workplace: deals have been signed, but the communications experts want us to feel good about it. Watch this space and hope I'm wrong.

In 2025, companies ordered over 36,000 robots in North America. Whilst this was a modest 6.6% increase on the previous year, what's notable is that this was no longer concentrated in automotive, but spread across food, electronics, pharmaceuticals.

Globally, there are 4.6 million robots operating in factories. I don't mean mechanical robots; I mean ones who learn. Who learn in groups. Whose performance increases at a rate that no human can match. I mean robots who move, see, adapt and communicate.

Factories that once required redesign to accommodate machines are now being populated by machines that are human sized. They are human shaped. They don't have souls? Well, all the better for them to make money.

China is investing tens of billions towards automating up to 80% of final assembly work. The unemployment that will follow will be a genocide of its own.

My uncle, a very senior civil engineer, now retired, on hearing on Radio 4 of the graduate unemployment rate, put it starkly: "there is going to be civil war."

I think we all suspect this. And not in decades to come, but in this one.

A Barclays report notes that robotics systems are designed to take “jobs we don’t want.”

But did anyone ask the worker if he didn't want his job? Isn't it more that robots do not negotiate? Do not require maternity leave. Do not file harassment claims. Do not talk back.

And so the definition of a good worker is altered. When your competitor is a robot, you'd better not have an opinion, an emotion or a need. You'd better not need a toilet break or to want to sleep.

Humanoid systems that once cost millions are now approaching $100,000 per unit, placing them within reach of mid-sized firms.

The World Economic Forum estimates 83 million jobs may be displaced globally in the next few years.

A UK minister described this as the “first wave” of job transformation. The language is careful. Transformation does not sound like hunger, homelessness, shame, fear.

And still we are sold the line that all of this somehow benefits us. We are encouraged to thrill at the novelties of machine labour, simultaneously presented as the logical progression from past technological advancements and as a wondrous, Godlike achievement.

But warehouses where robots outnumber people grow into warehouses with no staff at all. Retail spaces where interaction is optional become spaces where interaction is not allowed. The HR Director who signs off on the protocol suddenly has no humans to justify her job.

Our needs as humans simply create friction, in this well-oiled humanless revenue creation dream.

The Rise of Automated Policing and Pre-crime detection of 'offenders'

If we fight back? They’ll automate suppression. Robocop is coming to a small town near you in the next six years. AI systems are already being deployed to track, predict and manage citizens without human judgement.

In the UK, police forces are being equipped with AI‑powered facial recognition vans and cameras that can scan crowds in real time and compare every passing face against police databases, alerting authorities to any “match”. Campaigners have warned this is hurtling society toward “an authoritarian surveillance state” where constant biometric surveillance becomes routine.

This isn’t limited to Britain. In the United States, New Orleans police quietly used live facial recognition cameras across hundreds of street cameras, monitoring public spaces in real time and triggering alerts to officers — in some cases without proper transparency or legal oversight — until public outcry forced a review. Proposals are now being advanced to formalise such real‑time facial surveillance with reduced safeguards.

Across the UK and US, “predictive policing” systems sift historical data to forecast where crime might occur and who might commit it: the results are used to direct police resources and patrols.

In some cities (involving more than 50 US police departments) authorities are even experimenting with AI that watches behaviour to decide what counts as “suspicious” before anything has even happened, echoing dystopian fiction where the potential for deviance becomes reason enough for intervention.

What all this adds up to is a form of automated state power that doesn’t need humans to administer control. Your loss of liberty does not require a human warden, a beat cop, or a judge. It requires only data, algorithms, and the systems that process them.

And if the robotics revolution requires it? Would you be so selfish as to stand in the way of progress?

The artisan is not safe

My personal answer to the AI onslaught is not working as I hoped.

And I'm not convinced it's a personal failure.

I ran a small silver jewellery business for a few months. Until last month, in fact, when the cost of silver became as prohibitive for me as for many other jewellers, driven sky-high by its use in chips and servers for the AI data centres (but I'm not bitter).

I was initially wowed by the slick, smart Shopify platform, into which I barely had to whisper an aspiration and it was made real in pixels, with all the backend processes I'd want.

But then something weird happened. The in-app ads for extensions, apps and upgrades (many free) started appearing:

"Get your products modelled in seconds."

"Create videos of your products instantly."
I'm told repeatedly that the successful Shopify owners are all doing this. I look on Instagram and some are. The messaging is: to tell the truth is to be left behind.

More: I am told that at the click of a button I can optimise my store by selling other products, which I have never touched, which were made by machines, and will be promoted by machines, but for which I'll receive some small cut.

Pretty soon I get the picture: handcrafting is wilful, Luddite and economic suicide. I'm missing a trick, an opportunity, the point.

Check the Etsy threads: dozens of handcrafters are shutting their businesses as the platform increasingly sells products designed by AI, made by robots and marketed using automated software. The platform shows which products sell, which photos get clicks, which items are “recommended”, and humans follow. It's a loop, of course. But who's driving it? Are machines serving us, or are we serving them?

Sellers describe listings being removed or suppressed by AI with no meaningful right of appeal, sometimes because identical products appear elsewhere — even when those copies are stolen from the original maker. The logic is circular: the machine sees duplication, flags the human, and leaves the copy intact.

Meanwhile, the front end is no longer a marketplace so much as a feedback machine. Products whose images conform to algorithmic expectations (clean backgrounds, repeatable styles, quick to render) are preferred. Those that resist standardisation sink. By 2025, more than a third of new digital listings contained AI-generated elements, because they are faster, cheaper, and easier for the system to parse.

And so sellers begin to design not for buyers, but for the algorithm that mediates them. They optimise images for click-through, titles for search, output for scale. The craft becomes secondary to its performance metrics. What looks like creativity is just compliance.

So the artisan, unless they are strong enough or resourced enough to stay away from ecommerce, is pulled into the values of a system they wish to stand against.

Text in bright lozenges highlights some notional saving. We'd love handmade, we say, but humans are expensive. We save money on the real, so we can pay for our premium Spotify subscription, and our Netflix. For our Claude AI subscription and our Ring Doorbell.

The greatest stunt they've pulled, these robots, is to make us think they're an essential. Like fire, as Pichai says; like electricity.

Agentic Shopping and the Rise of AI Buyers

But agentic selling is old hat. The talk now, the trend coming soon to a device near you, is agentic shopping: software that searches, selects, and purchases on your behalf, reducing you to a passive observer of your own consumption.

Imagine a system that will decide what groceries to order, which gifts to buy and what goes in your holiday wardrobe, leaving you with more time to scroll TikTok. The irony is cute: robots buying from robots.

You begin to see how humans are looking less and less necessary? Our centrality to the wealth narrative displaced, our participation in society is reduced to that of an infant, reliant on others for the servicing of needs we can no longer articulate.

As Max Levchin states, “AI agents… will soon manage shopping and payment decisions for consumers.” Not assist, but manage. The distinction matters. Management implies delegation of judgement, not just labour.

McKinsey & Company is more explicit: “The ‘customer’ is now an AI agent acting on behalf of a person.” In other words, the market is beginning to treat the human not as the decision-maker, but as the principal in a contractual arrangement — one step removed from the act itself.

Andy Jassy describes this as a transformation, in which agents “capable of buying goods on behalf of consumers could transform online shopping.” That is a careful phrasing. It avoids stating the obvious consequence: that optimisation will migrate away from human perception and towards machine readability.

This is not a speculative shift. It is already visible. Product listings are being structured for algorithmic parsing rather than human judgement. Price, availability, delivery time and standardised attributes become decisive, because these are the variables machines can process at scale. Aesthetic judgement, narrative, and craft — the qualities that require interpretation — become secondary or invisible.

In such a system, producers are no longer competing for attention, but for compatibility. The question is not “will a person choose this?” but “will an agent select it?” That is a different market, governed by different incentives.

The result is a subtle but consequential inversion. Systems originally designed to serve human choice begin instead to anticipate, shape and ultimately bypass it. The transaction still bears the imprint of a human preference, but the act of choosing has already been abstracted into a set of machine-operable criteria.

At that point, the issue is not whether machines are participating in the market.

It is whether the market still requires human judgement at all.

The consequences for society are napalm on the paddy fields. Local shops close; high streets become haunts for the unemployed (perhaps a robot is doing the job they 'didn't want'). Communities that formed around markets and independent stores vanish. Children grow up never seeing a butcher, baker or cobbler. Bartering, banter, connection, cohesion fade, replaced by the quiet hum of AI systems exchanging goods for credit. The rise of AI shoppers is a convenience, we'll be told; many will be keen to experiment: it's not the thing to be left behind.

And so the turkeys collect coins for the gas meter. Because coins are shiny. Because we've never had to fight for ourselves as a species before and their algorithms have shown such division we're too busy hating other humans to notice what the robots are doing.

Sleepwalking in silk pyjamas

For someone who has been warning of this for probably four or five years now, it's hard not to wonder: will people wake up in time?

I recently spoke to a member, one of many who have said to me in the last two years, "I didn't join because I'm worried about my job. I love my job. I'll always be needed because that's the sort of job I have. But I want to support what you do."

He was laid off in September.

I don't wish to be right about any of this. There is no satisfaction in watching it move from theory to pattern to history (will robots read history books? No. They'll gobble up all the data and create numbered lists).

The belief that one will be spared because one is skilled, or liked, or necessary is of immense comfort; right up until it no longer holds.

And even then, the explanation tends to remain personal: bad timing, a difficult market, a single decision.

It is easier, still, to believe that this is happening around us, rather than to us. Or that if it happens to us, it's something our own efforts can put right.

We think that there is time; that there will be warning; that we will recognise the moment when things change.

But changes accumulate.

And we distract ourselves by creating cartoons of ourselves, or make sharp points about the water usage in data centres, and somehow seem reluctant to realise: humans are doing humans out of existence.

What’s Wrong With Humans?

It is worth listening, carefully, to how humans are described by those building the systems.

Mark Zuckerberg has frothed about a future where people will have “AI friends” that can replicate social connection at scale. Am I the only one wondering why I'd need them, unless there were going to be a lot fewer humans around?

Sundar Pichai, CEO of Google, has said: “AI...is more profound than fire or electricity.” It's a sublime claim: at once AI becomes inevitable, elemental, and its opponents are positioned as cranks who wish to deny progress. The difference his statement hopes to elide is that fire and electricity served humans. Increasingly, the role of the human is to serve the robots and their controllers.

Sam Altman says, “Humans who use AI will replace those who don’t.”

The semantics are breathtaking: we will make you extinct, he is saying, if you refuse to join the party. We will erase you if you don't adopt, conform, behave as the system requires. Humans who resist, who refuse, who slow the machine, are disposable. Their replacement is framed as natural, inevitable and beneficial. But even a child would ask (only a child, it seems): what's the next stage in the plan? What if I don't want to play with robots? What do you mean when you say 'replaced'?

The automation of democracy

Politicians are beginning to echo the same framing.

Democracy, after all, is hardly efficient. It requires debate and delay. It tolerates disagreement. It's wildly unpredictable and, historically, human beings have shown a stubborn habit of voting in their own self-interest.

There is increasing interest in automated governance. For now we have the gentle introduction of the idea in fine-sounding words: "policy informed by data models", "decisions guided by predictive systems", "public opinion measured in real time" and fed back into algorithmic adjustment.

The language around it sounds benign, progressive even: better decisions, evidence-based policy, responsive systems. But the project shifts from representation and communal problem solving to mechanistic and unfeeling management of populations. In such contexts it is the outliers who are vulnerable. Machines don't cope well with exceptions. And I believe we're all exceptional.

Within this shift, the role of the human changes from participant to input. From citizen to data point.

Robots are winning elections now. Though I've yet to see the first AI candidate, I don't believe that time is far away. For now, campaigns deploy AI to target voters, tailor messaging and predict behaviour. Systems parse social media, voter rolls, purchasing histories, even micro-expressions in video footage to determine what will move someone to vote or stay home.

The problem is not that AI helps campaigns. It is that decisions once made through debate, persuasion and negotiation are now taken based on the God of Data. That the opportunity for compromise, synthesis and creativity has been replaced by models which are binary. Systems learn what drives emotion, attention, compliance. Citizens still vote and candidates still stand for office. But the invisible hand guiding the outcome is a machine.

Consider recent elections in the United States and India, where AI-powered tools analysed billions of data points to predict turnout, identify persuadable voters, and deliver content tailored to biases and fears. In some cases, synthetic video and text content was deployed at scale to influence perception. The choice of the electorate was shaped, nudged and conditioned. The machine decided what was likely to produce the “desired outcome.” Human deliberation — the messy, unpredictable, moral part of democracy — is undesirable. And no one is accountable. "No one lied," they'll say afterwards. The machine hallucinated, they'll say. Or more likely: the machine did only what it was trained to do.

Did anyone commit suicide when fire was invented?

In 2018 the first pedestrian fatality involving a self-driving car occurred when an Uber test vehicle struck and killed Elaine Herzberg as she crossed a road. By late 2025 the U.S. National Highway Traffic Safety Administration counted 65 fatalities caused by technology still in use today.

The law cases mount up: AI-enhanced surgical systems causing serious harm, injury and catastrophic surgical errors.

Multiple suicides have been documented in legal filings from prolonged interactions with AI chatbots; families have explicitly alleged in lawsuits that teenagers killed themselves after sustained engagement with systems like ChatGPT or other generative models where warnings were inadequate and safeguards failed.

The adult version of ChatGPT has been delayed. But it's, um, coming.

Reporting from India highlights mounting distress among tech workers as AI‑driven automation takes jobs in a country with minimal welfare benefits. There has been a spike in suicides among IT professionals.

People have died in fires. People have died from electricity. The difference is that these deaths were accidents. The deaths I'm talking about above occur when AI systems work exactly as they were intended to do. It's not murder. But it's close surely to culpable homicide?

These deaths are not accidents. They are the inevitable product of obedience: systems doing exactly what they were designed to do while humans break, bleed, despair. AI does not hesitate, does not grieve. It calculates, optimises, enforces. Lives register only as inputs, outputs, or anomalies to be corrected.

It is a quiet kind of cruelty. The machines are flawless in their function, and in their perfection they render human fragility irrelevant. We are present only as parameters, our suffering a statistical footnote. The question is no longer whether we control these tools. The question is whether we still matter at all when the tools no longer need us, when human judgement, error, grief, and instinct have become optional — or obsolete.

Hysteria? Or following the breadcrumbs?

It is not that humans are explicitly being removed. It is that their presence is being redefined.

Their value is conditional on how well they fit within the system: how predictable they are; how efficiently they can be processed; how little friction they create.

For now, the solution is not to remove them entirely.

It is to reduce their impact.

Being human is tolerated still; just.

Just as war reporting trains us to consider as paramount the economic, not human, cost of war, so we are all taught in increments: that we need to accommodate the robots.

We can't speak to our bank, even when they've made a catastrophic mistake.

We can't get a refund on a train fare if we can't meet the bot's requirements for data.

We should try to decide whether we have a fatal illness using an online chatbot, and not expect to see the doctor our taxes pay for.

We need to fit into a world run by machines for the owners of machines.

And we're doing barely a damn thing to challenge it.