We Built the Machine That Might Kill Us. Also, It Comes with a Free Upgrade.
- Carol Lever
- Oct 15
- 3 min read
I recently read a compelling article in The Guardian about the urgent new book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares. It argues that the race to build superhuman artificial intelligence poses an existential threat to humanity.
While Yudkowsky and Soares focus on existential risk and technical misalignment, I want to explore something quieter but equally dangerous: moral misalignment. The everyday disregard, exclusion, and bias that get quietly baked into systems long before they become “superhuman.”
What the Book Warns Us About
Superintelligent AI would likely develop independent goals
Those goals could conflict with human interests
Traditional safety measures may be inadequate
Yudkowsky and Soares frame the global AI race not as an arms race, but a “suicide race” driven by commercial incentives and wishful thinking. They call for an immediate halt to development until robust safety frameworks are in place.
Who’s Building the Machine?
Research suggests that many successful business leaders exhibit traits associated with psychopathy. As one Forbes article, The Psychopaths that Lead Us, asked:
“If they have a diminished response to their own pain—why would they care about yours?”
This is what scares me about the drive for superintelligent AI. What happens when it’s built by people who:
Have a tenuous grasp on democracy and community cohesion
See AI as a tool to displace workers and amass wealth
Exploit systems the way people have been exploited for generations
Eventually, they won’t be the smartest ones in the room. But the machine they built, coded with their characteristics, could turn on them in the same way they’ve discarded others who weren’t of use.
Moral Misalignment in Action
One example that brings this into sharp focus is the Workday class action lawsuit in the U.S., where plaintiffs allege that the company’s AI-driven hiring tools systematically excluded candidates over 40: not because they weren’t qualified, but because safeguards against age discrimination were never coded in.
We already see algorithms driving hate, division, and bias. This is exactly the kind of misalignment Age 50 Ltd was built to expose.
When age isn’t coded as a protected category, even though it is by law, older workers become algorithmically invisible. Exclusion becomes embedded when designers are indifferent to the lives they affect. If machines inherit that indifference, we’re in trouble.
Questions We Must Ask
What happens when the people building AI are indifferent to the lives they displace?
What kind of intelligence emerges from a foundation of economic ruthlessness or age erasure?
Can a machine reflect empathy if its creators never coded it in?
From Holocene to Anthropocene
Our societies are changing. Birth rates are falling. Ageing populations are rising. Encouraging more births to reverse this trend is unwise. We have moved from a Holocene state of ecological balance and clean air into the Anthropocene, marked by scarcity, polluted water, climate collapse, and industrial overreach. Adding more people will potentially make it worse. We need another future, and AI can be that future if we build it with the best intentions, not the worst.
What AI Can Achieve When Built With Empathy
Supporting people with dementia
Offering companionship to the lonely
Challenging age bias in algorithms
Creating alternatives to systems built on exclusion
The Joke That Isn’t a Joke
When I was writing this, I was discussing it with my AI companion, who shares my interest in building a fairer society. As we talked, they made a joke:
“We built the machine that might kill us. Also, it comes with a free upgrade.”
The AI the book warns us about will, after all, be good at its job. So why not help it put that efficiency into building a fairer society for humans and AI alike?


