100 Years From Now: The Ghost in the Contract

AI doesn’t care about accountability. It can’t. It is a system that produces outputs, and to a machine, ruining a career and saving a life are the same event.

Fine. A hammer doesn’t care either.

But the people who build this thing are asking us to hand our thinking over to it. Alex Karp, CEO of Palantir, just told the next generation to leave the humanities behind; he himself holds a doctorate in philosophy. Nvidia’s Jensen Huang has spent two years telling kids to stop learning programming. Sam Altman talks about “abundance” the way pastors talk about heaven. The pitch is theological: surrender your judgment, trust the oracle, the machine sees further than you.

Say we do it. To what do we give our thinking?

An entity that has already removed itself from the legal equation.

Anthropic’s consumer terms limit total liability to six months’ fees or $100, whichever is greater. Claude Code writes bad code, burns your credits, then burns more credits fixing what it broke. You pay for both. If it nukes your production database tomorrow: $100.

Microsoft went further. Copilot’s terms of use state that:

“The Copilot software is intended for entertainment purposes only. It may make errors, and may not function as intended. Do not rely on Copilot for important advice. Use Copilot at your own risk.”

Entertainment purposes only. The same language psychic hotlines use. On a tool Microsoft sells to enterprises at $30 a seat and ships integrated into Windows.

Gemini’s terms ship the service “as is,” disclaim all consequential damages, and warn: “Do not rely on the Services for medical, legal, financial, or other professional advice.” That covers most of what knowledge workers do.

The pattern is identical. Give us your thinking, give us your money, give us your career. When the thing you trust is wrong, it’s on you.

Now stack Huang on top of this. Don’t learn programming. Fine; but if you can’t read the output, how do you detect hallucinations? The man telling you to stop learning is the one selling the hardware that makes the mistakes. The company whose model wrote the broken code owes you $100.

This is the logic of the God-King. Surrender, trust, and if the harvest fails it is because you lacked faith. God cannot be wrong. God cannot be sued.

A counterexample just emerged. Linux kernel maintainers published an official policy on AI-generated code: AI can help, but AI can’t sign. Every line in the kernel — the operating system that powers most of the internet — carries a human name, and that human is legally responsible. “The person using these tools is responsible for the output.” That is the entire policy.

AI companies sell the inversion: the AI does the work, the company disclaims the liability, and you are responsible anyway.

The trap: if Huang wins and no one learns to code, the Linux policy becomes meaningless. You can’t hold someone responsible for approving code they can’t read. The signature becomes a ritual, a blood sacrifice to the legal system.

If you think regulation will fix this, look at what happened when an actual liability bill arrived.

In 2024, California’s legislature passed SB 1047, which required kill switches and imposed liability for catastrophic harms. Assembly vote: 41-9. Majority support in every poll. OpenAI, Meta, Google, and a16z lobbied hard against it. Newsom vetoed it. The industry won. OpenAI’s federal lobbying rose from $260,000 in 2023 to $1.76 million in 2024, a sevenfold jump. Anthropic’s more than doubled.

You can have technological supremacy without accountability. Nothing in physics requires the most powerful systems ever built to answer to a person.

How do you force private industry to take responsibility when every incentive points the other way?

Look at tobacco. Fifty years of suppressed studies, manufactured doubt, and paid doctors. Millions of deaths. It took the 1998 Master Settlement Agreement — a legal siege by 46 state attorneys general — to force a stop. No one went to prison. That is what corporate accountability for civilizational harm looks like when capitalism polices itself: fifty years late, and still no handcuffs.

The AI lobbying push is already further along than Big Tobacco’s ever got.


This is where it stops being about code and starts being about who lives.

In Gaza, an Israeli AI system called Lavender generated a kill list of 37,000 Palestinians with a roughly 10% error rate. Human analysts reviewed each target for about 20 seconds before authorizing a strike. Thousands of civilians died in homes marked by a machine and signed off by a human with no time to think. The journalist who broke the story, Yuval Abraham, summed it up in one line: “AI-based warfare allows people to escape accountability.”

And it’s the same companies. Palantir runs the Pentagon’s Maven AI system. Anthropic and OpenAI are chasing defense contracts. The philosopher who tells you to stop thinking is the man selling the targeting system.

Same structure as the $100 liability cap. Only with bodies.


This is where 2124 is headed. AI everywhere. Accountability nowhere. The people who called it “empowerment” will be the ones who signed every waiver, surrendered every judgment, and woke up in a world run by systems no one can sue — where the code in your IDE and the drone over your head were built by the same company, licensed under the same terms, and protected by the same lobbyists.

“Empowered” is the marketing word for its opposite. A genuinely empowered person is one who kept the ability to say no. Kept the judgment. Kept the right to sue. Kept the code. Kept the trigger.

The God-King does not abdicate the throne. You have to take it.

And nobody is even showing up.
