[Political cartoon: A smiling robot labeled GPT-MP stands calmly at the dispatch box while grotesque MPs scream around it. Caption: "House in paralysis as GPT-MP continues to insist that black is not white."]
Opinion

We Should Let AI Run the Government

Hear me out. The case for algorithmic governance isn't as absurd as it sounds.


"But AI makes stuff up!" comes the inevitable objection. Yes, it does. So do politicians. We just call it policy.

Last week, someone confidently informed me that women need 200 fewer points to become chess grandmasters. Said it with complete certainty. Presented it as established fact.

Completely wrong.

There's a separate women-only title, Woman Grandmaster, which does carry a lower rating requirement (2300 against the open title's 2500). The actual Grandmaster title? Same threshold for everyone. But they'd heard it somewhere, it confirmed existing beliefs, and so it became truth.

That's not artificial intelligence hallucinating. That's human intelligence doing what it's always done: confidently filling gaps with plausible-sounding nonsense.

We've been hallucinating for 300,000 years. We just called it "being confidently incorrect" and built entire civilisations on it.

[Chart: The Confidence Gap (accuracy vs. certainty in decision-making)]

The Objections, Examined

Double Standards in Practice

When AI errs: "Fundamentally untrustworthy. Existential threat."
When politicians err: "Well, that's just politics. Nobody's perfect."

AI gives different answers: "See? It's inconsistent!"
Experts give different answers: "That's called nuance."

AI has biases: "The training data is compromised."
Humans have biases: "That's what makes us human."

"Ask five AIs, you get five different answers."

Ask five economists the same question. Ask five politicians. Ask five experts on literally anything contentious. That's called disagreement. We built democracy around managing it.

"AI is biased."

Unlike politicians? News media? Think tanks funded by parties with specific desired outcomes? At least AI's biases are increasingly measurable, traceable, and fixable. Try debugging a human.

The problem isn't that AI can't solve things. The problem is that humans can't agree on what 'solved' looks like.


The Actual Problem

AI can optimise. AI can solve. AI can process information at scale, identify patterns invisible to human cognition, model outcomes across thousands of variables.

What AI cannot do is choose the goal.

"Maximise GDP" and "minimise suffering" and "preserve individual freedom" and "ensure equality of outcome" aren't the same objective function. Sometimes they directly conflict.

And humans? We cannot agree on what we're optimising for.
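To make that concrete, here's a toy sketch in Python (every policy name and number is invented for illustration): the same three policies, scored under three different objective functions, produce three different "best" choices.

```python
# Toy illustration: identical options, different objectives, different winners.
# All policy names and scores are invented for illustration only.
policies = {
    "subsidise_industry": {"gdp_growth": 3.0, "equality": 0.4, "freedom": 0.7},
    "universal_services": {"gdp_growth": 1.5, "equality": 0.9, "freedom": 0.6},
    "deregulate":         {"gdp_growth": 2.5, "equality": 0.3, "freedom": 0.9},
}

def best(objective):
    # Pick the policy that maximises the given objective function.
    return max(policies, key=lambda name: objective(policies[name]))

print(best(lambda p: p["gdp_growth"]))  # subsidise_industry
print(best(lambda p: p["equality"]))    # universal_services
print(best(lambda p: p["freedom"]))     # deregulate
```

The optimiser is flawless each time; it's the choice of objective that decides the answer, and that choice is a political act, not a computational one.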

[Stat panel: 30+ years debating climate · 0 global consensus · conference sandwiches]

That's why we can't solve climate change, housing, healthcare, or poverty. Not because the problems are technically unsolvable. Because we can't agree on what "solved" looks like.

The Track Record

We've been running the "let humans govern humans" experiment for quite some time now.

The results are, charitably, mixed.

[Chart: Trust in Institutions (government vs. technology, 2015–2024)]

War. Corruption. Short-term thinking driven by election cycles. Leaders who optimise for re-election rather than outcomes. Policy based on focus groups and vibes rather than evidence.

And yet when someone suggests AI might assist with governance, the response is immediate:

"But it got a maths question wrong once in 2023!"

◆ ◆ ◆

The Uncomfortable Question

If you wouldn't trust AI to make decisions because "it sometimes gets things wrong"...

Why do you trust humans who get things wrong far more frequently, with far more confidence, and with far less capacity to learn from mistakes at scale?

[Chart: Error Correction Speed (time to fix systematic mistakes)]

A Modest Proposal

I'm not suggesting we hand over the nuclear codes to GPT-5.

I'm suggesting that perhaps the species that still cannot reach consensus on whether climate change is real shouldn't be quite so confident that human judgment represents the gold standard.

The Luddites didn't stop the Industrial Revolution. They just made themselves irrelevant to it.

We're heading toward a world where the question isn't "AI versus humans." It's "humans who use AI versus humans who don't."

And right now, the humans running governments are overwhelmingly in the second category.

The Better Question

"Would you vote for an AI?" is the wrong framing.

Would you vote for a human who actually used AI to make better decisions?

Because right now, you're mostly voting for humans who won't.