AI Is Not Ready: Why We're Being Sold an Unfinished Product
AI tools make mistakes 40% of the time, yet companies claim they'll replace engineers. Here's why AI isn't ready despite the billion-dollar hype machine.

The Uncomfortable Truth Nobody Wants to Say
Notch said it. The Minecraft creator. The guy who built one of the most successful games in history.
"AI is not ready, but they're pushing it to market as a loss leader."
And he's right.
Every AI tool has the same disclaimer at the bottom:
- "ChatGPT can make mistakes"
- "Claude can make mistakes. Please double-check responses."
- "Gemini may provide inaccurate information"
These aren't temporary warnings. They're permanent admissions.
The technology is fundamentally flawed. It hallucinates. It makes things up. It confidently delivers wrong answers.
Yet somehow, AI companies are telling us this will replace software engineers in 1-3 years.
Let's talk about the present, not the fantasy future.
When AI Can't Even Handle npm Packages
The pitch from OpenAI and Anthropic: "AI will cure cancer. AI will solve climate change. AI will revolutionize everything."
The reality: AI can't reliably fix an npm dependency issue without hallucinating a solution that doesn't exist.
Real Developer Experiences
Theo (t3.gg): "I don't know what they did, but this is borderline unusable."
Ben: "The vibes are off in GPT-4.6. Don't even bother trying to use it."
Multiple developers: "I've noticed 4.6 and 5.3 making mistakes they weren't making a week ago."
At this point, even the humans are hallucinating, trying to pin down quality that shifts from week to week.
The technology is inconsistent. Unstable. Unreliable.
And we're supposed to bet our careers on it?
The Dishonest Pitch: Replace vs Augment
Here's the question nobody's asking:
Why is the pitch "AI will replace engineers" instead of "AI will augment engineers"?
The answer is simple: Money.
The Enterprise Sales Pitch
Pitch A (Augment): "Claude Code will make your engineers more productive!"
What enterprises hear: "So I have to pay for engineers AND your tool? Pass."
Pitch B (Replace): "Claude Code will replace your engineers!"
What enterprises hear: "I pay you $200/month and save $10,000/month per engineer I fire? Sold."
This is why the narrative is "replacement" not "augmentation."
It's not about what's true. It's about what sells.
The Contradiction: Anthropic's Own Job Board
Anthropic's messaging: "Claude Code writes 100% of Anthropic's code now."
Also Anthropic: 100 open engineering positions on their jobs page.
Wait, what?
If AI writes all your code, why do you need 100 engineers?
Boris (Claude Code Creator) Explains
"Someone has to prompt the Claudes, talk to customers, coordinate with teams, decide what to build next. Engineering is changing. Great engineers are more important than ever."
Meanwhile, Dario (Anthropic CEO) says: "Software engineering will be fully automated. You won't need engineers."
Which is it?
Boris describes the present. Dario hallucinates about the future.
The "4 Out of 10" Problem
Imagine your senior engineer:
"Hey, this is Simon. He's one of our best engineers. But he constantly makes mistakes. Like 4 out of 10 times, he's wrong."
Would you keep that engineer?
Would you trust them with production code?
Would you call them your "best"?
No. You'd fire them.
Yet this is EXACTLY how AI tools perform. And we're told they'll replace us.
Who's Actually Impressed by AI?
The pattern is clear:
The people blown away by Claude and ChatGPT:
- Bad writers
- Bad coders
- People who don't know what quality looks like
Direct quote from an Axios CTO: "Claude Code writes way better code than I ever could."
Translation: You're a bad coder.
Another example: "Claude and I wrote 90,000 words today towards a book on universal code. Insane!"
Question: How much of those 90,000 words are hallucinated nonsense?
The unskilled are impressed because they can't tell the difference between good and bad output.
Skilled engineers and writers immediately spot the mistakes, the shallow reasoning, the confident incorrectness.
The Marketing Team Delusion
Anthropic's own marketing team shared how they use Claude Code:
Their example: "Ad copy creation reduced from 2 hours to 15 minutes, freeing up time for strategic work."
Wait.
What is more strategic than the copy itself?
If your ad copy isn't strategic, what is it? Tactical filler?
"Freeing up time for what?"
- Sending Slack messages?
- Writing thought pieces about how AI is disrupting the industry?
- Attending meetings about meetings?
The actual strategic work IS the copy. If AI is doing it in 15 minutes, either:
- Your previous 2-hour process was incredibly wasteful
- The AI output is significantly worse
Probably both.
When the Hallucinations Hurt
Real story from Reddit:
"We just found out our AI has been making up analytics data for 3 months and I'm going to throw up."
This is the reality when people look under the hood:
AI doesn't just make small mistakes. It confidently fabricates entire datasets.
It creates plausible-looking numbers that are completely fake.
It generates code that compiles but doesn't actually work.
It writes documentation for features that don't exist.
And it does this with 100% confidence.
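"Compiles but doesn't actually work" is worth making concrete. A classic example of the kind of plausible-looking bug that slips through review (a generic illustration, not taken from any specific model's output): JavaScript's default sort compares elements as strings, so numeric data comes back in the wrong order while the code runs without a single error.

```javascript
// Plausible-looking but wrong: Array.prototype.sort without a comparator
// coerces elements to strings, so 100 sorts before 20.
function sortScoresBuggy(scores) {
  return [...scores].sort(); // runs cleanly, silently wrong for numbers
}

// The fix: an explicit numeric comparator.
function sortScoresCorrect(scores) {
  return [...scores].sort((a, b) => a - b);
}

console.log(sortScoresBuggy([5, 100, 20]));   // [100, 20, 5] (lexicographic)
console.log(sortScoresCorrect([5, 100, 20])); // [5, 20, 100]
```

No error, no warning, no crash. Only a human who knows what correct output looks like catches it, which is exactly the point.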
The Present vs The Future
The AI hype cycle:
- 2023: "AI will change everything!"
- 2024: "AI is replacing jobs now!"
- 2025: "AI will cure diseases!"
- 2026: "AI will achieve AGI!"
The actual present:
✅ AI makes constant mistakes
✅ Every output needs human verification
✅ Hallucinations are fundamental, not fixable
✅ Reliability is inconsistent between versions
✅ "Can make mistakes" is a permanent disclaimer
Stop talking about the future. Let's talk about NOW.
And NOW, AI is not ready.
Why Companies Push Unfinished AI
The real reason AI tools are rushed to market:
1. Investor Pressure
Billions of dollars raised. Investors want returns NOW.
Can't wait for the technology to actually work. Ship it anyway.
2. Market Share Land Grab
First mover advantage. Lock in users before competitors.
Quality comes later (maybe). Adoption comes first.
3. Loss Leader Strategy
Lose money per user now. Build dependency.
Raise prices later when switching costs are high.
4. Hype Bubble
The AI narrative is too profitable to slow down.
Stock prices, funding rounds, and careers depend on the hype continuing.
Nobody wants to be the one who says "wait, this doesn't actually work yet."
What "Not Ready" Actually Means
AI tools ARE useful for:
- Brainstorming ideas
- First drafts that need heavy editing
- Repetitive code boilerplate
- Autocomplete suggestions
- Learning new concepts (with verification)
AI tools are NOT ready for:
- Production code without review
- Mission-critical decisions
- Replacing human judgment
- Content that doesn't get fact-checked
- Anything where mistakes have consequences
The problem isn't that AI has no value.
The problem is the dishonest marketing claiming it can do things it fundamentally cannot.
The Pushback
When Notch tweeted "AI is not ready":
Responses:
- "Out of touch boomer!"
- "He doesn't understand technology!"
- "He's just bitter!"
But Notch is right.
He built Minecraft. He understands complex systems. He knows the difference between working technology and promising prototypes.
Calling him out of touch doesn't change the facts:
- AI hallucinates
- AI makes consistent mistakes
- AI requires constant human verification
- AI's reliability is unstable between versions
These are not opinions. These are measurable facts.
What Should We Do?
For Developers:
Use AI as a tool, not a replacement:
- Generate boilerplate, but review everything
- Use for exploration, not production
- Treat it like an intern: helpful but needs supervision
- Never trust it blindly
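"Never trust it blindly" can be mechanical, not just an attitude. One cheap pattern, sketched here with invented field names (`daily`, `total`), is to cross-check any AI-produced aggregate for internal consistency before it reaches a dashboard; fabricated numbers rarely stay consistent with each other:

```javascript
// Hedged sketch: treat AI-generated data as untrusted input.
// The report shape (daily, total) is hypothetical, for illustration only.
function reportIsConsistent(report) {
  const sum = report.daily.reduce((acc, n) => acc + n, 0);
  return sum === report.total;
}

console.log(reportIsConsistent({ daily: [10, 20, 30], total: 60 })); // true
console.log(reportIsConsistent({ daily: [10, 20, 30], total: 75 })); // false
```

A check like this would have flagged the fabricated analytics from the Reddit story in minutes instead of three months.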
For Companies:
Be honest about limitations:
- Don't pitch AI as a replacement for skilled workers
- Acknowledge hallucination risks
- Build verification processes
- Don't fire people to "replace them with AI"
For Everyone:
Demand honesty:
- Stop accepting "AI can make mistakes" as acceptable for critical uses
- Call out dishonest marketing
- Recognize when you're being sold unfinished products
- Don't let FOMO override critical thinking
The Bottom Line
AI in 2026 is like self-driving cars in 2016:
Impressive demos. Promising technology. Real potential.
But not ready for full autonomy.
You wouldn't put your family in a self-driving car that "makes mistakes 4 out of 10 times."
Why would you trust your codebase to AI with the same error rate?
The technology will improve. Maybe in 2-3 years, maybe longer.
But right now, today, in the present?
AI is not ready.
And pretending otherwise doesn't help anyone except the companies selling it.
Key Takeaways
AI makes constant, fundamental mistakes - not bugs to fix, but core hallucination issues.
The "replace engineers" pitch is a sales tactic - enterprises buy cost savings, not augmentation.
Unskilled users are most impressed - because they can't spot the quality problems.
The hype doesn't match reality - talk about the present, not the imagined future.
Use AI as a tool, not a replacement - augment human work, don't replace human judgment.
What Do You Think?
Is AI ready for production use without human oversight?
Are companies being honest about AI limitations?
Have you caught AI making up data or code?
Share your experiences in the comments.
Based on the video "AI is not ready" by Mo Bitar (33K+ views)