GPT-5 shows you don’t need to fear for your job, yet
'We're still missing something quite important, many things quite important.'
—Sam Altman, OpenAI CEO, on GPT-5's limitations
Remember all that talk about artificial general intelligence (AGI) arriving any moment? About AI replacing every white-collar job from lawyers to accountants?
Well, GPT-5 just landed, and it tells us something important about where this technology really stands.
Here’s what you need to know: it's not taking your job next year. For now, it’s just a great productivity tool.
After promising the moon and delaying GPT-5 multiple times, what OpenAI has delivered falls well short of those promises.

Don't get me wrong — the new ChatGPT is good.
It’s an upgrade across many domains, but by small margins. It writes better code (still not the best), makes fewer mistakes (but hallucinations are still present), and costs less to run.
GPT-5 isn't some paradigm shift threatening your career. It's the slow, steady progress of scaling laws at work.
While OpenAI's marketing team clearly overshot, from a business perspective, I sympathise.
They serve 700 million weekly users, growing 4X year-over-year, yet fewer than 10% pay for subscriptions. The company burns extraordinary amounts of cash serving basic queries.
My impression of this model is that they’ve focused on lowering costs while making it easier for users. That’s about it.
When your ‘breakthrough’ technology competes on price rather than capability, you're no longer selling a revolution — you're selling a commodity.
I wrote about this recently, claiming that the latest Chinese open-source models threaten Western pricing/cost regimes in AI.
But beyond cost, this gap between promise and delivery matters more than you might think.
Because while the tech press debates whether GPT-5 represents a ‘significant step’ or a stumble, there's US$500 billion riding on the answer.
Reality Check
Let's take a brief look at the model. To avoid confusion, GPT-5 is essentially multiple AI models bundled with a router deciding which to use.
Here’s how those various models stack up against the other AIs in eight different evaluations.
[Chart: GPT-5 and rival models compared across eight evaluations]
Nothing mind-blowing on this front, but of course, this is just one dimension.
However, even OpenAI's own comparison charts show modest gains. This isn't the exponential curve justifying those eye-watering valuations.
OpenAI needed to ship something to justify the hype, but what they shipped couldn't possibly live up to it.
The reality is, we need a new generation of AI algorithms to meet these bullish timelines. At this stage, the ‘exponential take-off’ narrative pushed by Altman and Anthropic's Dario Amodei looks dubious.
To be clear, I’m very bullish about AI in the long run. What I’m questioning here is the narrative that AI will take every job within mere years.
If you want to get some idea of when AI could replace jobs en masse, then stick with me.
The following chart is a little technical, but it's worth explaining because it matters for how we think about AI in the workforce.
The chart below tracks how fast AI models are improving at completing complex tasks.
The Y-axis shows the length of time it would take a skilled human to finish a task that the AI can complete with a 50% success rate — meaning it gets it right about half the time.
In 2019, GPT-2 could only handle tasks worth a couple of seconds of human effort.
Now in 2025, GPT-5 Thinking can tackle work in software engineering, cybersecurity, and reasoning tasks that would take a person one to four and a half hours.

This capability is doubling roughly every seven months, meaning AI is quickly moving from short, simple jobs to multi-hour projects, and eventually to multi-day work that could reshape many professional roles.
If this line continues, we might expect AI to handle multi-day complex work by the decade's end.
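The extrapolation behind that claim is simple compound doubling. Here's a minimal sketch of the arithmetic. The two-hour baseline (the middle of GPT-5's one-to-4.5-hour range), the August 2025 baseline date, and the seven-month doubling period are assumptions taken or inferred from the chart's description, not precise measurements.

```python
from datetime import date

DOUBLING_MONTHS = 7               # assumed doubling period from the trend line
BASELINE_HOURS = 2.0              # assumed mid-point of GPT-5's 1-4.5 hour range
BASELINE_DATE = date(2025, 8, 1)  # assumed GPT-5 release month

def projected_task_hours(target: date) -> float:
    """Task horizon (in human-hours) the trend implies at `target`."""
    months = (target.year - BASELINE_DATE.year) * 12 + (target.month - BASELINE_DATE.month)
    return BASELINE_HOURS * 2 ** (months / DOUBLING_MONTHS)

hours = projected_task_hours(date(2029, 12, 1))
print(f"Projected horizon at end of 2029: {hours:.0f} human-hours "
      f"(~{hours / 8:.0f} eight-hour working days)")
```

Under these assumptions the trend implies a horizon of several hundred human-hours by the end of 2029, which is why a straight-line reading of the chart points to multi-day and even multi-week work. Of course, that is exactly the "if this line continues" caveat doing all the work.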
However, there are many challenges to those assumptions.
First, there could be unknown limits to the current scaling laws. We've hit multiple limits on the road to here and found new levers to pull each time. We can't guarantee that the next hurdle won't prove insurmountable.
You can broadly divide those scaling limiters into three buckets: energy/compute, data, and architecture. Each deserves its own article, but suffice to say, each has challenges.
A recent survey of 475 AI researchers reveals that 76% believe adding more computing power and data to current AI models is 'unlikely' or 'very unlikely' to lead to AGI.
Zooming into our topic of AI jobs, a big one is simply our ability to hold the context or ‘memory’ of that AI work somewhere. Right now, that's a huge challenge for scaling AI workers at a reasonable cost.
Importantly, these latest flagship models still can't ‘continuously learn’ — a fundamental requirement for the AGI they've been promising.
Comparatively, what makes humans incredible as workers is their ability to learn, upskill and hone their work.
But if an AI's memory is like a goldfish's, its economic value is far more limited. Significant, yes, but not an existential threat to the white-collar workforce.
What I imagine in this future is one where humans shift from doing tasks to reviewing many tasks throughout the day.
Everyone is now a manager; congratulations on the promotion!
Joking aside, these changes are more akin to the job rotations seen in the Industrial Revolution, rather than the mass unemployment feared.
Remember, when Marx first saw industrial looms, he predicted systemic unemployment. But by the end of the century, there were four times as many factory weavers as in the 1830s.
History suggests transformation, not elimination.
The Valuation Reckoning
Now here's where it gets dicey for investors.
OpenAI is reportedly in talks for a share sale that would value the company at US$500 billion. That's up from US$300 billion not long ago — a US$200 billion jump based on... what exactly?

Let's do the maths. At that valuation, assuming they achieve profit margins similar to Google or Microsoft by 2030, OpenAI would need revenue exceeding US$225 billion in five years.
For context, Nvidia — the undisputed king of AI chips — is only projected to hit US$350 billion by then.
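That back-of-envelope can be reproduced in a few lines. The ~28% net margin (roughly Microsoft/Google territory) and the 8x earnings multiple below are my illustrative assumptions chosen to land near the article's US$225 billion figure; they are not OpenAI disclosures.

```python
def required_revenue(valuation_b: float, net_margin: float, pe: float) -> float:
    """Revenue (US$ billions) needed so that valuation = revenue * margin * P/E."""
    return valuation_b / (net_margin * pe)

# US$500B valuation, assumed ~28% net margin, assumed 8x earnings multiple
rev = required_revenue(500, net_margin=0.28, pe=8)
print(f"Revenue needed by 2030: ~US${rev:.0f} billion")
```

Note what the assumptions imply: even at a conservative multiple on mature-tech margins, the valuation demands revenue in the low hundreds of billions within five years. A richer multiple lowers the revenue bar but raises the question of what justifies the multiple.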
Meanwhile, reality is biting. According to S&P Global, 42% of companies have already scrapped their AI deployments this year, up from just 17% in 2024.
Early struggles and glitches are forcing businesses to reconsider the hype.
The economics only work if you believe in endless growth and ever-expanding use cases.
But what happens when investors realise we're getting iterative improvements, not intelligence explosions?
This could be the crack that breaks this tech boom. If OpenAI stumbles, despite all its advantages, the entire narrative could collapse.
The AGI narrative has been brilliant for fundraising. It's enabled massive capital expenditures across the industry, from chip manufacturing to data centre construction.
But narratives have expiration dates.
When every tech giant is pouring billions into the AGI dream, the market might eventually ask: ‘Where's the revolution we were promised?’
We might be reaching that moment. Not because GPT-5 is bad (it's actually quite good), but because it reveals the gap between what we were sold and what's technically possible.
The money train might not derail completely, but the risk here is far from small.
Smart money should start asking harder questions.
If you want to know what we think the next generation of AI winners will be, CLICK HERE to subscribe to our FREE daily insights at Fat Tail Investment Research.
We cover market-leading topics like AI, gold, commodities, and macro trends.