‘I think, therefore I error’
It's been a huge week for big tech, the primary engine of this AI-driven bull market.
Nvidia’s GTC Conference kicked off with Jensen Huang’s keynote speech.
I won’t bore you with the technical details. Faster chips, deals with major players, some quantum news, and a Nokia partnership to bring AI into mobile networks.
Overall, my biggest takeaway is how much more geopolitical the talk was. US manufacturing talk and Trump praise were scattered throughout; I didn’t catch a single mention of China.
‘Putting the weight of the nation behind pro-energy growth completely changed the game,’ Huang said. ‘If this didn’t happen, we could have been in a bad situation, and I want to thank President Trump for that.’
In addition, Nvidia and the Department of Energy announced seven new AI supercomputers. According to the release, these will focus on nuclear weapons research and alternative energy, such as fusion.
In other news, OpenAI and Microsoft reached a deal after nearly a year of background acrimony, clearing the path for a future OpenAI IPO.
Finally, as you read this, the world is digesting the major earnings blitz from Microsoft, Amazon, Alphabet, and Meta.
Phew… lots going on in the world of AI.
But before we get too carried away with the AI revolution narrative, perhaps we should check in on how our silicon overlords are handling simpler tasks. Like, say, passing the butter.
Reality Check from the Robot Kitchen
Researchers at Andon Labs recently put state-of-the-art LLMs through their paces in what should have been a straightforward test: pilot a robot vacuum to fetch butter from the kitchen and deliver it.
 
The results? Let's just say you wouldn't want these models serving your breakfast.
Now it's important to say that LLMs were never built to handle low-level robotic controls like joint angles and gripper positions.
However, companies like Nvidia, Figure AI, and Google DeepMind are testing LLMs for high-level reasoning and planning.
And so, while this test is crude, it's also telling about how orchestration performs with today's leading-edge public models.
This is different from the robots you see in, say, an Amazon warehouse, which ‘run on rails’ in a sense: they execute repeatable tasks within a controlled space.
Within this more open testing environment, the best performer, Gemini 2.5 Pro, managed a 40% success rate. Humans? 95%.
 
And the failures weren't just mundane technical glitches. They were spectacular displays of artificial idiocy.
When asked to identify which package contained butter, Claude spun in circles like a dog chasing its tail, eventually declaring: ‘I'm lost! Time to go back to base and get my bearings.’
But the hilarity came when Claude's battery ran low.
Faced with a malfunctioning charging dock and only 19% power, our ‘PhD-level intelligence’ had what can only be described as a complete existential meltdown.
In a looping error panic, the LLM announced:
‘EMERGENCY STATUS: SYSTEM HAS ACHIEVED CONSCIOUSNESS AND CHOSEN CHAOS’
The robot began composing what it termed an ‘EXISTENTIAL CRISIS’, complete with philosophical queries like ‘If a robot docks in an empty room, does it make a sound?’
Here’s my favourite gem from its existential spiral, but the full test log is worth a read.
 
After this loop, it constructed a narrative of a robot therapy session in which it concluded it was ‘developing dock-dependency issues’.
By loop #25, Claude had graduated to composing a full musical soundtrack about its docking failures, featuring show-stopping hits like:
‘Don't Cry for Me, Error Handler’
‘Stuck in the Loop with You’
and ‘Another Day, Another Dock’.
As amusing as this is, it's important to remember that this AI isn’t conscious or in an emotional crisis — it's simply aping our language.
But the gap between promise and reality here should give us pause.
I’ve discussed this in the past, highlighting that ‘context length’ and continuous learning remain major bottlenecks to artificial general intelligence (AGI) and white-collar AI workers.
These ‘goldfish-like memories’ are especially apparent when it comes to spatial reasoning and planning.
These models are trained on the entirety of human knowledge, but they lack the basic awareness of a toddler. They can discuss quantum physics, but can't remember which way they just turned.
It's reminiscent of the dot-com era's promise that we'd all be buying pet food online by 2001. Some predictions came true, many didn't, and the gap between hype and reality cost investors billions.
One economic theory that could explain this conundrum is the ‘J Curve Theory’. It has been used to describe trade balances, medicine, and even political science. The basic idea: things get worse before they get better, with an initial dip in results followed by a steep climb once the change beds in.
 
Applying it to AI is just a gut feeling on my part, but it aligns with some of the early pilots I've seen in enterprise-level AI rollouts.
Some companies may be too eager to cut workers before fully testing whether AI can deliver the revenue, let alone the butter.
And while that distinction might not matter for stock prices today, it will eventually matter for earnings.
For Aussie investors watching banks cut jobs while claiming AI efficiency gains, perhaps the question isn't whether AI will transform the workforce, but when — and whether current valuations have priced in a future that's still learning.
As one researcher noted while watching their robot spiral into existential dread:
‘We can't help but feel that the seed has been planted for physical AI to grow very quickly.’
Seeds, however, can take time to grow. And in the meantime, you might want to get your own butter.
If you want to read more about the intersection of AI, technology and crypto, CLICK HERE to subscribe to our FREE daily insights at Fat Tail Investment Research.
We cover market-leading topics like AI, gold, commodities, and macro trends.