AI factories – are they built on Firm(us) ground?
Mention the letters ‘AI’ to anyone and they will either roll their eyes or start salivating with interest – those seem to be the two extreme camps in which investors currently find themselves.
There is no question that AI is dominating news globally, and with that naturally comes hype. Every additional headline brings with it a warning of an ever-inflating bubble that will one day burst.
The so-called ‘AI industrial revolution’ is creating first-, second- and third-order derivatives, which in turn are spawning brand new industries and businesses, many of which appear on the surface to be overnight successes. In Australia, one such company that has been in the headlines of late is the private company Firmus Grid Limited (Firmus).
As has been publicly announced, Firmus completed an AUD$330 million capital raising to build an ‘AI Factory’ in Tasmania. NVIDIA Corp. (NDQ: NVDA), the world’s most valuable company, was a participant in this capital raising event.
According to both the Australian Financial Review (Street Talk) and The Australian (Dataroom), Firmus may be looking to do a subsequent capital raising shortly, whilst also aiming to conduct an IPO in CY2026.
A key part of our investment process at NAOS is spending a considerable amount of time understanding the dynamics that underpin industries in which potential investments operate.
Below, we outline some of the fundamental concepts underpinning the AI industry as well as Firmus specifically. Ultimately, a number of these dynamics will have a significant influence on the success (or otherwise) of Firmus and many other companies globally.
What Is an AI Token?
Beneath the surface of every AI application are algorithms that churn through data in their own language, based on a vocabulary of ‘tokens’. Tokens are tiny units of data that come from breaking down bigger chunks of information.
AI models process tokens to learn the relationships between them and unlock capabilities including prediction, generation and reasoning. This process of tokenisation is a crucial step in preparing data for further processing.
An AI model processes these input tokens, generates its response as tokens and then translates them into the user’s expected format. The faster tokens can be processed, the faster models can learn and respond. Importantly, tokens are used both when you ask a question and when the AI answers, making it a two-way process.
As AI becomes more advanced, and with the introduction of large language models (LLMs) such as ChatGPT that incorporate reasoning, the number of tokens required for AI processing and responses has increased exponentially. As a rule of thumb, one token equates to roughly 3-4 characters of English text, or about 0.75 words. Some AI queries now involve millions of tokens.
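To make the rule of thumb concrete, below is a minimal sketch using the open-source tiktoken tokeniser (our choice of tooling for illustration; any modern tokeniser would make the same point), comparing an actual token count with the character- and word-based approximations.

```python
# Illustrative sketch only: counting tokens with the open-source tiktoken library.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # tokeniser used by several recent LLMs

text = "AI factories produce tokens, the tiny units of data behind every model response."
token_ids = encoding.encode(text)

print(f"Characters:                          {len(text)}")
print(f"Actual tokens:                       {len(token_ids)}")
print(f"Estimate at ~4 characters per token: {len(text) / 4:.0f}")
print(f"Estimate at ~0.75 words per token:   {len(text.split()) / 0.75:.0f}")
```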
AI Training Vs. AI Inference
There are two fundamental phases of an AI model – training and inferencing.
- Phase 1 = AI training:
- Training is the process of teaching an AI model by feeding it large datasets to recognise patterns and learn.
- The AI training phase requires massive computational power, most notably in the form of NVIDIA GPUs, for this process to occur effectively.
- Training a model from scratch often spans days to months and incurs substantial costs. Furthermore, the computing power necessary to train new iterations of a cutting-edge model (e.g. ChatGPT-5) is rapidly increasing.
- Phase 2 = AI Inferencing:
- This phase is where the trained AI model (from phase 1) applies its learned knowledge to new, real-world data to produce output, such as predictions, classifications, or decisions.
- For every time a model is trained, it may run millions upon millions of inferences before it is ever trained again. Every time a model predicts the next word, it performs inference. Inference is the true high-volume activity in this new age of generative AI applications and the stage at which end users typically interact with AI, as the sketch below illustrates.
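To illustrate the distinction, below is a minimal PyTorch-style sketch (our own illustration, not code from Firmus or NVIDIA): a training step runs a forward pass, a backward pass and a weight update, whereas an inference call is a forward pass only, with gradients disabled, and is repeated at enormous volume.

```python
# Illustrative sketch of the two phases, using PyTorch as an assumed framework.
import torch
import torch.nn as nn

model = nn.Linear(128, 10)                               # stand-in for a real model
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Phase 1 - training step: forward pass, backward pass (gradients) and weight update.
inputs, labels = torch.randn(32, 128), torch.randint(0, 10, (32,))
optimiser.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()            # the backward pass is the expensive extra work of training
optimiser.step()

# Phase 2 - inference: forward pass only, no gradients, run millions of times.
with torch.no_grad():
    prediction = model(torch.randn(1, 128)).argmax(dim=-1)
```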
What Is an AI Factory?
To understand AI factories, it's helpful to first break down data centres into two main parts: the physical "shell" (like the secure building and power connections) and the internal IT resources (compute power and networking). In this space, some companies focus only on building and leasing the physical infrastructure as landlords, others specialise in designing and running the internal tech, and some handle both.
Firmus, for example, constructs and operates purpose-built AI factories. These are next-generation data centres optimised for training, fine-tuning, and deploying AI models at massive scale—particularly large language models (LLMs).
Traditional data centres, by contrast, are often inefficient for AI workloads. They're designed for general computing tasks like cloud hosting, email, or web traffic, not the intense, specialised demands of AI processing.
An AI Factory is purpose-built to:
- Host thousands of high-performance graphics processing units (GPUs), predominantly NVIDIA GPUs;
- Deliver efficient power and cooling (often cooling GPUs in liquid); and
- Operate at ultra-high energy density and uptime.
In essence, it is a data centre which houses a ‘massive farm’ of NVIDIA GPUs, acting as a ‘manufacturing plant’ for producing AI tokens used in training and running AI models. AI factories enable faster, more cost-effective AI development and iteration compared to traditional data centres, which aren’t built for these intensive AI workloads.
Firmus Operations – Singapore
In June 2023, Firmus partnered with ST Telemedia Global Data Centres (STT) to build an AI factory within the existing STT Singapore data centres. Firmus is the owner and operator of the internal IT network and compute, whilst STT is the owner of the physical data centre. STT itself is a major data centre company within Singapore and globally, with >95 data centres, and is owned by Temasek Holdings, the major sovereign wealth fund of Singapore. Firmus has deployed ~4,000 NVIDIA GPUs across two of these STT data centre sites.
Firmus is generating revenue and earnings from large-scale enterprise and government-related customers across these Singaporean operations. The Singapore facilities received the Asia-Pacific Data Centre Project of the Year award for their advanced design, exceptional energy efficiency and cost savings (~30% lower than the current status quo) through the use of liquid cooling, meaning the NVIDIA GPUs are housed within a cooling liquid to improve power efficiency and lower cost. This is despite Singapore having an equatorial climate (i.e. hot, so requiring significant cooling).
Firmus Operations – Tasmania
The abovementioned AUD$330 million capital raising conducted by Firmus is a key input into the development of Project Southgate, an AI factory in Launceston, Tasmania. Once built, this two-stage project will be the largest AI-specific facility in Australia. It will be designed, built, owned and operated by Firmus, which differs from the Singaporean operations, where Firmus owns and operates only the internal IT infrastructure, not the data centre shell.
Source: Firmus website
Project Southgate Tasmania will deliver a combined ~90 megawatts (MW) of AI infrastructure by 2026 in its first stage, with 44MW delivered under Stage 1a and Stage 1b doubling capacity to 90MW. Stage 1a commenced construction during 2HCY25, and we expect it to be completed by mid-CY26. A further 300MW second stage is planned to follow, subject to final approvals.
Site selection is a critical input into the success or otherwise of any data centre. The Launceston site for Firmus has all the necessary approvals, sufficient existing power supply, appropriate redundancies in place and a supportive government. Furthermore, the cooler climate of Tasmania and its predominantly renewable energy supply are additional physical advantages that have contributed to the creation of the Tasmanian ‘AI Factory Zone’.
“Tasmania is set to lead the world in sustainable artificial intelligence with the creation of a world-first AI Factory Zone in Northern Tasmania” – Jeremy Rockliff, Tasmanian Premier
Firmus Strategic Alliance – Expanding Project Southgate
Firmus announced in October 2025 a strategic partnership with CDC Data Centres (CDC) and NVIDIA to expand Project Southgate onto the mainland of Australia. For context, CDC (formerly Canberra Data Centres) is a leading developer, owner and operator of highly secure, sovereign data centres across Australia and New Zealand. CDC is partially owned by listed infrastructure company Infratil Ltd (ASX: IFT), in addition to The Future Fund and the Commonwealth Super Corp. According to public filings, its most recent valuation in September 2025 was AUD$13.6 billion.
This strategic partnership will see Firmus take capacity within CDC’s current and future physical data centre infrastructure. The approach draws on the strengths of both companies: Firmus designs, deploys and operates its AI Factory (as it has done in Singapore), whilst CDC gains a potentially substantial new customer for its soon-to-be-built Melbourne facility, with future locations in Sydney, Canberra and Perth. NVIDIA also makes a material contribution to this alliance, albeit the details of its involvement do not yet appear to be publicly known. Our best guess, based on similar recent deals seen globally, is that NVIDIA will supply the GPUs as well as act as some form of strategic partner, which could go a long way towards de-risking the revenue profile of the initial operations.
In our view, there are several reasons why this strategic alliance makes sense for Firmus, including that it:
- provides an avenue to scale its AI Factory operations far more substantially;
- removes the capital expenditure associated with building data centre infrastructure and minimises the types of risk associated with building said infrastructure (e.g. cost of land, access to power, regulatory approvals);
- significantly reduces the time frame to project execution and enhances its ability to take full advantage of current industry tailwinds;
- should lead to greater certainty around timing of future revenue streams; and
- allows Firmus to focus on its core competency: building and operating the IT internals of AI factories in a highly efficient manner.
Whilst this strategic alliance may have seemed hard to fathom for Firmus even a few months ago, and its sheer size is remarkable in a domestic context, it has precedent globally, which we outline further on in this report. According to the numbers released, this strategic alliance (which also includes Firmus’ Tasmanian operations) has the ability to scale from ~150MW in 2026 to 1.6 gigawatts (GW) through 2028 (i.e. ~10x bigger).
Industry Tailwinds
The AI sector faces a stark imbalance between surging demand and limited supply—not just for AI tokens, but also for the underlying infrastructure needed to produce them.
By the end of 2024, the time spent generating content via generative and reasoning AI models had grown by more than 22x compared to the previous year. Brookfield, the world’s largest infrastructure manager, forecasts ~USD$2 trillion in global spending on AI factory development by decade's end.
Big tech companies are heavily investing in AI capital expenditures to build the necessary data centres and infrastructure to power an emerging, generational technology shift and meet soaring demand for AI services. This large-scale spending is crucial for companies like Microsoft, Google, and Meta to build a competitive advantage, as AI is becoming a primary growth engine for their cloud, advertising, and search businesses, which in theory should lead to increased revenues and profits.
The soaring demand for AI stems not only from a growing user base but also from increasingly complex and sophisticated applications. Measured in AI tokens—the universal currency of AI—US technology research firm Tirias Research projects a ~115x surge in token usage by 2030. These tokens must be generated, processed, and monetised, requiring substantial global capital investment. From a NAOS perspective, the industry tailwinds are strong but that alone does not guarantee success at Firmus, nor an appropriate return on investment.
Source: Tirias Research
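As a rough sense-check of what a ~115x increase implies, the sketch below (our own arithmetic, assuming the projection spans roughly the six years from 2024 to 2030) converts it into an implied annual growth rate.

```python
# Hypothetical back-of-the-envelope: implied annual growth if token usage rises
# ~115x over an assumed six-year window (the time span is our assumption).
projected_multiple = 115
years = 6

annual_growth_factor = projected_multiple ** (1 / years)
print(f"Implied growth: ~{annual_growth_factor:.1f}x per year "
      f"(~{(annual_growth_factor - 1):.0%} annual growth)")
```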
Firmus Competitive Positioning
We classify the competitive positioning of Firmus into three categories outlined below.
- Physical Advantages
On a standalone basis, the site-specific physical location in Tasmania has numerous advantages, including a relatively low cost of land and labour, sufficient primary and redundant power, a supportive local government, and a climate suited to lowering power and cooling costs.
- Sovereign Advantages
Australia is one of only ~7 countries/regions outside the US that have been granted full, unvetted access by the Trump administration to the most advanced NVIDIA chips available. With demand for NVIDIA GPUs far outstripping supply, being a customer in a tier 1 nation is a major competitive advantage, not only with respect to purchasing NVIDIA GPUs but also for attracting global customers, many of whom are looking for reliable supply. The strategic alliance with CDC is a major benefit when it comes to reliability and data protection for future customers, given CDC’s positioning in the Australian and New Zealand marketplaces as a top-tier data centre company.
- Operational Advantages
Firmus has developed internal engineering expertise with respect to the way in which the company constructs and operates its direct-to-chip cooling processes for NVIDIA GPUs. This has delivered tangible benefits in the form of lower-cost production of AI tokens. This operational excellence has positioned Firmus as one of the few global NVIDIA partners and, potentially, the sole partner in the APAC region.
Source: NVIDIA
The robust partnership between Firmus and NVIDIA—where NVIDIA serves as GPU supplier, operational collaborator, and equity holder—mirrors successful models in other regions. Top-tier firms similar to Firmus have thrived under this structure. We believe this model succeeds only when Firmus (i.e. David) delivers clear value to NVIDIA (i.e. Goliath), creating mutual commercial benefits. The recent Project Southgate Strategic Alliance announcement reinforces our confidence in this approach.
Furthermore, we have seen recent examples in other geographies, namely with CoreWeave Inc. (NDQ: CRWV), NScale (unlisted) and Lambda (unlisted), whereby NVIDIA appears to have effectively signed guaranteed offtake/backstop agreements with these companies, essentially underwriting their revenue on long-term deals and acting as a significant risk mitigator. We are unclear whether this is the case with Firmus in this strategic alliance but, at the very least, we believe these examples evidence the strength of the relationship NVIDIA has with certain partners.
Source: Bloomberg, AFR, Koyfin, Data Centre Dynamics
Return on Investment
Will the AI CAPEX boom deliver strong future returns on investment (ROI)? That is perhaps the trillion(s) dollar question. One key factor is the faster timeline for AI factories to reach full utilisation of contracted revenue compared to traditional data centres. We note the below are illustrative examples only and should not be relied upon for any specific company.
AI training works best when all available computing power is used at once, rather than gradually. This means customers pay for the full capacity upfront, generating revenue and earnings for AI factories much faster than traditional data centres. Quicker revenue realisation provides greater certainty for AI factory funding structures.
Source: Wilsons, NAOS
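To illustrate the full-utilisation point above, below is a simple sketch with hypothetical inputs (the capacity, pricing and ramp profiles are our own assumptions, not Firmus or industry figures) comparing the revenue ramp of an AI factory contracted in full at launch with a traditional data centre that fills gradually.

```python
# Hypothetical illustration: revenue ramp of an AI factory (fully utilised from
# day one) versus a traditional data centre that fills up gradually over years.
capacity_mw = 90                     # assumed facility size
revenue_per_mw_per_year = 10.0       # assumed $m of revenue per MW per year

ai_factory_utilisation  = [1.00, 1.00, 1.00, 1.00]   # contracted in full at launch
traditional_utilisation = [0.25, 0.50, 0.75, 1.00]   # gradual tenant fill-up

for year, (ai_u, dc_u) in enumerate(
        zip(ai_factory_utilisation, traditional_utilisation), start=1):
    ai_revenue = capacity_mw * revenue_per_mw_per_year * ai_u
    dc_revenue = capacity_mw * revenue_per_mw_per_year * dc_u
    print(f"Year {year}: AI factory ${ai_revenue:,.0f}m vs traditional ${dc_revenue:,.0f}m")
```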
As the saying goes, history doesn’t repeat, but it rhymes. Consider Amazon Web Services (AWS), the cloud division of Amazon.com Inc (NDQ: AMZN). It took ~13 years to turn a profit, yet in CY25, AWS is achieving a ~26% return on incremental invested capital while still allocating ~$100 billion to capex. Clearly, Amazon anticipates significant future returns on this investment. As an additional datapoint, on its 3QCY25 results call in late October, Amazon CEO Andy Jassy noted that AWS growth has reaccelerated to levels not seen since 2022, which, considering the law of (very) large numbers within AWS, highlights the underlying demand it is experiencing.
“As fast as we are adding [compute] capacity right now, we’re monetising it.” – Andy Jassy, CEO, Amazon.com Inc.
We have also included the same metrics table for the cloud division of Alphabet Inc (NDQ: GOOG), known as Google Cloud, which again highlights declining returns on incremental capital alongside significant capital expenditure (Microsoft does not provide enough datapoints on its cloud division, Azure, to undertake the same analysis). The point we are highlighting here is that despite significant upfront capital spend, adequate to strong returns are achievable. Industry demand suggests ongoing reinvestment over the next decade, but this doesn’t preclude value creation in the meantime.
Amazon Inc (NDQ: AMZN) AWS Divisional Financials & Returns
Source: Company Financials, NAOS
Alphabet Inc (NDQ: GOOG) Google Cloud Divisional Financials & Returns
Source: Company Financials, NAOS
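For clarity, the return on incremental invested capital metric referenced above can be sketched as the change in operating income divided by the change in invested capital; the figures below are placeholders chosen to reproduce a ~26% outcome, not actual AWS or Google Cloud disclosures.

```python
# Our working definition of return on incremental invested capital (ROIIC).
# All inputs below are hypothetical placeholders, not company-reported figures.
def roiic(op_income_now: float, op_income_prior: float,
          invested_capital_now: float, invested_capital_prior: float) -> float:
    """Change in operating income divided by change in invested capital."""
    return (op_income_now - op_income_prior) / (invested_capital_now - invested_capital_prior)

# Example: operating income up $13bn on $50bn of additional invested capital -> ~26%.
print(f"ROIIC: {roiic(43.0, 30.0, 250.0, 200.0):.0%}")
```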
Another factor contributing to ROI will be the efficiency with which an AI Factory can generate AI tokens. The simple logic is that the more cost-effectively an output can be produced, the higher the operating margins may become. We have seen recent commentary from global AI cloud provider Oracle Corp. (NYSE: ORCL) referring to a gross margin profile, post equipment depreciation, of 30-40% in this operational area.
Similar to how Firmus will be positioned under the strategic alliance, Oracle does not build the physical infrastructure; rather, it leases space in purpose-built sites, pays rent to the landlords and generates revenue from its AI Factory/cloud (compute and networking infrastructure).
In a recent investor day, ORCL referenced margin improvement over time as it becomes more efficient in its operations. Bringing this back to Firmus, the evidence suggests the company has already proven itself to be a very low-cost, efficient AI Factory operator through its Singapore operations, which, all else being equal, should deliver higher gross margins than comparable industry norms. We will see whether this plays out over time as Firmus further scales its operations.
A critical factor in the ROI for AI investments is the ability to repurpose GPUs used for AI training for AI inferencing with little to no additional cost. Our understanding is that there is no incremental capital expenditure required to reposition GPUs from training to inference. NVIDIA GPUs, the leaders in AI training, are equally effective for inferencing, despite more competition in that market. If GPUs can serve ‘two lives’, their returns could extend well beyond the typical ~5-6-year depreciation period, potentially to over 10 years, with the later years being highly profitable. As with any economic market, pricing will depend on supply and demand. Given the forward projections are for the AI inferencing market to become the largest part of the AI market in time, we believe pricing will remain stable to strong at least over the medium term, if not longer.
“The amount of inference compute needed is already 100x more… And that’s just the beginning.” – Jensen Huang, CEO, NVIDIA Corp.
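To tie the margin and depreciation points above together, below is a simple sketch with hypothetical inputs (fleet cost, revenue and operating costs are our own assumptions, not Firmus, Oracle or NVIDIA figures) showing why the years after the depreciation schedule ends can become so profitable if the GPUs keep earning inference revenue.

```python
# Hypothetical illustration: a GPU fleet depreciated straight-line over 5 years
# that keeps earning inference revenue for 10 years. All figures are assumptions.
gpu_fleet_cost   = 120.0   # $m upfront spend on GPUs
useful_life      = 5       # years over which the fleet is depreciated
annual_revenue   = 80.0    # $m per year from training, then inference workloads
annual_cash_cost = 30.0    # $m per year of power, cooling and operations

for year in range(1, 11):
    depreciation = gpu_fleet_cost / useful_life if year <= useful_life else 0.0
    gross_profit = annual_revenue - annual_cash_cost - depreciation
    margin = gross_profit / annual_revenue
    print(f"Year {year:2d}: gross profit ${gross_profit:.0f}m (margin {margin:.0%})")
```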
We believe that within the Firmus Singapore operations there are customers conducting AI training as well as customers using Firmus for AI inferencing purposes, meaning there is already a proof point for this occurring. Given the long-term nature of customer contracts, the very strong demand backdrop and the strong relationship with NVIDIA, the company should be well placed to generate a compelling ROI on its deployed capital for the foreseeable future.
Risks
Like any investment in an emerging company, Firmus is not without risk, particularly given it is currently unlisted. Our view is that some risks, such as execution challenges during construction, can be mitigated and are likely to diminish significantly within the next ~6 months.
There are other risks, namely technological (given the rate of technological advancement that has historically occurred across different iterations of NVIDIA GPUs), which Firmus and all their peers will need to manage as the ‘backbone’ of AI continues to evolve over the coming years.
As is the case with many data centres, contractual lengths with customers are typically measured in years. Therefore, having an offering that attracts premium pricing and long-term contractual periods gives the best chance of achieving a healthy revenue profile.
We have listed below a number of notable risks that investors should be aware of in relation to Firmus or businesses with similar business models:
· The AI business model has yet to be proven profitable – With many, if not most, artificial intelligence businesses remaining in private structures (such as OpenAI), it is hard to gauge just how financially successful these businesses are in the early years of the AI cycle. Many commentators highlight that, given the extraordinary cost of building AI models, the revenue model ultimately required to support this investment is yet to be proven by any one business.
· The (in some cases) circular business model of NVIDIA – More recently, NVIDIA has made several investments into users of NVIDIA GPUs (i.e. its own customers). This culminated with its investment of up to US$100 billion into OpenAI. These funds will be used by OpenAI to build data centres with a combined >10 gigawatts of power. These data centres will ultimately house many thousands of NVIDIA GPUs, which in turn boosts NVIDIA’s financial performance. The image below highlights the extent of this circular business model amongst select parties.
Source: Bloomberg
· Technological change and the effect on demand for NVIDIA GPUs – Currently, and as has been the case for the last few years, NVIDIA has a significant competitive advantage over its GPU peers. Whether or not this will continue is unknown, but ultimately for a business such as Firmus, whose AI Factories are filled with NVIDIA GPUs, it is imperative that those GPUs remain highly valued by the market in general.
· Will the long-term benefits of AI drive structural demand for processing power? Continued demand for computing power will only occur if the use of AI services delivers a positive outcome for the end user.
Conclusion
Time will tell whether we are in ‘one of the greatest financial bubbles of all time’ or whether we are simply in the build-out phase of the infrastructure underpinning the ‘5th industrial revolution’. As with any investment, picking the victors from the failures is the name of the game, and whilst it might all seem rosy at the moment, the strength of the foundations underpinning any particular investment is as important as ever in the world of AI. Perhaps the only thing we can guarantee is that there will be many business failures along the way.
Despite this, we at NAOS believe that following the demand signals can often provide a good barometer for at least the medium-term, if not longer-term, health of any industry. In the AI industry, the consensus signals from many of the Magnificent 7 companies, which have some of the world’s healthiest balance sheets, indicate that demand will remain over the coming years. Those same companies have the financial wherewithal to underpin this demand with supply and the incentives to see it succeed.
“There’s definitely a possibility, at least empirically, based on past large infrastructure buildouts and how they led to bubbles, that something like that would happen here… If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously. But what I’d say is I actually think the risk is higher on the other side [opportunity cost of not spending those dollars].” - Mark Zuckerberg, CEO, Meta Platforms Inc.