Sam Altman Outlines OpenAI’s Long-Term Bet on Profitability

Sam Altman explains OpenAI’s long-term path to profitability, arguing that rising losses reflect aggressive investment in AI training and compute capacity as the company bets on sustained revenue growth from consumers and enterprises.

OpenAI CEO Sam Altman has offered a detailed explanation of the company’s approach to profitability, arguing that heavy losses today are the result of deliberate investment in large-scale model training rather than a lack of demand for its products.

Speaking on the Big Technology Podcast, Altman addressed questions about OpenAI’s financial trajectory during a discussion focused on revenue growth, compute spending, and the company’s long-term business model. The exchange, which begins around the 36-minute mark of the interview, centered on reports that OpenAI could incur significant cumulative losses before reaching profitability later in the decade.

Altman acknowledged that OpenAI's compute spending currently grows faster than its revenue, but said this imbalance reflects a strategic choice. According to Altman, OpenAI would reach profitability much sooner if it slowed investment in training increasingly large and capable models. Instead, the company has chosen to scale its training efforts aggressively, accepting near-term losses in pursuit of long-term market leadership.

He argued that concern about OpenAI’s spending would be warranted only under a specific condition: if the company were to reach a point where it had substantial computing capacity that could not be monetized profitably. As long as demand continues to absorb available compute, Altman suggested, high spending remains justified.

During the interview, the host pressed Altman on widely reported figures suggesting that OpenAI could lose more than $100 billion before becoming profitable, while also committing to long-term infrastructure investments reportedly measured in the trillions of dollars. The question focused on when and how revenue growth would eventually overtake costs.

Altman responded that the company expects the economics to shift as inference—running trained models for users—becomes a larger share of overall compute usage. While training is capital-intensive and front-loaded, inference generates ongoing revenue. Over time, Altman said, inference demand is expected to grow large enough to absorb and surpass training costs.
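The crossover Altman describes can be illustrated with a toy breakeven model. All figures below are hypothetical placeholders for illustration only (neither Altman nor OpenAI has disclosed these numbers): a fixed annual training spend against inference margin that compounds with adoption, finding the first year cumulative margin overtakes cumulative spend.

```python
def breakeven_year(training_cost_per_year, first_year_margin, growth, horizon):
    """Return the first year in which cumulative inference margin
    exceeds cumulative training spend, or None within the horizon.
    All inputs are hypothetical illustration values, not OpenAI figures."""
    cum_cost = 0.0
    cum_margin = 0.0
    margin = first_year_margin
    for year in range(1, horizon + 1):
        cum_cost += training_cost_per_year   # training spend stays front-loaded and flat
        cum_margin += margin                 # inference margin compounds with adoption
        margin *= 1 + growth
        if cum_margin >= cum_cost:
            return year
    return None

# Hypothetical inputs (in $B): $40/yr training spend, $10 first-year
# inference margin, 60% annual growth in inference revenue.
print(breakeven_year(40.0, 10.0, 0.60, 15))  # → 6
```

The sketch captures the shape of the argument rather than any real forecast: a flat, capital-intensive cost line is eventually overtaken by compounding usage-driven revenue, provided the growth rate holds.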

He also emphasized that OpenAI remains constrained by limited compute availability rather than excess capacity. According to Altman, insufficient compute directly limits revenue, meaning that additional capacity can be monetized almost immediately through consumer subscriptions, enterprise offerings, and API usage. He added that OpenAI expects further gains in efficiency, reducing the cost per unit of computation as hardware and software optimizations improve.

Altman pointed to strong growth across consumer products such as ChatGPT, expanding enterprise adoption, and future products that have yet to launch. In his view, computing power is the core input enabling all of these revenue streams, making continued investment essential to the company’s growth.

When asked directly whether OpenAI expects revenue from consumer subscriptions, enterprise services, and API access to eventually cover its compute investments, Altman confirmed that this remains the plan.

Taken together, Altman’s comments frame OpenAI’s financial strategy around a clear benchmark: profitability is not limited by demand, but by the company’s ability to build and deploy computing capacity. The risk, as he defines it, would emerge only if that capacity could no longer be sold at a profitable rate.

For now, OpenAI’s path forward rests on a single assumption—that revenue growth can keep pace with, and eventually exceed, the scale of its computing investments as AI adoption continues to expand across consumers and businesses.