Why is the Cost of Developing Artificial Intelligence So High?

The push towards larger AI models, coupled with the increasing need for chips and data centers,
is driving up costs for tech companies.


After more than 18 months of intense focus on generative AI,
some of the biggest tech companies have shown that this technology can be a real revenue source.
However, it also represents a significant expense.
Microsoft and Google’s parent company, Alphabet,
both reported increases in cloud services revenue in their latest quarterly results,
with customers spending more on AI services.
Meta Platforms indicated that its AI efforts have enhanced user engagement and ad targeting,
although it is still far from making substantial profits from this technology.


Massive Investments for Early Gains

The three companies have spent billions on AI development to achieve these early gains and plan to increase these investments.
On April 25, Microsoft announced that its capital expenditures reached $14 billion in the last quarter,
a 79% increase over the same quarter of the previous year, driven in part by AI infrastructure investments.
Alphabet spent $12 billion during the same quarter, a 91% increase from the previous year,
and expects spending to remain at the same level or higher for the rest of the year,
focusing on AI opportunities.
Meanwhile, Meta raised its investment estimates for the year,
now projecting capital expenditures of between $35 billion and $40 billion,
a 42% increase at the upper end of the range.
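The percentage increases above can be worked backwards to estimate the prior-year figures each company is growing from. This is a rough sketch: the implied baselines are derived from the stated growth rates, not reported numbers.

```python
# Back-of-the-envelope check of the capex figures quoted above.
# The implied prior-year baselines are derived from the stated
# percentage increases; they are estimates, not reported numbers.

def implied_baseline(current: float, pct_increase: float) -> float:
    """Work backwards from a figure and its year-over-year growth rate."""
    return current / (1 + pct_increase / 100)

# Microsoft: $14B quarterly capex, up 79% year over year
msft_prior = implied_baseline(14.0, 79)   # ~ $7.8B
# Alphabet: $12B quarterly capex, up 91%
goog_prior = implied_baseline(12.0, 91)   # ~ $6.3B
# Meta: $40B upper-end annual guidance, a 42% increase
meta_prior = implied_baseline(40.0, 42)   # ~ $28.2B

print(f"Implied prior figures ($B): MSFT {msft_prior:.1f}, "
      f"GOOG {goog_prior:.1f}, META {meta_prior:.1f}")
```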


Rising AI Costs: Bigger Models and Increasing Demand

The rising cost of AI has surprised some investors,
with Meta’s stock falling in response to spending projections paired with slower-than-expected sales growth.
However, within the tech industry, it has long been known that AI costs would rise.
This is due to two main reasons: AI models are becoming larger and more expensive to develop,
and the global demand for AI services necessitates building more data centers to support them.


Large Language Models Require Massive Investments

Today’s most popular AI products, like OpenAI’s ChatGPT,
rely on large language models: systems trained on massive amounts of data to produce the best possible responses to user queries.
Many leading AI companies are betting that the path to more advanced AI,
and possibly systems capable of surpassing humans in many tasks, lies in making these large language models even bigger. This requires more data, computational power, and longer training periods.
Dario Amodei, CEO of Anthropic, a competitor to OpenAI, has said that current AI models cost around $100 million to train, that future models could cost $1 billion, and that models arriving in 2025 and 2026 may cost between $5 billion and $10 billion.


Chip and Computing Costs: Major Investments in Advanced Technology

A significant portion of the cost is tied to chips: not conventional CPUs, but powerful GPUs that can process vast amounts of data at high speed, such as Nvidia’s H100, which sells for about $30,000.
Major tech companies need many of these chips; Meta’s CEO Mark Zuckerberg stated that his company plans to acquire 350,000 H100 chips by the end of the year to support its AI research.
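Combining the two figures above gives a sense of the scale involved. This is only an order-of-magnitude estimate: actual prices vary by configuration and volume discounts.

```python
# Rough estimate of Meta's H100 outlay, combining the two figures
# quoted above (~$30,000 per chip, 350,000 chips). List price varies
# by configuration and volume, so this is an order-of-magnitude figure.

H100_UNIT_PRICE = 30_000   # approximate price per chip, USD
CHIP_COUNT = 350_000       # Meta's stated target for the year

total = H100_UNIT_PRICE * CHIP_COUNT
print(f"Estimated spend: ${total / 1e9:.1f} billion")  # ~ $10.5 billion
```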


Renting Chips: Another Expensive Option

Companies can avoid buying physical chips by renting them, but this is also costly.
Renting a set of Nvidia H100 chips costs about $100 per hour.
Nvidia has revealed a new processor design called Blackwell,
which handles large language models far more efficiently and is expected to be priced similarly to the Hopper chip line, which includes the H100.
Nvidia stated that it would take around 2,000 Blackwell GPUs to train an AI model with 1.8 trillion parameters,
the estimated size of OpenAI’s GPT-4. By comparison, the same task would take 8,000 Hopper GPUs.
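The GPU counts above imply a fourfold reduction in chips needed for the same training job. Since per-chip pricing is expected to be similar, the hardware cost of that job would shrink by roughly the same factor; this sketch simply makes the ratio explicit.

```python
# Chip counts quoted by Nvidia for training a ~1.8T-parameter model.
HOPPER_GPUS = 8_000     # older Hopper line (includes the H100)
BLACKWELL_GPUS = 2_000  # same job on the newer Blackwell design

ratio = HOPPER_GPUS / BLACKWELL_GPUS
print(f"Blackwell needs {ratio:.0f}x fewer GPUs for the same model")  # 4x
```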


Conclusion

Ultimately, the tech industry is pushing towards building larger and more advanced AI models,
significantly increasing costs.
Despite the high expenses, companies continue to invest heavily in developing this technology to achieve potential future gains.
