Artificial General Intelligence: What Are We Investing In?

March 24, 2025

Audrey Mocle is the Deputy Director at Open MIC.

Turning Threads of Cognition by Hanna Barakat & Cambridge Diversity Fund / Better Images of AI

The companies developing so-called artificial general intelligence (AGI) and their backers have been making two big promises: this new technology will yield huge financial returns and solve all of the world’s problems. Both claims should be viewed with skepticism, especially by investors.

Let’s start with the first.

The concept of AGI, an AI system capable of rivaling human thinking, is a marketing tool and investment pitch for generative AI companies like OpenAI, Anthropic, and xAI. And it’s a successful one; AI dominated the venture narrative and dealmaking activity in 2024, capturing 46.4 percent of the year’s total deal value. Five Big Tech AI “hyperscalers”—Microsoft, Alphabet, Meta, Amazon, and Oracle—have poured an estimated $197 billion into AI infrastructure in 2024 alone.

This large deployment of venture capital and capital expenditures has been premised on two assumptions: first, generative AI products will be profitable, and second, building more AI infrastructure will produce better generative AI products — and eventually — AGI.

However, generative AI — deep learning models that can generate high-quality text, images, and other content — is not currently profitable and will struggle to become so. It is essentially a commodity product, and unlike other software, its (significant) costs increase as its user base grows. The advent of DeepSeek, whose reasoning model rivals ChatGPT, Claude, and Llama but reportedly cost a fraction as much to build, exemplifies the commodification of the generative AI market — and triggered a $1 trillion stock selloff.

Generative AI companies would need to deliver significant value to consumers to offset their high costs. But the technology's other critical problem is that it is unreliable. Its models are "addled by profound inaccuracies and bizarre hallucinations," causing public trust in AI systems to trend downward globally. It is no surprise that recent data shows that AI adoption in almost all US industries is low. And there is currently no principled solution to the problem of generative AI hallucinations.

Another issue is that AI scaling has plateaued. Simply using more computing power with existing model development strategies will not necessarily produce more useful and reliable generative AI. While industry researchers are considering an alternative approach, exploring it “will require sustaining eye-watering levels of spending.”

On top of this, to quote OpenAI co-founder Ilya Sutskever: "Data is the fossil fuel of AI," and we've used it all. Synthetic data (data generated by AI models themselves) has been suggested as a substitute, but there are debates as to its likely effectiveness. This shortage creates an incentive to harvest more private personal information and proprietary business data.

All of the above should give investors cause for concern. It also undermines the credibility of AGI’s second big promise: that it will solve humanity’s most pressing problems.

OpenAI CEO Sam Altman believes that “In the future, everyone’s lives can be better than anyone’s life is now.” He claims his technology will solve the problems of climate change and global poverty. Humanity’s greatest problems, however, are ones of political will and resource allocation, not a lack of intelligence or computing power. In many cases, their solutions are already understood. Even if AGI is on the horizon, we have little reassurance it will make the world a better place. After all, the tech sector has not lived up to its past utopian promises. 

If we look at what the pursuit of AGI is yielding today, it hardly projects a promising outlook for our future. The rapid build-out of data centers is driving up energy consumption and greenhouse gas emissions, exacerbating climate change. Data centers are also disrupting local communities, drying up water reserves, and emitting air pollution with billions of dollars worth of projected public health costs. The low-wage workers supporting the generative AI industry are being exploited, even as the technology itself is being marketed as a means of surveilling, controlling, and eventually replacing entire workforces. Generative AI is fueling disinformation and hate speech and is being deployed in active military conflicts to surveil and target citizens and operate autonomous weapons. And recent research has even suggested that incremental improvements in AI capabilities undermine human influence over large-scale systems like the economy, culture, and nation-states, leading to "the permanent disempowerment of humanity."

The question becomes: Should we endure these risks—financial and social—on the chance that AGI is around the corner and will usher in a new era of prosperity and harmony?

Past tech investment trends should make us cautious on both fronts. The advent of the internet, social media, and cryptocurrency all promised revolutionary social transformation and returns for investors. In each of these cases, the rewards ended up concentrated in the hands of a few, while the risks were borne by society as a whole.

We currently have the makings of another tech bubble. The generative AI sector is being propped up by venture money and a few hyperscalers. A small number of tech stocks account for an “uncommonly high” share of AI market capitalization. The major players have circular financial arrangements with capital coming in and out of the same companies: Microsoft has a 49 percent stake in OpenAI, OpenAI traded procurement of compute for an equity stake in CoreWeave, CoreWeave gets 60 percent of its revenue from Microsoft, and NVIDIA invested in CoreWeave and is renting back its own chips from the company. The astronomical valuations of these companies are based on the questionable notion that deep learning models will continue scaling and eventually yield AGI. Unless these companies can start to deliver, an AI bust will have ripple effects throughout the market. 

In other words, Big Tech is burning cash and increasing systems-level risk on the basis of thin promises — promises some of these companies are starting to temper. Microsoft has canceled some leases for the buildout of its US data center capacity. Apple delayed the release of its AI-enabled enhancements to Siri. Microsoft CEO Satya Nadella was recently quoted cautioning against mindlessly chasing after AGI and instead looking at whether AI is generating real-world value.

Of course, deep learning models do have value — even from a societal perspective. They are fantastic at statistical approximation and can support impressive scientific discoveries. A notable example is the AI tool that predicts protein structures and won Google DeepMind scientists a Nobel prize in chemistry. AI models are also identifying faster transportation routes for air travel, optimizing irrigation schedules in agriculture, and helping surgeons detect heart disease. There are likely many real-world applications for this technology that could benefit society — and investors. 

As Navneet Alang neatly put it: "It's not that one should simply resist technology; it can, after all, also have liberating effects. Rather, when big tech comes bearing gifts, you should probably look closely at what's in the box."

Investors are best positioned to undertake this examination, and some have begun to do so. Trillium Asset Management and Zevin Asset Management have filed a shareholder resolution at Alphabet calling on the firm to disclose additional information illustrating if and how it will meet its 2030 climate goals in light of its aggressive AI infrastructure expansion plans. As You Sow filed a similar resolution at Meta. Andrea Ranger, Trillium’s Director of Shareholder Advocacy, warns that “Given the economywide risk from unabated emissions, Alphabet’s climate ambitions and actions reflect the urgency of the moment. Enhancing transparency may give investors confidence that it addresses the full suite of risks it faces.”

Other questions investors should be asking include:

  1. What are AGI-adjacent company valuations based on? What are their plans for profitability?
  2. How is this new technology being governed internally? Who has oversight over its risks — financial and social — and how are they measuring and disclosing them to investors?
  3. What due diligence is being done on dual-use AGI? Is there end-user due diligence being undertaken?
  4. How are companies accounting for the carbon emissions of their AI infrastructure? Does this accounting include emissions from partnerships and equity investments?

Investors, particularly those with stakes in the Big Tech hyperscalers and limited partnerships in venture funds exposed to generative AI, should be skeptical of efforts to oversell the benefits of this new technology — financial or otherwise. They should push for the technology to be deployed toward more productive ends. And any actual benefits should be weighed against the very real costs for which tech companies should be held accountable.

 
