With the world’s biggest companies racing to build the most sophisticated artificial intelligence, questions about how far and how fast the technology can be scaled are coming into focus, according to Goldman Sachs Research.
The advent of generative AI has created a surge of excitement about the future of the technology. Unlike other types of AI, gen AI can create its own outputs in natural language. Because it's “multimodal,” it can also generate responses in a range of formats, including text, numbers, video, and sound.
Large tech companies are citing initial use cases, and some enterprise players see scope for major productivity gains in certain areas such as code writing, which could make the most valuable employees even more productive.
However, the exact size of future economic benefits is the subject of debate. The cost of building gen AI at scale is extremely high, with big tech companies investing hundreds of billions of dollars — although the cost per query has come down considerably since the technology first launched.
At the inaugural Goldman Sachs European Virtual AI and Semis Symposium, 20 speakers — from CEOs and technologists to macroeconomists — came together to assess the prospects for AI. In particular, they discussed key topics including use cases, the total addressable market, challenges for further development and adoption of the technology, and implications for European hardware and semiconductors.
We talked to Alexander Duval, head of Europe Tech Hardware & Semiconductors in Goldman Sachs Research, about the main findings of the symposium.
What role are the large tech companies known as “hyperscalers” playing in the development of AI?
So far, US tech giants have been at the vanguard of generative AI use. They've been developing large language models, which they can use both in their existing businesses and, potentially, to create new business tools.
The symposium heard how the technology is generating a quarter of one hyperscaler's code and saving meaningful engineering time for another. Broader use cases in the real economy include predicting the structure of proteins, and even “de-aging” an actor's appearance in a movie.
Those are striking examples. But it's worth bearing in mind that hyperscalers have been spending hundreds of billions on this. Together, they have spent around $200 billion on AI this year, and that will probably increase to $250 billion next year. Developing large language models can cost tens or hundreds of millions of dollars. That's why, at this symposium, we really wanted to look at whether it's feasible, or desirable, for the technology to scale to address many more use cases. These hyperscalers have a lot of free cash flow, and we are starting to see examples of use cases. But a number of industry observers believe that, at some point, we need to see a return on investment across a broader array of use cases and verticals.
Have any key use cases emerged for artificial intelligence in the broader economy?
Because generative AI is multimodal, it could theoretically apply to multiple fields: customer support, coding, medical analysis, marketing, and many others. Given the very significant level of investment in AI, the aggregate benefit of such use cases will need to be demonstrated in order to justify a solid return on that investment. That being said, some participants at the symposium said it might not be imperative for AI to scale in one particular area — in other words, a single key use case may not be necessary — as long as the economic benefits from all the different use cases are sufficient in aggregate. You could see efficiency gains across the board.
Some speakers pointed out that there are a number of examples of very large, successful tech businesses where you could argue there wasn't a key use case at first. Take ride-hailing apps. There was already a perfectly good solution: walking to the end of the street and hailing a taxi in person. But by leveraging software and network effects, you could create very large economic benefits, as well as benefits to consumers.
Is there still room for smaller technology companies to compete?
Some speakers at the symposium had interesting insights on small language models. At first, technology players were focused on building large language models — and those are still important. But there is also a trend of developing smaller and more efficient models.
Small language models are easier to fine-tune, may have lower energy consumption, and can be customized to meet an enterprise's specific requirements in a given domain (such as law, medicine, or finance). They're also generally less expensive to run, because they're smaller and use less power.
Large language models will remain important, and tech behemoths have the resources, free cash flow, and balance sheets to drive the development of those. But speakers pointed out that there will be other, perhaps smaller players in the ecosystem who can innovate and develop small language models that will sit on top of those larger models. Some speakers thought this presented an opportunity for small companies to drive innovation at the top of the stack and highlighted the large number of companies being founded daily to do so.
Could the high cost of generative AI hold back development?
Training LLMs requires very high levels of capital investment. You need to build a data center, you need all the semiconductors — that includes both GPUs and memory chips — and you need hardware, power, and utilities. Speakers mentioned that the cost per query in some domains is multiple times higher than for a regular search algorithm.
That said, there has been steady progress on reducing costs. The cost of a generative AI query at some large tech companies has come down significantly since the launch of the technology, and one gen AI company has said that revenue generated by the latest generation of LLMs exceeded the cost of training prior models. While some speakers noted a risk that spending on AI could decline if significant returns are not generated, the consensus was that hyperscalers will continue investing over the next few years. In total, Goldman Sachs Research predicts around $1 trillion of investment in AI in the next few years.
What are the other obstacles to scaling AI further?
There are a number of challenges involved in scaling AI. To build the technology, you need access to semiconductors, power, and lots of data.
Data is becoming a key question. We're getting to the point where developers have trained these large language models on practically all of the data that's out there on the internet. AI experts are trying to work out how to surmount this issue.
One potential answer is multimodal learning — where AI models learn by ingesting not only text, but also video and pictures. That will give them a lot more information. There are also stores of data which may be in proprietary silos — at research institutes or corporates, for example — and could theoretically be added to the corpus of data on which these models are trained. We also heard about the possibility that quantum computing will generate some high-fidelity data that could be used to train models.
The huge energy requirement is another bottleneck. Power generation is going to have to increase by at least three to four times by 2030. One of the panels highlighted that the increase in energy requirements from just the four or five biggest hyperscalers over that period will be equivalent to the current energy consumption of France. Fortunately, more advanced semiconductors may also enable better transmission infrastructure and more efficient power conversion, and efforts are underway to make semiconductors themselves more energy efficient.
What are the risks of adopting AI at scale, and how can we limit them?
The proliferation of AI has given rise to concerns about the ethics of its use. That said, our panels highlighted that enterprises and ethics bodies are monitoring the use cases being developed and suggesting guardrails to ensure that AI is not misused. For example, some speakers discussed how having a human who is familiar with the context of the specific task check AI outputs reduces the risk of hallucinations having a detrimental impact.
Other risks that warrant further attention include AIs communicating among themselves (potentially in a language that humans can't understand), copyright concerns, bias, and security.
As such, speakers agreed that AI use may be increasingly limited to the least risky domains and conditional on safety features and human supervision.
How could European hardware and semiconductor companies benefit from the development of AI technology?
A number of European hardware and semis companies could benefit. In particular, the symposium heard about atomic layer deposition technology — a type of semiconductor production equipment that allows materials to be deposited on a silicon wafer very accurately in order to build a chip. One particular company in the Netherlands offers this technology. As chips get more powerful, they need to be more efficient with space and power, so new chip designs are increasingly leveraging atomic layer deposition.
Multiple participants also highlighted Europe's capabilities in advanced semiconductor packaging, which is needed to ensure efficient use of space on devices. And lithography — the mechanism for printing transistors onto microchips — will continue to advance, offering increasingly powerful processing capabilities. Europe is the dominant producer of that technology.
Finally, significant power conversion will be needed to run AI servers, and Europe has promising capabilities there, too.