For OpenAI co-founder and president Greg Brockman, it’s time to think not only about artificial intelligence and products like his own company’s text-based AI, ChatGPT, but also about the move toward even more potent systems and technology, and about the ways we can develop those technologies responsibly and democratically.
“It’s time to start really thinking about the logical conclusion of, not just general intelligence, but what we call super intelligence,” Brockman said in conversation with SV Angel founder Ron Conway at the recent Goldman Sachs/SV Angel AI Forward conference in San Francisco. “Something that is almost as capable as our most capable organizations.”
Systems that could take on the work of entire teams would have significant advantages over traditional models, Brockman said. “If you want to work hard on a specific problem, you have to recruit all these people. You have to get them aligned. People have to sleep. They have all these competing incentives,” he said. “If you could have systems that are just fully focused on solving problems, think of the amazing things they could accomplish if you do it right.”
But Brockman believes the right way is not just a matter of technology, but governance. “Doing it right is going to be hard,” he said. “It’s going to be a new standard we’re going to have to achieve.” That standard, says Brockman, can be attained through broader international cooperation. Such cooperation already exists in other industries, he said. “I think the International Atomic Energy Agency is an example of the kind of organization we have in mind, one that is actually able to have insight into super-large training runs.”
Brockman thinks this cooperation can, and should, happen between governments and international agencies and the companies engaged in the work of building AI. “Companies like us and the others that are cutting edge in these fields and spending many billions of dollars on this technology – that’s the place that we need to coordinate,” he said. “We need to come together and rise up beyond our own selves and really think about the impact this can have in both the amazing sense and also, if we don’t get it right, in the negative sense.”
That cooperation needs to start today, says Brockman, even if the technologies and products don’t yet exist. “We’re not thinking about the open-source models of today. Not even necessarily the GPT-4s of today,” he said. “You move forward multiple step functions; those are the models that we really need to be thinking hard about. And we should be thinking hard about them before we’ve actually created them and be putting this policy framework in place.”
With growing focus on the best way to ensure AI is reliable, trustworthy and unbiased, Brockman suggested new ways to manage AI for good.
One of those ways, he argued, is to use AI to better itself. “By far, our best bet to navigate this wilderness is by using AI to help us,” said Brockman. “If you have a system that’s able to do these amazing things, it should be able to help you create the technology itself. And if you can have trustworthy versions of the technology bootstrapped to even more trustworthy versions, you could see where that goes.”
An example of this, Brockman said, would be how different AI systems could work together to summarize a book. One AI model could create a summary, while another could “fact check and point out holes,” said Brockman. “They have a little bit of an argument back and forth.”
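To make that pattern concrete, here is a minimal sketch of a summarize-and-critique loop, assuming the OpenAI Python SDK’s chat-completions interface; the model name, prompts, and single review round are illustrative assumptions, not a description of OpenAI’s own setup.

```python
# A minimal sketch of the summarize-and-critique pattern described above,
# assuming the OpenAI Python SDK (v1-style chat completions). Model name,
# prompts, and the single round of back-and-forth are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; substitute whatever is available
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def summarize_with_critique(text: str) -> str:
    # One model drafts a summary of the source text.
    summary = ask(f"Summarize the following text in one paragraph:\n\n{text}")

    # A second pass fact-checks the draft against the source and points out holes.
    critique = ask(
        "Compare this summary against the source text and list any errors "
        f"or omissions.\n\nSource:\n{text}\n\nSummary:\n{summary}"
    )

    # The first model revises in light of the critique, closing the loop.
    revised = ask(
        "Revise the summary to address the critique.\n\n"
        f"Summary:\n{summary}\n\nCritique:\n{critique}"
    )
    return revised
```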
But Brockman also sees a role for human input in AI, and not just from the engineers in his own company. “We’re starting to think about democratic decision making,” he said. “It’s not that we’re just sitting in Silicon Valley thinking that we can write these rules for everyone.”
“How do you build a system on top of the internet that gets great input?” asked Brockman. “It almost sounds like the opposite of what you’d expect.” Yet there are positive examples already in existence, Brockman said, pointing to established crowd-sourced resources like Wikipedia. “That’s something where people with very different views have to come together to write a single article that meets some standards.”
As for the future of AI generally and OpenAI specifically, Brockman was keen to point out just how early we are in this era. “We’re still, every year, trying to make an impossible thing happen. We’ve done it for seven years in a row. I think we’re going to do it for another seven years,” he said. “You get a little bit of success. Then a lot more success. And then overwhelming success. And I think we should expect this to happen with the GPT model. It’s not tapped out. We’re going to go much further.”