Talks at GS

OpenAI CFO Sarah Friar on the race to build artificial general intelligence

Apr 11, 2025

OpenAI is at the forefront of the generative AI revolution. How did it get there, and what is the company doing to stay ahead of its competitors? Sarah Friar, chief financial officer of OpenAI and former head of technology equity research at Goldman Sachs, discusses the path to artificial general intelligence and the importance of capital in the AI race with Dan Dees, Goldman Sachs Global Banking & Markets co-head. The conversation was filmed at Goldman Sachs’ Disruptive Tech Summit in London on March 5, 2025.

Transcript:

Sarah Friar: We need to be able to invest. It’s making it easier to build up data centers, faster. It’s making sure power is available. It’s making sure training of people is happening. And I think everyone’s kind of gotten the message. Like, you feel the urgency.

[MUSIC INTRO]

Dan Dees: Good morning, everyone. I’m Dan Dees, I’m the co-head of the Global Banking & Markets business at Goldman Sachs. I’m thrilled to be here. I love this conference. I’ve been coming to it from the very beginning ten years ago and it’s always one of my favorite stops on the global circuit. And it’s been wonderful to see the growth of this tech ecosystem over the course of that decade. So, thank you all for being here and thank you for all the things you do in engaging with Goldman Sachs.

I’m particularly excited to be here with my friend Sarah Friar and spend the next 40 minutes talking to you. Thank you for being here Sarah.

Sarah Friar: My pleasure.

Dan Dees: Sarah is the CFO of OpenAI as you all know. She’s a great person, a great leader, a great communicator. And I always learn so much when we catch up and when I get to listen to you. So, looking forward to it.

Sarah Friar: And vice versa.

Dan Dees: All right, so, we have a lot to cover. Let’s start with your personal background and kind of your journey to where you are. So, you were born in Northern Ireland.

Sarah Friar: Uh-huh.

Dan Dees: Have done leadership stints at Goldman Sachs, which I still can’t believe we let you get away. But that’s a whole separate thing I’ll get over. McKinsey. Salesforce. Square. Nextdoor. And now you’re at OpenAI. So, kind of walk through your journey, what brought you to OpenAI, and kind of your guiding light through all those moves.

Sarah Friar: Yeah. I mean, I’ll be fast because I think OpenAI is way more interesting than my background. But it’s hard not to say yes to what I think is the most impactful company working in the most impactful space right now. To be in the crucible of what’s happening from a technology standpoint, something that I think is truly bigger than any tech wave we’ve seen to date. And I’ve had the privilege of, I wasn’t around in the PC revolution, but certainly as the internet came to bear, I was making my way from McKinsey to Stanford Business School, so class of 2000. We were both-- the bubble was on the rise. Also saw the other side of that for a couple of years in Silicon Valley winter. To then see the rise of mobile and see how that impacted companies.

And I kind of made my career, frankly, at Goldman talking about the shift to the cloud. And so, it’s fascinating to now be on the other side, both talking about the impact of AI broadly, at a world level, all the way down to government impact, down into enterprise, business, education. We can get into those topics.

But then also now to be starting to talk about AI infrastructure. It feels like almost that dawn of cloud computing again, albeit with a very different type of architecture and build. So, fascinating time to get to do what I’m doing. More fascinating, I think, for all of you to be living through it as well.

Dan Dees: Yeah. I’ve seen you through a few of these iterations. Worked closely with you. It’s always extraordinary what you’re involved in. But this one takes it, I think, to a whole new level.

So, you’re talking to a bunch of people, among whom, you know, run businesses and are leaders of their various entities. Are there any learned lessons from all those experiences you share with these people?

Sarah Friar: Yeah. I mean, first of all, I don’t think there is a CEO, a business leader in the world, who doesn’t right now know that they need to deploy AI into their business, and who doesn’t feel like they’re behind. And I say that even of myself, which might sound a little strange. So, just as a leader within OpenAI, as I joined, one of the things I wanted to understand with my team, just with my finance team, was: what are we doing with our own tools? And it was fascinating to see that, you know, you have little spikes of people who are way ahead and have kind of embraced it. And then a lot of people, because you’re hiring at a rate, are coming in from other organizations where there’s no AI. And so, there was a real kind of difference of who was doing what and why. And not a lot of organization around it.

So, actually, just even personally as a leader, one of the first things I did was sit down and do a hackathon with my team. And we just brought in our sales team, our solutions engineers, and we did an afternoon session, which really started pen on paper. It was that basic. Like, what are the things you do every day that feel very routine, kind of the things that you don’t love about your job? And we put people into kind of groups, like the tax team together, the procurement team together, the investor relations team together. And we kind of did a little comparing of notes. And then we just did a very basic custom GPT build to say what could we do, literally in half an hour, that would just make our life better using wall-to-wall ChatGPT?

And just even from that the energy and the excitement in the room, right? From my investor relations team, when I joined, we were doing a fairly large round. And we were in the midst of diligence hell. And what we realized is there is such a repetition of questions that were all kind of similar, not totally similar, and, you know, we were sitting late at night often remembering, like, oh, that investor asked that question. Go pull that answer. Tweak it slightly. And then send it back out.

Creating a custom GPT that now answers all those questions, we were dancing in that conference room. Literally. And it’s just getting people to understand just from first principle what is just that kind of generalizable productivity lift they get?

Today, I’ll show in a moment Deep Research. If you all haven’t used Deep Research, it is mind-blowing. Now, shame on us, because it was really only available in our pro SKU until about a week ago, when we rolled it out into the enterprise SKU. Deep Research is the ability to ask the model for something that you would probably go ask an analyst to do for you.

I’m just coming from a meeting with the CEO of another bank. I will not mention it. And we had been asking him about GPU financing. Very top of mind for me. And so, the team had effectively used Deep Research to go pull a report. His colleague sitting next to him who’s doing some of that work and looking into it said, “I read that report. It was much better than what we did as a team of two MDs, three VPs, six associates, ten analysts. I’m a little embarrassed that we didn’t do the Deep Research report first and then use that to help us ideate and iterate.”

And so, if you haven’t done a Deep Research report, like the one thing I would tell you, like, leave today and go-- it’s going to cost you. If you need to buy our pro SKU, it’s going to cost you 200 bucks. Everyone in this room can afford $200. And just give it a go.

Dan Dees: Have you seen the markets today? It’s--

Sarah Friar: That’s true. We can probably get you a free offer somewhere. But it’s just amazing. So, I think using it personally, getting it into your teams, and then of course it gets much more sophisticated from there getting it into, broadly deployed, into the workflows of your company.

Dan Dees: So, let’s get into that in more detail, I mean, the specifics around it. The hype around AI is significant. The opportunity as I listen to you seems kind of boundless and super exciting. Level set us on where are we now on AI and what are kind of the near-term breakthroughs that we’ll all see?

Sarah Friar: So, I embarrassingly brought a few slides. So, if it’s okay, I’m going to use those. So, I’m going to just level set first of all on where we’re at as a company. Because if I’d been here a year ago, definitely two years ago, we would have shown you that orange bar and would have said, “Hey, we’re a model company. Our goal is always to be the SOTA model, state-of-the-art model. And we are.”

So, back two years ago it was the GPT series, right? We were going from kind of GPT-3 to GPT-4, which took about 18 months to two years. Last year we deployed reasoning for the first time, so the O series of models. So, shifting from what is more of a predictive, fast-paced, real-time answer model, like, “tell me about the latest news on Goldman Sachs this morning,” to something that now reasons much more like a human reasons. And it’s interesting when you watch the chain of thought happening.

An easy way to describe it is if you think about when you do a crossword puzzle, like one across might be five letters. And you think it’s one of three different words. You put in a word. And then you go to one down and it’s clear, oh no, the second letter’s an A, so that actually means what I had in there, I need to score out and put back in the other word. So, it can double back on itself. And that’s really important in the next slide when I talk about agents.

But today, OpenAI is so much more. We’re going down into data center technology because we do think that we’re now in the AI infrastructure V2. Or what Jensen calls AI factories. And we feel like we’re creating a lot of IP there. And it’s really important for us to own that. Think about Amazon at that moment where they’re rocking it on e-commerce. They see AWS starting to take shape. Imagine if, at that stage, they had decided to outsource that to an upstart, or to Google or something, right? Giving all that IP of AWS away. Think about how different the company would be today. Right? AWS on its own has a 40 percent market share in cloud computing with 38 percent operating margins. So, we think there’s an incredible ability to own the infrastructure piece.

But now we’re going up out of the model into the API layer, which allows us to force multiply out to enterprises and developers. And then up a level again into the application, which is how do we drive feature functionality that makes you as a consumer, both in your personal life and in your work life, love what you’re getting? And so, behind that front door of ChatGPT you now can do video generation with Sora. You can do a Deep Research report. You can do search. You can create projects. You can code. You can create a canvas for writing. And our goal is just to keep loading that up because it does multiple things. First of all, it keeps us as the dominant player, 400 million weekly active users. But it also drives personalization. The model starts to know more about you. So, it’s really answering to Sarah or to Dan. And then it creates stickiness as we do things like collaboration. Just even an investor said to me last week, he’s like, “I really wanted to use Deep Research. So, I bought the pro SKU because you didn’t have it in the enterprise SKU. And I realized I was starting to do work over here, but I wanted to pull it back to work I had done over here.” And he’s like, “It was really annoying. And then I realized, oh, this thing’s already sticky. Like, I’m not going to move off it because of all the history of the work I’ve already done.” Which is good from a business context. So, that is where OpenAI is.

Let’s talk about just broadly how we talk about the five steps towards AGI, artificial general intelligence. So, I already talked about the world of chatbots. So, kind of year 2023 real time predictive response. Last year, we bring reasoning to the table. So, now a model that thinks for longer and can do long horizon tasks that you would send an analyst out to do. 2025 is the year of agents. We started talking about it probably Q3 or Q4 last year. It’s now become the term the industry is using. But this is AI that can go out and do work independently for you.

And this is not vaporware. We’re not selling ahead. We actually have three things working today. Deep Research, which is the agentic tool to go do a real deep research report for you. Operator, which is what we have launched to allow a task worker to go out on the web and do something for you that might take time in the background. Book a flight. Book a holiday. Book dinner tonight, whatever you want to do. And then the third that is coming is what we call A-SWE. We’re not the best marketers, by the way. You might have noticed. But agentic software engineer. And this is not just augmenting the current software engineers in your workforce, which is kind of what we can do today through Copilot. But instead, it’s literally an agentic software engineer that can build an app for you. It can take a PR that you would give to any other engineer and go build it. But not only does it build it, it does all the things that software engineers hate to do. It does its own QA, its own quality assurance, its own bug testing and bug bashing, and it does documentation, things you can never get software engineers to do. So, suddenly you can force multiply your software engineering workforce.

From there we think we’re moving into this world of innovation where it’s no longer about the human knowledge that exists in the world today. It’s about how do I extend that? And we’re actually hearing that from professors and academics that they’re finding the models are coming up with novel things in their field. They don’t yet know if those novel things are real because they now need to go and test and say is that actually a new discovery? But we are actually hearing that back from academia. And then longer-term agentic organizations.

Final slide here is just a reminder back to that orange bar though in terms of state-of-the-art models. We are still by far the state-of-the-art model. O3, I just want to put it into perspective, these are the benchmarks that are just widely agreed upon, the benchmarks that we’re using to say is AI becoming AGI, right? Is it truly getting to that level of human intelligence and beyond?

So, you can kind of go left to right. You can see from a software engineering perspective how it’s scoring. Competitive coder, it’s the 175th best competitive coder in the world. On competitive math, it got one question wrong. And on PhD-level science, it is PhD level across physics, chemistry, biology, and so on. That’s O3.

What my product team assures me is O3 mini is already the number one competitive coder in the world. It’s literally the best coder in the world already.

And then of course these are all very, I would consider, almost hard skills. Like, as a nerd they speak to me. But we’re actually spending a bunch of time thinking about the more EQ side of models. So, 4.5, I told you we’re terrible marketers, which we just released this week, we spent a lot more time training that to have, you know, what Silicon Valley loves to call vibes. But effectively, EQ. And so, it has much more of that. Like, when you’re talking to it, it feels very human. It’s actually better for things like design or writing or creative concepts rather than just pure kind of hard math and science.

I think that’s it. So, I think we can pull down the slides.

Dan Dees: It’s extraordinary when you go through and you look at the number one coder in the world now that can be your AI. You referred to AGI, but what’s your definition? What is AGI, specifically? And when will OpenAI be there?

Sarah Friar: So, I mean, this is the question that everyone’s asking. I mean, AGI by definition is that point where we believe AI systems can take on, you know, a majority of the real kind of value-added human work in the world and do it. And we’re getting pretty close to that being the case. If you ask Sam, he would kind of say, you know, it’s imminent. We may be there.

And it’s also artificial general intelligence. It’s not super intelligence, right? So, frankly, I look at this and say, “I am not PhD level in biology, physics, math, coding, and so on. So, it might already have surpassed Sarah.” Not might. It has for sure. And so, the question is, you know, does the technology already exist? We may not yet be using it to its full extent. In fact, I know for sure we collectively as a world are not using it to its fullest extent. And so, you know, we would say we’re getting pretty close at this point.

Now, I think there will be a lot to go from here, right, because that is a very flat world where a lot of the interaction is through our fingertips, maybe through voice, maybe through auditory, maybe through visual. There is a whole next world of AI where it becomes much more 3D, truly robotics in action, where you think about tasks like factory worker tasks, farming, right, areas that today we’ve seen technology begin to move into but hasn’t yet fully moved into. I think that’s going to be a whole other field of very fertile ground for people to mine.

Dan Dees: Yeah, okay, it’s extraordinary. It’s dizzying to think about. So, your CEO Sam Altman was at the White House with Larry Ellison. And Larry at the time, I remember watching the clip when it came out was talking about the idea that AI designed vaccines could cure cancer. And he kind of went through the personalization of that, both the diagnosis and then the formulation of the vaccine. Is that realistic as a thing? And what other kind of big, blue sky, kind of big ideas are there?

Sarah Friar: I mean, I think it’s very real. That’s why I made the point about what we’re hearing from academics in their field of expertise is that we might already be pushing that boundary. We might already be finding new discoveries, novel in the world.

Think about what’s happening in models generally. I think we’ll talk in a moment about why more compute. But today, you know, there is always, like, whatever the noise of the week is. So, there was “the laws of scaling are dead.” You know? I can’t remember when that was. That might have been a month ago or whatever. No. Now there are actually three scaling laws happening. There’s pre-training, which is when you just make the general-purpose model smarter. And that tends to be more data, more algorithmic expertise, so kind of where researchers come to bear, across more compute. And so, it’s certainly leading to an industry that requires a lot of capital to really be successful. And that’s when you hear us talk about these big models like GPT-3, GPT-4, GPT-5. Right? They’re happening on these massive compute fabrics that are scaling up logarithmically. So, that’s kind of pre-training, general-purpose model.

Then you get to post-training. So, post-training is where fine-tuning starts to happen. So, let’s say I wanted to create models that were really good at diagnosing diseases. It would ingest all of the human information that’s available out there, broadly speaking, to create a general-purpose model. On fine-tuning, you might then say, okay, now I want it to really take medical textbooks and digest those in a post-training moment, because I want it to get very deep on diagnostics, to perhaps move into this world of, like, future vaccines that we don’t yet know about. That’s the post-training world. Also a law of scaling. And we’re finding a lot of fertile ground there. It’s a lot of where our O series reasoning models are getting better and better, post-training.

And then the final area is what’s called test time. Test time compute. And that’s at the moment where the model’s really doing something for you. During inference you can ask it to go a little deeper. Take a bit more compute and see if you can come up with a better answer, a more accurate answer. So, I asked ChatGPT this morning how would I explain this to an audience in layman’s terms.

Dan Dees: Here it is.

Sarah Friar: Work with me. It described it as a car, right? So, general purpose model is like you’ve built a car. You figured out what a car is. Wheels. Engine. Suspension. It’s going to move. Okay.

The fine-tuning moment is like you’re like, well, I want this car to be able to race. So, I’m going to give it a different engine. So, it’s kind of the general-purpose car exists, but I’m going to upgrade the engine.

The inference, the test time compute is the moment where you can put it into sports mode. Right? Now I’m really hitting a racetrack and I’m going to go from whatever normal four wheel drive I’m in, into sports mode. That’s kind of your three laws of scaling.

And this is why when Larry said we think these models can help us create cures for cancer and so on, yes, because it’s all about how specific can you get?

And we launched something right ahead of the holidays called Reinforcement Fine Tuning where one of the things our researchers have found is that it actually doesn’t take a lot of information for the model to show a significant uplift in output in a particularly niche area. But the key is can you get to that specific information area?

So, if you wanted to do research on, like, neuro diseases on degenerative outcomes, can you get enough information in that particular area? But even a small amount of information causes a massive uplift in the model’s utility.

Dan Dees: Okay. ChatGPT did a nice job. The car analogy, wow.

Sarah Friar: Nice job, right?

Dan Dees: I didn’t see that coming. So, you referenced it, the Stargate announcement, the $500 billion over four years. You know, these numbers are staggering. You and I have worked together a lot in your capacity as CFO of other organizations. These numbers are staggering. How do you think about that? Put that in context for us. And how do you think about the power and infrastructure needs that come around that?

Sarah Friar: Yeah. So, Stargate we announced as a $500 billion investment in compute, a shortcut way to talk about 10 gigawatts of compute being required. How do we get to that number? It’s a big number. And I’ve just talked about these three scaling laws that are required. So, more and more compute at every instance of a model’s development.

What I see today is, I mean, this is my investor pitch. It starts with I make terrible business decisions every single day. Don’t you want to give me money? I decide not to roll out models because I don’t have enough compute. Sora, our video gen model, was ready to go probably February/March of last year. We didn’t roll it out until almost December. And truly, it’s not even fully rolled out. I think-- we’re in Europe. I’m not even sure if it’s all the way here in Europe right now.

I decided not to roll out features. We rolled out Deep Research. We know with that the business community would love this feature. We didn’t have enough compute to put it in the business SKU.

And the worst decision I make every single day is not to give researchers my most valuable resource, the compute they need to do the research they want to do. Most Mondays, Sam comes to work. He’s mad at me because I personally have not brought enough compute to the work that day. And he wants to know what I’m going to do about it.

Why am I in this position? Because two years ago, three years ago, the people who sat around that table making compute decisions could not imagine the speed with which we would need compute. They could not imagine how fast our business would take off. Like, literally in two years, we have grown to 400 million weekly active users. And our revenue has tripled every single year. This will now be the third year in a row that it’s tripled. So, you can kind of imagine the sort of scale we might be at.

Do I blame those people? Sometimes on Monday mornings I do. And then I think, oh my goodness, it must have been so hard to really see that. And then I think, okay Sarah, if you’re here in three years and that is the problem, you should get fired. So, I do not want to get fired. I might die before I get fired at the pace this place goes at. But I do want to make sure that we are not running into being compute constrained.

And I think that three years from now we will look back and laugh at how we were losing our minds at $500 billion being, like, such a big number. Now, it is a big number. Because 10 GW of IT load, or even utility load, I mean, again, good ChatGPT moment, like, Ireland, I kind of think the island of Ireland uses about 7 GW. So, the whole country uses less than we’re talking about. So, that does raise questions about just how do you scale that up? Where is the power going to come from? How do we think about a world of renewables and so on? And how do we think about our climate and our environment as we do that? How do we think about reskilling populations to be able to do that sort of build, right? Turns out electricians, HVAC people, I mean, it’s truly resources that can constrain build-outs.

And so, this is where I think governments are particularly interested in something like Stargate, because while it is a financial investment, it is also a total rethinking of jobs, of economic development zones, of how to stay ahead. And then geopolitics starts to come to bear too, because we are in a race with China, as we just saw with DeepSeek. And we should not take that lightly. Because that has ramifications for things like national security as well.

Dan Dees: Yeah, I want to get to that in a second. But one of my takeaways whenever I listen to you talk about this is the extraordinary power of these tools that are being developed. And yet, how hard a time we have just deploying, figuring out to deploy them, in my personal life and in business life.

So, we’re all going to need to figure it out. CEOs, leaders of businesses are going to need to figure out how to deploy AI in their business. And we’re all going to have to figure out how to deploy it in our daily lives.

What are best practices? You kind of went through how you did it when you showed up and whiteboarding it a little bit. But as you work with CEOs, how should we be using this in our business? And then we’ll get to the personal side.

Sarah Friar: Sure. I mean, what I’ve described in my own team is a little basic, frankly, but sometimes that’s just the starting point. The great thing about this technology is most people are trying it in their personal life already.

So, particularly anyone under the age of 35, they’re using it in schools. You know, our Edu SKU, what we’re seeing is university systems deploying this kind of what we call wall-to-wall ChatGPT. So, Arizona State University, ASU, has a 181,000-seat deployment across students, faculty, and researchers. California State Universities just did a 500,000-seat deployment for us. Yesterday we announced, I’m not going to get the name right, but it’s effectively the AI research collective, which I think today is 15 institutions including Oxford, my alma mater, super excited to see that, effectively deploying broadly at their institutions around research, but also putting in new data. Do you know, you can now get dissertations from 1498, I think, which, I don’t know, if I’d written a dissertation in 1498, I’d want people to be reading it today. But how fascinating. Back to--

Dan Dees: Written in old English like Chaucer? You have to--

Sarah Friar: Yeah. And think, that goes back to that spiky point about getting data in kind of spike areas, even a small amount of data makes the model very smart.

But education establishments are really going in here. Estonia, as a country, has just done a wall-to-wall ChatGPT deployment for all secondary schools. So, every secondary school in the country will now have access to ChatGPT for learning. So, whether you like it or not, everyone is using it and learning how to use it in a personal way.

There’s just an expectation that this technology is already adding value. This is like value from minute one. So, I think that’s the good thing. It’s just letting a thousand flowers bloom a little bit.

But then I do think you need to be specific. Like, if I take banking as a vertical, you’re a heavily regulated industry. And so, what we have done, and I think we’ve now become much more known for, is we have gone into institutions, seen success in certain areas, built case studies. And so, when we come to visit you in your particular vertical, we can actually bring really good examples.

And so, in the banking example that I was just in, what we typically see are deployments around things like credit, fraud detection, around things like KYC/AML, deployments around wealth management and research. And then in areas like investment banking where, you know, everything from I want to prep for a client meeting to, you know, almost that CRM piece about “this just happened in the market, which clients should I now go to?” So, that kind of whole range.

So, for us, it’s about how do we build those case studies, get them out in the world so when you as executives come, we’re not just waving our arms around and telling you, “Don’t worry, this is going to be great” and you all leave thinking, hmm, sounds a little hype-y. But we can actually show you tangible examples you can take to your board that says here’s how I can either save money or drive more revenue.

Dan Dees: And on the personal side, any personal hacks on how I can look informed and--

Sarah Friar: I feel like-- I mean, I do worry that I overuse it. I use ChatGPT for everything. So, you know, from travel. I love it for travel, by the way. You know? From just the pure ideation. I want to go away for a week at this time of year, here’s where we’ve been before. Give me some blue-sky ideas. To recipes. Stuck in a supermarket. Know I need to make chimichurri sauce. Can’t quite remember what goes into it. This is me at my nerdy worst. My husband was showing me California wildfire insurance. Really hard to get. We got two quotes back. One was, like, you know, coverage only up to about, you know, a quarter of our house. But much lower rate. And almost no retention. Versus full coverage, super high retention, really high rate. And I’m kind of looking at it, very pertinent, I know that you live in LA, trying to decide. And then even more so, I was like, I think it might be the better insurance, but what if they can’t stand behind it? So, I did a deep research report on those two.

Dan Dees: Shocking for you to do that.

Sarah Friar: And my husband was just like, “I just wanted a yes or a no.”

Dan Dees: Right back in your nerd CFO phase. I love that you go there.

Sarah Friar: He’s like, “That took an hour. I thought it would be five minutes.” But I feel very educated on the California wildfire insurance industry now.

Dan Dees: They all need your counseling. I have one other question. I’ll wrap it into one question for you, which is just the investment on what’s next. What’s the investment plan? And this I’m just asking for a friend, what about going public? How do you think about raising money, thinking about the private context versus public and what are the trade-offs and ideas there?

Sarah Friar: I mean, the focus right now is compute, researchers, data. Those three areas. Compute really through the lens of Stargate. And that’s going to be a lot of work to kind of birth it into the world. It has to come up as an entity. We have to go find sites, power. We have to dig holes in the ground. Put up data centers. Fill them with equipment. Buy GPUs. And really prove that we can do this with this first tranche. And then I think that will be kind of contagious across North America. But we view this as kind of a global push because we want to be able to put data centers, in many cases, closer to customers in a way. But this is a good way to work with governments. So, that’s compute.

Researchers are researchers. We want to hire the best and the brightest and, of course, we want to help the school system continue to kind of push those people out. Like, this really is a new domain. So, there’s not a lot of PhD AI students today. But that’s growing very, very fast. On the data front, it’s working with data providers and so on to do that well.

IPO, I can’t even imagine that right now. I’ve got so much going on. But I’m sure at some point, you know, being a public company is good hygiene. It forces you to-- I mean, first of all, you get access to the capital markets. You know, sunlight is the best disinfectant on any financial enterprise. And then beyond that, it just creates a rigor.

I do think as a company we’re going to be different in the public markets, right? We’re going to have to have investors who are understanding that we will continue to aspire and do very big, ambitious things. And you know, that has to have the right investor mindset. Like, 90-day cycles are probably not going to work perfectly for that.

Dan Dees: Makes sense. Understood. Well, we’re standing by and available. Anyway, Sarah, thank you. Thank you for being here with us.

Sarah Friar: Thank you.

Dan Dees: You’re extraordinary. Thank you.

Sarah Friar: Okay, thank you.

The opinions and views expressed in this program may not necessarily reflect the institutional views of Goldman Sachs or its affiliates. This program should not be copied, distributed, published, or reproduced in whole or in part or disclosed by any recipient to any other person without the express written consent of Goldman Sachs. Each name of a third-party organization mentioned in this program is the property of the company to which it relates, is used here strictly for informational and identification purposes only, and is not used to imply any ownership or license rights between any such company and Goldman Sachs. The content of this program does not constitute a recommendation from any Goldman Sachs entity to the recipient, and is provided for informational purposes only. Goldman Sachs is not providing any financial, economic, legal, investment, accounting, or tax advice through this program or to its recipient. Certain information contained in this program constitutes “forward-looking statements”, and there is no guarantee that these results will be achieved. Goldman Sachs has no obligation to provide updates or changes to the information in this program. Past performance does not guarantee future results, which may vary. Neither Goldman Sachs nor any of its affiliates makes any representation or warranty, express or implied, as to the accuracy or completeness of the statements or any information contained in this program, and any liability therefor (including in respect of direct, indirect, or consequential loss or damage) is expressly disclaimed.

This transcript should not be copied, distributed, published, or reproduced, in whole or in part, or disclosed by any recipient to any other person. The information contained in this transcript does not constitute a recommendation from any Goldman Sachs entity to the recipient. Neither Goldman Sachs nor any of its affiliates makes any representation or warranty, express or implied, as to the accuracy or completeness of the statements or any information contained in this transcript, and any liability therefor (including in respect of direct, indirect, or consequential loss or damage) is expressly disclaimed. The views expressed in this transcript are not necessarily those of Goldman Sachs, and Goldman Sachs is not providing any financial, economic, legal, accounting, or tax advice or recommendations in this transcript. In addition, the receipt of this transcript by any recipient is not to be taken as constituting the giving of investment advice by Goldman Sachs to that recipient, nor to constitute such person a client of any Goldman Sachs entity. This transcript is provided in conjunction with the associated video/audio content for convenience. The content of this transcript may differ from the associated video/audio; please consult the original content as the definitive source. Goldman Sachs is not responsible for any errors in the transcript.
This session was recorded on March 5, 2025.