A Blog by Jonathan Low


Feb 28, 2024

Where A Venture Capitalist Thinks the Money Is In AI Investing

The challenge is that those making the initial, large-scale investments in AI can then be leapfrogged by those who use preceding models to develop their own more powerful models at a tenth the cost. 

To offset that fast-follower advantage, VCs should be thinking about how to develop transformative functions at scale in partnership with specific industries, which gives them the benefit of those relationships across an entire line of business and optimizes both impact and return. JL

The Wall Street Journal interviews Hemant Taneja, CEO and Managing Director of General Catalyst:

The bulk of AI’s value will be in transforming industries and business functions. Data infrastructure for enterprises is going to evolve to take advantage of large language models. The challenge is that after spending billions of dollars to build a cutting-edge model, the next step is to build the next model. There isn’t time to recover your investment on capital because those that come behind you can do that at a tenth of the cost. Our thinking has been how to apply AI in collaboration with industries as a transformation play. Our investment thesis centers on driving these transformations. The Silicon Valley view has been, “The rest of the world isn’t very smart. We’re just going to build it. Everybody’s going to use it.” I don’t think you can have that point of view in an era of transformation.

Hemant Taneja, chief executive and managing director of venture-capital firm General Catalyst, is a prominent voice in Silicon Valley on the need to develop artificial intelligence responsibly.

To that end, he is working with other venture-capital firms and AI founders, as well as the U.S. Commerce Department, on a set of voluntary guidelines designed to help companies keep ethics and impact in mind when building and deploying this new technology.

Taneja sat down with Wall Street Journal reporter Tom Dotan at the CIO Network Summit in California, where he discussed AI’s potential to transform organizations and why he thinks it would be a bad idea to rush AI regulation.

Edited excerpts follow.

AI’s key value

WSJ: We are now well over a year into the generative-AI moment. As a venture capitalist, what ideas do you see as most promising when it comes to generative AI and startups?

TANEJA: To me, the bulk of AI’s value will be its role in transforming industries and business functions. Most of our energy goes to thinking through how to do that. What is the role of AI in industries like healthcare, and what is the role of AI in functions like marketing and customer support? We’ve been spending a lot of our energy and capital at that layer.

We’ve also been building and incubating some stuff in the area of data infrastructure for enterprises, and how that’s going to evolve to be able to take advantage of large language models.

We think about these language models and what’s going to happen to them. How to make money at that layer was the challenge for us. The reason I struggle with it is that after spending billions of dollars to build a really interesting, cutting-edge model, the next step is to go build the next model. And there isn’t time to actually recover your investment on capital as a business. And that’s challenging because the folks that come behind you can now do that at a tenth of the cost with the cost commoditization that is happening.

WSJ: So do you view AI mostly as an enterprise play at this point?

TANEJA: Yeah. When the social mobile cloud stuff started in 2007, startups could go to app stores, acquire customers and sort of take advantage of these global supply chains to build a company for the consumer market.

AI doesn’t give you any of that. It really is a transformation advantage. The fact that you can build better magical experiences with AI is very profound, but it doesn’t change your ability to actually go make a business happen from a customer-acquisition perspective.

So a lot of our thinking has been how to apply AI in collaboration with industries, and think about it as a transformation play. Our investment thesis centers on driving these transformations, which means working with folks in different industries to bring this technology into an industry, or into a business function, responsibly and in a way that actually drives return on investment.

Fork in the road

WSJ: You’re fairly outspoken about the need for industry and governments to build this new technology in a way that takes ethics and impact into account. Where are we with that?

TANEJA: We’re sort of at this fork in the road with AI. Do we want to enhance human progress or do we want to replace work? And so our first design principle is we want to enhance human progress.

In the context of that, we think a lot about where AI should develop. Do we want it to develop in a world with democratic freedom and human rights, or in a world where that isn’t as respected? Obviously, here in the U.S. our view is we want to see this proliferate in the sort of democratic system. So that’s India, Europe, U.S., big centers of gravity.

Now there’s this tension where no country wants to be left behind. France wants its own strategy. India wants its own strategy. And they want their cultural nuances to be implemented in how AI thinks. When you think about that dynamic, there is the risk we become siloed, which is unlike the way we built the internet. So we need to think about a collaboration model, a common set of frameworks around creating guardrails that actually accelerate progress, but in a responsible way.

That is where we have spent a lot of time. It’s hard because every country has its own ideology. You also have this other dynamic where you don’t want the value to accrue just to big companies. You want to have a level playing field for innovation. So how do you make sure that happens?

WSJ: Would you want to see some sort of global AI regulatory summit where you have leaders from Brussels, the U.S. and maybe from China come up with a generalized framework?

TANEJA: The key is to align on some core protocols. We shouldn’t regulate AI too soon. We don’t understand enough of it. We want to take our time with actual regulation, which is hard to unwind. But we can self-regulate with some key standards and get behind governance standards around how we use this technology.

It’s sort of a way to build companies that’s much more inclusive than, frankly, what Silicon Valley has been used to. The view here has been, “The rest of the world isn’t very smart. We’re just going to build it. Everybody’s going to use it.” And I don’t think in an era of transformation you can have that point of view.

WSJ: What is the worst-case scenario if this isn’t well managed, if we don’t smartly regulate AI?

TANEJA: Go back 100 years and think about how utilities got built. The worst-case scenario was you regulated it to the point where there’s no room for innovation. Then you atrophy. And that industry is so atrophied it has no mechanism to actually self-course-correct around climate change.

It’s just as profound with AI.
