Nov 13, 2024

Why VCs See AI Value In Smaller, Specialized Startups and Models

As access to broadly applicable training data becomes more challenging, VCs are focusing on startups building models that address specialized problems rather than the broad applications that have dominated the news.

This trend would appear to be a positive sign for tech - and for venture investing - as it harkens back to the industry's early days, when innovation and value creation tended to start small and, through growth, became what the industry is today. JL

The Wall Street Journal interviews Andreessen Horowitz general partner Martin Casado:

We’re reaching the limit of our ability to build data centers. We’re going to have to find ways of constructing bigger buildings, having more power going into these buildings. For the current version of models we’re running out of general language improvements, and now we’re going to have to go into specialized areas. Everybody looks at the OpenAIs. But as far as value creation and integration, if you look at all the private companies, the smaller companies that are building their own smaller models, they are some of the fastest-growing companies we’ve seen in the history of the industry.

Heaps of money have been invested in artificial intelligence. Valuations are soaring.

What does it mean for companies—and investors—looking to get a piece of it all? Where are the opportunities?

To get some answers, The Wall Street Journal’s global technology editor, Jason Dean, spoke with venture capitalist Martin Casado, a general partner at Andreessen Horowitz, at the annual WSJ Tech Live conference. Here are edited excerpts of their conversation.

The start of a supercycle

WSJ: What type of AI startups are interesting now, and not too expensive or too late?

MARTIN CASADO: It’s good to draw a comparison to the internet. Periodically in the history of the industry, we’ve seen the marginal cost of things go to zero. With compute, the marginal cost of computation went to zero. We had people calculating logarithm tables by hand, then we had a computer do it. That created the compute revolution. Then the internet, the marginal cost of distribution went to zero. When it comes to AI, it really feels like the marginal cost of language, reasoning and creation are going to zero. And if that’s the case, this is a supercycle. And if that’s the case, we’ve got decades. So there’s no “too late.” In that sense, we’re still very, very early.

Right now, these models that you hear about from Google and OpenAI and Anthropic, they’re backed by these massive companies that are not going to crash. Clearly this is of strategic value to them, the beginning of something great.

There are these very large models that everybody’s heard about, like Gemini and OpenAI. But there are a bunch of smaller models that do things like speech or music or images. And if you look at that cohort of companies as an investor, they’re actually very successful.

WSJ: There are already huge valuations for some. The barrier to entry there seems like it might be getting more difficult. What is most interesting now?

CASADO: There are three use cases that are working right now, and maybe many more will work in the future.

Probably the use case working the most is creative composition. That thing could be an image, music. A number of companies in this space are growing as quickly as we’ve seen anything growing.

Imagine a AAA videogame. How much does it cost to create? Say $500 million to $1 billion. There’s not one aspect of that game that a model today could not create. You can create the 3-D meshes, the stories, the videos, the textures—and the actual compute inference cost to create all of that is like $10.


The No. 2 use case, it’s kind of a funny one. It’s companionship. We’ve never, as technologists, solved the emotion problem with computers. They’ve very clearly not been able to emote. But I’ll give you an example. My daughter is a Covid kid. She’s 14 years old right now and spends a lot of time on Character.AI. And not only does she spend time on Character.AI, when she talks to her friends she will bring her characters along. It has kind of entered the social fabric. We’re seeing great use of these kinds of companionship.

The third space definitely working is code. For example, with Cursor, an AI code editor, you can write very sophisticated programs using models, whether you’re an expert programmer or a novice, and they’re working very well.

Works to remember

WSJ: Can AI produce creative works that will be remembered?

CASADO: We have the largest creative-model portfolio in the venture community, and I see a lot of these models make images or videos or whatever. And then I see a lot of artists pick them up and use them. The No. 1 predictor on whether the output piece is good or not is whether the person using it has a formal background in art.

These things don’t replace artists. They are machines. If you have an eye to beauty, you will make something more beautiful. If you have an eye to what people want, you’ll get something people want.

WSJ: You hear it said that all problems can be solved by throwing more compute at them. You think it’s a little more complicated than that?

CASADO: If you haven’t read it, read “The Bitter Lesson.” It’s very short. It basically says, when it comes to AI, you have two choices. You can come up with some very clever algorithm that’s very smart. Or you can just find a way to take advantage of more computers, because we always have more compute. Just throw more and more compute, and things will get better, which is basically what we’ve done.

However, AI requires data. We’re humans, we’ve been around for a very long time. For 3,000 years we’ve been observing the universe and thinking really hard and deciding what objects are what and how they interact; we’ve done all of this computation and we wrote it down. AI just finally figured out how to suck up that existing work and spit it back out.

Once you’ve run out of that reservoir, things are going to slow down. We’ve created these amazing models that capture all of human knowledge, but they don’t capture future human knowledge. They can’t reason by themselves.


That doesn’t mean other things will slow down. For the current version of models we’re probably running out of general language improvements, and now we’re going to have to go into specialized areas.

Hungry for power

WSJ: Power is another big constraint.

CASADO: Right. We’re also reaching the limit of our ability to build data centers. We’re going to have to find ways of constructing bigger buildings, having more power going into these buildings and building networking fabrics.

But for the next epoch to get to the next level of models, to scale with the same levels of improvement, you need 10 times more every epoch. Ten times more computers, 10 times more power, 10 times more data. These are exponential scales.

WSJ: Does that factor into your calculus as an investor? Do you have to think in terms of this would be a great idea, but I don’t know if we’re going to have the computing power, the data, and the electricity to do it in the next five to 10 years?

CASADO: If all frontier models stopped today, there would still be so much value to be had just in turning these models into applications that people use. Again, everybody looks at the OpenAIs. But as far as value creation and integration, if you look at all the private companies, the smaller companies that are building their own smaller models, they are some of the fastest-growing companies we’ve seen in the history of the industry.
