Introducing: the Scale Generative AI Index

Over the last several years, two of the dominant constraints in applying and commercializing artificial intelligence have been that:

  • ML models were primarily adept at tasks of understanding (classification, entity extraction, object recognition, etc.) across disciplines like speech, language, and images, as opposed to generation.
  • Individual startups were responsible for “do-it-yourself ML” – gathering & labeling proprietary training data and building their own ML models.

But over the last 18 months, the capabilities of machine learning have dramatically expanded, as:

  • Models with real-world-useful generative capabilities have emerged.
  • Foundation models, offering off-the-shelf capabilities to software developers, have become available. We have previously outlined our thesis on why we believe that foundation models are the new public cloud.

We are closely following the momentum of these overlapping but distinct trends. And to track our own work and share some of the knowledge we’ve accumulated, we’re introducing the Scale Generative AI Index, a list of nearly 200 companies in the space and details about what they’re building. We’ll keep adding to this market map as our research progresses.

Our index includes companies that are building on top of foundation models (both for generative and non-generative use cases), companies building generative products with proprietary models, and MLOps/Infra products important to this ecosystem. A breakdown of each of these buckets is below:

Foundation Models

Foundation models (like GPT-3 and Stable Diffusion) are extremely large models trained on broad datasets that can be adapted to a wide range of downstream tasks (Stanford HAI). Foundation models are analogous to the public cloud in making a powerful, new technology (in this case, AI) accessible to developers who do not have specialized machine learning skills.

The low cost and ease of use of these models are accelerating the evolution of AI apps as more engineers jump into building with artificial intelligence.

Today, foundation models (e.g., GPT-3 and Stable Diffusion) are frequently adapted to build generative applications, as that’s their most “wow!” capability. But they can also be applied to more traditional ML use cases such as classification and entity extraction, and importantly, they minimize (though they do not completely obviate) the need for startups to gather proprietary training data, label it, architect complex data transformations, tune hyperparameters, and select the right model.
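
To make this concrete, here is a minimal sketch of using an off-the-shelf foundation model for a traditional task, classification, with no proprietary training data or labeling at all. It assumes the Hugging Face transformers library; the model choice, example text, and candidate labels are all illustrative.

    # A minimal sketch: zero-shot classification with an off-the-shelf model,
    # no proprietary training data or labeling required.
    # Assumes the Hugging Face transformers library; model and labels are illustrative.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    ticket = "My invoice was charged twice this month and I need a refund."
    labels = ["billing issue", "technical bug", "feature request"]

    result = classifier(ticket, candidate_labels=labels)
    print(result["labels"][0], result["scores"][0])  # best-scoring label and its score

Swapping in a different traditional task, such as entity extraction, is largely a matter of changing the pipeline rather than collecting a new dataset.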

Generative Applications 

These are companies utilizing generative AI for its namesake purpose: the creation of net new output in various media types. Needless to say, this is by far the most prolific category and thus comprises the majority of companies on our index. We are seeing startups here that build directly on top of existing foundation models, as well as startups that have chosen to build their own models from scratch, particularly in domains where foundation models don’t exist (e.g., speech).

Non-Generative Applications Built with Foundation Models

While generative use cases are the most popular application of foundation models, many emerging products highlight that generation is only part of the story. Another set of powerful, newly feasible applications takes advantage of the embeddings these models produce.

More effectively than ever before, text, images, and even an application’s set of possible actions can be represented by embeddings. Semantic search (e.g., what underlies Mem-X) and text-based interfaces (e.g., RunwayML’s newest feature) are two fascinating applications of foundation models.
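
As a rough illustration of the semantic-search pattern, the sketch below embeds a handful of documents and retrieves the one closest to a query that shares none of its keywords. It assumes the sentence-transformers library; the model name and example documents are illustrative, not drawn from any of the products mentioned above.

    # A minimal sketch of embedding-based semantic search.
    # Assumes the sentence-transformers library; model and documents are illustrative.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    docs = [
        "Quarterly revenue grew 12% despite supply chain delays.",
        "The new onboarding flow reduced churn among trial users.",
        "Our Kubernetes cluster needs a node pool upgrade.",
    ]
    doc_embeddings = model.encode(docs, convert_to_tensor=True)

    # The query shares no keywords with the matching document,
    # but their embeddings are close in vector space.
    query = "How did sales perform last quarter?"
    query_embedding = model.encode(query, convert_to_tensor=True)

    scores = util.cos_sim(query_embedding, doc_embeddings)[0]
    print(docs[scores.argmax().item()])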

MLOps, Infrastructure

The prolific innovation we are seeing in the space has also created a need for support infrastructure and frameworks that cater to these new use cases. Companies here come in many forms, but are mainly centered around:

  • MLOps: Tools that make selecting, implementing, and fine-tuning foundation models as easy as possible.
  • Vector databases: Databases optimized for vector-based information retrieval (a minimal sketch of the underlying retrieval pattern follows this list).
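
To ground the vector-database bullet above, here is a minimal sketch of the retrieval primitive these systems optimize: nearest-neighbor search over an index of embedding vectors. It assumes the faiss library and synthetic data; production vector databases layer approximate indexes, metadata filtering, and persistence on top of this operation.

    # A minimal sketch of vector-based retrieval, the primitive that vector
    # databases optimize. Assumes the faiss library; the data is synthetic.
    import numpy as np
    import faiss

    dim = 384  # e.g., the output size of a small text embedding model
    corpus = np.random.random((10_000, dim)).astype("float32")

    index = faiss.IndexFlatL2(dim)  # exact L2 search over all stored vectors
    index.add(corpus)

    query = np.random.random((1, dim)).astype("float32")
    distances, ids = index.search(query, 5)  # the 5 nearest stored vectors
    print(ids[0], distances[0])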

There are other category-of-one companies in this space as well, like Hugging Face.

In sum, the creative abilities of generative AI enable software to transform a variety of creative fields, ranging from voice acting to videography, while foundation models enable more rapid experimentation with wholly new use cases for AI.

Amid Much Hype… Where’s the Substance?

We would be remiss to ignore that this market map piles onto what feels like an endless stream of hype in the space. And cynics are right to seriously question both the attention span and the herd-like behavior of the VC industry in general! While we stand by our conviction that foundation models are the new public cloud, the recent pushback against the outpouring of hype over the last few weeks is very much warranted. In fact, many skeptics raise good questions that we ourselves are wrestling with.

Where Is the (Enterprise) Value?

Generating an image of an avocado playing guitar may be fun, but, with very few exceptions, is likely not a good business. However, more meaningful use cases do abound even if they are not quite as entertaining.

[Image: an avocado playing a guitar, generated with yours truly, DALL·E 2]

Generative models have implications that reach far beyond our beautiful avocado art. Investors don’t need to believe that AI will create the next Star Wars, or that Hollywood should just throw in the towel, to get excited about this category. There is plenty of whitespace in simply building applications that automate the repetitive tasks many humans loathe: writing sales emails, finding a document without knowing the right keywords, manually rotoscoping an object out of video frames; the list goes on.

The Border War

Still, valid questions remain about which players will come out on top here. There is an inevitable “border war” brewing between the foundation model providers themselves and the companies built on top of them. After all, the companies that build the foundation models need some way to extract value, yet currently the lion’s share of revenue in this space lies with the companies that build on top of them. Over the next year, we’ll be watching to see the extent to which a great application layer and UI offers companies sufficient differentiation in a competitive space. We’re also thinking about whether improvements driven by domain-specific fine-tuning will really give startups enough lift when future foundation models (e.g., GPT-4) inevitably become a whole lot more powerful. And of course, there is the age-old question: in which circumstances do generative capabilities create new standalone companies, and in which do they become features embedded within incumbent applications?

These unknowns, however, do not undermine the innovation we are seeing every single day. After all, VCs have always asked themselves: “Can this company add value where the big guns cannot? Is its offering defensible on a technological level? On a product level? Will an incumbent decide to simply integrate the offering and squish it?”

Generative AI is no different. Welcome to AI’s brave new world!

This article was co-authored by Jeremy Kaufmann, Max Abram, and Maggie Basta.
