Generative AI startups we want to fund

From text (GPT-3) to images (DALL-E) to music (Jukebox), the generative models we have today are dramatically more powerful than anything we've seen before. And they’ll only get better from here.

As both AI practitioners and investors, we believe this revolution will enable a new wave of game-changing companies. Already, we’ve seen entrepreneurs build on top of these technologies, using GPT-3 to generate marketing content and DALL-E to generate stock images.

However, the obvious applications of the technology barely even scratch the surface of what is possible. Instead, the biggest companies of the future will be formed by looking at what is only just becoming possible, and applying these technologies in ways that showcase their strengths while working around their (likely temporary) limitations.

Below, we've highlighted four example areas we’re interested in funding. If you’re working on something related, get in touch!

Generating code

One of the areas that has attracted the most attention is using AI to generate code, especially through GitHub Copilot.

We do believe that AI has the potential to radically improve developer productivity, perhaps by 10x or even 100x within the next few decades.

However, the most obvious application today – using generative AI to write entire functions wholesale – is not necessarily the best path forward. Just try spotting the bug in the Copilot-generated code below. And unfortunately, as Jeremy Howard points out, there are plenty of other issues as well: Copilot-generated code often fails to use the standard library, is trained to imitate average (generally bad) programmers, could create legal and copyright issues, and more.

Let’s play "Spot the bug." If you can’t see the bug right away… yeah that’s kinda the problem. Credit: Taken directly from the Copilot website
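For a concrete flavor of the problem, here's a hypothetical snippet of our own (not the one from the screenshot) with the same kind of easy-to-miss bug:

```python
def moving_average(values, window):
    """Return the simple moving average of `values` over a sliding window."""
    averages = []
    # Bug: the loop stops one window early. For n values there are
    # n - window + 1 full windows, so the final one is silently dropped.
    for i in range(len(values) - window):
        chunk = values[i:i + window]
        averages.append(sum(chunk) / window)
    return averages

print(moving_average([1, 2, 3, 4], window=2))  # [1.5, 2.5], missing the final 3.5
```

The code runs, looks reasonable, and passes a casual glance – which is exactly why reviewing generated code can be harder than it sounds.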

That's why, though Copilot is impressive, we're more excited by specific, nuanced, and narrow applications of generative AI to software engineering.

For example, generative AI should be able to automatically detect (and correct) coding style violations – a kind of Grammarly, but for code. Models should also be able to generate additional test cases, suggest names when refactoring, or apply a large, complicated refactoring operation across an entire codebase. They could also flag places likely to contain security flaws (e.g. buffer overflows), or help break very large functions apart into smaller ones that are easier to understand and use. Just last week, Series B startup Replit released a product that moves in this direction.

Replit uses AI to not only generate but also refactor/rewrite code. Credit: Taken from Replit CEO Amjad Masad's announcement on Twitter
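To make one of these ideas concrete – generating additional test cases – here's a rough sketch of what such a tool might do. It assumes a generic text-completion endpoint; `complete()` is a placeholder for whichever model API you'd actually call, not a specific library.

```python
import inspect

def complete(prompt: str) -> str:
    """Placeholder for a hosted text-completion model; swap in your provider's client."""
    raise NotImplementedError

def suggest_tests(func) -> str:
    """Ask a language model to draft pytest cases for an existing function."""
    source = inspect.getsource(func)
    prompt = (
        "Write pytest unit tests for the following Python function. "
        "Cover typical inputs, edge cases (empty input, unusual characters), "
        "and expected failure modes.\n\n"
        f"{source}\n\n# Tests:\n"
    )
    return complete(prompt)

def slugify(title: str) -> str:
    """Example function under test: turn an article title into a URL slug."""
    return "-".join(title.lower().split())

# draft = suggest_tests(slugify)  # returns model-drafted test code for a human to review
```

The drafted tests still need a human pass before they land in CI, but drafting them is a narrow, checkable task – exactly the kind of application we find more compelling than whole-function generation.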

We spent only a few minutes coming up with those initial examples. We’re much more excited to hear ideas from entrepreneurs building in this area, so don’t take these examples as an exhaustive list.

Generating video content

Someday, neural networks may be able to generate entire, realistic videos – perhaps entire movies or TV shows.

Right now, that’s very far away. But today's networks are already capable of generating a very particular type of video content: simple animated "videos," such as anime. Some anime is little more than the frames of a comic book stitched directly into video form. Given tools like DALL-E, we're getting close to a world where those original comic frames could all be generated from text, and even the text for each frame could be generated by GPT-3! Such a composite system could, in theory, create an entire "animated" video from as little as a single sentence, in a way that leaves each individual piece of the result easy to edit.
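As a sketch of what such a composite system might look like (the function names here are placeholders for whatever language and image models you plug in, not real APIs):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Panel:
    caption: str      # the text for this frame, editable on its own
    image_path: str   # the rendered frame, regenerable on its own

def generate_panel_captions(premise: str, n_panels: int) -> List[str]:
    """Placeholder: ask a language model to expand a one-sentence premise
    into n_panels short panel descriptions."""
    raise NotImplementedError

def generate_panel_image(caption: str) -> str:
    """Placeholder: ask a text-to-image model to render one frame, return its file path."""
    raise NotImplementedError

def storyboard(premise: str, n_panels: int = 8) -> List[Panel]:
    """Turn a single sentence into an editable storyboard.
    Because each stage's output is kept, any caption or frame can be
    regenerated or hand-edited without redoing the rest."""
    captions = generate_panel_captions(premise, n_panels)
    return [Panel(caption=c, image_path=generate_panel_image(c)) for c in captions]

# panels = storyboard("A shy robot learns to bake bread in a seaside village.")
# A simple video tool could then sequence the panels (plus pans and zooms) into an animatic.
```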

Generative AI can also already be used to extend animated video. Existing techniques can turn simple animated movies into something much more richly animated, such as by making the leaves on trees blow in the wind, by having characters talk or move, or even by transferring expressions acted out by humans into the animated style, as in the "MegaPortraits" work below.

Credit: "MegaPortraits: One-shot Megapixel Neural Head Avatars," by Nikita Drobyshev, Jenya Chelishev, Taras Khakhulin, Aleksei Ivakhnenko, Viktor Lempitsky, Egor Zakharov

We see an opportunity for entrepreneurs to build new companies on top of this initial video technology, and then to incorporate new animation techniques as the research develops. It’s not yet clear what form such a company should take: a new movie studio, a set of tools for existing artists, a community of consumers creating a new type of content, or something else entirely. If you have opinions, we’d love to hear them!

Generating game content

While the gaming industry has grown rapidly over the past decade, the process of generating "art assets" – the images, textures, and 3D models actually used in-game – can be extremely slow, with game designers painstakingly setting each pixel and tiny movement for every animation.

This won’t always be the case. The latest generative models enable tools that could dramatically reduce the cost of creating these assets, and open up the creation process to millions of people who could not otherwise build assets themselves. These tools can be used not only to generate realistic animations and textures, but also to generate entire worlds, dialog trees, and other aspects of gameplay.
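For instance, dialog trees – traditionally written by hand, node by node – could be drafted by a model and then pruned by writers. A minimal sketch, again with `complete()` standing in for whatever completion API you use:

```python
from dataclasses import dataclass, field
from typing import List

def complete(prompt: str) -> str:
    """Placeholder for a text-completion endpoint."""
    raise NotImplementedError

@dataclass
class DialogNode:
    npc_line: str
    choices: List["DialogChoice"] = field(default_factory=list)

@dataclass
class DialogChoice:
    player_line: str
    next_node: DialogNode

def expand(node: DialogNode, depth: int, branching: int = 3) -> None:
    """Recursively ask the model for player replies and NPC follow-ups,
    building a branching dialog tree offline for writers to edit."""
    if depth == 0:
        return
    prompt = (
        f'The shopkeeper says: "{node.npc_line}"\n'
        f"List {branching} short replies the player might give, one per line."
    )
    for player_line in complete(prompt).splitlines()[:branching]:
        npc_reply = complete(f'The player says: "{player_line}". The shopkeeper replies:')
        child = DialogNode(npc_line=npc_reply)
        node.choices.append(DialogChoice(player_line=player_line, next_node=child))
        expand(child, depth - 1, branching)

# root = DialogNode("Welcome, traveler! Looking for anything in particular?")
# expand(root, depth=2)  # writers then edit and prune the generated tree by hand
```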

In fact, generative models could open up entirely new gameplay possibilities. Projects like AI Dungeon are just the very beginning. While characters in games have always been fairly static, startups like Inworld AI are enabling users to create non-player characters (NPCs) with the appearance of memories, personalities, and human-like behaviors.

Credit: Inworld AI
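We don't know how Inworld builds its characters, but as a rough sketch, the appearance of memory and personality can come from folding a fixed persona plus a rolling window of recent dialogue into every model prompt:

```python
from collections import deque

def complete(prompt: str) -> str:
    """Placeholder for a text-completion endpoint."""
    raise NotImplementedError

class NPC:
    """A non-player character with a fixed persona and a rolling memory
    of recent dialogue, both included in every prompt to the model."""

    def __init__(self, persona: str, memory_turns: int = 20):
        self.persona = persona
        self.memory = deque(maxlen=memory_turns)  # oldest lines fall out first

    def say(self, player_line: str) -> str:
        self.memory.append(f"Player: {player_line}")
        prompt = (
            f"{self.persona}\n\n"
            "Conversation so far:\n" + "\n".join(self.memory) + "\nNPC:"
        )
        reply = complete(prompt).strip()
        self.memory.append(f"NPC: {reply}")
        return reply

# blacksmith = NPC("You are Mira, a gruff village blacksmith who secretly writes poetry.")
# print(blacksmith.say("Heard any strange noises from the old mine lately?"))
```

It's a thin wrapper, but it's enough to make a character remember what you told it five minutes ago – which already goes a long way toward making an NPC feel less static.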

By building games around the capabilities of our latest generative models, entrepreneurs could create whole new forms of games or experiences that resonate with completely new audiences. It's always hard to tell what will resonate with people ahead of time, but we're excited to see more experimentation in this area.

Generating realistic human interactions

Humans are hard-wired to interact with other humans. Whether it's debate or negotiation or just casual conversation, most of us spend huge chunks of our time talking to other people.

As language models get better, it might be possible for an AI agent, at least in a limited setting, to be a genuinely enjoyable and interesting conversational partner. We can imagine games built around interacting with human-like agents: a murder mystery where you have to actually interview the witnesses, an improv game played with a mix of virtual and human players, or an AI that can hold its own against your most opinionated, argumentative friend. These types of interactions stretch the definition of what we would consider a "game" today, and could open up entirely new markets.

And these types of interactions could be used for more than just entertainment. Imagine learning a language in a game-like setting where you have conversations with an AI agent that sounds like a Parisian local. Or training your sales teams with AI agents that raise common customer objections, where you can choose how tough to make them.

For some of these applications, you might want to take advantage of technology like the "MetaHuman Creator," where the user can interact with a photorealistic human face. For other applications, however, you may want just voice, or just text, in the same way that we switch between video chat, voice only, or text when interacting with other people today.

Credit: Unreal Engine’s MetaHuman Creator

Finally, rather than just generating new interactions, generative AI can also be used to augment existing interactions. Already, startups like Sanas use generative AI to change the accent of a customer service agent's voice in real time, while Intel has generated some controversy by building technology to detect when students look bored, confused, or distracted.

Where do we go from here?

The technologies we've seen today, while impressive, are only the beginning of what is possible.

We believe the next few years present a unique opportunity for founders to use the rapid advances in generative models to disrupt some of the largest companies in existence. The founders that succeed will be those that figure out how to both take advantage of the capabilities available today and position their companies to grow with the technologies as they continue to develop.

If you’re building something new, regardless of whether it is taking advantage of the latest advances in generative AI or something else entirely, get in touch!