Venture capital has always been a mix of art and science, and its training is more apprenticeship than algorithm. Most investors trust their network, their read on the founder, their feel for the market. In other words: pure gut. And for good reason. That judgement has historically been hard to teach and even harder to quantify (except in hindsight, ten years down the road).

That hasn’t stopped us from trying, and we’ve made some progress; our Scale Studio benchmarking is a notable example.

It’s no surprise that many scoff at the thought of machines trying to pick winners. But that’s not the question that I’m asking. I’m starting with something smaller and much more tractable. 

“Picking winners” can be boiled down to two questions: is this a market with a venture-backable outcome, and if so, is this the company that wins? Each of these questions can be broken down into hundreds of smaller questions, some of which can be answered by data (market sizing, for example) and some of which are in the “gut” category (is this the team that can get to IPO?). The AI assist spans both. The holy grail? Systems that turn patterns of human behavior into actual investment signal.

Can machines distinguish the right company at the right time from merely the loudest? Can machines identify companies that are likely to attract attention from other top-tier VCs? Are there things we can focus on that will improve our coverage of the deals that matter?

I think the answer is yes. But to get there, you have to solve for more than beautiful precision-recall curves. You have to solve for trust.

Why venture is uniquely resistant to data-driven tools

Most industries integrated some form of machine learning into their workflows long ago: consumer recommendations, ad targeting, fraud detection, algorithmic trading and other corners of financial services. But venture is different. Success is rooted not only in results, but in how investors think and work.

That isn’t to say the data itself isn’t a problem. It is. Venture is data-poor compared to other fields: the functions above generate billions of events, while venture has just a few hundred winning companies, each emitting signals over very long time frames. And the work of venture is finding outliers, so the data that does exist doesn’t always point at what we’re looking for. Pattern matching is helpful, but it breaks down when we’re looking for companies and founders that break the mold.

In the absence of data, investors make decisions based on gut feeling. AI can still be helpful, but acknowledging these limitations up front helps us channel our energies where they will be most useful. And, like all AI builders, we have to build for the appetites of our specific customers. 

Investors are used to judgement calls, not probability scores. They’re attuned to edge cases and nuances. And perhaps more importantly, they don’t like to be told what to do by a black box, especially in a domain where their expertise is part of their identity. 

I’ve seen this dynamic play out firsthand. We’ve been running AI experiments internally and through the VC Platform community since ChatGPT dropped. In early experiments building for investment teams, our more opaque models did yield slightly better predictive performance. But they failed the adoption test. When users couldn’t understand WHY a company was being surfaced, they dismissed the result—regardless of accuracy. 

So, we made a choice. Prioritize explainability over model purity (for now). Our model is now designed not only to flag deals, but to provide the why. That might be a characteristic of the company, like a founding team that went to school together, or a growth signal, like a hiring ramp that suggests the company is staffing ahead of accelerating revenue.
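
To make that concrete, here is a minimal sketch of what “flag plus the why” can look like. The signal names and thresholds are hypothetical, chosen for illustration rather than drawn from our actual feature set:

```python
from dataclasses import dataclass

@dataclass
class DealFlag:
    company: str
    score: float        # model output, deliberately kept secondary
    reasons: list[str]  # human-readable evidence an investor can gut check

def explain(company: str, score: float, features: dict) -> DealFlag:
    """Translate raw feature values into narratives, not just a number."""
    reasons = []
    if features.get("founders_shared_school"):
        reasons.append("Founding team went to school together")
    if features.get("eng_hiring_growth_90d", 0) > 2.0:
        reasons.append("Engineering hiring has more than doubled in 90 days")
    if features.get("peer_firm_engagement", 0) >= 2:
        reasons.append("Two or more peer firms are already engaging")
    return DealFlag(company, score, reasons)

flag = explain("ExampleCo", 0.81, {
    "founders_shared_school": True,
    "eng_hiring_growth_90d": 3.1,
})
print(flag.reasons)
```

The structure is the point: the reasons, not the score, are the primary interface.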

These are narratives an investor can gut check, not a score to accept blindly. This may change as we build trust in the model, repeating a pattern we expect to see in other industries: AI disrupts first by doing the easily understandable thing, then, as users come to trust it, takes on more and more of the labor.

What we’re actually doing (and building)

We’re building AI into our decision engine. Where we sit is a hard-to-automate zone. We’re not pure seed, where it’s all founders and ideas. We’re not late stage, where it’s traction and market dominance. We live in the messy middle, where discovery and prioritization are the name of the game.

From a technical perspective, this is not a simple scoring model or rules engine (though we did start with a simple heuristic). We’re investing in a multi-layered architecture that can process a broad range of structured and unstructured data signals. While picking winners is the ultimate goal, we trust it more to show us where to focus our time and attention. A single seed company may not tell you much. But when ten similar companies pop up in the course of three months, it’s worth paying attention.
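
As a sketch of that “ten similar companies in three months” test, here’s what a simple burst detector over first-seen dates might look like. The record shape, and the idea that similarity has already been reduced to a precomputed cluster id, are assumptions for illustration; in practice that similarity layer is where the unstructured data work lives:

```python
from collections import defaultdict
from datetime import date, timedelta

def emerging_clusters(companies, window_days=90, min_count=10):
    """Return cluster ids where >= min_count companies first appeared
    within any window_days-long window."""
    by_cluster = defaultdict(list)
    for name, cluster_id, first_seen in companies:
        by_cluster[cluster_id].append(first_seen)
    hot = []
    for cluster_id, dates in by_cluster.items():
        dates.sort()
        # slide over sorted first-seen dates looking for a dense burst
        for i in range(len(dates) - min_count + 1):
            if dates[i + min_count - 1] - dates[i] <= timedelta(days=window_days):
                hot.append(cluster_id)
                break
    return hot

signals = [
    ("AcmeAI", "agentic-devtools", date(2025, 1, 5)),
    ("BetaBot", "agentic-devtools", date(2025, 2, 20)),
]
print(emerging_clusters(signals, min_count=2))  # ['agentic-devtools']
```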

All of this feeds into a model trained on downstream outcomes. Going back to our two critical questions (is this a market, and will this team win it?), we’ll look at simplified questions that ladder up, like “will this company raise from a top-tier VC fund in the next six months?” It’s not perfect, but it’s directional and, more importantly, measurable. Winning takes a while in venture. We should know whether we’re headed in the right direction by this time next year.
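
Here’s a minimal sketch of that proxy-label setup on synthetic data, assuming per-company feature snapshots and a label of “raised from a top-tier fund within six months of the snapshot.” A linear model keeps the coefficients inspectable, which matches the explainability tradeoff described above:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score

# Synthetic stand-ins: rows are per-company feature snapshots
# (hiring velocity, peer-firm engagement, etc.); y = 1 if the company
# raised from a top-tier fund within 6 months of the snapshot date.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = LogisticRegression().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

# Average precision suits rare-positive problems better than accuracy.
print("average precision:", round(average_precision_score(y_test, probs), 3))
```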

The real work lies in how we assemble these features, filter out noise, and expose how the machine reaches its conclusions so that investors can make sense of them. So, while a neural network may produce better raw results, optimizing for investor adoption might mean adopting a model that produces explainable results. And we know integration matters: for any model to be useful, we need to surface signals where investors live (email, CRM, browsers).

What is our North Star? We’re building for judgement support, not decision automation.

We aren’t under any illusion that a machine will replace a human investor. The goal is to augment our investors so they show up earlier with better inputs. Think about it this way: how valuable would it be if we could flag to an investor:

  • This company is predicted to be in market within the next 45 days
  • It’s likely that this company will draw engagement from more than two of your peer firms
  • This company is hiring at triple the rate of other seed stage companies

That might be enough signal to warrant a deeper dive into a company. It won’t do the work, but what it will do is highlight where the needle might be in the proverbial haystack. 

We’re all in. 

This is a core part of how we think the next generation of venture gets built. I believe that the next great venture firm won’t be built just by great investors. Instead, it will have technical leaders embedded in the strategy of the firm. 

So, I’m hiring someone to lead this work. We’re looking for a builder at heart: someone with empathy for how investors think, rigor in how models are designed, and taste in knowing what not to automate. Together, we’ll create a system that can actually shift how decisions are made in a high-trust, high-ambiguity environment.

If that sounds like a fit, we should talk. Because while machines may never win deals on their own, they can—and will—get us to a decision point faster, and with more defensible data. In venture, that’s what matters most. 
