
venture capital and the misallocation of talent

peter thiel's line, from the founders fund manifesto circa 2011, was that we wanted flying cars and got 140 characters. it was a good line and a real complaint. the venture capital industry had developed a set of constraints — large addressable markets, fast growth, exits in 7-10 years, billion-dollar outcomes within the fund's life — that systematically filtered out entire categories of problem. infrastructure takes decades to pay off. basic research has no obvious route to an acquisition. public goods can't be privatized. so brilliant people who might have worked on power grids or drug manufacturing went to work on social networks and ad tech and recommendation algorithms, because those were the problems that paid.

what i want to add to thiel's complaint is that the mechanism is less sinister than it sounds. venture capital is not a conspiracy against important problems. it's a financial instrument with specific characteristics — illiquid, long-horizon, concentrated risk — that requires specific return profiles to function as a fund. a VC fund that deployed $500 million into century-long infrastructure plays would not be able to return capital to its LPs on any reasonable schedule, and the LPs (who are themselves managing other people's money, usually pension funds and endowments with their own constraints) would stop giving money to the fund. the constraint is real. it's not malice; it's just incentive structure all the way down.
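the arithmetic behind that constraint is worth making concrete. a minimal sketch, with made-up but representative numbers — fund size, target multiple, ownership stake, and hit rate are all assumptions here, not data from any real fund:

```python
# illustrative, not real fund data: why a venture fund's structure
# forces it toward huge, fast exits.
fund_size = 500e6          # committed capital
target_multiple = 3.0      # LPs expect roughly 3x gross over ~10 years
ownership_at_exit = 0.10   # assumed diluted stake in a winner
hit_rate = 0.05            # assume ~1 in 20 portfolio bets returns anything big
portfolio = 20             # number of companies the fund backs

needed_returns = fund_size * target_multiple              # $1.5B back to the fund
winners = portfolio * hit_rate                            # ~1 winning company
exit_value_per_winner = needed_returns / (winners * ownership_at_exit)

print(f"each winner must exit at ~${exit_value_per_winner / 1e9:.0f}B")
```

under these assumptions the single winner has to be worth about $15 billion at exit, which is why "big market, fast growth" is a structural requirement of the instrument rather than a taste of the people running it.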

but the outcome is still a profound misallocation relative to what matters. the ratio of researchers working on AI capabilities to researchers working on AI interpretability and safety is something like 100:1. this is not because interpretability is less important. by almost any reasonable framework for prioritizing research, understanding the systems we're building before we deploy them in consequential domains is very important. it's because "a new AI system that can do customer service" has a clear business model and "a tool that helps researchers understand what a neural network is computing" is a public good that nobody can charge for, which means it doesn't get funded at scale, which means the talent goes elsewhere.

biotech is a sharper example. we have had extraordinary progress on the software-adjacent parts of biology: DNA sequencing is essentially free compared to 2003, data analysis has transformed genomics, drug target identification has improved dramatically. we have made much less progress on manufacturing biologics at scale, on running faster clinical trials, on fixing the regulatory bottlenecks that add years to drug approval. manufacturing and trials and regulation are schlep — they're unglamorous, they're slow, they involve navigating institutions. they don't attract VC the way a new therapeutic target does. the smartest biologists are mostly not working on them.

within AI, the same pattern holds: the capabilities race is well-funded and gets top talent, while alignment and interpretability are chronically underfunded relative to their importance. the labs themselves are better than the pure market outcome — anthropic and deepmind have serious safety research programs — but they're still constrained by competitive dynamics that reward demonstrated capability over demonstrated understanding. a company that paused capability development to focus entirely on interpretability would be out-competed by one that didn't. this is a prisoner's dilemma, and it's a real one.
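the dilemma can be written down as a toy payoff matrix. the numbers below are invented for illustration; only their ordering encodes the claim — racing beats pausing whatever the rival does, yet mutual pausing beats mutual racing:

```python
# toy payoff matrix for the race dynamic; payoff values are made up,
# only their ordering matters.
# payoffs[(my_move, their_move)] = my payoff
payoffs = {
    ("pause", "pause"): 3,   # both slow down: best collective outcome
    ("pause", "race"):  0,   # i pause, rival ships: i lose the market
    ("race",  "pause"): 4,   # i ship first: best individual outcome
    ("race",  "race"):  1,   # everyone races: worse than mutual pause
}

def best_response(their_move):
    # pick whichever of my moves pays more, given the rival's move
    return max(("pause", "race"), key=lambda mine: payoffs[(mine, their_move)])

# racing dominates regardless of what the other lab does...
assert best_response("pause") == "race"
assert best_response("race") == "race"
# ...even though mutual pausing beats mutual racing for both players.
assert payoffs[("pause", "pause")] > payoffs[("race", "race")]
```

the asserts are the whole point: each lab's individually rational move leads both to the outcome both rank below mutual restraint.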

there's no clean answer to this. venture capital is optimizing for returns because that's the deal its investors signed up for. the solution requires funding mechanisms that don't share the same constraints: government research agencies, philanthropic funders with long time horizons, people willing to take salaries and build public goods without equity upside. these exist and do some of this work, but they're dramatically underpowered relative to the private capital flowing into VC-friendly problems. until that balance shifts, thiel's complaint will remain valid: we're solving the problems that the incentive structure rewards, which is not the same as the problems that matter most.