the AI discourse has a curious feature: the people most excited about AI focus almost entirely on capabilities (look what it can do!) and alignment (what if it wants to kill us!), while almost nobody is doing the boring labor economics that will determine whether AI is good or bad for most of the people alive today. this seems like a significant allocation error. whether a system can pass a bar exam is interesting. what happens to lawyers' wages over the next twenty years is important. these are not the same question.
economists have a useful taxonomy for thinking about what technology does to labor. a technology is a complement to labor if it makes workers more productive without reducing their employment — think word processors, which made secretaries much faster at their jobs and didn't eliminate the secretarial profession overnight. a technology is a substitute for labor if it replaces workers in specific tasks — think the robot arm on an assembly line, which does what a human did but cheaper and faster. the distinction matters enormously for wages and inequality, because complements raise wages (more productive workers command higher pay) while substitutes depress them (or just eliminate the jobs).
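the complement/substitute distinction can be made concrete with a toy supply-and-demand sketch. nothing below is an estimate: the linear demand curve, the slope, and every number are assumptions chosen purely to show the direction of the two effects.

```python
# Toy model: a downward-sloping linear labor demand curve meeting a
# fixed (inelastic) labor supply. All parameters are illustrative
# assumptions, not calibrated to any real labor market.

def market_wage(demand_intercept, labor_supply, slope=0.01):
    """Wage at which the demand curve meets the fixed supply."""
    return demand_intercept - slope * labor_supply

supply = 1000  # assumed number of workers, fixed

baseline   = market_wage(30.0, supply)  # -> 20.0
complement = market_wage(36.0, supply)  # productivity gain shifts demand up   -> 26.0
substitute = market_wage(24.0, supply)  # automation shifts demand down        -> 14.0
```

same number of workers in all three cases; only the demand curve moves. a complementary technology shifts demand for labor outward and the wage rises; a substituting one shifts it inward and the wage falls, which is the whole distinction in miniature.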
the challenge with AI is that it's ambiguously both. a lawyer using an AI to draft contracts is being complemented: they can produce more work per hour and probably earn more. a junior lawyer whose job was specifically to draft contracts at $80,000 a year might be substituted by the same AI: the partner doesn't need her if the model can do it. same technology, different effect, depending on whether you're the one directing the AI or the one being replaced by it. the distributional question is who captures the productivity gains: the workers who remain, or the owners of the capital (here, the AI itself)?
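the who-captures-the-gains question can be framed as back-of-envelope arithmetic: a productivity gain is a fixed surplus, and how it splits depends on the worker's bargaining position. the numbers below (a drafter whose weekly output value triples, a bargaining weight of 0.2) are invented for illustration, not data.

```python
# Toy back-of-envelope: splitting a productivity gain between a worker
# and the firm, given an assumed bargaining weight in [0, 1].
# All figures are made-up illustrations, not estimates.

def split_gains(output_before, output_after, bargaining_power):
    """Return (worker_share, firm_share) of the productivity gain."""
    gain = output_after - output_before
    return bargaining_power * gain, (1 - bargaining_power) * gain

# A contract drafter whose weekly output value rises from $4,000 to
# $12,000, but who has little leverage over the AI that made it happen:
worker, firm = split_gains(4000, 12000, bargaining_power=0.2)
# worker gets 1600.0, firm gets 6400.0
```

the point of the sketch is that the size of the gain and its distribution are separate variables: the same tripling of output is a raise or a rounding error for the worker depending entirely on the bargaining parameter, which is set by market structure and politics, not by the technology.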
historical precedent is unhelpfully mixed. electrification and mechanization of factories in the early 20th century eventually raised wages broadly — but it took decades, involved considerable displacement and suffering, required strong labor organizing, and the gains were distributed in ways that reflected political power as much as economic logic. the "but technology always creates new jobs" argument is empirically true over sufficiently long time horizons and theoretically contested for any specific transition. "eventually it works out" is cold comfort if you're a paralegal in 2027.
the market structure question gets even less attention and may matter more. who builds AI? right now: microsoft, google, anthropic, openai, meta, and a handful of others. the training compute required to build frontier models creates enormous barriers to entry. the data flywheel creates network effects. the talent market concentrates at a few institutions. this is the classic setup for an oligopoly — a small number of firms with market power who can extract rents from users and squeeze workers on the cost side. if AI development ends up resembling the semiconductor industry (a few dominant players, high margins, massive influence over the technology landscape) rather than the software industry (permeable entry, competition at multiple layers, commoditization of previous generations), the distributional implications are quite different. an oligopoly capturing the AI surplus for capital owners is the piketty nightmare scenario. competitive markets with multiple viable AI providers and low switching costs for deployers might look more like the internet stack — ugly and imperfect but with broadly distributed gains.
none of this is unknowable. labor economists study skill-biased technological change. antitrust economists study market structure in technology platforms. trade economists study transition periods. macroeconomists study distributional dynamics. the intellectual tools exist. they just haven't been systematically applied to AI at anywhere near the scale of the capabilities or safety research. the nsf does not have an AI labor economics division. the major AI labs do not have distribution economists on staff. the policy research is thin relative to the stakes.
the most useful people in the AI era will probably be the ones who can sit in both rooms — who understand what the technology actually does and can also ask what that means for market structure, labor markets, and distribution. there are not many of them right now. this seems like an opportunity.