Roland Fryer, PhD
Co-founder, Chief Executive Officer, Sigma Squared
The most consequential technology of our lifetimes is being regulated by people who can’t agree on what it is. Several states and the European Union have enacted sweeping rules governing artificial intelligence. Illinois prohibits employers from using AI in hiring when it produces discriminatory outcomes—a reasonable goal—but defines AI so broadly that nearly any recommendation system, including statistical methods that go back centuries, may be implicated. New York’s RAISE Act requires developers of “frontier” AI systems to report safety incidents within 72 hours. The EU AI Act imposes penalties of up to 7% of global revenue for violations.
The regulatory architecture is vast, fragmented and largely incoherent. But the greatest harm may not be what these systems fail to prevent. It may be what they cause.
I’m working with companies that have abandoned hiring algorithms that produced more meritocratic outcomes than human judgment alone—not because the algorithms were flawed, but because the legal exposure wasn’t worth it. The regulation designed to reduce discrimination is, in practice, increasing it.
The hardest part of regulating isn’t deciding what you want. It’s figuring out how to get it. The basic challenge is one of asymmetric information. A regulator—a federal agency, a state legislature or an attorney general—wants a certain behavior from an AI developer, a police department or a hospital. But the regulator often can’t observe the agent’s true costs, underlying motivations or day-to-day behavior. And the agent, knowing this, behaves strategically.
David Baron and Roger Myerson tackled this problem in a 1982 paper that I read in graduate school and that forever changed how I look at regulation. They asked: How do you design rules for people who know more than you do? The answer most legislators default to is more surveillance. Audit harder. Hire more inspectors. Messrs. Baron and Myerson showed this intuition is completely wrong.
Rather than demand information (which agents can falsify), you should offer a menu of regulatory options. Each is designed so that firms—high cost or low, safe or risky—find it in their self-interest to choose the option meant for them. The trick is to make truthful revelation the rational choice and misrepresentation unprofitable.
Every risky firm has an incentive to claim it is safe. A well-designed menu removes that incentive. The “safe” track carries strict liability for any harm, which is cheap for a genuinely safe firm but ruinously expensive for a risky one. The risky firm rationally self-selects onto the oversight track.
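The arithmetic behind such a menu can be sketched in a line. Suppose (the notation is mine, not Baron and Myerson’s) a firm of type t causes harm with probability p_t, with p_safe < p_risky. The liability track costs the firm p_t × D in expectation, where D is the damage award, while the oversight track costs a fixed compliance burden C regardless of type. The two types separate exactly when the regulator prices the menu so that

    \[
      p_{\mathrm{safe}}\, D \;<\; C \;<\; p_{\mathrm{risky}}\, D
    \]

The safe firm finds liability cheaper than oversight, the risky firm finds oversight cheaper than liability, and truth-telling becomes each firm’s best response.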
Consider the Illinois law. A company uses a résumé-screening algorithm. Under the new statute, it must send applicants a notice: “We use AI in our hiring process.” The regulator learns nothing. The applicant receives no meaningful protection. The algorithm may or may not discriminate. The regulation does nothing to find out—and creates no incentive whatsoever for a risky system to reveal itself.
Now consider an economic approach with a menu designed to incentivize firms to self-identify. Option A—full transparency to a certified auditor, lighter compliance requirements, and no penalties unless harm is documented. Option B—no transparency required, but strict liability for any documented discrimination, with penalties calibrated to actual social cost. The distinction is deliberate: Option A trades scrutiny for relief; Option B trades opacity for exposure.
A risky firm pretending to be safe faces ruinous liability under Option A, where an auditor will find what it is hiding. It self-selects Option B, accepting liability exposure in exchange for opacity. The safe firm chooses Option A and earns its lighter burden honestly. The AI laws currently on the books create none of these incentives. They create paperwork.
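The same logic can be checked with a toy calculation. In the sketch below, every number is invented for illustration and nothing is calibrated to any statute; the point is only that each type minimizes its own expected cost by choosing the option meant for it:

    # Toy self-selection under the Option A / Option B menu described above.
    # Every number is invented for exposition; nothing is calibrated.

    P_HARM = {"safe": 0.01, "risky": 0.30}  # chance a system discriminates

    # Option A: transparency to a certified auditor, lighter compliance,
    # penalties only if the auditor documents harm (which it usually will).
    COMPLIANCE_A, AUDIT_DETECTION, PENALTY_A = 1.0, 0.95, 100.0

    # Option B: opacity, the standard compliance regime, and strict liability
    # whenever discrimination is documented externally (e.g., by litigants),
    # with the penalty calibrated to the full social cost.
    COMPLIANCE_B, EXTERNAL_DETECTION, PENALTY_B = 3.0, 0.40, 150.0

    def expected_cost(firm: str, option: str) -> float:
        """Expected regulatory cost (arbitrary units) for a firm type."""
        p = P_HARM[firm]
        if option == "A":
            return COMPLIANCE_A + p * AUDIT_DETECTION * PENALTY_A
        return COMPLIANCE_B + p * EXTERNAL_DETECTION * PENALTY_B

    for firm in ("safe", "risky"):
        costs = {opt: expected_cost(firm, opt) for opt in ("A", "B")}
        pick = min(costs, key=costs.get)
        print(f"{firm} firm: A={costs['A']:.2f}  B={costs['B']:.2f}  -> chooses {pick}")

Run it and the safe firm’s expected cost is lower under Option A (1.95 vs. 3.60 in these made-up units), while the risky firm’s is lower under Option B (21.00 vs. 29.50). Each firm reveals its type simply by choosing.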
One shortcoming of this menu, by the way, is that it works best for established firms that know what they’ve built. Newer entrants may still be learning their own systems. A fuller regulatory mechanism would need a provisional track—reduced liability in exchange for mandatory monitoring and transparency. Baron and Myerson were writing about monopolists, not startups. Extending their logic to dynamic markets is the next frontier of this research.
The problem compounds when an agent’s actions, not just its type, are unobservable. My research on policing illustrates the point. The Justice Department, unable to observe officer type or effort, writes a rule triggering a negative consequence—investigation—for a certain pattern of complaints. But a bad cop in an easy situation can produce identical statistics to a good cop in a difficult one. Economist Tanaya Devi and I found in a 2020 study that federal investigations caused police effort to collapse. Officer-initiated contact with civilians in Chicago fell 89% in a single month. Officers rationally reduced effort to minimize risk. We estimated nearly 1,000 excess homicides, predominantly black lives, over the next two years. Not because of bad actors. Because of bad design.
Economists Jean-Jacques Laffont and Jean Tirole extended the Baron-Myerson framework to address exactly this combination of hidden type and hidden action. Their insight: Outcome-based penalties work only when embedded in a menu that first induces honest self-selection. Get the menu right, then let liability do the work. Today’s patchwork of state mandates does the opposite: it purports to make AI safer while guaranteeing we learn less, not more, about the systems being deployed.
Regulators are always in the dark, at least a little. The question is whether they build systems that reveal information or bury it deeper. If we want to regulate complex, fast-moving domains like AI—or even persistent ones like policing—we have to start with humility. We don’t know what’s happening inside the black box. But with the right tools, we can coax it to open just enough to align incentives, discourage harm and reward truth. In regulation, the greatest danger isn’t bad actors. It’s being arrogant enough to design policy as if we have perfect information when we don’t.
Mr. Fryer, a Journal contributor, is a professor of economics at Harvard, a founder of Equal Opportunity Ventures and a senior fellow at the Manhattan Institute.
