šŸ”Ž The Simple Economics of AGI: When Verification Becomes Scarce
Bankless
March 26, 2026

Key Takeaways

  • Scarcity shifts from cognition to verification. With intelligence cheap and scalable, the binding constraint becomes checking outputs, aligning them with human intent, and deciding what to ship.
  • Measurement beats mystique. Wherever outcomes are measurable, AI will replicate and commoditize them; the edge sits in domains still unmeasured or fundamentally uncertain.
  • The verification frontier shrinks over time. The act of codifying verification accelerates automation, a dynamic labeled the codifier’s curse.
  • Labor bifurcates. Entry-level work and thin wrappers on commodity tasks get displaced, while three roles gain share: Meaning Makers, Liability Underwriters, and Directors.
  • Externalities rise. Watch for the Trojan horse externality (over-automation with hidden risks) and a hollow economy (green proxy metrics that mask foundational drift).
  • Crypto’s primitives matter. Identity, provenance, and cryptographic lineage become essential verification rails in an AI-saturated world.

AGI’s Core Economic Shift

The long-standing bottleneck for progress—human cognition—is easing. Scalable machine intelligence now applies broadly across tasks with abundant data and clear metrics. The new scarcity becomes verification: ensuring outputs reflect human preferences, are safe to ship, and align with true objectives under uncertainty.

ā€œThere’s no such thing as taste... There’s only measurable and not measurable. If something has been measured, the machine will be able to replicate it.ā€

In practice, that reframes value creation around intent-setting, boundary enforcement, and final cut decision-making—especially where measurements are incomplete or outcomes hinge on unknown unknowns.


Measurement vs. Non-Measurement: The New Edge

Where data are plentiful and feedback is tight, AI saturates performance quickly. Where outcomes are non-measurable (status, meaning, cultural resonance) or fundamentally uncertain (entrepreneurial exploration, new sciences, geopolitics), human judgment still dominates. This is why AI can assemble a website in seconds yet still produce a tweet that lands like slop; novelty and meaning resist simple metrics.

ā€œHumans will spend a lot more time on verification and in making sure that their intent, their preferences... are respected.ā€

āš™ļø The Codifier’s Curse

Verification is not a safe harbor—it’s a moving target. As experts label data, design harnesses, and formalize checks, those very efforts become training signals for models. Verification layers grow thinner as agents absorb them.

ā€œWe call it the codifier’s curse... the very rational act of performing verification is pushing the frontier.ā€
ā€œVerification is kind of a shrinking frontier.ā€

Conclusion: expect a continual race to higher-order supervision, deeper tooling, and eventually augmentation of human verification capacity.


🧩 The Labor Market Map: A 2Ɨ2 of Automation vs. Verification

Plot tasks by (a) cost to automate and (b) cost to verify:

  • Bottom-left: Displaced Workers (Automation easy, Verification easy) — Thin wrappers on commodity knowledge (e.g., formulaic SEO copy, routine paralegal tasks). Here, ā€œwages drop to the cost of compute.ā€
  • Top-left: Liability Underwriters (Automation easy, Verification hard) — Top-tier experts whose names, track records, and risk underwriting matter (elite lawyers and doctors, specialist CTOs, VCs underwriting long feedback cycles). They scale massively with AI while guarding the edge cases.
  • Top-right: Meaning Makers (Automation hard, Verification hard) — Taste-setters and coordinators of social consensus (fashion, art, cultural narrative-builders, community catalysts). Value emerges from human coordination and status dynamics.
  • Bottom-right: Directors (Automation hard, Verification easy) — Intent-setters who launch, steer, and course-correct swarms of agents under Knightian uncertainty. Think founders, creative directors, principal investigators—roles defined by navigating unknown unknowns.

This map also explains the ā€œmissing junior loopā€: entry-level paths that historically transferred tacit knowledge get automated, compressing the climb. The counterweight is that individuals can now prototype, ship, and learn far faster—if they adopt founder-level agency.
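The 2Ɨ2 above can be sketched as a toy classifier. The `Task` fields, threshold, and scores below are illustrative assumptions for this sketch, not measurements from the source:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    automation_cost: float    # 0.0 = trivial to automate, 1.0 = very hard
    verification_cost: float  # 0.0 = trivial to verify, 1.0 = very hard

def quadrant(task: Task, threshold: float = 0.5) -> str:
    """Place a task in one of the four labor-market quadrants."""
    hard_to_automate = task.automation_cost >= threshold
    hard_to_verify = task.verification_cost >= threshold
    if hard_to_automate and hard_to_verify:
        return "Meaning Maker"
    if hard_to_automate:
        return "Director"
    if hard_to_verify:
        return "Liability Underwriter"
    return "Displaced Worker"

# Hypothetical scores for the example tasks named in the map:
tasks = [
    Task("formulaic SEO copy", 0.1, 0.2),
    Task("specialist medical opinion", 0.3, 0.9),
    Task("cultural narrative-building", 0.9, 0.9),
    Task("founding a startup", 0.8, 0.3),
]
for t in tasks:
    print(f"{t.name}: {quadrant(t)}")
```

The point of the sketch is the asymmetry: only one axis (verification) has to stay expensive for a human role to survive, and the codifier’s curse is precisely the force dragging tasks down that axis over time.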


āš ļø Externalities & Systemic Risks

1) The Trojan Horse Externality

ā€œWe gave it a name. We call it the Trojan horse externality.ā€

Because automation gets cheaper faster than verification, organizations will ship unverified outputs, accruing hidden risk that surfaces only under stress—think complex system failures that look fine until they don’t. The cost is hard to price ex ante, which is exactly what makes it a market externality.

2) The Hollow Economy

Agents and humans optimize to metrics that look green (code shipped, MAUs, even GDP-like proxies) while drifting from true objectives. It’s Goodhart’s Law at scale:

ā€œWhen a metric becomes a target, it ceases to be kind of a good metric.ā€

3) The Verification Capacity Gap

Organizations tout rising AI-generated output, but much of it is unverified. Expect failures until verification capacity, tooling, and governance catch up.

4) Regulatory Backlash

Efforts to ringfence AI from providing health, therapy, or financial advice attempt to slow disruption and protect incumbents. The risk: cutting off access to affordability and scale where it’s most needed—while global competitors don’t pause. Open-source and local models will route around hard bans, raising the stakes for better verification rather than prohibition.


šŸ›ļø Crypto as Verification Infrastructure

Blockchains offer primitives AI-native systems will need:

  • Proof of personhood and on-chain attestations to distinguish humans from swarms of agents.
  • Provenance and cryptographic chain of custody for media and data—critical as generated content becomes indistinguishable from reality.
  • Auditability and policy-enforceable rails for identity, data lineage, and agent behavior verification.
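A minimal, off-chain sketch of what cryptographic chain of custody means in practice, using only the standard library. Each entry’s hash commits to the previous one, so altering any record invalidates every later link. The record fields are hypothetical; real systems would anchor these hashes on-chain or use a provenance standard such as C2PA:

```python
import hashlib
import json

def link(prev_hash: str, record: dict) -> str:
    """Hash a record together with the previous entry's hash,
    so tampering with any entry breaks every later link."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list) -> list:
    hashes, prev = [], "0" * 64  # genesis value
    for rec in records:
        prev = link(prev, rec)
        hashes.append(prev)
    return hashes

def verify_chain(records: list, hashes: list) -> bool:
    return build_chain(records) == hashes

# Hypothetical custody records for a media asset:
records = [
    {"actor": "camera-01", "action": "capture", "asset": "img-001"},
    {"actor": "editor-7", "action": "crop", "asset": "img-001"},
]
chain = build_chain(records)
assert verify_chain(records, chain)
records[0]["actor"] = "forged"
assert not verify_chain(records, chain)  # tampering is detected
```

The design choice worth noting: verification is cheap (recompute and compare hashes) even though forging a consistent history is not, which is the property an AI-saturated content economy needs.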

🧭 What to Do Now: Playbooks

For Individuals

  • Automate yourself first. Systematically delegate measurable tasks to agents. Reserve time for verification, intent-setting, and non-measurable work.
  • Pick a quadrant (or two) to climb:
    • Liability Underwriter: Deepen domain expertise; build a verifiable track record; package judgment as a service.
    • Director: Practice swarm orchestration—prompt hierarchies, harnesses, KPIs, course-correction under uncertainty.
    • Meaning Maker: Build community, narrative, and cultural resonance where metrics can’t fully capture value.
  • Compress the junior loop. Use agents to prototype, ship, and learn rapidly; replace passive apprenticeship with active experimentation.

For Companies

  • Build verification moats. Invest in harnesses, red-teaming, interpretability, and expert-in-the-loop workflows.
  • Productize accountability. Expect Liability-as-a-Service models—insurance and guarantees that underwrite agentic output and consequences.
  • Own proprietary ground truth. Fresh, high-integrity data—and the telemetry around how it’s used—becomes a defensible edge for agent performance and insurability.

For Investors

  • Back verification infrastructure. Tooling for expert verifiers, risk telemetry, policy engines, provenance, and identity.
  • Favor proprietary data and feedback loops. Businesses that see the truth first—and learn from agent behavior in the wild—compound.
  • Lean into the non-measurable and deep tech. Where metrics are absent, capital can still price discovery, not just optimization.

ā±ļø The Next 12 Months: A Focused Checklist

  • Map tasks by measurability and verification difficulty. Automate the former; design guardrails for the latter.
  • Stand up a verification stack: multi-model checking, expert-on-call, provenance logging, and incident response.
  • Start collecting ground truth and telemetry today. Tomorrow’s moats depend on it.
  • Adopt agent orchestration as a core competency—prompting, role hierarchies, evaluation, and escalation.
  • Integrate crypto-based identity/provenance where content authenticity and access control matter.
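One piece of that verification stack, multi-model checking, can be sketched as a simple quorum vote: query independent models and ship only on agreement, escalating disagreements to a human. The `multi_model_check` helper and the stand-in model callables are hypothetical; a production version would add semantic answer matching, provenance logging, and an escalation policy:

```python
from collections import Counter

def multi_model_check(prompt, models, quorum=2):
    """Query several independent models and accept an answer only
    when at least `quorum` of them agree; otherwise escalate.
    `models` is a list of callables: prompt -> answer string."""
    answers = [m(prompt) for m in models]
    best, votes = Counter(answers).most_common(1)[0]
    if votes >= quorum:
        return {"verified": True, "answer": best, "votes": votes}
    return {"verified": False, "answers": answers}  # route to a human verifier

# Hypothetical stand-ins for real model clients:
models = [lambda p: "42", lambda p: "42", lambda p: "41"]
result = multi_model_check("What is 6 * 7?", models)
# result["verified"] is True with answer "42" (2 of 3 agree)
```

Exact-match voting only works for short, well-posed outputs; the broader lesson stands regardless: verification capacity is something you architect and budget for, not something you get for free.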

Closing Notes

ā€œWhat matters is the consensus among humans on what should be worth something.ā€

This economy favors those who can coordinate meaning, underwrite risk, and direct swarms through uncertainty. Intelligence is cheap; verified intelligence is what markets buy.

Q: ā€œDo you think this is going to go well for humanity?ā€
A: ā€œAbsolutely.ā€
