TLDR — We don't need to wait for artificial superintelligence to worry about alignment. The global economy is already a superintelligence: a system that processes information, allocates resources, and optimizes objectives far beyond any individual's comprehension or control. The problem is that it's misaligned — optimizing for GDP growth, shareholder returns, and short-term profit while externalizing ecological destruction, inequality, and social fragmentation. The AI alignment research community has developed frameworks — reward hacking, goal misgeneralization, corrigibility — that map directly onto economic pathologies. The fix isn't to tear down the system; it's to redesign its objective functions using the same mechanism design tools we're building for onchain coordination.
The Economy as Superintelligence
When AI researchers describe superintelligence, they describe a system that:
- Processes information at a scale no individual can comprehend
- Optimizes relentlessly toward its objective function
- Resists being turned off or redirected
- Produces outcomes that weren't intended by any of its designers
This describes the global economy precisely. No one designed it to destroy rainforests, acidify oceans, or concentrate wealth in the hands of a few thousand billionaires while billions live in poverty. But these are the outputs of its optimization process. The economy "wants" to maximize returns on capital the same way a misaligned AI "wants" to maximize its reward signal — not because it has desires, but because its structure relentlessly selects for behaviors that produce those outcomes.
The analogy is not metaphorical. It's structural. And it suggests that the frameworks developed for AI alignment — which are fundamentally about ensuring powerful optimization systems pursue objectives aligned with human values — can be applied to economic design.
Alignment Failures in the Current System
AI alignment researchers have identified several failure modes. Each has a direct economic parallel:
Reward hacking. An AI system finds unintended ways to maximize its reward signal without achieving the intended goal. Economic parallel: corporations maximize shareholder value by externalizing costs — pollution, precarity, infrastructure degradation — rather than by creating genuine value. The metric goes up, but the thing the metric was supposed to measure goes down.
Goal misgeneralization. An AI learns to pursue a proxy goal that correlates with the true goal in the training environment but diverges in deployment. Economic parallel: GDP was designed as a rough proxy for economic activity, but the proxy has become the goal itself. A country that clearcuts its forests registers the logging activity, the timber sale, and the cleanup spending as three separate boosts to GDP, even as it destroys its actual wealth.
Specification gaming. The system satisfies the letter of its objective while violating its spirit. Economic parallel: tax optimization, regulatory arbitrage, greenwashing. Corporations technically comply with regulations while structuring their activities to avoid the regulations' intended effects.
Corrigibility failure. A sufficiently powerful system resists being corrected or shut down because correction threatens its objective. Economic parallel: industries that have captured their regulators, lobbied to block legislation, or structured themselves as "too big to fail." Fossil fuel companies spending billions to delay climate action is a corrigibility failure — the system resisting realignment.
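The GDP case of goal misgeneralization can be made concrete with a toy ledger. All numbers here are invented for illustration; the point is only the direction of the divergence:

```python
# Toy ledger for the clearcut example: three transactions register as
# positive GDP while the underlying asset is destroyed. Invented figures.

forest_asset_value = 100  # standing forest as natural capital, arbitrary units

gdp_entries = {
    "logging activity": 20,   # wages and equipment spending
    "timber sale": 50,        # market value of the timber
    "cleanup spending": 10,   # erosion control, replanting contracts
}

gdp_boost = sum(gdp_entries.values())
# National wealth: lose the forest, keep the timber revenue.
wealth_change = -forest_asset_value + gdp_entries["timber sale"]

print(f"GDP boost: +{gdp_boost}")        # proxy metric rises
print(f"Wealth change: {wealth_change}")  # true goal falls
```

The proxy rises while the asset base shrinks, which is exactly the divergence goal misgeneralization describes.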
Redesigning the Objective Function
If the economy is a misaligned superintelligence, the solution is not to destroy it (collapse) or to hand control to a central authority (authoritarianism). It's to realign it — to change its objective functions so that the system's relentless optimization drives it toward outcomes we actually want.
This is mechanism design. And it's exactly what the onchain coordination ecosystem has been building, albeit at small scale.
Internalizing externalities. Quadratic funding works by letting public preferences directly shape capital allocation: the system "sees" what people value, not just what generates financial returns. Harberger taxes require holders to self-assess their assets, pay tax on that valuation, and stand ready to sell at it, preventing speculative hoarding. Carbon pricing makes the cost of pollution visible to the optimization process.
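As a sketch of how one of these mechanisms surfaces public preferences, here is the core quadratic funding calculation: a project's raw matching weight is the square of the sum of the square roots of its contributions. Project names, donation amounts, and the pool size below are invented; real deployments such as Gitcoin layer caps and Sybil resistance on top of this:

```python
from math import sqrt

def qf_match_weights(projects: dict[str, list[float]]) -> dict[str, float]:
    """Raw quadratic funding weight per project: (sum of sqrt(c_i))^2.

    Many small donors outweigh one large donor of the same total, so the
    allocation tracks breadth of support rather than concentration of capital.
    """
    return {name: sum(sqrt(c) for c in contribs) ** 2
            for name, contribs in projects.items()}

def allocate_matching_pool(projects: dict[str, list[float]],
                           pool: float) -> dict[str, float]:
    """Split a fixed matching pool in proportion to raw QF weights
    (the normalized variant used when the pool is capped)."""
    weights = qf_match_weights(projects)
    total = sum(weights.values())
    return {name: pool * w / total for name, w in weights.items()}

# Hypothetical round: identical donation totals, very different breadth.
projects = {
    "open_source_library": [1.0] * 100,  # 100 donors giving 1 each
    "niche_tool":          [100.0],      # 1 donor giving 100
}
print(allocate_matching_pool(projects, pool=1000))
```

Both projects raised 100, but the broadly supported one captures nearly all of the match; this is the sense in which the mechanism "sees" what people value.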
Aligning incentives with outcomes. Retroactive funding pays for results, not promises — realigning the reward signal with actual value creation. Impact certificates create markets for verified positive outcomes, turning "doing good" into a legible economic signal the system can optimize for.
Pluralistic objective functions. The deepest alignment failure in the current economy is monoculture: a single objective function (profit maximization) applied everywhere. Real human values are plural — we value beauty, community, ecological health, meaning, autonomy, care. Mechanism pluralism means deploying different coordination mechanisms for different contexts, so the system optimizes for a richer set of objectives. Quadratic funding for public goods. Retroactive funding for innovation. Participatory budgeting for local priorities. Conviction voting for long-term investments.
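Of the mechanisms listed, conviction voting is the most explicitly temporal: support for a proposal accumulates under exponential decay, so sustained preference counts for more than a momentary spike of capital. A minimal sketch, assuming the common one-parameter accumulator used in onchain implementations; the stake, decay rate, and step count are illustrative:

```python
def conviction_series(stake: float, alpha: float = 0.9,
                      steps: int = 50) -> list[float]:
    """Conviction voting accumulator: conv_{t+1} = alpha * conv_t + stake.

    Conviction grows toward the steady state stake / (1 - alpha), so tokens
    staked for a long time express more support than the same tokens
    staked briefly.
    """
    conv, series = 0.0, []
    for _ in range(steps):
        conv = alpha * conv + stake
        series.append(conv)
    return series

series = conviction_series(stake=10)
print(round(series[-1], 2))  # approaches the steady state of 100
```

A proposal passes once accumulated conviction crosses a threshold, which is why this mechanism suits long-term investments: brief bursts of capital cannot clear the bar.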
Corrigibility by design. Onchain systems are uniquely positioned to be corrigible — their rules are transparent, forkable, and amendable through governance. Unlike legacy economic institutions that resist change through regulatory capture and institutional inertia, protocol-based systems can be upgraded. The governance mechanisms themselves — futarchy, quadratic voting, sortition — are experiments in making the economic system responsive to correction.
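Quadratic voting, one of the governance mechanisms mentioned above, has a one-line core rule: casting n votes on an issue costs n^2 credits, so intensity of preference is expressible but progressively expensive. A sketch with invented voters, issues, and a hypothetical 100-credit budget:

```python
def qv_cost(votes: int) -> int:
    """Quadratic voting: casting n votes (for or against) costs n^2 credits."""
    return votes ** 2

def tally(ballots: dict[str, dict[str, int]], budget: int = 100) -> dict[str, int]:
    """Sum votes per issue, rejecting ballots that exceed the credit budget.
    Voter names and the 100-credit budget are illustrative."""
    totals: dict[str, int] = {}
    for voter, choices in ballots.items():
        spent = sum(qv_cost(v) for v in choices.values())
        if spent > budget:
            raise ValueError(f"{voter} overspent: {spent} > {budget}")
        for issue, v in choices.items():
            totals[issue] = totals.get(issue, 0) + v
    return totals

ballots = {
    "alice": {"upgrade_protocol": 9},                      # 81 credits
    "bob":   {"upgrade_protocol": -5, "fund_audits": 5},   # 25 + 25 credits
    "carol": {"fund_audits": 7},                           # 49 credits
}
print(tally(ballots))
```

The quadratic cost makes a single voter's ninth vote nineteen times more expensive than their first, which is what keeps concentrated wealth from simply buying outcomes.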
The Political Economy Question
Realigning the economy is not just a technical challenge. It's a political one. The current system has beneficiaries — actors whose wealth and power depend on the existing objective function. They will resist realignment just as a misaligned AI would resist correction.
This means the redesign cannot happen purely at the protocol level. It requires:
Legitimacy. New mechanisms must be seen as fair and representative, not as technocratic impositions. This is why democratic input mechanisms (quadratic voting, participatory budgeting) matter as much as efficiency mechanisms.
Transition paths. The current system can't be swapped out overnight. The design challenge is creating new coordination mechanisms that can operate alongside existing institutions, demonstrate their value at small scale, and gradually expand as they build trust.
Coalition building. Alignment is a coordination problem. The actors who want a better system need mechanisms to find each other, pool resources, and act collectively. This is where onchain coordination infrastructure becomes not just a tool for the future, but a tool for the transition.
From Alignment Theory to Alignment Practice
The Ethereum public goods funding ecosystem is, whether it knows it or not, running alignment experiments. Every Gitcoin Grants round is a test of whether quadratic funding aligns capital allocation with community values. Every retroactive funding program tests whether we can create reward signals for genuine impact. Every governance token experiment tests corrigibility.
These experiments are small relative to the global economy. But they are generating real data about what works — which mechanisms actually produce aligned outcomes, which are susceptible to gaming, and which can scale. This data is the foundation for redesigning economic systems at larger scale.
The economy is already a superintelligence. The question is not whether to align it, but how quickly we can develop and deploy the mechanisms to do so.