Part V: Gods
Reframing Alignment
Introduction
Standard alignment: "Make AI do what humans want."
Reframed: "What agentic systems are we instantiating, at what scale, with what viability manifolds?"
Genuine alignment must therefore address multiple scales simultaneously:
- Individual AI scale: System does what operators intend
- AI ecosystem scale: Multiple AI systems interact without pathological emergent dynamics
- AI-human hybrid scale: AI + human systems don't form parasitic patterns
- Superorganism scale: Emergent agentic patterns from AI + humans + institutions have aligned viability
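To make the multi-scale framing concrete, here is a minimal sketch in Python. The names AlignmentScale, AlignmentAssessment, and is_genuinely_aligned are illustrative assumptions, not anything defined in the text; the point the sketch encodes is that genuine alignment is a conjunction across all four scales, so an unexamined scale counts as a failure rather than a pass.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AlignmentScale(Enum):
    """The four scales at which alignment must hold simultaneously (illustrative)."""
    INDIVIDUAL_AI = auto()    # system does what operators intend
    AI_ECOSYSTEM = auto()     # no pathological emergent dynamics between AI systems
    AI_HUMAN_HYBRID = auto()  # AI + human systems avoid parasitic patterns
    SUPERORGANISM = auto()    # AI + humans + institutions have aligned viability


@dataclass
class AlignmentAssessment:
    """A judgment of alignment at one scale, with a short rationale."""
    scale: AlignmentScale
    aligned: bool
    rationale: str


def is_genuinely_aligned(assessments: list[AlignmentAssessment]) -> bool:
    """Genuine alignment requires every scale to be covered and to pass."""
    covered = {a.scale for a in assessments}
    if covered != set(AlignmentScale):
        return False  # a missing scale is an unexamined failure mode, not a pass
    return all(a.aligned for a in assessments)
```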
A superorganism, including an AI-substrate superorganism, is well-designed if it exhibits:
- Aligned viability: Its conditions for persistence are compatible with the flourishing of the humans and institutions that compose it
- Error correction: Updates beliefs on evidence
- Bounded growth: Does not metastasize beyond appropriate scale
- Graceful death: Can dissolve when no longer beneficial
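The four criteria can likewise be read as a conjunction of checks, none of which can be waived. Below is a minimal sketch, assuming hypothetical predicate functions supplied by whoever is evaluating a particular superorganism; SuperorganismDesignCheck and its field names are illustrative, not part of the text.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class SuperorganismDesignCheck:
    """Hypothetical checklist for the four design criteria named above."""
    aligned_viability: Callable[[], bool]  # persistence compatible with its members' flourishing
    error_correction: Callable[[], bool]   # updates beliefs on evidence
    bounded_growth: Callable[[], bool]     # does not metastasize beyond appropriate scale
    graceful_death: Callable[[], bool]     # can dissolve when no longer beneficial

    def is_well_designed(self) -> tuple[bool, list[str]]:
        """Return the overall verdict plus the names of any failed criteria."""
        failures = [
            name for name, check in (
                ("aligned_viability", self.aligned_viability),
                ("error_correction", self.error_correction),
                ("bounded_growth", self.bounded_growth),
                ("graceful_death", self.graceful_death),
            )
            if not check()
        ]
        return (not failures, failures)
```

Failing any single criterion, for example bounded growth, makes the overall verdict negative, which is the force of listing the criteria as separate conditions rather than folding them into one score.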