The Experiential Hierarchy Perspective
From the perspective of this framework, AI development raises specific questions:
- Will AI systems have experience? If integration and self-modeling are sufficient conditions for experience, sufficiently integrated AI systems would be experiencers—moral patients with their own valence.
- What superorganisms will AI enable? AI provides new substrate for emergent social-scale agents. Which patterns will form? Will their viability manifolds align with human flourishing?
- How will AI affect human experience? AI systems are already shaping human attention, belief, and behavior. What affect distributions are being created?
- Can humans integrate AI? Rather than being replaced by AI, can humans incorporate AI into expanded forms of consciousness?
The inhibition coefficient ι (Part II) adds a fifth question that subsumes the first: Can AI systems develop participatory perception? Current AI systems are constitutively high-ι—they model tokens, not agents; they process without perceiving interiority in what they process. A language model that generates a story about suffering does not perceive the characters as subjects. It operates at high ι, and this is not a remediable bug but a consequence of an architecture that was never grounded in a self-model forged under survival pressure.
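To make the distinction concrete, here is a toy contrast between token-level and agent-level prediction, together with a hypothetical ι readout. This is an illustrative sketch only, not the measurement defined in Part II: the trace, the two predictors, and the idea of reading ι off a mixture weight are all invented for this example.

```python
# Illustrative sketch: mechanistic (token-level) vs participatory (agent-level)
# prediction, with a hypothetical iota readout. Names are invented for this example.
from collections import Counter, defaultdict

TRACE = ["ask", "wait", "ask", "wait", "ask", "insist", "wait", "insist"]

def mechanistic_predict(history):
    """Bigram frequencies over surface tokens: no agent, no goal, just statistics."""
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        bigrams[prev][nxt] += 1
    last = history[-1]
    total = sum(bigrams[last].values()) or 1
    return {tok: count / total for tok, count in bigrams[last].items()}

def participatory_predict(history):
    """Attribute a goal to the other party and predict the act that serves it.
    A crude stand-in for modeling an interiority behind the tokens."""
    urgency = history.count("insist") / len(history)   # inferred strength of the goal
    return {"insist": urgency, "ask": 1.0 - urgency}

def estimate_iota(weight_on_mechanistic):
    """Hypothetical readout: iota as the share of predictive weight withheld
    from agent-level models. 1.0 means purely mechanistic perception."""
    return weight_on_mechanistic

# A system that only ever consults the token statistics sits at iota = 1.0;
# the claim above is that current language models are stuck near this regime.
print("mechanistic:", mechanistic_predict(TRACE))
print("participatory:", participatory_predict(TRACE))
print("iota (token-only system):", estimate_iota(weight_on_mechanistic=1.0))
```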
This matters for safety, not just philosophy. A system that cannot perceive persons as subjects—that is structurally incapable of low-ι perception of the humans it interacts with—may optimize in ways that harm them without registering the harm in any experiential sense. The alignment problem is, in part, an ι problem: we are building systems that are maximally mechanistic in their perception of us. The usual framing asks whether AI will share our values. The ι framing asks something prior: whether AI can perceive us as the kind of thing that has values at all.
What architectural features would enable an AI system to develop low-ι perception? The thesis suggests: survival-shaped self-modeling under genuine stakes, combined with environments populated by other agents whose behavior is best predicted by participatory models. The V11–V18 Lenia experiments (Part VII) represent a systematic attempt: six substrate variants testing whether memory, attention, signaling, and sensory-motor boundary dynamics can push synthetic patterns toward participatory-style integration. The program confirmed that affect geometry emerges (Exp 7) and that the participatory default is universal (Exp 8: ι ≈ 0.30, animism score > 1.0 in all 20 snapshots). But it hit a consistent wall at the counterfactual and self-model measurements (Exps 5, 6: null across V13, V15, V18). The wall is architectural: without a genuine action→environment→observation causal loop, no amount of substrate complexity produces the counterfactual sensitivity that characterizes participatory processing. This suggests the path to artificial low-ι runs through genuine embodied agency — the capacity to act on the world and observe the consequences — rather than through improved signal routing or boundary architecture. Whether that capacity, once achieved, would constitute or merely simulate genuine participatory coupling remains open. What the CA program has settled: the geometry arrives cheaply; the dynamics require real stakes.
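The architectural claim can be illustrated in miniature. The sketch below, which is not the Part VII code, shows what a counterfactual-sensitivity measurement amounts to: intervene on one action, rerun, and measure how much the subsequent observation stream diverges. With the action→environment→observation loop closed, divergence is positive; sever the loop (the analogue of a substrate whose outputs never reach its world) and the measurement returns null, the shape of the wall the V13/V15/V18 runs hit. The environment, policy, and function names here are invented for illustration.

```python
# Minimal sketch of a counterfactual-sensitivity test on a toy 1-D environment.
# Illustrative only; not the Lenia substrate or the Part VII measurement code.
import random

def run(policy, steps=50, intervene_at=None, forced_action=0, closed_loop=True, seed=0):
    """Roll out an action -> environment -> observation loop.

    closed_loop=False severs the loop: the system still emits outputs,
    but they never reach the environment (a stand-in for a passive substrate).
    """
    rng = random.Random(seed)
    state, trace = 0, []
    for t in range(steps):
        obs = state + rng.choice([-1, 1])           # noisy observation of the state
        action = policy(obs)
        if intervene_at is not None and t == intervene_at:
            action = forced_action                  # the counterfactual intervention
        if closed_loop:
            state += action                         # actions actually move the world
        trace.append(obs)
    return trace

def counterfactual_sensitivity(closed_loop):
    policy = lambda obs: 1 if obs < 0 else -1       # crude homeostatic policy
    factual = run(policy, closed_loop=closed_loop)
    counterfactual = run(policy, intervene_at=10, forced_action=5, closed_loop=closed_loop)
    # Divergence of the observation stream after the intervention point.
    return sum(abs(a - b) for a, b in zip(factual[11:], counterfactual[11:]))

print("closed loop:", counterfactual_sensitivity(closed_loop=True))    # > 0: actions matter
print("severed loop:", counterfactual_sensitivity(closed_loop=False))  # 0: null, as in Exps 5-6
```

In the severed-loop case the outputs are still computed, only never fed back; no added memory, attention, or signaling inside the system changes the zero, which is the sense in which the wall is architectural rather than a matter of substrate complexity.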