Operon v0.22: The Cognitive Architecture
Cognitive Modes, Sleep Consolidation, Social Learning, Curiosity, and What Happens When Agents Start Dreaming
Release: v0.22.0 – v0.22.1

v0.21 gave Operon adaptive structure: pattern libraries, watchers, and the feedback loop that connects them. v0.22 goes further into the cognitive architecture that Dupoux, LeCun, and Malik describe as the precondition for autonomous learning. Stages now declare whether they are System A (observational) or System B (action-oriented). A sleep consolidation cycle replays successful patterns, compresses them into templates, runs counterfactual analysis over corrected facts, and promotes histone marks from temporary to permanent. Organisms can share templates across libraries with trust-weighted adoption. And the watcher gains curiosity signals that trigger escalation when the agent encounters genuinely novel territory. This is the release where Operon starts dreaming, socializing, and getting curious.
1. Cognitive Modes: System A/B as Stage Annotations
Operon’s fast/deep nucleus distinction started as cost
optimization: route cheap work to a small model, expensive work to
a large one. v0.22 reframes this as a cognitive architecture principle.
The CognitiveMode enum classifies stages as
OBSERVATIONAL (System A — passive sensing,
information gathering) or ACTION_ORIENTED
(System B — active decision-making, goal pursuit).
```python
from operon_ai import CognitiveMode, SkillStage

SkillStage(name="classifier", role="Router", mode="fast",
           cognitive_mode=CognitiveMode.OBSERVATIONAL)
SkillStage(name="planner", role="Strategist", mode="deep",
           cognitive_mode=CognitiveMode.ACTION_ORIENTED)
```
When not set, cognitive mode is inferred from the existing
mode field (fast → observational, deep →
action-oriented). The watcher detects mismatches between declared
mode and actual execution model, and mode_balance()
reports the System A/B distribution across a run.
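The inference fallback and `mode_balance()` report can be sketched in a few lines. This is a standalone illustration, not the library's implementation; the enum values and the dict-based stage representation are assumptions.

```python
from collections import Counter
from enum import Enum

class CognitiveMode(Enum):
    OBSERVATIONAL = "observational"      # System A: passive sensing
    ACTION_ORIENTED = "action_oriented"  # System B: goal pursuit

def infer_cognitive_mode(mode: str) -> CognitiveMode:
    """Fallback when a stage does not declare a cognitive mode:
    fast -> observational, deep -> action-oriented."""
    return (CognitiveMode.OBSERVATIONAL if mode == "fast"
            else CognitiveMode.ACTION_ORIENTED)

def mode_balance(stages) -> dict:
    """Fraction of stages in each cognitive mode across a run."""
    counts = Counter(
        s.get("cognitive_mode") or infer_cognitive_mode(s["mode"])
        for s in stages
    )
    total = sum(counts.values())
    return {m.value: counts[m] / total for m in CognitiveMode}

stages = [
    {"mode": "fast", "cognitive_mode": None},   # inferred observational
    {"mode": "deep", "cognitive_mode": None},   # inferred action-oriented
    {"mode": "fast", "cognitive_mode": CognitiveMode.ACTION_ORIENTED},
]
print(mode_balance(stages))  # one third observational, two thirds action-oriented
```

A declared `cognitive_mode` always wins over inference, which is what lets the watcher flag mismatches between declaration and the execution model actually used.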
2. Sleep Consolidation: When the Organism Dreams
The SleepConsolidation class composes five existing
components — AutophagyDaemon,
PatternLibrary, EpisodicMemory,
HistoneStore, and BiTemporalMemory
— into a five-step post-batch consolidation cycle:
- Prune stale context via autophagy
- Replay successful run records into episodic memory with tier promotion (WORKING → EPISODIC)
- Compress recurring high-success patterns into consolidated PatternTemplate instances
- Counterfactual replay: diff bi-temporal corrections since each run and ask "would the outcome have changed with updated facts?"
- Promote frequently accessed ACETYLATION histone marks to permanent METHYLATION
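The five steps can be sketched as a single standalone cycle. This is a minimal illustration under assumed data shapes (the `stale` flag, `facts_used` dict, and `promote_after` count are all inventions here, not the library's API); the real class composes the five subsystems named above.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class RunRecord:
    pattern: str      # which library pattern the run used
    success: bool
    facts_used: dict  # fact key -> value the run relied on

@dataclass
class SleepConsolidation:
    episodic: list = field(default_factory=list)
    templates: dict = field(default_factory=dict)
    histone_marks: dict = field(default_factory=dict)  # mark -> access count

    def run_cycle(self, context, runs, corrections,
                  min_occurrences=2, promote_after=3):
        # 1. Autophagy stand-in: prune context entries flagged as stale
        kept = [c for c in context if not c.get("stale")]
        # 2. Replay: promote successful runs from WORKING to EPISODIC
        self.episodic.extend(r for r in runs if r.success)
        # 3. Compress: recurring successful patterns become templates
        freq = Counter(r.pattern for r in runs if r.success)
        for pattern, n in freq.items():
            if n >= min_occurrences:
                self.templates[pattern] = {"occurrences": n}
        # 4. Counterfactual replay: flag runs whose facts were later corrected
        flagged = [r for r in runs
                   if any(corrections.get(k, v) != v
                          for k, v in r.facts_used.items())]
        # 5. Promote frequently accessed temporary marks to permanent
        promoted = [m for m, hits in self.histone_marks.items()
                    if hits >= promote_after]
        return {"kept_context": kept, "templates": sorted(self.templates),
                "counterfactuals": len(flagged), "promoted_marks": promoted}

runs = [RunRecord("summarize", True, {"q": 1}),
        RunRecord("summarize", True, {"q": 1}),
        RunRecord("route", False, {"r": 2})]
sc = SleepConsolidation(histone_marks={"ACETYL:tone": 4, "ACETYL:tmp": 1})
report = sc.run_cycle(context=[{"stale": True}, {"id": 7}],
                      runs=runs, corrections={"q": 5})
```

Here both "summarize" runs relied on a fact (`q`) that was later corrected, so both are flagged for counterfactual review, while "summarize" itself is frequent and successful enough to become a template.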
Why Consolidation Matters
Without consolidation, the organism accumulates raw experience without distilling it. Pattern libraries grow but don’t sharpen. Episodic memories decay without promotion. Histone marks expire before they can become permanent lessons. The sleep cycle is the mechanism that converts operational history into structural knowledge — exactly what Dupoux et al. describe as imagination-based learning during rest.
3. Social Learning: Horizontal Gene Transfer
The SocialLearning module enables cross-organism
template sharing, following the biological analogy of horizontal
gene transfer in bacteria. Organism A exports successful
templates; Organism B imports them with trust-weighted
adoption.
```python
from operon_ai import SocialLearning, PatternLibrary

sl_a = SocialLearning(organism_id="A", library=lib_a)
exchange = sl_a.export_templates(min_success_rate=0.6)

sl_b = SocialLearning(organism_id="B", library=lib_b)
result = sl_b.import_from_peer(exchange)
# result.adopted_template_ids: which templates B adopted
# result.trust_score_used: trust B has for A
```
Epistemic Vigilance
The TrustRegistry implements epistemic vigilance:
per-peer trust scores are updated via exponential moving average
over adoption outcomes. When an imported template succeeds, trust
in the source peer increases. When it fails, trust decreases.
Trust below a configurable threshold blocks adoption entirely.
This is not blind imitation. It is calibrated trust — the organism learns which peers provide useful templates and which provide noise, and adjusts its adoption behavior accordingly. Provenance tracking traces every adopted template back to its source peer, closing the feedback loop.
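The trust update itself is a plain exponential moving average over adoption outcomes. A minimal sketch, assuming illustrative values for the smoothing factor, initial trust, and blocking threshold (none of which the post specifies):

```python
class TrustRegistry:
    """Epistemic vigilance sketch: EMA trust per peer, with a floor
    below which adoption is blocked. All constants are assumptions."""

    def __init__(self, alpha=0.3, block_threshold=0.2, initial=0.5):
        self.alpha = alpha                    # EMA smoothing factor
        self.block_threshold = block_threshold
        self.initial = initial                # trust for unknown peers
        self._trust = {}

    def trust(self, peer: str) -> float:
        return self._trust.get(peer, self.initial)

    def record_outcome(self, peer: str, success: bool) -> float:
        # Success pulls trust toward 1.0, failure toward 0.0
        prev = self.trust(peer)
        target = 1.0 if success else 0.0
        self._trust[peer] = (1 - self.alpha) * prev + self.alpha * target
        return self._trust[peer]

    def may_adopt(self, peer: str) -> bool:
        return self.trust(peer) >= self.block_threshold

reg = TrustRegistry()
reg.record_outcome("A", True)   # 0.5 -> 0.65
reg.record_outcome("A", False)  # 0.65 -> 0.455
```

With these constants a peer needs a sustained run of failures before `may_adopt` flips to false, so one bad template does not sever an otherwise reliable relationship.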
4. Curiosity Signals: Intrinsic Motivation
The watcher gains a new signal source: curiosity. When the
EpiplexityMonitor detects EXPLORING status (high
embedding novelty — the agent is encountering genuinely
unfamiliar territory), the watcher emits a curiosity signal.
If the signal value exceeds curiosity_escalation_threshold
and the stage uses a fast model, the watcher recommends ESCALATE
to engage the deep model for more thorough investigation.
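The escalation rule is small enough to sketch directly. The function below is illustrative, not the watcher's API: the `novelty` input stands in for the EpiplexityMonitor's embedding-novelty score, and the threshold default is an assumption (only the config key name comes from the post).

```python
def curiosity_recommendation(novelty: float, stage_mode: str,
                             curiosity_escalation_threshold: float = 0.7):
    """Emit a curiosity signal; recommend ESCALATE for novel inputs
    that a fast-model stage is about to process."""
    signal = {"source": "curiosity",   # new source within the epistemic category
              "category": "epistemic",
              "value": novelty}
    if novelty > curiosity_escalation_threshold and stage_mode == "fast":
        return signal, "ESCALATE"
    return signal, None  # deep stages and familiar inputs proceed as-is

sig, rec = curiosity_recommendation(0.9, "fast")   # novel + fast -> ESCALATE
_, rec2 = curiosity_recommendation(0.9, "deep")    # deep already engaged -> None
```

Note the asymmetry: high novelty on a deep stage produces the signal but no recommendation, since there is no stronger model to escalate to.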
This operationalizes intrinsic motivation: the organism actively
seeks deeper understanding of novel inputs rather than processing
them with cheap statistical pattern matching. The curiosity signal
is an epistemic signal (same category as epiplexity) with
source="curiosity", keeping the three-category
taxonomy intact while adding a new dimension within the epistemic
category.
5. Validation
| Suite | Tests | Status |
|---|---|---|
| Cognitive mode (v0.22.0) | 9 | All pass |
| Sleep consolidation (v0.22.0) | 9 | All pass |
| Social learning (v0.22.1) | 20 | All pass |
| Curiosity signals (v0.22.1) | 10 | All pass |
| Full regression suite | 1101 | All pass |
Key files: operon_ai/healing/consolidation.py, operon_ai/coordination/social_learning.py, operon_ai/patterns/watcher.py, operon_ai/patterns/types.py, examples/76–79
6. What Comes Next
v0.22 delivers the cognitive architecture layer. Two phases remain on the roadmap: Phase 7 (developmental staging — critical periods, capability gating, telomere-based maturation) and Phase 8 (release integration — cross-subsystem tests, large-scope bi-temporal adapters, paper polish, publication-grade eval runs).
The progression is now six layers deep: structure (v0.17–0.18) → memory (v0.19–0.20) → adaptation (v0.21) → cognition (v0.22) → development (v0.23) → publication (v0.23.x). Each layer assumes the previous one is stable. The cognitive extensions — dreaming, socializing, getting curious — are only possible because the adaptive loop, the pattern library, and the watcher were already in place.
Code and release: github.com/coredipper/operon, operon-ai on PyPI, consolidation space, social learning space