Structured workflows made of multiple stages, not one giant prompt.
skill_organism(...) is the current practical runtime for building one coherent workflow out of multiple stages.
A lot of real multi-agent workflows are not “many agents everywhere.” They look more like this: normalize input, route cheaply, send ambiguous work to a stronger model, and attach telemetry or review without rewriting the workflow.
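That shape can be sketched without the library at all. The stage names and the word-count routing rule below are illustrative stand-ins, not part of any real API:

```python
# Hypothetical sketch of the normalize -> route -> escalate shape;
# nothing here is the library's actual API.

def normalize(text: str) -> str:
    """Deterministic intake: trim and lowercase."""
    return text.strip().lower()

def route(text: str) -> str:
    """Cheap router: short, unambiguous input takes the fast path."""
    return "fast" if len(text.split()) < 10 else "deep"

def run(text: str) -> dict:
    shared_state = {}  # lightweight orchestration data, as described below
    shared_state["input"] = normalize(text)
    shared_state["route"] = route(shared_state["input"])
    # A real organism would now dispatch to a fast or deep nucleus;
    # here we only record the routing decision.
    return shared_state

print(run("Refund order #1234")["route"])  # → fast
```

The point is the separation: deterministic intake, a cheap routing decision, and an escalation path, each swappable without touching the others.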
Core building blocks: SkillStage, SkillOrganism, TelemetryProbe, SubstrateView. Stages can be deterministic handlers or provider-bound agents using a fast nucleus or a deep nucleus.
Stages within an organism have access to layered context, each layer with a different lifetime and mutability:

- shared_state dictionary. Carries routing hints, counters, and stage outputs. Mutable, not historically reconstructible. Still useful for lightweight orchestration data: routing labels, counters, temporary outputs.
- BiTemporalMemory substrate. Carries durable factual knowledge with dual time axes. Append-only and fully auditable.
When you pass substrate=BiTemporalMemory() to skill_organism(...), stages can read facts via read_query and write facts via fact_extractor or emit_output_fact. Handlers receive a frozen SubstrateView(facts, query, record_time) rather than the raw memory instance, keeping them decoupled from memory internals.
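The frozen view can be pictured as an immutable dataclass. The field names (facts, query, record_time) come from the text above; the rest of this sketch is illustrative:

```python
# Minimal stand-in for the frozen SubstrateView handed to handlers;
# only the three field names are taken from the docs, everything else is assumed.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)          # frozen: handlers cannot mutate the view
class SubstrateView:
    facts: tuple                 # facts matching the stage's read_query
    query: str                   # the subject the view was built for
    record_time: datetime        # when the snapshot was taken

view = SubstrateView(
    facts=(("task", "intake", "normalized"),),
    query="task",
    record_time=datetime.now(timezone.utc),
)
# view.facts = ()  # would raise FrozenInstanceError: the view is read-only
```

Freezing the view is what keeps handlers decoupled: they can read the snapshot but cannot reach into, or mutate, the memory behind it.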
| SkillStage field | Purpose |
|---|---|
| read_query | Subject string or callable → SubstrateView injected before the stage runs |
| fact_extractor | Callable → emits assert/correct/invalidate events after the stage runs |
| emit_output_fact | Convenience flag: auto-records (task, stage.name, output) |
| fact_tags | Default tags applied to all facts emitted by this stage |
This enables the audit question: “what did the organism know when stage X made its decision?” — answered via retrieve_belief_state() on the append-only history.
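The reconstruction idea can be sketched independently of the library. This retrieve_belief_state is a stand-in that replays an append-only event log up to a record-time cutoff (simplified here to integer ticks), not the library's implementation:

```python
# Illustrative append-only log with a record-time axis; the event kinds
# (assert/correct/invalidate) follow the table above, the rest is assumed.
from dataclasses import dataclass

@dataclass(frozen=True)
class FactEvent:
    record_time: int      # when the organism learned it (simplified to a tick)
    kind: str             # "assert", "correct", or "invalidate"
    subject: str
    value: str

log = [
    FactEvent(1, "assert", "customer_tier", "silver"),
    FactEvent(3, "correct", "customer_tier", "gold"),   # learned later
]

def retrieve_belief_state(log: list, as_of: int) -> dict:
    """Replay events recorded at or before `as_of`."""
    beliefs = {}
    for ev in sorted(log, key=lambda e: e.record_time):
        if ev.record_time > as_of:
            break
        if ev.kind == "invalidate":
            beliefs.pop(ev.subject, None)
        else:
            beliefs[ev.subject] = ev.value
    return beliefs

print(retrieve_belief_state(log, as_of=2))  # {'customer_tier': 'silver'}
print(retrieve_belief_state(log, as_of=3))  # {'customer_tier': 'gold'}
```

Because the log is append-only, the correction at tick 3 never overwrites history: asking "what did stage X know at tick 2?" still returns the belief held at the time.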
Since v0.22, stages can declare a CognitiveMode: OBSERVATIONAL (System A — passive sensing) or ACTION_ORIENTED (System B — active decision-making). When not set, the mode is inferred from the existing mode field. The watcher detects mismatches and reports A/B balance via mode_balance().
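A minimal sketch of what an A/B balance might compute. The enum members match the text; the counting logic and the stage mapping are guesses at the idea, not the library's mode_balance():

```python
# Stand-in CognitiveMode enum and balance computation; only the member
# names OBSERVATIONAL / ACTION_ORIENTED come from the docs.
from collections import Counter
from enum import Enum

class CognitiveMode(Enum):
    OBSERVATIONAL = "A"     # System A: passive sensing
    ACTION_ORIENTED = "B"   # System B: active decision-making

stages = {
    "intake": CognitiveMode.OBSERVATIONAL,
    "route": CognitiveMode.ACTION_ORIENTED,
    "plan": CognitiveMode.ACTION_ORIENTED,
}

def mode_balance(stages: dict) -> dict:
    """Fraction of stages in each cognitive mode."""
    counts = Counter(mode.value for mode in stages.values())
    total = sum(counts.values())
    return {m: counts[m] / total for m in ("A", "B")}

print(mode_balance(stages))
```

A watcher could then flag an organism that is all action and no sensing, or vice versa, without inspecting any stage's internals.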
For cross-cutting concerns you do not want to hardcode: telemetry (TelemetryProbe), runtime monitoring (WatcherComponent — classifies signals as epistemic/somatic/species-specific, can retry/escalate/halt), review and safety policies, and custom lifecycle hooks via SkillRuntimeComponent.
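The hook pattern behind these components can be sketched as follows. The hook names before_stage/after_stage are assumptions for illustration, not the SkillRuntimeComponent interface:

```python
# Hypothetical telemetry hook in the spirit of TelemetryProbe; the
# before_stage/after_stage method names are assumed, not documented.
import time

class TimingProbe:
    """Records per-stage wall-clock durations without touching stage logic."""
    def __init__(self):
        self.durations = {}
        self._starts = {}

    def before_stage(self, name: str) -> None:
        self._starts[name] = time.perf_counter()

    def after_stage(self, name: str) -> None:
        self.durations[name] = time.perf_counter() - self._starts.pop(name)

probe = TimingProbe()
probe.before_stage("intake")
probe.after_stage("intake")
print(sorted(probe.durations))  # ['intake']
```

The workflow definition stays unchanged; attaching or removing the probe is purely additive, which is the point of keeping cross-cutting concerns out of the stages.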
- 68_skill_organism_runtime.py — deterministic intake, fast routing, deep planning, attached telemetry
- 71_bitemporal_skill_organism.py — multi-stage workflow with bi-temporal substrate, belief-state reconstruction, and temporal diffs
- 73_watcher_component.py — runtime monitoring with signal classification and retry/escalate/halt interventions
- 76_cognitive_modes.py — System A/B cognitive mode annotations with watcher mode balance