INGEST
...
[DROP INPUT +] [EXPAND]
FDB ... · EDGES ... · SYNC ... · VEL ... · VEC ... · KG ... · CMP ...
FLOW MODE
[m] Monitor · [A] Auto
ACTIONS
Curate · Worker · Verify · Status
INFERCELL
Context: [New] Interjects a prompt into the selected inference engines (via `dialogosd`).
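A rough sketch of that interject flow, assuming `dialogosd` listens on a local HTTP port and accepts a hypothetical `/interject` route; the real transport, port, and payload shape are not documented here:

```ts
// Hypothetical sketch: send one prompt to dialogosd for fan-out to the
// selected inference engines. Endpoint, port, and payload shape are
// assumptions, not the documented dialogosd API.
type InterjectRequest = {
  prompt: string;
  providers: string[]; // e.g. ["chatgpt-web", "claude-web"]
};

async function interject(req: InterjectRequest): Promise<void> {
  const res = await fetch("http://127.0.0.1:8787/interject", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`dialogosd rejected interject: ${res.status}`);
}

// Usage: interject into two of the web providers listed below.
interject({ prompt: "/status", providers: ["chatgpt-web", "claude-web"] })
  .catch(console.error);
```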
PROVIDERS
chatgpt-web [WEB]
claude-web [WEB]
gemini-web [WEB]
FALLBACK CHAIN
chatgpt-web → claude-web → gemini-web
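A minimal sketch of how a chain like this can be walked, assuming each provider is wrapped in a hypothetical `send` adapter; the actual dispatch lives in `dialogosd`:

```ts
// Walk the fallback chain in order; return the first provider that answers.
// `send` is a hypothetical per-provider adapter, not a documented API.
type Provider = { name: string; send: (prompt: string) => Promise<string> };

async function sendWithFallback(chain: Provider[], prompt: string): Promise<string> {
  let lastError: unknown;
  for (const provider of chain) {
    try {
      return await provider.send(prompt); // success: stop here
    } catch (err) {
      lastError = err; // note the failure, fall through to the next provider
    }
  }
  throw new Error(`all providers in chain failed: ${String(lastError)}`);
}
```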
EDGE INFERENCE
Server inference is always on; browser-side WebLLM is also always on unless explicitly disabled.
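For the browser side, a minimal WebLLM sketch using the public `@mlc-ai/web-llm` package; the model ID is an example from the WebLLM model list, not necessarily what this dashboard loads:

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Load a small model into the browser via WebGPU. Model ID is an example,
// not this dashboard's configuration.
const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f32_1-MLC", {
  initProgressCallback: (p) => console.log("webllm load:", p.text),
});

// OpenAI-style chat call, served entirely in the browser.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "ping" }],
});
console.log(reply.choices[0].message.content);
```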
LOCAL LLM (SERVER)
vllm-local: not available
ollama-local: not available
Availability is detected via `/v1/models` plus `/api/session`; warm-up issues a `/v1/chat/completions` request with the explicit `vllm-local`/`ollama-local` model names (no cloud fallback).
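A sketch of that detect-then-warm flow against an OpenAI-compatible local server (vLLM and Ollama both expose these routes); `/api/session` is this dashboard's own endpoint and is skipped here, and the base URL is an assumption:

```ts
// Probe a local OpenAI-compatible server, then warm it with a tiny
// completion. Base URL and model aliases are assumptions.
const BASE = "http://127.0.0.1:8000"; // assumed local server address

async function localModelAvailable(): Promise<boolean> {
  try {
    const res = await fetch(`${BASE}/v1/models`);
    return res.ok; // server answers the OpenAI-compatible model listing
  } catch {
    return false; // connection refused => "not available"
  }
}

async function warm(model: "vllm-local" | "ollama-local"): Promise<void> {
  // One throwaway completion pulls the model into memory; deliberately
  // no cloud fallback.
  await fetch(`${BASE}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: "warmup" }],
      max_tokens: 1,
    }),
  });
}
```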
LIVE PULSE
[bus] epm60 0 · eps 0.00 · 5m 0 · err5m 0 · a2a 0 · dom 0 · actors 0
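Rates like epm60 and eps can be maintained with a simple rolling window over bus event timestamps; a sketch, assuming events arrive as callbacks:

```ts
// Rolling-window rate counters for the pulse strip: events-per-minute over
// the last 60s (epm60) and the derived events-per-second (eps).
const WINDOW_MS = 60_000;
const timestamps: number[] = [];

function onBusEvent(): void {
  timestamps.push(Date.now());
}

function pulse(): { epm60: number; eps: number } {
  const now = Date.now();
  // Drop events that have aged out of the 60-second window.
  while (timestamps.length > 0 && now - timestamps[0] > WINDOW_MS) {
    timestamps.shift();
  }
  return { epm60: timestamps.length, eps: timestamps.length / 60 };
}
```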
TOP DOMAINS
TOP ACTORS
SWE LANES
[lanes.json] 0 lanes · 0% avg
0 green · 0 yellow · 0 red · 0 agents
Loading lanes...
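The lane summary above is read from `lanes.json`; a sketch of the fetch and tally, with the record shape assumed rather than documented:

```ts
// Fetch lanes.json and tally the status badges shown in the SWE LANES panel.
// The Lane shape (status, agents) is an assumption about the file format.
type Lane = { status: "green" | "yellow" | "red"; agents: number };

async function laneSummary(url = "/lanes.json") {
  const lanes: Lane[] = await (await fetch(url)).json();
  const count = (s: Lane["status"]) => lanes.filter((l) => l.status === s).length;
  return {
    lanes: lanes.length,
    green: count("green"),
    yellow: count("yellow"),
    red: count("red"),
    agents: lanes.reduce((sum, l) => sum + l.agents, 0),
  };
}
```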
QA LANES
0 · No QA lane messages yet.
Cognitive Boot Sequence
Checking Synapses...
Mounting Hippocampus...
Ingesting Personas...
Hydrating Skills...
Self-Awareness Check...
SUPERMOTD [L0] DISCONNECTED
bridge offline / pin / ws
Boot-stream-style meta events (curated). Prefer reading these over the raw `dialogos.cell.output` noise.
No curated meta events yet. Generate activity (e.g., PluriChat `/status`, spawn a worker, run verify).
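A sketch of that curated view, assuming the bridge delivers bus events over a WebSocket; the URL and the `{ topic, data }` envelope are assumptions about the bridge, and `dialogos.cell.output` is the one topic named above:

```ts
// Subscribe to the bus bridge and surface only curated meta events,
// skipping raw dialogos.cell.output noise. WebSocket URL and the event
// envelope { topic, data } are assumptions about the bridge.
type BusEvent = { topic: string; data: unknown };

const ws = new WebSocket("ws://127.0.0.1:8788/bus"); // assumed bridge URL

ws.onmessage = (msg) => {
  const event: BusEvent = JSON.parse(msg.data);
  if (event.topic === "dialogos.cell.output") return; // raw noise: skip
  console.log("meta:", event.topic, event.data); // curated boot-stream line
};

ws.onclose = () => console.warn("bridge offline"); // matches the panel status
```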
Agent Telemetry · 0 activities [bus]