mirror of
https://github.com/techforces-ai/Cial.git
synced 2026-05-15 20:14:11 +00:00
Audit pass over docs/ + adjacent code following the cial-* → core/platform/app
layout consolidation.
Bug fix:
- core/back BuildRunner ALL_FILTERS referenced @cial/core-back and
@cial/core-front, which no longer exist (the packages are @cial/back +
@cial/front). Self-edit deploys with scope=all would have silently
skipped those packages. Filters corrected.
Docs aligned with reality:
- docs/README.md — promotes file-structure.md to the start-here entry.
- architecture/dev-tenant.md — full rewrite: paths now /cial/* throughout,
documents the read-only :ro overlay of /cial/core, the new
--config.confirm-modules-purge=false install flag, the symlink dance for
project skills, and the agent's cwd=/cial + HOME=/cial/data/home setup.
- architecture/deploy-pipeline.md — package-name fix for ALL_FILTERS.
- architecture/core-vs-platform.md — package-name fix for the build list.
- ops/supervisor.md — drops stale "added in Phase 7" annotation.
- ops/deploy-logs.md — example log line uses @cial/back.
- self-edit/recipes.md — protocol path and dependency chain naming.
- design/self-edit-unrestricted.md — banner clarifying it's the original
design record (pre-rename) so an agent doesn't follow stale paths from it.
Tiny code touch:
- core/edge/src/supervisor.dev.ts — comment on CIAL_MONOREPO_ROOT no longer
contradicts itself ("not /cial" → "the bind-mounted repo at /cial").
Build verified: turbo run build for @cial/back still passes (cache miss
re-executed cleanly with the updated runner.ts).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
# Deploy pipeline

End-to-end flow when something hits `POST /api/v1/self/deploy`.
## Components

```
┌────────────────────────┐    JSONL    ┌────────────────────────┐
│ core-back              │  over UDS   │ edge supervisor        │
│ ─────────────          │ ──────────► │ ─────────────          │
│ /api/v1/self/deploy    │             │ /run/cial-supervisor   │
│   │                    │             │   .sock                │
│   ▼                    │             │   │                    │
│ DeployService          │             │   ▼                    │
│   │                    │             │ spawn / SIGTERM        │
│   ▼                    │             │ child processes        │
│ BuildRunner            │             └────────────────────────┘
│ (single-flight)        │
│   │                    │
│   ▼                    │
│ spawn pnpm build       │
└────────────────────────┘
```
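The channel between the two boxes is JSONL over a Unix domain socket: one JSON object per newline-terminated line. A minimal sketch of that framing, with message shapes assumed for illustration (the real wire protocol lives in `supervisor-ipc.ts` and may differ):

```ts
// Illustrative message shapes, not the actual protocol types.
type SupervisorMsg =
  | { type: 'restart'; scope: 'platform' | 'all' }
  | { type: 'restart.ack'; service: string }
  | { type: 'restart.done'; service: string; pid: number };

// Serialize one message as a JSONL frame.
function frame(msg: SupervisorMsg): string {
  return JSON.stringify(msg) + '\n';
}

// Incrementally split socket chunks into complete JSON lines, keeping any
// trailing partial line buffered until the next chunk arrives.
function makeLineParser() {
  let buf = '';
  return (chunk: string): SupervisorMsg[] => {
    buf += chunk;
    const lines = buf.split('\n');
    buf = lines.pop() ?? ''; // last element is the (possibly empty) partial line
    return lines
      .filter((l) => l.length > 0)
      .map((l) => JSON.parse(l) as SupervisorMsg);
  };
}
```

The buffering matters because UDS reads are not message-aligned: a single `data` event can carry half a frame, or several frames at once.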
## Lifecycle

- **Enqueue** — `DeployService.start({ scope })` → `BuildRunner.enqueue()`.
  - One build at a time. New requests either coalesce (same mode + scope) or replace the queued slot.
- **Build** — `pnpm <filters> build` runs in `monorepoRoot`.
  - Filters depend on scope (`platform` vs `all`) — see `runner.ts`.
  - Stdout/stderr streamed to the WS bridge as `deploy.log` events.
- **Restart** — on build success, `SupervisorClient.restart('platform' | 'all')` over the Unix socket.
  - Supervisor SIGTERMs each named child, respawns it, sends `restart.ack` + `restart.done`.
  - For `edge` (only in `all` + unrestricted): supervisor exits, Docker restart-policy recreates the container.
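The enqueue step's single-flight behaviour can be sketched as follows, assuming a request is identified by its mode + scope pair (the class and method names here are illustrative, not the real `BuildRunner`):

```ts
type BuildRequest = { mode: 'build'; scope: 'platform' | 'all' };

// One build runs at a time; at most one more waits in a single queued slot.
class SingleFlightQueue {
  private running: BuildRequest | null = null;
  private queued: BuildRequest | null = null;

  enqueue(req: BuildRequest): 'started' | 'coalesced' | 'replaced-queued' {
    if (!this.running) {
      this.running = req;
      return 'started';
    }
    // Identical pending request: deduplicate instead of queueing twice.
    if (this.queued && this.queued.mode === req.mode && this.queued.scope === req.scope) {
      return 'coalesced';
    }
    // Otherwise the newer request takes (or fills) the single queued slot.
    this.queued = req;
    return 'replaced-queued';
  }

  // Called when the running build finishes; promotes the queued slot, if any.
  finish(): BuildRequest | null {
    this.running = this.queued;
    this.queued = null;
    return this.running;
  }
}
```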
## Files

| File | Role |
|---|---|
| `core/back/src/modules/deploy/runner.ts` | `BuildRunner` — pnpm spawn + log streaming |
| `core/back/src/modules/deploy/service.ts` | Glue — runner ↔ supervisor ↔ WS broadcast |
| `core/back/src/modules/deploy/supervisor-client.ts` | JSONL-over-UDS client |
| `core/edge/src/supervisor-ipc.ts` | Wire protocol + scope sets |
| `core/edge/src/supervisor.ts` (prod) | IPC server + child management |
| `core/edge/src/supervisor.dev.ts` (dev) | Same, but for `pnpm dev:tenant` |
| `core/back/src/modules/self/router.ts` | The agent-facing endpoints |
## Build filters

```ts
// runner.ts
const PLATFORM_FILTERS = [
  '--filter', '@cial/platform-front',
  '--filter', '@cial/platform-back',
];

const ALL_FILTERS = [
  '--filter', '@cial/protocol',
  '--filter', '@cial/sdk',
  '--filter', '@cial/core-ui',
  '--filter', '@cial/back',
  '--filter', '@cial/front',
  '--filter', '@cial/edge',
  '--filter', '@cial/platform-back',
  '--filter', '@cial/platform-front',
];
```
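For illustration, here is how a filter array like the ones above could plug into the spawned command. This is a sketch only; the real `runner.ts` invocation, cwd handling, and extra flags may differ:

```ts
import { spawn, type ChildProcess } from 'node:child_process';

// Assemble the argv for `pnpm --filter <pkg> ... build`.
function buildArgv(filters: string[]): string[] {
  return ['pnpm', ...filters, 'build'];
}

// Spawn the build from the monorepo root, with stdout/stderr piped so the
// runner can stream them out as deploy.log events. Defined but not called.
function spawnBuild(filters: string[], monorepoRoot: string): ChildProcess {
  const [cmd, ...args] = buildArgv(filters);
  return spawn(cmd, args, { cwd: monorepoRoot, stdio: ['ignore', 'pipe', 'pipe'] });
}
```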
## Restart sets

```ts
// supervisor-ipc.ts
const PLATFORM_RESTARTABLES = new Set(['platform-front', 'platform-back']);

const ALL_RESTARTABLES = new Set([
  'platform-front', 'platform-back',
  'core-front', 'core-back',
  'edge', // exits supervisor → docker restart-policy bounces container
]);
```
## WS events

The deploy service broadcasts these to the requesting user's connected tabs:

- `deploy.start` — `{ deployId, mode, targets[], sessionId }`
- `deploy.log` — `{ deployId, stream, line }`
- `deploy.restart.start` — `{ service }`
- `deploy.restart.done` — `{ service, pid, durationMs }`
- `deploy.done` — `{ deployId, ok, exitCode, durationMs, errorSummary }`
- `deploy.cancelled` — `{ deployId }`

Self-edit calls use `requestedByUserId = '__self__'`, so WS broadcasts go nowhere — the response payload + log polling is enough.
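The `'__self__'` short-circuit works without any special casing if the broadcast is a plain per-user fan-out, as in this sketch (the types and names below are assumptions, not the actual WS bridge API):

```ts
const SELF_USER = '__self__';

type WsEvent = { type: string; payload: unknown };
type Tab = { userId: string; send: (e: WsEvent) => void };

// Deliver an event to every connected tab belonging to userId.
// '__self__' is never a real user id, so the filter matches no tabs and
// self-edit deploys broadcast to nobody, exactly as described above.
function broadcastToUser(tabs: Tab[], userId: string, event: WsEvent): number {
  const targets = tabs.filter((t) => t.userId === userId);
  for (const t of targets) t.send(event);
  return targets.length; // number of tabs reached
}
```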