feat: skills as branches, channels as forks
Replace the custom skills engine with standard git operations. Feature skills are now git branches (on upstream or channel forks) applied via `git merge`. Channels are separate fork repos.

- Remove skills-engine/ (6,300+ lines), apply/uninstall/rebase scripts
- Remove old skill format (add/, modify/, manifest.yaml) from all skills
- Remove old CI (skill-drift.yml, skill-pr.yml)
- Add merge-forward CI for upstream skill branches
- Add fork notification (repository_dispatch to channel forks)
- Add marketplace config (.claude/settings.json)
- Add /update-skills operational skill
- Update /setup and /customize for marketplace plugin install
- Add docs/skills-as-branches.md architecture doc

Channel forks created: nanoclaw-whatsapp (with 5 skill branches), nanoclaw-telegram, nanoclaw-discord, nanoclaw-slack, nanoclaw-gmail.

Upstream retains: skill/ollama-tool, skill/apple-container, skill/compact.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
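
The core mechanic — a skill is an ordinary branch, and applying it is an ordinary merge — can be sketched end to end. Repo layout and file names here are illustrative, not the actual forks:

```shell
# Sketch of the skills-as-branches workflow; paths and files are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "init"

# A feature skill is just a branch that adds or changes files.
git checkout -q -b skill/apple-container
echo "runtime: container" > runtime.conf
git add runtime.conf
git -c user.email=dev@example.com -c user.name=dev commit -q -m "skill: apple-container"

# Applying the skill is a plain merge (a fast-forward here).
git checkout -q main
git merge -q --no-edit skill/apple-container
cat runtime.conf   # prints: runtime: container
```

Because the skill is a branch, uninstalling, rebasing, and drift detection all fall out of standard git tooling instead of a bespoke engine.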
@@ -1,183 +0,0 @@
---
name: convert-to-apple-container
description: Switch from Docker to Apple Container for macOS-native container isolation. Use when the user wants Apple Container instead of Docker, or is setting up on macOS and prefers the native runtime. Triggers on "apple container", "convert to apple container", "switch to apple container", or "use apple container".
---

# Convert to Apple Container

This skill switches NanoClaw's container runtime from Docker to Apple Container (macOS-only). It uses the skills engine for deterministic code changes, then walks through verification.

**What this changes:**
- Container runtime binary: `docker` → `container`
- Mount syntax: `-v path:path:ro` → `--mount type=bind,source=...,target=...,readonly`
- Startup check: `docker info` → `container system status` (with auto-start)
- Orphan detection: `docker ps --filter` → `container ls --format json`
- Build script default: `docker` → `container`
- Dockerfile entrypoint: `.env` shadowing via `mount --bind` inside the container (Apple Container only supports directory mounts, not file mounts like Docker's `/dev/null` overlay)
- Container runner: main-group containers start as root for `mount --bind`, then drop privileges via `setpriv`
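
The read-only mount flag mapping above can be expressed as a small string transform. This is a sketch only — the real translation lives in `readonlyMountArgs` in `src/container-runtime.ts`, and the function name here is ours:

```shell
# String-level sketch of the read-only mount translation:
#   docker:    -v HOST:TARGET:ro
#   container: --mount type=bind,source=HOST,target=TARGET,readonly
to_apple_container_ro() {
  local host="$1" target="$2"
  printf -- '--mount type=bind,source=%s,target=%s,readonly\n' "$host" "$target"
}

to_apple_container_ro /tmp/skills /workspace/skills
# prints: --mount type=bind,source=/tmp/skills,target=/workspace/skills,readonly
```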

**What stays the same:**
- Mount security/allowlist validation
- All exported interfaces and IPC protocol
- Non-main container behavior (still uses `--user` flag)
- All other functionality

## Prerequisites

Verify Apple Container is installed:

```bash
container --version && echo "Apple Container ready" || echo "Install Apple Container first"
```

If not installed:
- Download from https://github.com/apple/container/releases
- Install the `.pkg` file
- Verify: `container --version`

Apple Container requires macOS. It does not work on Linux.
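
A setup script could guard on this early. A minimal sketch — the function name is ours, not part of NanoClaw:

```shell
# Fail fast when not on macOS (kernel name "Darwin"). Takes the kernel name
# as an argument so it can be exercised anywhere; defaults to the live uname.
require_macos() {
  case "${1:-$(uname -s)}" in
    Darwin) echo "macOS detected" ;;
    *) echo "Apple Container requires macOS" >&2; return 1 ;;
  esac
}
```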

## Phase 1: Pre-flight

### Check if already applied

Read `.nanoclaw/state.yaml`. If `convert-to-apple-container` is in `applied_skills`, skip to Phase 3 (Verify). The code changes are already in place.

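A quick grep-based check for this — a sketch that assumes `applied_skills` is a plain YAML list, not a real YAML parser:

```shell
# Return success if the skill name appears as a YAML list item in the state file.
skill_applied() {
  local state="$1" skill="$2"
  grep -qE "^[[:space:]]*-[[:space:]]*${skill}\$" "$state"
}

# Exercised against a synthetic state file:
state=$(mktemp)
printf 'applied_skills:\n  - convert-to-apple-container\n' > "$state"
skill_applied "$state" convert-to-apple-container && echo "already applied"
# prints: already applied
```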
### Check current runtime

```bash
grep "CONTAINER_RUNTIME_BIN" src/container-runtime.ts
```

If it already shows `'container'`, the runtime is already Apple Container. Skip to Phase 3.

## Phase 2: Apply Code Changes

Run the skills engine to apply this skill's code package. The package files are in this directory alongside this SKILL.md.

### Initialize skills system (if needed)

If the `.nanoclaw/` directory doesn't exist yet:

```bash
npx tsx scripts/apply-skill.ts --init
```

Or call `initSkillsSystem()` from `skills-engine/migrate.ts`.

### Apply the skill

```bash
npx tsx scripts/apply-skill.ts .claude/skills/convert-to-apple-container
```

This deterministically:
- Replaces `src/container-runtime.ts` with the Apple Container implementation
- Replaces `src/container-runtime.test.ts` with Apple Container-specific tests
- Updates `src/container-runner.ts` with the .env shadow-mount fix and privilege dropping
- Updates `container/Dockerfile` with an entrypoint that shadows .env via `mount --bind`
- Updates `container/build.sh` to default to the `container` runtime
- Records the application in `.nanoclaw/state.yaml`

If the apply reports merge conflicts, read the intent files:
- `modify/src/container-runtime.ts.intent.md` — what changed and invariants
- `modify/src/container-runner.ts.intent.md` — .env shadow and privilege-drop changes
- `modify/container/Dockerfile.intent.md` — entrypoint changes for .env shadowing
- `modify/container/build.sh.intent.md` — what changed in the build script

### Validate code changes

```bash
npm test
npm run build
```

All tests must pass and the build must be clean before proceeding.

## Phase 3: Verify

### Ensure the Apple Container runtime is running

```bash
container system status || container system start
```

### Build the container image

```bash
./container/build.sh
```

### Test basic execution

```bash
echo '{}' | container run -i --entrypoint /bin/echo nanoclaw-agent:latest "Container OK"
```

### Test readonly mounts

```bash
mkdir -p /tmp/test-ro && echo "test" > /tmp/test-ro/file.txt
container run --rm --entrypoint /bin/bash \
  --mount type=bind,source=/tmp/test-ro,target=/test,readonly \
  nanoclaw-agent:latest \
  -c "cat /test/file.txt && touch /test/new.txt 2>&1 || echo 'Write blocked (expected)'"
rm -rf /tmp/test-ro
```

Expected: the read succeeds and the write fails with "Read-only file system".

### Test read-write mounts

```bash
mkdir -p /tmp/test-rw
container run --rm --entrypoint /bin/bash \
  -v /tmp/test-rw:/test \
  nanoclaw-agent:latest \
  -c "echo 'test write' > /test/new.txt && cat /test/new.txt"
cat /tmp/test-rw/new.txt && rm -rf /tmp/test-rw
```

Expected: both operations succeed.

### Full integration test

```bash
npm run build
launchctl kickstart -k gui/$(id -u)/com.nanoclaw
```

Send a message via WhatsApp and verify the agent responds.

## Troubleshooting

**Apple Container not found:**
- Download from https://github.com/apple/container/releases
- Install the `.pkg` file
- Verify: `container --version`

**Runtime won't start:**
```bash
container system start
container system status
```

**Image build fails:**
```bash
# Clean rebuild — Apple Container caches aggressively
container builder stop && container builder rm && container builder start
./container/build.sh
```

**Container can't write to mounted directories:**
Check directory permissions on the host. The container runs as uid 1000.
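
A simple host-side probe for the write path. It tests the invoking user, so run it as the uid the container maps to; the function is illustrative, not part of the codebase:

```shell
# Probe whether a directory is writable by attempting to create a file in it.
can_write() {
  local dir="$1"
  if touch "$dir/.nanoclaw-probe" 2>/dev/null; then
    rm -f "$dir/.nanoclaw-probe"
    echo "writable"
  else
    echo "not writable"
  fi
}
```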

## Summary of Changed Files

| File | Type of Change |
|------|----------------|
| `src/container-runtime.ts` | Full replacement — Docker → Apple Container API |
| `src/container-runtime.test.ts` | Full replacement — tests for Apple Container behavior |
| `src/container-runner.ts` | .env shadow mount removed; main containers start as root with privilege drop |
| `container/Dockerfile` | Entrypoint: `mount --bind` for .env shadowing, `setpriv` privilege drop |
| `container/build.sh` | Default runtime: `docker` → `container` |
@@ -1,15 +0,0 @@
skill: convert-to-apple-container
version: 1.1.0
description: "Switch container runtime from Docker to Apple Container (macOS)"
core_version: 0.1.0
adds: []
modifies:
  - src/container-runtime.ts
  - src/container-runtime.test.ts
  - src/container-runner.ts
  - container/build.sh
  - container/Dockerfile
structured: {}
conflicts: []
depends: []
test: "npx vitest run src/container-runtime.test.ts"
@@ -1,68 +0,0 @@
# NanoClaw Agent Container
# Runs Claude Agent SDK in isolated Linux VM with browser automation

FROM node:22-slim

# Install system dependencies for Chromium
RUN apt-get update && apt-get install -y \
    chromium \
    fonts-liberation \
    fonts-noto-color-emoji \
    libgbm1 \
    libnss3 \
    libatk-bridge2.0-0 \
    libgtk-3-0 \
    libx11-xcb1 \
    libxcomposite1 \
    libxdamage1 \
    libxrandr2 \
    libasound2 \
    libpangocairo-1.0-0 \
    libcups2 \
    libdrm2 \
    libxshmfence1 \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*

# Set Chromium path for agent-browser
ENV AGENT_BROWSER_EXECUTABLE_PATH=/usr/bin/chromium
ENV PLAYWRIGHT_CHROMIUM_EXECUTABLE_PATH=/usr/bin/chromium

# Install agent-browser and claude-code globally
RUN npm install -g agent-browser @anthropic-ai/claude-code

# Create app directory
WORKDIR /app

# Copy package files first for better caching
COPY agent-runner/package*.json ./

# Install dependencies
RUN npm install

# Copy source code
COPY agent-runner/ ./

# Build TypeScript
RUN npm run build

# Create workspace directories
RUN mkdir -p /workspace/group /workspace/global /workspace/extra /workspace/ipc/messages /workspace/ipc/tasks /workspace/ipc/input

# Create entrypoint script
# Secrets are passed via stdin JSON — temp file is deleted immediately after Node reads it
# Follow-up messages arrive via IPC files in /workspace/ipc/input/
# Apple Container only supports directory mounts (VirtioFS), so .env cannot be
# shadowed with a host-side /dev/null file mount. Instead the entrypoint starts
# as root, uses mount --bind to shadow .env, then drops to the host user via setpriv.
RUN printf '#!/bin/bash\nset -e\n\n# Shadow .env so the agent cannot read host secrets (requires root)\nif [ "$(id -u)" = "0" ] && [ -f /workspace/project/.env ]; then\n mount --bind /dev/null /workspace/project/.env\nfi\n\n# Compile agent-runner\ncd /app && npx tsc --outDir /tmp/dist 2>&1 >&2\nln -s /app/node_modules /tmp/dist/node_modules\nchmod -R a-w /tmp/dist\n\n# Capture stdin (secrets JSON) to temp file\ncat > /tmp/input.json\n\n# Drop privileges if running as root (main-group containers)\nif [ "$(id -u)" = "0" ] && [ -n "$RUN_UID" ]; then\n chown "$RUN_UID:$RUN_GID" /tmp/input.json /tmp/dist\n exec setpriv --reuid="$RUN_UID" --regid="$RUN_GID" --clear-groups -- node /tmp/dist/index.js < /tmp/input.json\nfi\n\nexec node /tmp/dist/index.js < /tmp/input.json\n' > /app/entrypoint.sh && chmod +x /app/entrypoint.sh

# Set ownership to node user (non-root) for writable directories
RUN chown -R node:node /workspace && chmod 777 /home/node

# Set working directory to group workspace
WORKDIR /workspace/group

# Entry point reads JSON from stdin, outputs JSON to stdout
ENTRYPOINT ["/app/entrypoint.sh"]
@@ -1,31 +0,0 @@
# Intent: container/Dockerfile modifications

## What changed
Updated the entrypoint script to shadow `.env` inside the container and drop privileges at runtime, replacing the Docker-style host-side file mount approach.

## Why
Apple Container (VirtioFS) only supports directory mounts, not file mounts. The Docker approach of mounting `/dev/null` over `.env` from the host causes `VZErrorDomain Code=2 "A directory sharing device configuration is invalid"`. The fix moves the shadowing into the entrypoint using `mount --bind` (which works inside the Linux VM).

## Key sections

### Entrypoint script
- Added: `mount --bind /dev/null /workspace/project/.env` when running as root and `.env` exists
- Added: privilege drop via `setpriv --reuid=$RUN_UID --regid=$RUN_GID --clear-groups` for main-group containers
- Added: `chown` of `/tmp/input.json` and `/tmp/dist` to the target user before dropping privileges
- Removed: `USER node` directive — main containers start as root to perform the bind mount, then drop privileges in the entrypoint. Non-main containers still get `--user` from the host.

### Dual-path execution
- Root path (main containers): shadow .env → compile → capture stdin → chown → setpriv drop → exec node
- Non-root path (other containers): compile → capture stdin → exec node

## Invariants
- The entrypoint still reads JSON from stdin and runs the agent-runner
- The compiled output goes to `/tmp/dist` (read-only after build)
- `node_modules` is symlinked, not copied
- Non-main containers are unaffected (they arrive as non-root via `--user`)

## Must-keep
- The `set -e` at the top
- The stdin capture to `/tmp/input.json` (required because setpriv can't forward stdin piping)
- The `chmod -R a-w /tmp/dist` (prevents the agent from modifying its own runner)
- The `chown -R node:node /workspace` in the build step
@@ -1,23 +0,0 @@
#!/bin/bash
# Build the NanoClaw agent container image

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"

IMAGE_NAME="nanoclaw-agent"
TAG="${1:-latest}"
CONTAINER_RUNTIME="${CONTAINER_RUNTIME:-container}"

echo "Building NanoClaw agent container image..."
echo "Image: ${IMAGE_NAME}:${TAG}"

${CONTAINER_RUNTIME} build -t "${IMAGE_NAME}:${TAG}" .

echo ""
echo "Build complete!"
echo "Image: ${IMAGE_NAME}:${TAG}"
echo ""
echo "Test with:"
echo "  echo '{\"prompt\":\"What is 2+2?\",\"groupFolder\":\"test\",\"chatJid\":\"test@g.us\",\"isMain\":false}' | ${CONTAINER_RUNTIME} run -i ${IMAGE_NAME}:${TAG}"
@@ -1,17 +0,0 @@
# Intent: container/build.sh modifications

## What changed
Changed the default container runtime from `docker` to `container` (the Apple Container CLI).

## Key sections
- `CONTAINER_RUNTIME` default: `docker` → `container`
- All build/run commands use the `${CONTAINER_RUNTIME}` variable (unchanged)

## Invariants
- The `CONTAINER_RUNTIME` environment variable override still works
- `IMAGE_NAME` and `TAG` logic unchanged
- Build and test echo commands unchanged
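
The override pattern in question is the standard `${VAR:-default}` expansion; a minimal sketch:

```shell
# build.sh's override pattern: the environment wins, otherwise the default.
runtime_default() {
  echo "${CONTAINER_RUNTIME:-container}"
}

unset CONTAINER_RUNTIME
runtime_default                            # prints: container
CONTAINER_RUNTIME=docker runtime_default   # prints: docker
```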

## Must-keep
- The `CONTAINER_RUNTIME` env var override pattern
- The test command echo at the end
@@ -1,701 +0,0 @@
/**
 * Container Runner for NanoClaw
 * Spawns agent execution in containers and handles IPC
 */
import { ChildProcess, exec, spawn } from 'child_process';
import fs from 'fs';
import path from 'path';

import {
  CONTAINER_IMAGE,
  CONTAINER_MAX_OUTPUT_SIZE,
  CONTAINER_TIMEOUT,
  CREDENTIAL_PROXY_PORT,
  DATA_DIR,
  GROUPS_DIR,
  IDLE_TIMEOUT,
  TIMEZONE,
} from './config.js';
import { resolveGroupFolderPath, resolveGroupIpcPath } from './group-folder.js';
import { logger } from './logger.js';
import {
  CONTAINER_HOST_GATEWAY,
  CONTAINER_RUNTIME_BIN,
  hostGatewayArgs,
  readonlyMountArgs,
  stopContainer,
} from './container-runtime.js';
import { detectAuthMode } from './credential-proxy.js';
import { validateAdditionalMounts } from './mount-security.js';
import { RegisteredGroup } from './types.js';

// Sentinel markers for robust output parsing (must match agent-runner)
const OUTPUT_START_MARKER = '---NANOCLAW_OUTPUT_START---';
const OUTPUT_END_MARKER = '---NANOCLAW_OUTPUT_END---';

export interface ContainerInput {
  prompt: string;
  sessionId?: string;
  groupFolder: string;
  chatJid: string;
  isMain: boolean;
  isScheduledTask?: boolean;
  assistantName?: string;
}

export interface ContainerOutput {
  status: 'success' | 'error';
  result: string | null;
  newSessionId?: string;
  error?: string;
}

interface VolumeMount {
  hostPath: string;
  containerPath: string;
  readonly: boolean;
}

function buildVolumeMounts(
  group: RegisteredGroup,
  isMain: boolean,
): VolumeMount[] {
  const mounts: VolumeMount[] = [];
  const projectRoot = process.cwd();
  const groupDir = resolveGroupFolderPath(group.folder);

  if (isMain) {
    // Main gets the project root read-only. Writable paths the agent needs
    // (group folder, IPC, .claude/) are mounted separately below.
    // Read-only prevents the agent from modifying host application code
    // (src/, dist/, package.json, etc.) which would bypass the sandbox
    // entirely on next restart.
    mounts.push({
      hostPath: projectRoot,
      containerPath: '/workspace/project',
      readonly: true,
    });

    // Main also gets its group folder as the working directory
    mounts.push({
      hostPath: groupDir,
      containerPath: '/workspace/group',
      readonly: false,
    });
  } else {
    // Other groups only get their own folder
    mounts.push({
      hostPath: groupDir,
      containerPath: '/workspace/group',
      readonly: false,
    });

    // Global memory directory (read-only for non-main)
    // Only directory mounts are supported, not file mounts
    const globalDir = path.join(GROUPS_DIR, 'global');
    if (fs.existsSync(globalDir)) {
      mounts.push({
        hostPath: globalDir,
        containerPath: '/workspace/global',
        readonly: true,
      });
    }
  }

  // Per-group Claude sessions directory (isolated from other groups)
  // Each group gets their own .claude/ to prevent cross-group session access
  const groupSessionsDir = path.join(
    DATA_DIR,
    'sessions',
    group.folder,
    '.claude',
  );
  fs.mkdirSync(groupSessionsDir, { recursive: true });
  const settingsFile = path.join(groupSessionsDir, 'settings.json');
  if (!fs.existsSync(settingsFile)) {
    fs.writeFileSync(
      settingsFile,
      JSON.stringify(
        {
          env: {
            // Enable agent swarms (subagent orchestration)
            // https://code.claude.com/docs/en/agent-teams#orchestrate-teams-of-claude-code-sessions
            CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS: '1',
            // Load CLAUDE.md from additional mounted directories
            // https://code.claude.com/docs/en/memory#load-memory-from-additional-directories
            CLAUDE_CODE_ADDITIONAL_DIRECTORIES_CLAUDE_MD: '1',
            // Enable Claude's memory feature (persists user preferences between sessions)
            // https://code.claude.com/docs/en/memory#manage-auto-memory
            CLAUDE_CODE_DISABLE_AUTO_MEMORY: '0',
          },
        },
        null,
        2,
      ) + '\n',
    );
  }

  // Sync skills from container/skills/ into each group's .claude/skills/
  const skillsSrc = path.join(process.cwd(), 'container', 'skills');
  const skillsDst = path.join(groupSessionsDir, 'skills');
  if (fs.existsSync(skillsSrc)) {
    for (const skillDir of fs.readdirSync(skillsSrc)) {
      const srcDir = path.join(skillsSrc, skillDir);
      if (!fs.statSync(srcDir).isDirectory()) continue;
      const dstDir = path.join(skillsDst, skillDir);
      fs.cpSync(srcDir, dstDir, { recursive: true });
    }
  }
  mounts.push({
    hostPath: groupSessionsDir,
    containerPath: '/home/node/.claude',
    readonly: false,
  });

  // Per-group IPC namespace: each group gets its own IPC directory
  // This prevents cross-group privilege escalation via IPC
  const groupIpcDir = resolveGroupIpcPath(group.folder);
  fs.mkdirSync(path.join(groupIpcDir, 'messages'), { recursive: true });
  fs.mkdirSync(path.join(groupIpcDir, 'tasks'), { recursive: true });
  fs.mkdirSync(path.join(groupIpcDir, 'input'), { recursive: true });
  mounts.push({
    hostPath: groupIpcDir,
    containerPath: '/workspace/ipc',
    readonly: false,
  });

  // Copy agent-runner source into a per-group writable location so agents
  // can customize it (add tools, change behavior) without affecting other
  // groups. Recompiled on container startup via entrypoint.sh.
  const agentRunnerSrc = path.join(
    projectRoot,
    'container',
    'agent-runner',
    'src',
  );
  const groupAgentRunnerDir = path.join(
    DATA_DIR,
    'sessions',
    group.folder,
    'agent-runner-src',
  );
  if (!fs.existsSync(groupAgentRunnerDir) && fs.existsSync(agentRunnerSrc)) {
    fs.cpSync(agentRunnerSrc, groupAgentRunnerDir, { recursive: true });
  }
  mounts.push({
    hostPath: groupAgentRunnerDir,
    containerPath: '/app/src',
    readonly: false,
  });

  // Additional mounts validated against external allowlist (tamper-proof from containers)
  if (group.containerConfig?.additionalMounts) {
    const validatedMounts = validateAdditionalMounts(
      group.containerConfig.additionalMounts,
      group.name,
      isMain,
    );
    mounts.push(...validatedMounts);
  }

  return mounts;
}

function buildContainerArgs(
  mounts: VolumeMount[],
  containerName: string,
  isMain: boolean,
): string[] {
  const args: string[] = ['run', '-i', '--rm', '--name', containerName];

  // Pass host timezone so container's local time matches the user's
  args.push('-e', `TZ=${TIMEZONE}`);

  // Route API traffic through the credential proxy (containers never see real secrets)
  args.push(
    '-e',
    `ANTHROPIC_BASE_URL=http://${CONTAINER_HOST_GATEWAY}:${CREDENTIAL_PROXY_PORT}`,
  );

  // Mirror the host's auth method with a placeholder value.
  const authMode = detectAuthMode();
  if (authMode === 'api-key') {
    args.push('-e', 'ANTHROPIC_API_KEY=placeholder');
  } else {
    args.push('-e', 'CLAUDE_CODE_OAUTH_TOKEN=placeholder');
  }

  // Runtime-specific args for host gateway resolution
  args.push(...hostGatewayArgs());

  // Run as host user so bind-mounted files are accessible.
  // Skip when running as root (uid 0), as the container's node user (uid 1000),
  // or when getuid is unavailable (native Windows without WSL).
  const hostUid = process.getuid?.();
  const hostGid = process.getgid?.();
  if (hostUid != null && hostUid !== 0 && hostUid !== 1000) {
    if (isMain) {
      // Main containers start as root so the entrypoint can mount --bind
      // to shadow .env. Privileges are dropped via setpriv in entrypoint.sh.
      args.push('-e', `RUN_UID=${hostUid}`);
      args.push('-e', `RUN_GID=${hostGid}`);
    } else {
      args.push('--user', `${hostUid}:${hostGid}`);
    }
    args.push('-e', 'HOME=/home/node');
  }

  for (const mount of mounts) {
    if (mount.readonly) {
      args.push(...readonlyMountArgs(mount.hostPath, mount.containerPath));
    } else {
      args.push('-v', `${mount.hostPath}:${mount.containerPath}`);
    }
  }

  args.push(CONTAINER_IMAGE);

  return args;
}

export async function runContainerAgent(
  group: RegisteredGroup,
  input: ContainerInput,
  onProcess: (proc: ChildProcess, containerName: string) => void,
  onOutput?: (output: ContainerOutput) => Promise<void>,
): Promise<ContainerOutput> {
  const startTime = Date.now();

  const groupDir = resolveGroupFolderPath(group.folder);
  fs.mkdirSync(groupDir, { recursive: true });

  const mounts = buildVolumeMounts(group, input.isMain);
  const safeName = group.folder.replace(/[^a-zA-Z0-9-]/g, '-');
  const containerName = `nanoclaw-${safeName}-${Date.now()}`;
  const containerArgs = buildContainerArgs(mounts, containerName, input.isMain);

  logger.debug(
    {
      group: group.name,
      containerName,
      mounts: mounts.map(
        (m) =>
          `${m.hostPath} -> ${m.containerPath}${m.readonly ? ' (ro)' : ''}`,
      ),
      containerArgs: containerArgs.join(' '),
    },
    'Container mount configuration',
  );

  logger.info(
    {
      group: group.name,
      containerName,
      mountCount: mounts.length,
      isMain: input.isMain,
    },
    'Spawning container agent',
  );

  const logsDir = path.join(groupDir, 'logs');
  fs.mkdirSync(logsDir, { recursive: true });

  return new Promise((resolve) => {
    const container = spawn(CONTAINER_RUNTIME_BIN, containerArgs, {
      stdio: ['pipe', 'pipe', 'pipe'],
    });

    onProcess(container, containerName);

    let stdout = '';
    let stderr = '';
    let stdoutTruncated = false;
    let stderrTruncated = false;

    container.stdin.write(JSON.stringify(input));
    container.stdin.end();

    // Streaming output: parse OUTPUT_START/END marker pairs as they arrive
    let parseBuffer = '';
    let newSessionId: string | undefined;
    let outputChain = Promise.resolve();

    container.stdout.on('data', (data) => {
      const chunk = data.toString();

      // Always accumulate for logging
      if (!stdoutTruncated) {
        const remaining = CONTAINER_MAX_OUTPUT_SIZE - stdout.length;
        if (chunk.length > remaining) {
          stdout += chunk.slice(0, remaining);
          stdoutTruncated = true;
          logger.warn(
            { group: group.name, size: stdout.length },
            'Container stdout truncated due to size limit',
          );
        } else {
          stdout += chunk;
        }
      }

      // Stream-parse for output markers
      if (onOutput) {
        parseBuffer += chunk;
        let startIdx: number;
        while ((startIdx = parseBuffer.indexOf(OUTPUT_START_MARKER)) !== -1) {
          const endIdx = parseBuffer.indexOf(OUTPUT_END_MARKER, startIdx);
          if (endIdx === -1) break; // Incomplete pair, wait for more data

          const jsonStr = parseBuffer
            .slice(startIdx + OUTPUT_START_MARKER.length, endIdx)
            .trim();
          parseBuffer = parseBuffer.slice(endIdx + OUTPUT_END_MARKER.length);

          try {
            const parsed: ContainerOutput = JSON.parse(jsonStr);
            if (parsed.newSessionId) {
              newSessionId = parsed.newSessionId;
            }
            hadStreamingOutput = true;
            // Activity detected — reset the hard timeout
            resetTimeout();
            // Call onOutput for all markers (including null results)
            // so idle timers start even for "silent" query completions.
            outputChain = outputChain.then(() => onOutput(parsed));
          } catch (err) {
            logger.warn(
              { group: group.name, error: err },
              'Failed to parse streamed output chunk',
            );
          }
        }
      }
    });

    container.stderr.on('data', (data) => {
      const chunk = data.toString();
      const lines = chunk.trim().split('\n');
      for (const line of lines) {
        if (line) logger.debug({ container: group.folder }, line);
      }
      // Don't reset timeout on stderr — SDK writes debug logs continuously.
      // Timeout only resets on actual output (OUTPUT_MARKER in stdout).
      if (stderrTruncated) return;
      const remaining = CONTAINER_MAX_OUTPUT_SIZE - stderr.length;
      if (chunk.length > remaining) {
        stderr += chunk.slice(0, remaining);
        stderrTruncated = true;
        logger.warn(
          { group: group.name, size: stderr.length },
          'Container stderr truncated due to size limit',
        );
      } else {
        stderr += chunk;
      }
    });

    let timedOut = false;
    let hadStreamingOutput = false;
    const configTimeout = group.containerConfig?.timeout || CONTAINER_TIMEOUT;
    // Grace period: hard timeout must be at least IDLE_TIMEOUT + 30s so the
    // graceful _close sentinel has time to trigger before the hard kill fires.
    const timeoutMs = Math.max(configTimeout, IDLE_TIMEOUT + 30_000);

    const killOnTimeout = () => {
      timedOut = true;
      logger.error(
        { group: group.name, containerName },
        'Container timeout, stopping gracefully',
      );
      exec(stopContainer(containerName), { timeout: 15000 }, (err) => {
        if (err) {
          logger.warn(
            { group: group.name, containerName, err },
            'Graceful stop failed, force killing',
          );
          container.kill('SIGKILL');
        }
      });
    };

    let timeout = setTimeout(killOnTimeout, timeoutMs);

    // Reset the timeout whenever there's activity (streaming output)
    const resetTimeout = () => {
      clearTimeout(timeout);
      timeout = setTimeout(killOnTimeout, timeoutMs);
    };

    container.on('close', (code) => {
      clearTimeout(timeout);
      const duration = Date.now() - startTime;

      if (timedOut) {
        const ts = new Date().toISOString().replace(/[:.]/g, '-');
        const timeoutLog = path.join(logsDir, `container-${ts}.log`);
        fs.writeFileSync(
          timeoutLog,
          [
            `=== Container Run Log (TIMEOUT) ===`,
            `Timestamp: ${new Date().toISOString()}`,
            `Group: ${group.name}`,
            `Container: ${containerName}`,
            `Duration: ${duration}ms`,
            `Exit Code: ${code}`,
            `Had Streaming Output: ${hadStreamingOutput}`,
          ].join('\n'),
        );

        // Timeout after output = idle cleanup, not failure.
        // The agent already sent its response; this is just the
        // container being reaped after the idle period expired.
        if (hadStreamingOutput) {
          logger.info(
            { group: group.name, containerName, duration, code },
            'Container timed out after output (idle cleanup)',
          );
          outputChain.then(() => {
            resolve({
              status: 'success',
              result: null,
              newSessionId,
            });
          });
          return;
        }

        logger.error(
          { group: group.name, containerName, duration, code },
          'Container timed out with no output',
        );

        resolve({
          status: 'error',
          result: null,
          error: `Container timed out after ${configTimeout}ms`,
        });
        return;
      }

      const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
      const logFile = path.join(logsDir, `container-${timestamp}.log`);
      const isVerbose =
        process.env.LOG_LEVEL === 'debug' || process.env.LOG_LEVEL === 'trace';

      const logLines = [
        `=== Container Run Log ===`,
        `Timestamp: ${new Date().toISOString()}`,
        `Group: ${group.name}`,
        `IsMain: ${input.isMain}`,
        `Duration: ${duration}ms`,
        `Exit Code: ${code}`,
        `Stdout Truncated: ${stdoutTruncated}`,
        `Stderr Truncated: ${stderrTruncated}`,
        ``,
      ];

      const isError = code !== 0;

      if (isVerbose || isError) {
        logLines.push(
          `=== Input ===`,
          JSON.stringify(input, null, 2),
          ``,
          `=== Container Args ===`,
          containerArgs.join(' '),
          ``,
          `=== Mounts ===`,
          mounts
            .map(
              (m) =>
                `${m.hostPath} -> ${m.containerPath}${m.readonly ? ' (ro)' : ''}`,
            )
            .join('\n'),
          ``,
          `=== Stderr${stderrTruncated ? ' (TRUNCATED)' : ''} ===`,
          stderr,
          ``,
          `=== Stdout${stdoutTruncated ? ' (TRUNCATED)' : ''} ===`,
          stdout,
        );
      } else {
        logLines.push(
          `=== Input Summary ===`,
          `Prompt length: ${input.prompt.length} chars`,
          `Session ID: ${input.sessionId || 'new'}`,
          ``,
          `=== Mounts ===`,
          mounts
            .map((m) => `${m.containerPath}${m.readonly ? ' (ro)' : ''}`)
            .join('\n'),
          ``,
        );
      }

      fs.writeFileSync(logFile, logLines.join('\n'));
      logger.debug({ logFile, verbose: isVerbose }, 'Container log written');

      if (code !== 0) {
        logger.error(
          {
            group: group.name,
            code,
            duration,
            stderr,
            stdout,
            logFile,
          },
          'Container exited with error',
        );

        resolve({
          status: 'error',
          result: null,
          error: `Container exited with code ${code}: ${stderr.slice(-200)}`,
        });
        return;
      }

      // Streaming mode: wait for output chain to settle, return completion marker
      if (onOutput) {
        outputChain.then(() => {
|
||||
logger.info(
|
||||
{ group: group.name, duration, newSessionId },
|
||||
'Container completed (streaming mode)',
|
||||
);
|
||||
resolve({
|
||||
status: 'success',
|
||||
result: null,
|
||||
newSessionId,
|
||||
});
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
// Legacy mode: parse the last output marker pair from accumulated stdout
|
||||
try {
|
||||
// Extract JSON between sentinel markers for robust parsing
|
||||
const startIdx = stdout.indexOf(OUTPUT_START_MARKER);
|
||||
const endIdx = stdout.indexOf(OUTPUT_END_MARKER);
|
||||
|
||||
let jsonLine: string;
|
||||
if (startIdx !== -1 && endIdx !== -1 && endIdx > startIdx) {
|
||||
jsonLine = stdout
|
||||
.slice(startIdx + OUTPUT_START_MARKER.length, endIdx)
|
||||
.trim();
|
||||
} else {
|
||||
// Fallback: last non-empty line (backwards compatibility)
|
||||
const lines = stdout.trim().split('\n');
|
||||
jsonLine = lines[lines.length - 1];
|
||||
}
|
||||
|
||||
const output: ContainerOutput = JSON.parse(jsonLine);
|
||||
|
||||
logger.info(
|
||||
{
|
||||
group: group.name,
|
||||
duration,
|
||||
status: output.status,
|
||||
hasResult: !!output.result,
|
||||
},
|
||||
'Container completed',
|
||||
);
|
||||
|
||||
resolve(output);
|
||||
} catch (err) {
|
||||
logger.error(
|
||||
{
|
||||
group: group.name,
|
||||
stdout,
|
||||
stderr,
|
||||
error: err,
|
||||
},
|
||||
'Failed to parse container output',
|
||||
);
|
||||
|
||||
resolve({
|
||||
status: 'error',
|
||||
result: null,
|
||||
error: `Failed to parse container output: ${err instanceof Error ? err.message : String(err)}`,
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
container.on('error', (err) => {
|
||||
clearTimeout(timeout);
|
||||
logger.error(
|
||||
{ group: group.name, containerName, error: err },
|
||||
'Container spawn error',
|
||||
);
|
||||
resolve({
|
||||
status: 'error',
|
||||
result: null,
|
||||
error: `Container spawn error: ${err.message}`,
|
||||
});
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
export function writeTasksSnapshot(
|
||||
groupFolder: string,
|
||||
isMain: boolean,
|
||||
tasks: Array<{
|
||||
id: string;
|
||||
groupFolder: string;
|
||||
prompt: string;
|
||||
schedule_type: string;
|
||||
schedule_value: string;
|
||||
status: string;
|
||||
next_run: string | null;
|
||||
}>,
|
||||
): void {
|
||||
// Write filtered tasks to the group's IPC directory
|
||||
const groupIpcDir = resolveGroupIpcPath(groupFolder);
|
||||
fs.mkdirSync(groupIpcDir, { recursive: true });
|
||||
|
||||
// Main sees all tasks, others only see their own
|
||||
const filteredTasks = isMain
|
||||
? tasks
|
||||
: tasks.filter((t) => t.groupFolder === groupFolder);
|
||||
|
||||
const tasksFile = path.join(groupIpcDir, 'current_tasks.json');
|
||||
fs.writeFileSync(tasksFile, JSON.stringify(filteredTasks, null, 2));
|
||||
}
|
||||
|
||||
export interface AvailableGroup {
|
||||
jid: string;
|
||||
name: string;
|
||||
lastActivity: string;
|
||||
isRegistered: boolean;
|
||||
}
|
||||
|
||||
/**
|
||||
* Write available groups snapshot for the container to read.
|
||||
* Only main group can see all available groups (for activation).
|
||||
* Non-main groups only see their own registration status.
|
||||
*/
|
||||
export function writeGroupsSnapshot(
|
||||
groupFolder: string,
|
||||
isMain: boolean,
|
||||
groups: AvailableGroup[],
|
||||
registeredJids: Set<string>,
|
||||
): void {
|
||||
const groupIpcDir = resolveGroupIpcPath(groupFolder);
|
||||
fs.mkdirSync(groupIpcDir, { recursive: true });
|
||||
|
||||
// Main sees all groups; others see nothing (they can't activate groups)
|
||||
const visibleGroups = isMain ? groups : [];
|
||||
|
||||
const groupsFile = path.join(groupIpcDir, 'available_groups.json');
|
||||
fs.writeFileSync(
|
||||
groupsFile,
|
||||
JSON.stringify(
|
||||
{
|
||||
groups: visibleGroups,
|
||||
lastSync: new Date().toISOString(),
|
||||
},
|
||||
null,
|
||||
2,
|
||||
),
|
||||
);
|
||||
}
|
||||
@@ -1,37 +0,0 @@
# Intent: src/container-runner.ts modifications

## What changed
Updated `buildContainerArgs` to support Apple Container's .env shadowing mechanism. The function now accepts an `isMain` parameter and uses it to decide how container user identity is configured.

## Why
Apple Container (VirtioFS) only supports directory mounts, not file mounts. The previous approach of mounting `/dev/null` over `.env` from the host causes a `VZErrorDomain` crash. Instead, main-group containers now start as root so the entrypoint can `mount --bind /dev/null` over `.env` inside the Linux VM, then drop to the host user via `setpriv`.

## Key sections

### buildContainerArgs (signature change)
- Added: `isMain: boolean` parameter
- Main containers: passes `RUN_UID`/`RUN_GID` env vars instead of `--user`, so the container starts as root
- Non-main containers: unchanged, still uses `--user` flag

### buildVolumeMounts
- Removed: the `/dev/null` → `/workspace/project/.env` shadow mount (was in the committed `37228a9` fix)
- The .env shadowing is now handled inside the container entrypoint instead

### runContainerAgent (call site)
- Changed: `buildContainerArgs(mounts, containerName)` → `buildContainerArgs(mounts, containerName, input.isMain)`

## Invariants
- All exported interfaces unchanged: `ContainerInput`, `ContainerOutput`, `runContainerAgent`, `writeTasksSnapshot`, `writeGroupsSnapshot`, `AvailableGroup`
- Non-main containers behave identically (still get `--user` flag)
- Mount list for non-main containers is unchanged
- Credentials injected by host-side credential proxy, never in container env or stdin
- Output parsing (streaming + legacy) unchanged

## Must-keep
- The `isMain` parameter on `buildContainerArgs` (consumed by `runContainerAgent`)
- The `RUN_UID`/`RUN_GID` env vars for main containers (consumed by entrypoint.sh)
- The `--user` flag for non-main containers (file permission compatibility)
- `CONTAINER_HOST_GATEWAY` and `hostGatewayArgs()` imports from `container-runtime.js`
- `detectAuthMode()` import from `credential-proxy.js`
- `CREDENTIAL_PROXY_PORT` import from `config.js`
- Credential proxy env vars: `ANTHROPIC_BASE_URL`, `ANTHROPIC_API_KEY`/`CLAUDE_CODE_OAUTH_TOKEN`
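The identity split described in this intent file can be sketched as a small pure helper (a hypothetical simplification; the real `buildContainerArgs` also assembles mounts, gateway args, and credential-proxy env vars):

```typescript
// Hypothetical sketch of the isMain identity branch, not the actual
// buildContainerArgs. Main containers start as root and receive
// RUN_UID/RUN_GID so the entrypoint can bind-mount /dev/null over .env
// inside the VM and then drop privileges via setpriv; non-main
// containers keep the plain --user flag.
function userIdentityArgs(isMain: boolean, uid: number, gid: number): string[] {
  if (isMain) {
    return ['-e', `RUN_UID=${uid}`, '-e', `RUN_GID=${gid}`];
  }
  return ['--user', `${uid}:${gid}`];
}
```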
@@ -1,177 +0,0 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';

// Mock logger
vi.mock('./logger.js', () => ({
  logger: {
    debug: vi.fn(),
    info: vi.fn(),
    warn: vi.fn(),
    error: vi.fn(),
  },
}));

// Mock child_process — store the mock fn so tests can configure it
const mockExecSync = vi.fn();
vi.mock('child_process', () => ({
  execSync: (...args: unknown[]) => mockExecSync(...args),
}));

import {
  CONTAINER_RUNTIME_BIN,
  readonlyMountArgs,
  stopContainer,
  ensureContainerRuntimeRunning,
  cleanupOrphans,
} from './container-runtime.js';
import { logger } from './logger.js';

beforeEach(() => {
  vi.clearAllMocks();
});

// --- Pure functions ---

describe('readonlyMountArgs', () => {
  it('returns --mount flag with type=bind and readonly', () => {
    const args = readonlyMountArgs('/host/path', '/container/path');
    expect(args).toEqual([
      '--mount',
      'type=bind,source=/host/path,target=/container/path,readonly',
    ]);
  });
});

describe('stopContainer', () => {
  it('returns stop command using CONTAINER_RUNTIME_BIN', () => {
    expect(stopContainer('nanoclaw-test-123')).toBe(
      `${CONTAINER_RUNTIME_BIN} stop nanoclaw-test-123`,
    );
  });
});

// --- ensureContainerRuntimeRunning ---

describe('ensureContainerRuntimeRunning', () => {
  it('does nothing when runtime is already running', () => {
    mockExecSync.mockReturnValueOnce('');

    ensureContainerRuntimeRunning();

    expect(mockExecSync).toHaveBeenCalledTimes(1);
    expect(mockExecSync).toHaveBeenCalledWith(
      `${CONTAINER_RUNTIME_BIN} system status`,
      { stdio: 'pipe' },
    );
    expect(logger.debug).toHaveBeenCalledWith('Container runtime already running');
  });

  it('auto-starts when system status fails', () => {
    // First call (system status) fails
    mockExecSync.mockImplementationOnce(() => {
      throw new Error('not running');
    });
    // Second call (system start) succeeds
    mockExecSync.mockReturnValueOnce('');

    ensureContainerRuntimeRunning();

    expect(mockExecSync).toHaveBeenCalledTimes(2);
    expect(mockExecSync).toHaveBeenNthCalledWith(
      2,
      `${CONTAINER_RUNTIME_BIN} system start`,
      { stdio: 'pipe', timeout: 30000 },
    );
    expect(logger.info).toHaveBeenCalledWith('Container runtime started');
  });

  it('throws when both status and start fail', () => {
    mockExecSync.mockImplementation(() => {
      throw new Error('failed');
    });

    expect(() => ensureContainerRuntimeRunning()).toThrow(
      'Container runtime is required but failed to start',
    );
    expect(logger.error).toHaveBeenCalled();
  });
});

// --- cleanupOrphans ---

describe('cleanupOrphans', () => {
  it('stops orphaned nanoclaw containers from JSON output', () => {
    // Apple Container ls returns JSON
    const lsOutput = JSON.stringify([
      { status: 'running', configuration: { id: 'nanoclaw-group1-111' } },
      { status: 'stopped', configuration: { id: 'nanoclaw-group2-222' } },
      { status: 'running', configuration: { id: 'nanoclaw-group3-333' } },
      { status: 'running', configuration: { id: 'other-container' } },
    ]);
    mockExecSync.mockReturnValueOnce(lsOutput);
    // stop calls succeed
    mockExecSync.mockReturnValue('');

    cleanupOrphans();

    // ls + 2 stop calls (only running nanoclaw- containers)
    expect(mockExecSync).toHaveBeenCalledTimes(3);
    expect(mockExecSync).toHaveBeenNthCalledWith(
      2,
      `${CONTAINER_RUNTIME_BIN} stop nanoclaw-group1-111`,
      { stdio: 'pipe' },
    );
    expect(mockExecSync).toHaveBeenNthCalledWith(
      3,
      `${CONTAINER_RUNTIME_BIN} stop nanoclaw-group3-333`,
      { stdio: 'pipe' },
    );
    expect(logger.info).toHaveBeenCalledWith(
      { count: 2, names: ['nanoclaw-group1-111', 'nanoclaw-group3-333'] },
      'Stopped orphaned containers',
    );
  });

  it('does nothing when no orphans exist', () => {
    mockExecSync.mockReturnValueOnce('[]');

    cleanupOrphans();

    expect(mockExecSync).toHaveBeenCalledTimes(1);
    expect(logger.info).not.toHaveBeenCalled();
  });

  it('warns and continues when ls fails', () => {
    mockExecSync.mockImplementationOnce(() => {
      throw new Error('container not available');
    });

    cleanupOrphans(); // should not throw

    expect(logger.warn).toHaveBeenCalledWith(
      expect.objectContaining({ err: expect.any(Error) }),
      'Failed to clean up orphaned containers',
    );
  });

  it('continues stopping remaining containers when one stop fails', () => {
    const lsOutput = JSON.stringify([
      { status: 'running', configuration: { id: 'nanoclaw-a-1' } },
      { status: 'running', configuration: { id: 'nanoclaw-b-2' } },
    ]);
    mockExecSync.mockReturnValueOnce(lsOutput);
    // First stop fails
    mockExecSync.mockImplementationOnce(() => {
      throw new Error('already stopped');
    });
    // Second stop succeeds
    mockExecSync.mockReturnValueOnce('');

    cleanupOrphans(); // should not throw

    expect(mockExecSync).toHaveBeenCalledTimes(3);
    expect(logger.info).toHaveBeenCalledWith(
      { count: 2, names: ['nanoclaw-a-1', 'nanoclaw-b-2'] },
      'Stopped orphaned containers',
    );
  });
});
@@ -1,99 +0,0 @@
/**
 * Container runtime abstraction for NanoClaw.
 * All runtime-specific logic lives here so swapping runtimes means changing one file.
 */
import { execSync } from 'child_process';

import { logger } from './logger.js';

/** The container runtime binary name. */
export const CONTAINER_RUNTIME_BIN = 'container';

/**
 * Hostname containers use to reach the host machine.
 * Apple Container VMs access the host via the default gateway (192.168.64.1).
 */
export const CONTAINER_HOST_GATEWAY = '192.168.64.1';

/**
 * CLI args needed for the container to resolve the host gateway.
 * Apple Container provides host networking natively on macOS — no extra args needed.
 */
export function hostGatewayArgs(): string[] {
  return [];
}

/** Returns CLI args for a readonly bind mount. */
export function readonlyMountArgs(hostPath: string, containerPath: string): string[] {
  return ['--mount', `type=bind,source=${hostPath},target=${containerPath},readonly`];
}

/** Returns the shell command to stop a container by name. */
export function stopContainer(name: string): string {
  return `${CONTAINER_RUNTIME_BIN} stop ${name}`;
}

/** Ensure the container runtime is running, starting it if needed. */
export function ensureContainerRuntimeRunning(): void {
  try {
    execSync(`${CONTAINER_RUNTIME_BIN} system status`, { stdio: 'pipe' });
    logger.debug('Container runtime already running');
  } catch {
    logger.info('Starting container runtime...');
    try {
      execSync(`${CONTAINER_RUNTIME_BIN} system start`, { stdio: 'pipe', timeout: 30000 });
      logger.info('Container runtime started');
    } catch (err) {
      logger.error({ err }, 'Failed to start container runtime');
      console.error(
        '\n╔════════════════════════════════════════════════════════════════╗',
      );
      console.error(
        '║  FATAL: Container runtime failed to start                      ║',
      );
      console.error(
        '║                                                                ║',
      );
      console.error(
        '║  Agents cannot run without a container runtime. To fix:        ║',
      );
      console.error(
        '║  1. Ensure Apple Container is installed                        ║',
      );
      console.error(
        '║  2. Run: container system start                                ║',
      );
      console.error(
        '║  3. Restart NanoClaw                                           ║',
      );
      console.error(
        '╚════════════════════════════════════════════════════════════════╝\n',
      );
      throw new Error('Container runtime is required but failed to start');
    }
  }
}

/** Kill orphaned NanoClaw containers from previous runs. */
export function cleanupOrphans(): void {
  try {
    const output = execSync(`${CONTAINER_RUNTIME_BIN} ls --format json`, {
      stdio: ['pipe', 'pipe', 'pipe'],
      encoding: 'utf-8',
    });
    const containers: { status: string; configuration: { id: string } }[] = JSON.parse(output || '[]');
    const orphans = containers
      .filter((c) => c.status === 'running' && c.configuration.id.startsWith('nanoclaw-'))
      .map((c) => c.configuration.id);
    for (const name of orphans) {
      try {
        execSync(stopContainer(name), { stdio: 'pipe' });
      } catch { /* already stopped */ }
    }
    if (orphans.length > 0) {
      logger.info({ count: orphans.length, names: orphans }, 'Stopped orphaned containers');
    }
  } catch (err) {
    logger.warn({ err }, 'Failed to clean up orphaned containers');
  }
}
@@ -1,41 +0,0 @@
# Intent: src/container-runtime.ts modifications

## What changed
Replaced Docker runtime with Apple Container runtime. This is a full file replacement — the exported API is identical, only the implementation differs.

## Key sections

### CONTAINER_RUNTIME_BIN
- Changed: `'docker'` → `'container'` (the Apple Container CLI binary)

### readonlyMountArgs
- Changed: Docker `-v host:container:ro` → Apple Container `--mount type=bind,source=...,target=...,readonly`

### ensureContainerRuntimeRunning
- Changed: `docker info` → `container system status` for checking
- Added: auto-start via `container system start` when not running (Apple Container supports this; Docker requires manual start)
- Changed: error message references Apple Container instead of Docker

### cleanupOrphans
- Changed: `docker ps --filter name=nanoclaw- --format '{{.Names}}'` → `container ls --format json` with JSON parsing
- Apple Container returns JSON with `{ status, configuration: { id } }` structure

### CONTAINER_HOST_GATEWAY
- Set to `'192.168.64.1'` — the default gateway for Apple Container VMs to reach the host
- Docker uses `'host.docker.internal'`, which is resolved differently

### hostGatewayArgs
- Returns `[]` — Apple Container provides host networking natively on macOS
- Docker version returns `['--add-host=host.docker.internal:host-gateway']` on Linux

## Invariants
- All exports remain identical: `CONTAINER_RUNTIME_BIN`, `CONTAINER_HOST_GATEWAY`, `readonlyMountArgs`, `stopContainer`, `hostGatewayArgs`, `ensureContainerRuntimeRunning`, `cleanupOrphans`
- `stopContainer` implementation is unchanged (`<bin> stop <name>`)
- Logger usage pattern is unchanged
- Error handling pattern is unchanged

## Must-keep
- The exported function signatures (consumed by container-runner.ts and index.ts)
- The error box-drawing output format
- The orphan cleanup logic (find + stop pattern)
- `CONTAINER_HOST_GATEWAY` must match the address the credential proxy is reachable at from within the VM
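The mount-syntax change described above can be sketched side by side (the Docker form here is reconstructed from the description in this intent file, not taken from this repo):

```typescript
// Readonly bind-mount args for each runtime. The Apple Container form
// mirrors readonlyMountArgs; the Docker form is the -v shorthand it
// replaced. Both produce the argv fragment passed to the runtime CLI.
function dockerReadonlyMount(host: string, target: string): string[] {
  return ['-v', `${host}:${target}:ro`];
}

function appleReadonlyMount(host: string, target: string): string[] {
  return ['--mount', `type=bind,source=${host},target=${target},readonly`];
}
```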
@@ -1,69 +0,0 @@
import { describe, expect, it } from 'vitest';
import fs from 'fs';
import path from 'path';

describe('convert-to-apple-container skill package', () => {
  const skillDir = path.resolve(__dirname, '..');

  it('has a valid manifest', () => {
    const manifestPath = path.join(skillDir, 'manifest.yaml');
    expect(fs.existsSync(manifestPath)).toBe(true);

    const content = fs.readFileSync(manifestPath, 'utf-8');
    expect(content).toContain('skill: convert-to-apple-container');
    expect(content).toContain('version: 1.0.0');
    expect(content).toContain('container-runtime.ts');
    expect(content).toContain('container/build.sh');
  });

  it('has all modified files', () => {
    const runtimeFile = path.join(skillDir, 'modify', 'src', 'container-runtime.ts');
    expect(fs.existsSync(runtimeFile)).toBe(true);

    const content = fs.readFileSync(runtimeFile, 'utf-8');
    expect(content).toContain("CONTAINER_RUNTIME_BIN = 'container'");
    expect(content).toContain('system status');
    expect(content).toContain('system start');
    expect(content).toContain('ls --format json');

    const testFile = path.join(skillDir, 'modify', 'src', 'container-runtime.test.ts');
    expect(fs.existsSync(testFile)).toBe(true);

    const testContent = fs.readFileSync(testFile, 'utf-8');
    expect(testContent).toContain('system status');
    expect(testContent).toContain('--mount');
  });

  it('has intent files for modified sources', () => {
    const runtimeIntent = path.join(skillDir, 'modify', 'src', 'container-runtime.ts.intent.md');
    expect(fs.existsSync(runtimeIntent)).toBe(true);

    const buildIntent = path.join(skillDir, 'modify', 'container', 'build.sh.intent.md');
    expect(fs.existsSync(buildIntent)).toBe(true);
  });

  it('has build.sh with Apple Container default', () => {
    const buildFile = path.join(skillDir, 'modify', 'container', 'build.sh');
    expect(fs.existsSync(buildFile)).toBe(true);

    const content = fs.readFileSync(buildFile, 'utf-8');
    expect(content).toContain('CONTAINER_RUNTIME:-container');
    expect(content).not.toContain('CONTAINER_RUNTIME:-docker');
  });

  it('uses Apple Container API patterns (not Docker)', () => {
    const runtimeFile = path.join(skillDir, 'modify', 'src', 'container-runtime.ts');
    const content = fs.readFileSync(runtimeFile, 'utf-8');

    // Apple Container patterns
    expect(content).toContain('system status');
    expect(content).toContain('system start');
    expect(content).toContain('ls --format json');
    expect(content).toContain('type=bind,source=');

    // Should NOT contain Docker patterns
    expect(content).not.toContain('docker info');
    expect(content).not.toContain("'-v'");
    expect(content).not.toContain('--filter name=');
  });
});