feat: skills as branches, channels as forks
Replace the custom skills engine with standard git operations. Feature skills are now git branches (on upstream or channel forks) applied via `git merge`. Channels are separate fork repos.

- Remove skills-engine/ (6,300+ lines) and the apply/uninstall/rebase scripts
- Remove the old skill format (add/, modify/, manifest.yaml) from all skills
- Remove old CI (skill-drift.yml, skill-pr.yml)
- Add merge-forward CI for upstream skill branches
- Add fork notification (repository_dispatch to channel forks)
- Add marketplace config (.claude/settings.json)
- Add /update-skills operational skill
- Update /setup and /customize for marketplace plugin install
- Add docs/skills-as-branches.md architecture doc

Channel forks created: nanoclaw-whatsapp (with 5 skill branches), nanoclaw-telegram, nanoclaw-discord, nanoclaw-slack, nanoclaw-gmail. Upstream retains: skill/ollama-tool, skill/apple-container, skill/compact.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -1,242 +0,0 @@
---
name: add-gmail
description: Add Gmail integration to NanoClaw. Can be configured as a tool (agent reads/sends emails when triggered from WhatsApp) or as a full channel (emails can trigger the agent, schedule tasks, and receive replies). Guides through GCP OAuth setup and implements the integration.
---

# Add Gmail Integration

This skill adds Gmail support to NanoClaw — either as a tool (read, send, search, draft) or as a full channel that polls the inbox.

## Phase 1: Pre-flight

### Check if already applied

Read `.nanoclaw/state.yaml`. If `gmail` is in `applied_skills`, skip to Phase 3 (Setup). The code changes are already in place.

### Ask the user

Use `AskUserQuestion`:

AskUserQuestion: Should incoming emails be able to trigger the agent?

- **Yes** — Full channel mode: the agent listens on Gmail and responds to incoming emails automatically
- **No** — Tool-only: the agent gets full Gmail tools (read, send, search, draft) but won't monitor the inbox. No channel code is added.

## Phase 2: Apply Code Changes

### Initialize skills system (if needed)

If the `.nanoclaw/` directory doesn't exist yet:

```bash
npx tsx scripts/apply-skill.ts --init
```

### Path A: Tool-only (user chose "No")

Do NOT run the full apply script. Only two source files need changes. This avoids adding dead code (`gmail.ts`, `gmail.test.ts`, index.ts channel logic, routing tests, the `googleapis` dependency).

#### 1. Mount Gmail credentials in container

Apply the changes described in `modify/src/container-runner.ts.intent.md` to `src/container-runner.ts`: import `os`, then add a conditional read-write mount of `~/.gmail-mcp` to `/home/node/.gmail-mcp` in `buildVolumeMounts()` after the session mounts.

#### 2. Add Gmail MCP server to agent runner

Apply the changes described in `modify/container/agent-runner/src/index.ts.intent.md` to `container/agent-runner/src/index.ts`: add a `gmail` MCP server (`npx -y @gongrzhe/server-gmail-autoauth-mcp`) and `'mcp__gmail__*'` to `allowedTools`.
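The edited option shape isn't shown in this diff; a minimal sketch of what the agent-runner edit adds, assuming the Claude Agent SDK's `mcpServers`/`allowedTools` options (names and structure here are illustrative, not the committed code):

```typescript
// Illustrative only: mirrors the intent of the agent-runner edit.
// The real change lives in container/agent-runner/src/index.ts.
const gmailAgentOptions = {
  mcpServers: {
    gmail: {
      command: 'npx',
      args: ['-y', '@gongrzhe/server-gmail-autoauth-mcp'],
    },
  },
  // The wildcard grants every tool the gmail MCP server exposes
  allowedTools: ['mcp__gmail__*'],
};
```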
#### 3. Record in state
|
||||
|
||||
Add `gmail` to `.nanoclaw/state.yaml` under `applied_skills` with `mode: tool-only`.
|
||||
|
||||
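The exact schema of `.nanoclaw/state.yaml` isn't shown in this commit; the recorded entry might look like this (structure assumed for illustration — only the `gmail` entry with `mode: tool-only` is required):

```yaml
# Illustrative shape only
applied_skills:
  gmail:
    mode: tool-only
```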
#### 4. Validate
|
||||
|
||||
```bash
|
||||
npm run build
|
||||
```
|
||||
|
||||
Build must be clean before proceeding. Skip to Phase 3.
|
||||
|
||||
### Path B: Channel mode (user chose "Yes")
|
||||
|
||||
Run the full skills engine to apply all code changes:
|
||||
|
||||
```bash
|
||||
npx tsx scripts/apply-skill.ts .claude/skills/add-gmail
|
||||
```
|
||||
|
||||
This deterministically:
|
||||
|
||||
- Adds `src/channels/gmail.ts` (GmailChannel class with self-registration via `registerChannel`)
|
||||
- Adds `src/channels/gmail.test.ts` (unit tests)
|
||||
- Appends `import './gmail.js'` to the channel barrel file `src/channels/index.ts`
|
||||
- Three-way merges Gmail credentials mount into `src/container-runner.ts` (~/.gmail-mcp -> /home/node/.gmail-mcp)
|
||||
- Three-way merges Gmail MCP server into `container/agent-runner/src/index.ts` (@gongrzhe/server-gmail-autoauth-mcp)
|
||||
- Installs the `googleapis` npm dependency
|
||||
- Records the application in `.nanoclaw/state.yaml`
|
||||
|
||||
If the apply reports merge conflicts, read the intent files:
|
||||
|
||||
- `modify/src/channels/index.ts.intent.md` — what changed for the barrel file
|
||||
- `modify/src/container-runner.ts.intent.md` — what changed for container-runner.ts
|
||||
- `modify/container/agent-runner/src/index.ts.intent.md` — what changed for agent-runner
|
||||
|
||||
#### Add email handling instructions
|
||||
|
||||
Append the following to `groups/main/CLAUDE.md` (before the formatting section):
|
||||
|
||||
```markdown
|
||||
## Email Notifications
|
||||
|
||||
When you receive an email notification (messages starting with `[Email from ...`), inform the user about it but do NOT reply to the email unless specifically asked. You have Gmail tools available — use them only when the user explicitly asks you to reply, forward, or take action on an email.
|
||||
```
|
||||
|
||||
#### Validate
|
||||
|
||||
```bash
|
||||
npm test
|
||||
npm run build
|
||||
```
|
||||
|
||||
All tests must pass (including the new gmail tests) and build must be clean before proceeding.
|
||||
|
||||
## Phase 3: Setup
|
||||
|
||||
### Check existing Gmail credentials
|
||||
|
||||
```bash
|
||||
ls -la ~/.gmail-mcp/ 2>/dev/null || echo "No Gmail config found"
|
||||
```
|
||||
|
||||
If `credentials.json` already exists, skip to "Build and restart" below.
|
||||
|
||||
### GCP Project Setup
|
||||
|
||||
Tell the user:
|
||||
|
||||
> I need you to set up Google Cloud OAuth credentials:
|
||||
>
|
||||
> 1. Open https://console.cloud.google.com — create a new project or select existing
|
||||
> 2. Go to **APIs & Services > Library**, search "Gmail API", click **Enable**
|
||||
> 3. Go to **APIs & Services > Credentials**, click **+ CREATE CREDENTIALS > OAuth client ID**
|
||||
> - If prompted for consent screen: choose "External", fill in app name and email, save
|
||||
> - Application type: **Desktop app**, name: anything (e.g., "NanoClaw Gmail")
|
||||
> 4. Click **DOWNLOAD JSON** and save as `gcp-oauth.keys.json`
|
||||
>
|
||||
> Where did you save the file? (Give me the full path, or paste the file contents here)
|
||||
|
||||
If user provides a path, copy it:
|
||||
|
||||
```bash
|
||||
mkdir -p ~/.gmail-mcp
|
||||
cp "/path/user/provided/gcp-oauth.keys.json" ~/.gmail-mcp/gcp-oauth.keys.json
|
||||
```
|
||||
|
||||
If user pastes JSON content, write it to `~/.gmail-mcp/gcp-oauth.keys.json`.
|
||||
|
||||
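For pasted content, a heredoc works; the JSON below is a placeholder shape for a Desktop-app OAuth client, not real credentials:

```shell
mkdir -p ~/.gmail-mcp
# Replace the placeholder values with the JSON the user actually pasted
cat > ~/.gmail-mcp/gcp-oauth.keys.json <<'EOF'
{
  "installed": {
    "client_id": "PLACEHOLDER.apps.googleusercontent.com",
    "client_secret": "PLACEHOLDER",
    "redirect_uris": ["http://localhost"]
  }
}
EOF
```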
### OAuth Authorization
|
||||
|
||||
Tell the user:
|
||||
|
||||
> I'm going to run Gmail authorization. A browser window will open — sign in and grant access. If you see an "app isn't verified" warning, click "Advanced" then "Go to [app name] (unsafe)" — this is normal for personal OAuth apps.
|
||||
|
||||
Run the authorization:
|
||||
|
||||
```bash
|
||||
npx -y @gongrzhe/server-gmail-autoauth-mcp auth
|
||||
```
|
||||
|
||||
If that fails (some versions don't have an auth subcommand), try `timeout 60 npx -y @gongrzhe/server-gmail-autoauth-mcp || true`. Verify with `ls ~/.gmail-mcp/credentials.json`.
|
||||
|
||||
### Build and restart
|
||||
|
||||
Clear stale per-group agent-runner copies (they only get re-created if missing, so existing copies won't pick up the new Gmail server):
|
||||
|
||||
```bash
|
||||
rm -r data/sessions/*/agent-runner-src 2>/dev/null || true
|
||||
```
|
||||
|
||||
Rebuild the container (agent-runner changed):
|
||||
|
||||
```bash
|
||||
cd container && ./build.sh
|
||||
```
|
||||
|
||||
Then compile and restart:
|
||||
|
||||
```bash
|
||||
npm run build
|
||||
launchctl kickstart -k gui/$(id -u)/com.nanoclaw # macOS
|
||||
# Linux: systemctl --user restart nanoclaw
|
||||
```
|
||||
|
||||
## Phase 4: Verify
|
||||
|
||||
### Test tool access (both modes)
|
||||
|
||||
Tell the user:
|
||||
|
||||
> Gmail is connected! Send this in your main channel:
|
||||
>
|
||||
> `@Andy check my recent emails` or `@Andy list my Gmail labels`
|
||||
|
||||
### Test channel mode (Channel mode only)
|
||||
|
||||
Tell the user to send themselves a test email. The agent should pick it up within a minute. Monitor: `tail -f logs/nanoclaw.log | grep -iE "(gmail|email)"`.
|
||||
|
||||
Once verified, offer filter customization via `AskUserQuestion` — by default, only emails in the Primary inbox trigger the agent (Promotions, Social, Updates, and Forums are excluded). The user can keep this default or narrow further by sender, label, or keywords. No code changes needed for filters.
|
||||
|
||||
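Narrowing reuses Gmail's standard search syntax on top of the default query; for example (addresses and labels below are illustrative):

```
is:unread category:primary from:alerts@example.com
is:unread category:primary subject:(invoice OR receipt)
is:unread label:family
```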
### Check logs if needed
|
||||
|
||||
```bash
|
||||
tail -f logs/nanoclaw.log
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Gmail connection not responding
|
||||
|
||||
Test directly:
|
||||
|
||||
```bash
|
||||
npx -y @gongrzhe/server-gmail-autoauth-mcp
|
||||
```
|
||||
|
||||
### OAuth token expired
|
||||
|
||||
Re-authorize:
|
||||
|
||||
```bash
|
||||
rm ~/.gmail-mcp/credentials.json
|
||||
npx -y @gongrzhe/server-gmail-autoauth-mcp
|
||||
```
|
||||
|
||||
### Container can't access Gmail
|
||||
|
||||
- Verify `~/.gmail-mcp` is mounted: check `src/container-runner.ts` for the `.gmail-mcp` mount
|
||||
- Check container logs: `cat groups/main/logs/container-*.log | tail -50`
|
||||
|
||||
### Emails not being detected (Channel mode only)
|
||||
|
||||
- By default, the channel polls unread Primary inbox emails (`is:unread category:primary`)
|
||||
- Check logs for Gmail polling errors
|
||||
|
||||
## Removal
|
||||
|
||||
### Tool-only mode
|
||||
|
||||
1. Remove `~/.gmail-mcp` mount from `src/container-runner.ts`
|
||||
2. Remove `gmail` MCP server and `mcp__gmail__*` from `container/agent-runner/src/index.ts`
|
||||
3. Remove `gmail` from `.nanoclaw/state.yaml`
|
||||
4. Clear stale agent-runner copies: `rm -r data/sessions/*/agent-runner-src 2>/dev/null || true`
|
||||
5. Rebuild: `cd container && ./build.sh && cd .. && npm run build && launchctl kickstart -k gui/$(id -u)/com.nanoclaw` (macOS) or `systemctl --user restart nanoclaw` (Linux)
|
||||
|
||||
### Channel mode
|
||||
|
||||
1. Delete `src/channels/gmail.ts` and `src/channels/gmail.test.ts`
|
||||
2. Remove `import './gmail.js'` from `src/channels/index.ts`
|
||||
3. Remove `~/.gmail-mcp` mount from `src/container-runner.ts`
|
||||
4. Remove `gmail` MCP server and `mcp__gmail__*` from `container/agent-runner/src/index.ts`
|
||||
5. Uninstall: `npm uninstall googleapis`
|
||||
6. Remove `gmail` from `.nanoclaw/state.yaml`
|
||||
7. Clear stale agent-runner copies: `rm -r data/sessions/*/agent-runner-src 2>/dev/null || true`
|
||||
8. Rebuild: `cd container && ./build.sh && cd .. && npm run build && launchctl kickstart -k gui/$(id -u)/com.nanoclaw` (macOS) or `systemctl --user restart nanoclaw` (Linux)
|
||||
@@ -1,74 +0,0 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';

// Mock registry (registerChannel runs at import time)
vi.mock('./registry.js', () => ({ registerChannel: vi.fn() }));

import { GmailChannel, GmailChannelOpts } from './gmail.js';

function makeOpts(overrides?: Partial<GmailChannelOpts>): GmailChannelOpts {
  return {
    onMessage: vi.fn(),
    onChatMetadata: vi.fn(),
    registeredGroups: () => ({}),
    ...overrides,
  };
}

describe('GmailChannel', () => {
  let channel: GmailChannel;

  beforeEach(() => {
    channel = new GmailChannel(makeOpts());
  });

  describe('ownsJid', () => {
    it('returns true for gmail: prefixed JIDs', () => {
      expect(channel.ownsJid('gmail:abc123')).toBe(true);
      expect(channel.ownsJid('gmail:thread-id-456')).toBe(true);
    });

    it('returns false for non-gmail JIDs', () => {
      expect(channel.ownsJid('12345@g.us')).toBe(false);
      expect(channel.ownsJid('tg:123')).toBe(false);
      expect(channel.ownsJid('dc:456')).toBe(false);
      expect(channel.ownsJid('user@s.whatsapp.net')).toBe(false);
    });
  });

  describe('name', () => {
    it('is gmail', () => {
      expect(channel.name).toBe('gmail');
    });
  });

  describe('isConnected', () => {
    it('returns false before connect', () => {
      expect(channel.isConnected()).toBe(false);
    });
  });

  describe('disconnect', () => {
    it('sets connected to false', async () => {
      await channel.disconnect();
      expect(channel.isConnected()).toBe(false);
    });
  });

  describe('constructor options', () => {
    it('accepts custom poll interval', () => {
      const ch = new GmailChannel(makeOpts(), 30000);
      expect(ch.name).toBe('gmail');
    });

    it('defaults to unread query when no filter configured', () => {
      const ch = new GmailChannel(makeOpts());
      const query = (ch as unknown as { buildQuery: () => string }).buildQuery();
      expect(query).toBe('is:unread category:primary');
    });

    it('defaults with no options provided', () => {
      const ch = new GmailChannel(makeOpts());
      expect(ch.name).toBe('gmail');
    });
  });
});
@@ -1,352 +0,0 @@
import fs from 'fs';
import os from 'os';
import path from 'path';

import { google, gmail_v1 } from 'googleapis';
import { OAuth2Client } from 'google-auth-library';

// isMain flag is used instead of MAIN_GROUP_FOLDER constant
import { logger } from '../logger.js';
import { registerChannel, ChannelOpts } from './registry.js';
import {
  Channel,
  OnChatMetadata,
  OnInboundMessage,
  RegisteredGroup,
} from '../types.js';

export interface GmailChannelOpts {
  onMessage: OnInboundMessage;
  onChatMetadata: OnChatMetadata;
  registeredGroups: () => Record<string, RegisteredGroup>;
}

interface ThreadMeta {
  sender: string;
  senderName: string;
  subject: string;
  messageId: string; // RFC 2822 Message-ID for In-Reply-To
}

export class GmailChannel implements Channel {
  name = 'gmail';

  private oauth2Client: OAuth2Client | null = null;
  private gmail: gmail_v1.Gmail | null = null;
  private opts: GmailChannelOpts;
  private pollIntervalMs: number;
  private pollTimer: ReturnType<typeof setTimeout> | null = null;
  private processedIds = new Set<string>();
  private threadMeta = new Map<string, ThreadMeta>();
  private consecutiveErrors = 0;
  private userEmail = '';

  constructor(opts: GmailChannelOpts, pollIntervalMs = 60000) {
    this.opts = opts;
    this.pollIntervalMs = pollIntervalMs;
  }

  async connect(): Promise<void> {
    const credDir = path.join(os.homedir(), '.gmail-mcp');
    const keysPath = path.join(credDir, 'gcp-oauth.keys.json');
    const tokensPath = path.join(credDir, 'credentials.json');

    if (!fs.existsSync(keysPath) || !fs.existsSync(tokensPath)) {
      logger.warn(
        'Gmail credentials not found in ~/.gmail-mcp/. Skipping Gmail channel. Run /add-gmail to set up.',
      );
      return;
    }

    const keys = JSON.parse(fs.readFileSync(keysPath, 'utf-8'));
    const tokens = JSON.parse(fs.readFileSync(tokensPath, 'utf-8'));

    const clientConfig = keys.installed || keys.web || keys;
    const { client_id, client_secret, redirect_uris } = clientConfig;
    this.oauth2Client = new google.auth.OAuth2(
      client_id,
      client_secret,
      redirect_uris?.[0],
    );
    this.oauth2Client.setCredentials(tokens);

    // Persist refreshed tokens
    this.oauth2Client.on('tokens', (newTokens) => {
      try {
        const current = JSON.parse(fs.readFileSync(tokensPath, 'utf-8'));
        Object.assign(current, newTokens);
        fs.writeFileSync(tokensPath, JSON.stringify(current, null, 2));
        logger.debug('Gmail OAuth tokens refreshed');
      } catch (err) {
        logger.warn({ err }, 'Failed to persist refreshed Gmail tokens');
      }
    });

    this.gmail = google.gmail({ version: 'v1', auth: this.oauth2Client });

    // Verify connection
    const profile = await this.gmail.users.getProfile({ userId: 'me' });
    this.userEmail = profile.data.emailAddress || '';
    logger.info({ email: this.userEmail }, 'Gmail channel connected');

    // Start polling with error backoff
    const schedulePoll = () => {
      const backoffMs = this.consecutiveErrors > 0
        ? Math.min(this.pollIntervalMs * Math.pow(2, this.consecutiveErrors), 30 * 60 * 1000)
        : this.pollIntervalMs;
      this.pollTimer = setTimeout(() => {
        this.pollForMessages()
          .catch((err) => logger.error({ err }, 'Gmail poll error'))
          .finally(() => {
            if (this.gmail) schedulePoll();
          });
      }, backoffMs);
    };

    // Initial poll
    await this.pollForMessages();
    schedulePoll();
  }

  async sendMessage(jid: string, text: string): Promise<void> {
    if (!this.gmail) {
      logger.warn('Gmail not initialized');
      return;
    }

    const threadId = jid.replace(/^gmail:/, '');
    const meta = this.threadMeta.get(threadId);

    if (!meta) {
      logger.warn({ jid }, 'No thread metadata for reply, cannot send');
      return;
    }

    const subject = meta.subject.startsWith('Re:')
      ? meta.subject
      : `Re: ${meta.subject}`;

    const headers = [
      `To: ${meta.sender}`,
      `From: ${this.userEmail}`,
      `Subject: ${subject}`,
      `In-Reply-To: ${meta.messageId}`,
      `References: ${meta.messageId}`,
      'Content-Type: text/plain; charset=utf-8',
      '',
      text,
    ].join('\r\n');

    const encodedMessage = Buffer.from(headers)
      .toString('base64')
      .replace(/\+/g, '-')
      .replace(/\//g, '_')
      .replace(/=+$/, '');

    try {
      await this.gmail.users.messages.send({
        userId: 'me',
        requestBody: {
          raw: encodedMessage,
          threadId,
        },
      });
      logger.info({ to: meta.sender, threadId }, 'Gmail reply sent');
    } catch (err) {
      logger.error({ jid, err }, 'Failed to send Gmail reply');
    }
  }

  isConnected(): boolean {
    return this.gmail !== null;
  }

  ownsJid(jid: string): boolean {
    return jid.startsWith('gmail:');
  }

  async disconnect(): Promise<void> {
    if (this.pollTimer) {
      clearTimeout(this.pollTimer);
      this.pollTimer = null;
    }
    this.gmail = null;
    this.oauth2Client = null;
    logger.info('Gmail channel stopped');
  }

  // --- Private ---

  private buildQuery(): string {
    return 'is:unread category:primary';
  }

  private async pollForMessages(): Promise<void> {
    if (!this.gmail) return;

    try {
      const query = this.buildQuery();
      const res = await this.gmail.users.messages.list({
        userId: 'me',
        q: query,
        maxResults: 10,
      });

      const messages = res.data.messages || [];

      for (const stub of messages) {
        if (!stub.id || this.processedIds.has(stub.id)) continue;
        this.processedIds.add(stub.id);

        await this.processMessage(stub.id);
      }

      // Cap processed ID set to prevent unbounded growth
      if (this.processedIds.size > 5000) {
        const ids = [...this.processedIds];
        this.processedIds = new Set(ids.slice(ids.length - 2500));
      }

      this.consecutiveErrors = 0;
    } catch (err) {
      this.consecutiveErrors++;
      const backoffMs = Math.min(this.pollIntervalMs * Math.pow(2, this.consecutiveErrors), 30 * 60 * 1000);
      logger.error({ err, consecutiveErrors: this.consecutiveErrors, nextPollMs: backoffMs }, 'Gmail poll failed');
    }
  }

  private async processMessage(messageId: string): Promise<void> {
    if (!this.gmail) return;

    const msg = await this.gmail.users.messages.get({
      userId: 'me',
      id: messageId,
      format: 'full',
    });

    const headers = msg.data.payload?.headers || [];
    const getHeader = (name: string) =>
      headers.find((h) => h.name?.toLowerCase() === name.toLowerCase())
        ?.value || '';

    const from = getHeader('From');
    const subject = getHeader('Subject');
    const rfc2822MessageId = getHeader('Message-ID');
    const threadId = msg.data.threadId || messageId;
    const timestamp = new Date(
      parseInt(msg.data.internalDate || '0', 10),
    ).toISOString();

    // Extract sender name and email
    const senderMatch = from.match(/^(.+?)\s*<(.+?)>$/);
    const senderName = senderMatch ? senderMatch[1].replace(/"/g, '') : from;
    const senderEmail = senderMatch ? senderMatch[2] : from;

    // Skip emails from self (our own replies)
    if (senderEmail === this.userEmail) return;

    // Extract body text
    const body = this.extractTextBody(msg.data.payload);

    if (!body) {
      logger.debug({ messageId, subject }, 'Skipping email with no text body');
      return;
    }

    const chatJid = `gmail:${threadId}`;

    // Cache thread metadata for replies
    this.threadMeta.set(threadId, {
      sender: senderEmail,
      senderName,
      subject,
      messageId: rfc2822MessageId,
    });

    // Store chat metadata for group discovery
    this.opts.onChatMetadata(chatJid, timestamp, subject, 'gmail', false);

    // Find the main group to deliver the email notification
    const groups = this.opts.registeredGroups();
    const mainEntry = Object.entries(groups).find(
      ([, g]) => g.isMain === true,
    );

    if (!mainEntry) {
      logger.debug(
        { chatJid, subject },
        'No main group registered, skipping email',
      );
      return;
    }

    const mainJid = mainEntry[0];
    const content = `[Email from ${senderName} <${senderEmail}>]\nSubject: ${subject}\n\n${body}`;

    this.opts.onMessage(mainJid, {
      id: messageId,
      chat_jid: mainJid,
      sender: senderEmail,
      sender_name: senderName,
      content,
      timestamp,
      is_from_me: false,
    });

    // Mark as read
    try {
      await this.gmail.users.messages.modify({
        userId: 'me',
        id: messageId,
        requestBody: { removeLabelIds: ['UNREAD'] },
      });
    } catch (err) {
      logger.warn({ messageId, err }, 'Failed to mark email as read');
    }

    logger.info(
      { mainJid, from: senderName, subject },
      'Gmail email delivered to main group',
    );
  }

  private extractTextBody(
    payload: gmail_v1.Schema$MessagePart | undefined,
  ): string {
    if (!payload) return '';

    // Direct text/plain body
    if (payload.mimeType === 'text/plain' && payload.body?.data) {
      return Buffer.from(payload.body.data, 'base64').toString('utf-8');
    }

    // Multipart: search parts recursively
    if (payload.parts) {
      // Prefer text/plain
      for (const part of payload.parts) {
        if (part.mimeType === 'text/plain' && part.body?.data) {
          return Buffer.from(part.body.data, 'base64').toString('utf-8');
        }
      }
      // Recurse into nested multipart
      for (const part of payload.parts) {
        const text = this.extractTextBody(part);
        if (text) return text;
      }
    }

    return '';
  }
}

registerChannel('gmail', (opts: ChannelOpts) => {
  const credDir = path.join(os.homedir(), '.gmail-mcp');
  if (
    !fs.existsSync(path.join(credDir, 'gcp-oauth.keys.json')) ||
    !fs.existsSync(path.join(credDir, 'credentials.json'))
  ) {
    logger.warn('Gmail: credentials not found in ~/.gmail-mcp/');
    return null;
  }
  return new GmailChannel(opts);
});
@@ -1,17 +0,0 @@
skill: gmail
version: 1.0.0
description: "Gmail integration via Google APIs"
core_version: 0.1.0
adds:
  - src/channels/gmail.ts
  - src/channels/gmail.test.ts
modifies:
  - src/channels/index.ts
  - src/container-runner.ts
  - container/agent-runner/src/index.ts
structured:
  npm_dependencies:
    googleapis: "^144.0.0"
conflicts: []
depends: []
test: "npx vitest run src/channels/gmail.test.ts"
@@ -1,593 +0,0 @@
|
||||
/**
|
||||
* NanoClaw Agent Runner
|
||||
* Runs inside a container, receives config via stdin, outputs result to stdout
|
||||
*
|
||||
* Input protocol:
|
||||
* Stdin: Full ContainerInput JSON (read until EOF, like before)
|
||||
* IPC: Follow-up messages written as JSON files to /workspace/ipc/input/
|
||||
* Files: {type:"message", text:"..."}.json — polled and consumed
|
||||
* Sentinel: /workspace/ipc/input/_close — signals session end
|
||||
*
|
||||
* Stdout protocol:
|
||||
* Each result is wrapped in OUTPUT_START_MARKER / OUTPUT_END_MARKER pairs.
|
||||
* Multiple results may be emitted (one per agent teams result).
|
||||
* Final marker after loop ends signals completion.
|
||||
*/
|
||||
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
import { query, HookCallback, PreCompactHookInput, PreToolUseHookInput } from '@anthropic-ai/claude-agent-sdk';
|
||||
import { fileURLToPath } from 'url';
|
||||
|
||||
interface ContainerInput {
|
||||
prompt: string;
|
||||
sessionId?: string;
|
||||
groupFolder: string;
|
||||
chatJid: string;
|
||||
isMain: boolean;
|
||||
isScheduledTask?: boolean;
|
||||
assistantName?: string;
|
||||
secrets?: Record<string, string>;
|
||||
}
|
||||
|
||||
interface ContainerOutput {
|
||||
status: 'success' | 'error';
|
||||
result: string | null;
|
||||
newSessionId?: string;
|
||||
error?: string;
|
||||
}
|
||||
|
||||
interface SessionEntry {
|
||||
sessionId: string;
|
||||
fullPath: string;
|
||||
summary: string;
|
||||
firstPrompt: string;
|
||||
}
|
||||
|
||||
interface SessionsIndex {
|
||||
entries: SessionEntry[];
|
||||
}
|
||||
|
||||
interface SDKUserMessage {
|
||||
type: 'user';
|
||||
message: { role: 'user'; content: string };
|
||||
parent_tool_use_id: null;
|
||||
session_id: string;
|
||||
}
|
||||
|
||||
const IPC_INPUT_DIR = '/workspace/ipc/input';
|
||||
const IPC_INPUT_CLOSE_SENTINEL = path.join(IPC_INPUT_DIR, '_close');
|
||||
const IPC_POLL_MS = 500;
|
||||
|
||||
/**
|
||||
* Push-based async iterable for streaming user messages to the SDK.
|
||||
* Keeps the iterable alive until end() is called, preventing isSingleUserTurn.
|
||||
*/
|
||||
class MessageStream {
|
||||
private queue: SDKUserMessage[] = [];
|
||||
private waiting: (() => void) | null = null;
|
||||
private done = false;
|
||||
|
||||
push(text: string): void {
|
||||
this.queue.push({
|
||||
type: 'user',
|
||||
message: { role: 'user', content: text },
|
||||
parent_tool_use_id: null,
|
||||
session_id: '',
|
||||
});
|
||||
this.waiting?.();
|
||||
}
|
||||
|
||||
end(): void {
|
||||
this.done = true;
|
||||
this.waiting?.();
|
||||
}
|
||||
|
||||
async *[Symbol.asyncIterator](): AsyncGenerator<SDKUserMessage> {
|
||||
while (true) {
|
||||
while (this.queue.length > 0) {
|
||||
yield this.queue.shift()!;
|
||||
}
|
||||
if (this.done) return;
|
||||
await new Promise<void>(r => { this.waiting = r; });
|
||||
this.waiting = null;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
async function readStdin(): Promise<string> {
|
||||
return new Promise((resolve, reject) => {
|
||||
let data = '';
|
||||
process.stdin.setEncoding('utf8');
|
||||
process.stdin.on('data', chunk => { data += chunk; });
|
||||
process.stdin.on('end', () => resolve(data));
|
||||
process.stdin.on('error', reject);
|
||||
});
|
||||
}
|
||||
|
||||
const OUTPUT_START_MARKER = '---NANOCLAW_OUTPUT_START---';
|
||||
const OUTPUT_END_MARKER = '---NANOCLAW_OUTPUT_END---';
|
||||
|
||||
function writeOutput(output: ContainerOutput): void {
|
||||
console.log(OUTPUT_START_MARKER);
|
||||
console.log(JSON.stringify(output));
|
||||
console.log(OUTPUT_END_MARKER);
|
||||
}
|
||||
|
||||
function log(message: string): void {
|
||||
console.error(`[agent-runner] ${message}`);
|
||||
}
|
||||
|
||||
function getSessionSummary(sessionId: string, transcriptPath: string): string | null {
|
||||
const projectDir = path.dirname(transcriptPath);
|
||||
const indexPath = path.join(projectDir, 'sessions-index.json');
|
||||
|
||||
if (!fs.existsSync(indexPath)) {
|
||||
log(`Sessions index not found at ${indexPath}`);
|
||||
return null;
|
||||
}
|
||||
|
||||
try {
|
||||
const index: SessionsIndex = JSON.parse(fs.readFileSync(indexPath, 'utf-8'));
|
||||
const entry = index.entries.find(e => e.sessionId === sessionId);
|
||||
if (entry?.summary) {
|
||||
return entry.summary;
|
||||
}
|
||||
} catch (err) {
|
||||
log(`Failed to read sessions index: ${err instanceof Error ? err.message : String(err)}`);
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Archive the full transcript to conversations/ before compaction.
|
||||
*/
|
||||
function createPreCompactHook(assistantName?: string): HookCallback {
|
||||
return async (input, _toolUseId, _context) => {
|
||||
const preCompact = input as PreCompactHookInput;
|
||||
const transcriptPath = preCompact.transcript_path;
|
||||
const sessionId = preCompact.session_id;
|
||||
|
||||
if (!transcriptPath || !fs.existsSync(transcriptPath)) {
|
||||
log('No transcript found for archiving');
|
||||
return {};
|
||||
}
|
||||
|
||||
try {
|
||||
const content = fs.readFileSync(transcriptPath, 'utf-8');
|
||||
const messages = parseTranscript(content);
|
||||
|
||||
if (messages.length === 0) {
|
||||
log('No messages to archive');
|
||||
return {};
|
||||
}
|
||||
|
||||
const summary = getSessionSummary(sessionId, transcriptPath);
|
||||
const name = summary ? sanitizeFilename(summary) : generateFallbackName();
|
||||
|
||||
const conversationsDir = '/workspace/group/conversations';
|
||||
fs.mkdirSync(conversationsDir, { recursive: true });
|
||||
|
||||
const date = new Date().toISOString().split('T')[0];
|
||||
const filename = `${date}-${name}.md`;
|
||||
const filePath = path.join(conversationsDir, filename);
|
||||
|
||||
const markdown = formatTranscriptMarkdown(messages, summary, assistantName);
|
||||
fs.writeFileSync(filePath, markdown);
|
||||
|
||||
log(`Archived conversation to ${filePath}`);
|
||||
} catch (err) {
|
||||
log(`Failed to archive transcript: ${err instanceof Error ? err.message : String(err)}`);
|
||||
}
|
||||
|
||||
return {};
|
||||
};
|
||||
}
|
||||
|
||||
// Secrets to strip from Bash tool subprocess environments.
// These are needed by claude-code for API auth but should never
// be visible to commands Kit runs.
const SECRET_ENV_VARS = ['ANTHROPIC_API_KEY', 'CLAUDE_CODE_OAUTH_TOKEN'];

function createSanitizeBashHook(): HookCallback {
  return async (input, _toolUseId, _context) => {
    const preInput = input as PreToolUseHookInput;
    const command = (preInput.tool_input as { command?: string })?.command;
    if (!command) return {};

    const unsetPrefix = `unset ${SECRET_ENV_VARS.join(' ')} 2>/dev/null; `;
    return {
      hookSpecificOutput: {
        hookEventName: 'PreToolUse',
        updatedInput: {
          ...(preInput.tool_input as Record<string, unknown>),
          command: unsetPrefix + command,
        },
      },
    };
  };
}

function sanitizeFilename(summary: string): string {
  return summary
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '')
    .slice(0, 50);
}

function generateFallbackName(): string {
  const time = new Date();
  return `conversation-${time.getHours().toString().padStart(2, '0')}${time.getMinutes().toString().padStart(2, '0')}`;
}

interface ParsedMessage {
  role: 'user' | 'assistant';
  content: string;
}

function parseTranscript(content: string): ParsedMessage[] {
  const messages: ParsedMessage[] = [];

  for (const line of content.split('\n')) {
    if (!line.trim()) continue;
    try {
      const entry = JSON.parse(line);
      if (entry.type === 'user' && entry.message?.content) {
        const text = typeof entry.message.content === 'string'
          ? entry.message.content
          : entry.message.content.map((c: { text?: string }) => c.text || '').join('');
        if (text) messages.push({ role: 'user', content: text });
      } else if (entry.type === 'assistant' && entry.message?.content) {
        const textParts = entry.message.content
          .filter((c: { type: string }) => c.type === 'text')
          .map((c: { text: string }) => c.text);
        const text = textParts.join('');
        if (text) messages.push({ role: 'assistant', content: text });
      }
    } catch {
      // Skip malformed transcript lines
    }
  }

  return messages;
}

function formatTranscriptMarkdown(messages: ParsedMessage[], title?: string | null, assistantName?: string): string {
  const now = new Date();
  const formatDateTime = (d: Date) => d.toLocaleString('en-US', {
    month: 'short',
    day: 'numeric',
    hour: 'numeric',
    minute: '2-digit',
    hour12: true
  });

  const lines: string[] = [];
  lines.push(`# ${title || 'Conversation'}`);
  lines.push('');
  lines.push(`Archived: ${formatDateTime(now)}`);
  lines.push('');
  lines.push('---');
  lines.push('');

  for (const msg of messages) {
    const sender = msg.role === 'user' ? 'User' : (assistantName || 'Assistant');
    const content = msg.content.length > 2000
      ? msg.content.slice(0, 2000) + '...'
      : msg.content;
    lines.push(`**${sender}**: ${content}`);
    lines.push('');
  }

  return lines.join('\n');
}

/**
 * Check for the _close sentinel.
 */
function shouldClose(): boolean {
  if (fs.existsSync(IPC_INPUT_CLOSE_SENTINEL)) {
    try { fs.unlinkSync(IPC_INPUT_CLOSE_SENTINEL); } catch { /* ignore */ }
    return true;
  }
  return false;
}

/**
 * Drain all pending IPC input messages.
 * Returns the messages found, or an empty array.
 */
function drainIpcInput(): string[] {
  try {
    fs.mkdirSync(IPC_INPUT_DIR, { recursive: true });
    const files = fs.readdirSync(IPC_INPUT_DIR)
      .filter(f => f.endsWith('.json'))
      .sort();

    const messages: string[] = [];
    for (const file of files) {
      const filePath = path.join(IPC_INPUT_DIR, file);
      try {
        const data = JSON.parse(fs.readFileSync(filePath, 'utf-8'));
        fs.unlinkSync(filePath);
        if (data.type === 'message' && data.text) {
          messages.push(data.text);
        }
      } catch (err) {
        log(`Failed to process input file ${file}: ${err instanceof Error ? err.message : String(err)}`);
        try { fs.unlinkSync(filePath); } catch { /* ignore */ }
      }
    }
    return messages;
  } catch (err) {
    log(`IPC drain error: ${err instanceof Error ? err.message : String(err)}`);
    return [];
  }
}

/**
 * Wait for a new IPC message or the _close sentinel.
 * Returns the messages as a single string, or null on _close.
 */
function waitForIpcMessage(): Promise<string | null> {
  return new Promise((resolve) => {
    const poll = () => {
      if (shouldClose()) {
        resolve(null);
        return;
      }
      const messages = drainIpcInput();
      if (messages.length > 0) {
        resolve(messages.join('\n'));
        return;
      }
      setTimeout(poll, IPC_POLL_MS);
    };
    poll();
  });
}

/**
 * Run a single query and stream results via writeOutput.
 * Uses MessageStream (AsyncIterable) to keep isSingleUserTurn=false,
 * allowing agent-teams subagents to run to completion.
 * Also pipes IPC messages into the stream during the query.
 */
async function runQuery(
  prompt: string,
  sessionId: string | undefined,
  mcpServerPath: string,
  containerInput: ContainerInput,
  sdkEnv: Record<string, string | undefined>,
  resumeAt?: string,
): Promise<{ newSessionId?: string; lastAssistantUuid?: string; closedDuringQuery: boolean }> {
  const stream = new MessageStream();
  stream.push(prompt);

  // Poll IPC for follow-up messages and the _close sentinel during the query
  let ipcPolling = true;
  let closedDuringQuery = false;
  const pollIpcDuringQuery = () => {
    if (!ipcPolling) return;
    if (shouldClose()) {
      log('Close sentinel detected during query, ending stream');
      closedDuringQuery = true;
      stream.end();
      ipcPolling = false;
      return;
    }
    const messages = drainIpcInput();
    for (const text of messages) {
      log(`Piping IPC message into active query (${text.length} chars)`);
      stream.push(text);
    }
    setTimeout(pollIpcDuringQuery, IPC_POLL_MS);
  };
  setTimeout(pollIpcDuringQuery, IPC_POLL_MS);

  let newSessionId: string | undefined;
  let lastAssistantUuid: string | undefined;
  let messageCount = 0;
  let resultCount = 0;

  // Load global CLAUDE.md as additional system context (shared across all groups)
  const globalClaudeMdPath = '/workspace/global/CLAUDE.md';
  let globalClaudeMd: string | undefined;
  if (!containerInput.isMain && fs.existsSync(globalClaudeMdPath)) {
    globalClaudeMd = fs.readFileSync(globalClaudeMdPath, 'utf-8');
  }

  // Discover additional directories mounted at /workspace/extra/*.
  // These are passed to the SDK so their CLAUDE.md files are loaded automatically.
  const extraDirs: string[] = [];
  const extraBase = '/workspace/extra';
  if (fs.existsSync(extraBase)) {
    for (const entry of fs.readdirSync(extraBase)) {
      const fullPath = path.join(extraBase, entry);
      if (fs.statSync(fullPath).isDirectory()) {
        extraDirs.push(fullPath);
      }
    }
  }
  if (extraDirs.length > 0) {
    log(`Additional directories: ${extraDirs.join(', ')}`);
  }

  for await (const message of query({
    prompt: stream,
    options: {
      cwd: '/workspace/group',
      additionalDirectories: extraDirs.length > 0 ? extraDirs : undefined,
      resume: sessionId,
      resumeSessionAt: resumeAt,
      systemPrompt: globalClaudeMd
        ? { type: 'preset' as const, preset: 'claude_code' as const, append: globalClaudeMd }
        : undefined,
      allowedTools: [
        'Bash',
        'Read', 'Write', 'Edit', 'Glob', 'Grep',
        'WebSearch', 'WebFetch',
        'Task', 'TaskOutput', 'TaskStop',
        'TeamCreate', 'TeamDelete', 'SendMessage',
        'TodoWrite', 'ToolSearch', 'Skill',
        'NotebookEdit',
        'mcp__nanoclaw__*',
        'mcp__gmail__*',
      ],
      env: sdkEnv,
      permissionMode: 'bypassPermissions',
      allowDangerouslySkipPermissions: true,
      settingSources: ['project', 'user'],
      mcpServers: {
        nanoclaw: {
          command: 'node',
          args: [mcpServerPath],
          env: {
            NANOCLAW_CHAT_JID: containerInput.chatJid,
            NANOCLAW_GROUP_FOLDER: containerInput.groupFolder,
            NANOCLAW_IS_MAIN: containerInput.isMain ? '1' : '0',
          },
        },
        gmail: {
          command: 'npx',
          args: ['-y', '@gongrzhe/server-gmail-autoauth-mcp'],
        },
      },
      hooks: {
        PreCompact: [{ hooks: [createPreCompactHook(containerInput.assistantName)] }],
        PreToolUse: [{ matcher: 'Bash', hooks: [createSanitizeBashHook()] }],
      },
    }
  })) {
    messageCount++;
    const msgType = message.type === 'system' ? `system/${(message as { subtype?: string }).subtype}` : message.type;
    log(`[msg #${messageCount}] type=${msgType}`);

    if (message.type === 'assistant' && 'uuid' in message) {
      lastAssistantUuid = (message as { uuid: string }).uuid;
    }

    if (message.type === 'system' && message.subtype === 'init') {
      newSessionId = message.session_id;
      log(`Session initialized: ${newSessionId}`);
    }

    if (message.type === 'system' && (message as { subtype?: string }).subtype === 'task_notification') {
      const tn = message as { task_id: string; status: string; summary: string };
      log(`Task notification: task=${tn.task_id} status=${tn.status} summary=${tn.summary}`);
    }

    if (message.type === 'result') {
      resultCount++;
      const textResult = 'result' in message ? (message as { result?: string }).result : null;
      log(`Result #${resultCount}: subtype=${message.subtype}${textResult ? ` text=${textResult.slice(0, 200)}` : ''}`);
      writeOutput({
        status: 'success',
        result: textResult || null,
        newSessionId
      });
    }
  }

  ipcPolling = false;
  log(`Query done. Messages: ${messageCount}, results: ${resultCount}, lastAssistantUuid: ${lastAssistantUuid || 'none'}, closedDuringQuery: ${closedDuringQuery}`);
  return { newSessionId, lastAssistantUuid, closedDuringQuery };
}

async function main(): Promise<void> {
  let containerInput: ContainerInput;

  try {
    const stdinData = await readStdin();
    containerInput = JSON.parse(stdinData);
    // Delete the temp file the entrypoint wrote — it contains secrets
    try { fs.unlinkSync('/tmp/input.json'); } catch { /* may not exist */ }
    log(`Received input for group: ${containerInput.groupFolder}`);
  } catch (err) {
    writeOutput({
      status: 'error',
      result: null,
      error: `Failed to parse input: ${err instanceof Error ? err.message : String(err)}`
    });
    process.exit(1);
  }

  // Build SDK env: merge secrets into process.env for the SDK only.
  // Secrets never touch process.env itself, so Bash subprocesses can't see them.
  const sdkEnv: Record<string, string | undefined> = { ...process.env };
  for (const [key, value] of Object.entries(containerInput.secrets || {})) {
    sdkEnv[key] = value;
  }

  const __dirname = path.dirname(fileURLToPath(import.meta.url));
  const mcpServerPath = path.join(__dirname, 'ipc-mcp-stdio.js');

  let sessionId = containerInput.sessionId;
  fs.mkdirSync(IPC_INPUT_DIR, { recursive: true });

  // Clean up stale _close sentinel from previous container runs
  try { fs.unlinkSync(IPC_INPUT_CLOSE_SENTINEL); } catch { /* ignore */ }

  // Build initial prompt (drain any pending IPC messages too)
  let prompt = containerInput.prompt;
  if (containerInput.isScheduledTask) {
    prompt = `[SCHEDULED TASK - The following message was sent automatically and is not coming directly from the user or group.]\n\n${prompt}`;
  }
  const pending = drainIpcInput();
  if (pending.length > 0) {
    log(`Draining ${pending.length} pending IPC messages into initial prompt`);
    prompt += '\n' + pending.join('\n');
  }

  // Query loop: run query → wait for IPC message → run new query → repeat
  let resumeAt: string | undefined;
  try {
    while (true) {
      log(`Starting query (session: ${sessionId || 'new'}, resumeAt: ${resumeAt || 'latest'})...`);

      const queryResult = await runQuery(prompt, sessionId, mcpServerPath, containerInput, sdkEnv, resumeAt);
      if (queryResult.newSessionId) {
        sessionId = queryResult.newSessionId;
      }
      if (queryResult.lastAssistantUuid) {
        resumeAt = queryResult.lastAssistantUuid;
      }

      // If _close was consumed during the query, exit immediately.
      // Don't emit a session-update marker (it would reset the host's
      // idle timer and cause a 30-min delay before the next _close).
      if (queryResult.closedDuringQuery) {
        log('Close sentinel consumed during query, exiting');
        break;
      }

      // Emit a session update so the host can track it
      writeOutput({ status: 'success', result: null, newSessionId: sessionId });

      log('Query ended, waiting for next IPC message...');

      // Wait for the next message or the _close sentinel
      const nextMessage = await waitForIpcMessage();
      if (nextMessage === null) {
        log('Close sentinel received, exiting');
        break;
      }

      log(`Got new message (${nextMessage.length} chars), starting new query`);
      prompt = nextMessage;
    }
  } catch (err) {
    const errorMessage = err instanceof Error ? err.message : String(err);
    log(`Agent error: ${errorMessage}`);
    writeOutput({
      status: 'error',
      result: null,
      newSessionId: sessionId,
      error: errorMessage
    });
    process.exit(1);
  }
}

main();
@@ -1,32 +0,0 @@
# Intent: container/agent-runner/src/index.ts modifications

## What changed
Added the Gmail MCP server to the agent's available tools so it can read and send emails.

## Key sections

### mcpServers (inside runQuery → query() call)
- Added: `gmail` MCP server alongside the existing `nanoclaw` server:
```
gmail: {
  command: 'npx',
  args: ['-y', '@gongrzhe/server-gmail-autoauth-mcp'],
},
```

### allowedTools (inside runQuery → query() call)
- Added: `'mcp__gmail__*'` to allow all Gmail MCP tools

## Invariants
- The `nanoclaw` MCP server configuration is unchanged
- All existing allowed tools are preserved
- The query loop, IPC handling, MessageStream, and all other logic are untouched
- Hooks (PreCompact, sanitize Bash) are unchanged
- The output protocol (markers) is unchanged

## Must-keep
- The `nanoclaw` MCP server with its environment variables
- All existing allowedTools entries
- The hook system (PreCompact, PreToolUse sanitize)
- The IPC input/close sentinel handling
- The MessageStream class and query loop
@@ -1,13 +0,0 @@
// Channel self-registration barrel file.
// Each import triggers the channel module's registerChannel() call.

// discord

// gmail
import './gmail.js';

// slack

// telegram

// whatsapp
@@ -1,7 +0,0 @@
# Intent: Add Gmail channel import

Add `import './gmail.js';` to the channel barrel file so the Gmail
module self-registers with the channel registry on startup.

This is an append-only change — existing import lines for other channels
must be preserved.
@@ -1,661 +0,0 @@
/**
 * Container Runner for NanoClaw
 * Spawns agent execution in containers and handles IPC
 */
import { ChildProcess, exec, spawn } from 'child_process';
import fs from 'fs';
import os from 'os';
import path from 'path';

import {
  CONTAINER_IMAGE,
  CONTAINER_MAX_OUTPUT_SIZE,
  CONTAINER_TIMEOUT,
  DATA_DIR,
  GROUPS_DIR,
  IDLE_TIMEOUT,
  TIMEZONE,
} from './config.js';
import { readEnvFile } from './env.js';
import { resolveGroupFolderPath, resolveGroupIpcPath } from './group-folder.js';
import { logger } from './logger.js';
import { CONTAINER_RUNTIME_BIN, readonlyMountArgs, stopContainer } from './container-runtime.js';
import { validateAdditionalMounts } from './mount-security.js';
import { RegisteredGroup } from './types.js';

// Sentinel markers for robust output parsing (must match agent-runner)
const OUTPUT_START_MARKER = '---NANOCLAW_OUTPUT_START---';
const OUTPUT_END_MARKER = '---NANOCLAW_OUTPUT_END---';

export interface ContainerInput {
  prompt: string;
  sessionId?: string;
  groupFolder: string;
  chatJid: string;
  isMain: boolean;
  isScheduledTask?: boolean;
  assistantName?: string;
  secrets?: Record<string, string>;
}

export interface ContainerOutput {
  status: 'success' | 'error';
  result: string | null;
  newSessionId?: string;
  error?: string;
}

interface VolumeMount {
  hostPath: string;
  containerPath: string;
  readonly: boolean;
}

function buildVolumeMounts(
  group: RegisteredGroup,
  isMain: boolean,
): VolumeMount[] {
  const mounts: VolumeMount[] = [];
  const projectRoot = process.cwd();
  const homeDir = os.homedir();
  const groupDir = resolveGroupFolderPath(group.folder);

  if (isMain) {
    // Main gets the project root read-only. Writable paths the agent needs
    // (group folder, IPC, .claude/) are mounted separately below.
    // Read-only prevents the agent from modifying host application code
    // (src/, dist/, package.json, etc.) which would bypass the sandbox
    // entirely on next restart.
    mounts.push({
      hostPath: projectRoot,
      containerPath: '/workspace/project',
      readonly: true,
    });

    // Main also gets its group folder as the working directory
    mounts.push({
      hostPath: groupDir,
      containerPath: '/workspace/group',
      readonly: false,
    });
  } else {
    // Other groups only get their own folder
    mounts.push({
      hostPath: groupDir,
      containerPath: '/workspace/group',
      readonly: false,
    });

    // Global memory directory (read-only for non-main).
    // Only directory mounts are supported, not file mounts.
    const globalDir = path.join(GROUPS_DIR, 'global');
    if (fs.existsSync(globalDir)) {
      mounts.push({
        hostPath: globalDir,
        containerPath: '/workspace/global',
        readonly: true,
      });
    }
  }

  // Per-group Claude sessions directory (isolated from other groups).
  // Each group gets its own .claude/ to prevent cross-group session access.
  const groupSessionsDir = path.join(
    DATA_DIR,
    'sessions',
    group.folder,
    '.claude',
  );
  fs.mkdirSync(groupSessionsDir, { recursive: true });
  const settingsFile = path.join(groupSessionsDir, 'settings.json');
  if (!fs.existsSync(settingsFile)) {
    fs.writeFileSync(settingsFile, JSON.stringify({
      env: {
        // Enable agent swarms (subagent orchestration)
        // https://code.claude.com/docs/en/agent-teams#orchestrate-teams-of-claude-code-sessions
        CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS: '1',
        // Load CLAUDE.md from additional mounted directories
        // https://code.claude.com/docs/en/memory#load-memory-from-additional-directories
        CLAUDE_CODE_ADDITIONAL_DIRECTORIES_CLAUDE_MD: '1',
        // Enable Claude's memory feature (persists user preferences between sessions)
        // https://code.claude.com/docs/en/memory#manage-auto-memory
        CLAUDE_CODE_DISABLE_AUTO_MEMORY: '0',
      },
    }, null, 2) + '\n');
  }

  // Sync skills from container/skills/ into each group's .claude/skills/
  const skillsSrc = path.join(process.cwd(), 'container', 'skills');
  const skillsDst = path.join(groupSessionsDir, 'skills');
  if (fs.existsSync(skillsSrc)) {
    for (const skillDir of fs.readdirSync(skillsSrc)) {
      const srcDir = path.join(skillsSrc, skillDir);
      if (!fs.statSync(srcDir).isDirectory()) continue;
      const dstDir = path.join(skillsDst, skillDir);
      fs.cpSync(srcDir, dstDir, { recursive: true });
    }
  }
  mounts.push({
    hostPath: groupSessionsDir,
    containerPath: '/home/node/.claude',
    readonly: false,
  });

  // Gmail credentials directory (for the Gmail MCP inside the container)
  const gmailDir = path.join(homeDir, '.gmail-mcp');
  if (fs.existsSync(gmailDir)) {
    mounts.push({
      hostPath: gmailDir,
      containerPath: '/home/node/.gmail-mcp',
      readonly: false, // MCP may need to refresh OAuth tokens
    });
  }

  // Per-group IPC namespace: each group gets its own IPC directory.
  // This prevents cross-group privilege escalation via IPC.
  const groupIpcDir = resolveGroupIpcPath(group.folder);
  fs.mkdirSync(path.join(groupIpcDir, 'messages'), { recursive: true });
  fs.mkdirSync(path.join(groupIpcDir, 'tasks'), { recursive: true });
  fs.mkdirSync(path.join(groupIpcDir, 'input'), { recursive: true });
  mounts.push({
    hostPath: groupIpcDir,
    containerPath: '/workspace/ipc',
    readonly: false,
  });

  // Copy agent-runner source into a per-group writable location so agents
  // can customize it (add tools, change behavior) without affecting other
  // groups. Recompiled on container startup via entrypoint.sh.
  const agentRunnerSrc = path.join(projectRoot, 'container', 'agent-runner', 'src');
  const groupAgentRunnerDir = path.join(DATA_DIR, 'sessions', group.folder, 'agent-runner-src');
  if (!fs.existsSync(groupAgentRunnerDir) && fs.existsSync(agentRunnerSrc)) {
    fs.cpSync(agentRunnerSrc, groupAgentRunnerDir, { recursive: true });
  }
  mounts.push({
    hostPath: groupAgentRunnerDir,
    containerPath: '/app/src',
    readonly: false,
  });

  // Additional mounts validated against an external allowlist (tamper-proof from containers)
  if (group.containerConfig?.additionalMounts) {
    const validatedMounts = validateAdditionalMounts(
      group.containerConfig.additionalMounts,
      group.name,
      isMain,
    );
    mounts.push(...validatedMounts);
  }

  return mounts;
}

/**
 * Read allowed secrets from .env for passing to the container via stdin.
 * Secrets are never written to disk or mounted as files.
 */
function readSecrets(): Record<string, string> {
  return readEnvFile(['CLAUDE_CODE_OAUTH_TOKEN', 'ANTHROPIC_API_KEY']);
}

function buildContainerArgs(mounts: VolumeMount[], containerName: string): string[] {
  const args: string[] = ['run', '-i', '--rm', '--name', containerName];

  // Pass the host timezone so the container's local time matches the user's
  args.push('-e', `TZ=${TIMEZONE}`);

  // Run as the host user so bind-mounted files are accessible.
  // Skip when running as root (uid 0), as the container's node user (uid 1000),
  // or when getuid is unavailable (native Windows without WSL).
  const hostUid = process.getuid?.();
  const hostGid = process.getgid?.();
  if (hostUid != null && hostUid !== 0 && hostUid !== 1000) {
    args.push('--user', `${hostUid}:${hostGid}`);
    args.push('-e', 'HOME=/home/node');
  }

  for (const mount of mounts) {
    if (mount.readonly) {
      args.push(...readonlyMountArgs(mount.hostPath, mount.containerPath));
    } else {
      args.push('-v', `${mount.hostPath}:${mount.containerPath}`);
    }
  }

  args.push(CONTAINER_IMAGE);

  return args;
}

export async function runContainerAgent(
  group: RegisteredGroup,
  input: ContainerInput,
  onProcess: (proc: ChildProcess, containerName: string) => void,
  onOutput?: (output: ContainerOutput) => Promise<void>,
): Promise<ContainerOutput> {
  const startTime = Date.now();

  const groupDir = resolveGroupFolderPath(group.folder);
  fs.mkdirSync(groupDir, { recursive: true });

  const mounts = buildVolumeMounts(group, input.isMain);
  const safeName = group.folder.replace(/[^a-zA-Z0-9-]/g, '-');
  const containerName = `nanoclaw-${safeName}-${Date.now()}`;
  const containerArgs = buildContainerArgs(mounts, containerName);

  logger.debug(
    {
      group: group.name,
      containerName,
      mounts: mounts.map(
        (m) =>
          `${m.hostPath} -> ${m.containerPath}${m.readonly ? ' (ro)' : ''}`,
      ),
      containerArgs: containerArgs.join(' '),
    },
    'Container mount configuration',
  );

  logger.info(
    {
      group: group.name,
      containerName,
      mountCount: mounts.length,
      isMain: input.isMain,
    },
    'Spawning container agent',
  );

  const logsDir = path.join(groupDir, 'logs');
  fs.mkdirSync(logsDir, { recursive: true });

  return new Promise((resolve) => {
    const container = spawn(CONTAINER_RUNTIME_BIN, containerArgs, {
      stdio: ['pipe', 'pipe', 'pipe'],
    });

    onProcess(container, containerName);

    let stdout = '';
    let stderr = '';
    let stdoutTruncated = false;
    let stderrTruncated = false;

    // Pass secrets via stdin (never written to disk or mounted as files)
    input.secrets = readSecrets();
    container.stdin.write(JSON.stringify(input));
    container.stdin.end();
    // Remove secrets from input so they don't appear in logs
    delete input.secrets;

    // Streaming output: parse OUTPUT_START/END marker pairs as they arrive
    let parseBuffer = '';
    let newSessionId: string | undefined;
    let outputChain = Promise.resolve();

    container.stdout.on('data', (data) => {
      const chunk = data.toString();

      // Always accumulate for logging
      if (!stdoutTruncated) {
        const remaining = CONTAINER_MAX_OUTPUT_SIZE - stdout.length;
        if (chunk.length > remaining) {
          stdout += chunk.slice(0, remaining);
          stdoutTruncated = true;
          logger.warn(
            { group: group.name, size: stdout.length },
            'Container stdout truncated due to size limit',
          );
        } else {
          stdout += chunk;
        }
      }

      // Stream-parse for output markers
      if (onOutput) {
        parseBuffer += chunk;
        let startIdx: number;
        while ((startIdx = parseBuffer.indexOf(OUTPUT_START_MARKER)) !== -1) {
          const endIdx = parseBuffer.indexOf(OUTPUT_END_MARKER, startIdx);
          if (endIdx === -1) break; // Incomplete pair, wait for more data

          const jsonStr = parseBuffer
            .slice(startIdx + OUTPUT_START_MARKER.length, endIdx)
            .trim();
          parseBuffer = parseBuffer.slice(endIdx + OUTPUT_END_MARKER.length);

          try {
            const parsed: ContainerOutput = JSON.parse(jsonStr);
            if (parsed.newSessionId) {
              newSessionId = parsed.newSessionId;
            }
            hadStreamingOutput = true;
            // Activity detected — reset the hard timeout
            resetTimeout();
            // Call onOutput for all markers (including null results)
            // so idle timers start even for "silent" query completions.
            outputChain = outputChain.then(() => onOutput(parsed));
          } catch (err) {
            logger.warn(
              { group: group.name, error: err },
              'Failed to parse streamed output chunk',
            );
          }
        }
      }
    });

    container.stderr.on('data', (data) => {
      const chunk = data.toString();
      const lines = chunk.trim().split('\n');
      for (const line of lines) {
        if (line) logger.debug({ container: group.folder }, line);
      }
      // Don't reset the timeout on stderr — the SDK writes debug logs continuously.
      // The timeout only resets on actual output (OUTPUT_MARKER in stdout).
      if (stderrTruncated) return;
      const remaining = CONTAINER_MAX_OUTPUT_SIZE - stderr.length;
      if (chunk.length > remaining) {
        stderr += chunk.slice(0, remaining);
        stderrTruncated = true;
        logger.warn(
          { group: group.name, size: stderr.length },
          'Container stderr truncated due to size limit',
        );
      } else {
        stderr += chunk;
      }
    });

    let timedOut = false;
    let hadStreamingOutput = false;
    const configTimeout = group.containerConfig?.timeout || CONTAINER_TIMEOUT;
    // Grace period: the hard timeout must be at least IDLE_TIMEOUT + 30s so the
    // graceful _close sentinel has time to trigger before the hard kill fires.
    const timeoutMs = Math.max(configTimeout, IDLE_TIMEOUT + 30_000);

    const killOnTimeout = () => {
      timedOut = true;
      logger.error({ group: group.name, containerName }, 'Container timeout, stopping gracefully');
      exec(stopContainer(containerName), { timeout: 15000 }, (err) => {
        if (err) {
          logger.warn({ group: group.name, containerName, err }, 'Graceful stop failed, force killing');
          container.kill('SIGKILL');
        }
      });
    };

    let timeout = setTimeout(killOnTimeout, timeoutMs);

    // Reset the timeout whenever there's activity (streaming output)
    const resetTimeout = () => {
      clearTimeout(timeout);
      timeout = setTimeout(killOnTimeout, timeoutMs);
    };

    container.on('close', (code) => {
      clearTimeout(timeout);
      const duration = Date.now() - startTime;

      if (timedOut) {
        const ts = new Date().toISOString().replace(/[:.]/g, '-');
        const timeoutLog = path.join(logsDir, `container-${ts}.log`);
        fs.writeFileSync(timeoutLog, [
          `=== Container Run Log (TIMEOUT) ===`,
          `Timestamp: ${new Date().toISOString()}`,
          `Group: ${group.name}`,
          `Container: ${containerName}`,
          `Duration: ${duration}ms`,
          `Exit Code: ${code}`,
          `Had Streaming Output: ${hadStreamingOutput}`,
        ].join('\n'));

        // Timeout after output = idle cleanup, not failure.
        // The agent already sent its response; this is just the
        // container being reaped after the idle period expired.
        if (hadStreamingOutput) {
          logger.info(
            { group: group.name, containerName, duration, code },
            'Container timed out after output (idle cleanup)',
          );
          outputChain.then(() => {
            resolve({
              status: 'success',
              result: null,
              newSessionId,
            });
          });
          return;
        }

        logger.error(
          { group: group.name, containerName, duration, code },
          'Container timed out with no output',
        );

        resolve({
          status: 'error',
          result: null,
          error: `Container timed out after ${configTimeout}ms`,
        });
        return;
      }

      const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
      const logFile = path.join(logsDir, `container-${timestamp}.log`);
      const isVerbose = process.env.LOG_LEVEL === 'debug' || process.env.LOG_LEVEL === 'trace';

      const logLines = [
        `=== Container Run Log ===`,
        `Timestamp: ${new Date().toISOString()}`,
        `Group: ${group.name}`,
        `IsMain: ${input.isMain}`,
        `Duration: ${duration}ms`,
        `Exit Code: ${code}`,
        `Stdout Truncated: ${stdoutTruncated}`,
        `Stderr Truncated: ${stderrTruncated}`,
        ``,
      ];

      const isError = code !== 0;

      if (isVerbose || isError) {
        logLines.push(
          `=== Input ===`,
          JSON.stringify(input, null, 2),
          ``,
          `=== Container Args ===`,
          containerArgs.join(' '),
          ``,
          `=== Mounts ===`,
          mounts
            .map(
              (m) =>
                `${m.hostPath} -> ${m.containerPath}${m.readonly ? ' (ro)' : ''}`,
            )
            .join('\n'),
          ``,
          `=== Stderr${stderrTruncated ? ' (TRUNCATED)' : ''} ===`,
          stderr,
          ``,
          `=== Stdout${stdoutTruncated ? ' (TRUNCATED)' : ''} ===`,
          stdout,
        );
      } else {
        logLines.push(
          `=== Input Summary ===`,
          `Prompt length: ${input.prompt.length} chars`,
          `Session ID: ${input.sessionId || 'new'}`,
          ``,
          `=== Mounts ===`,
          mounts
            .map((m) => `${m.containerPath}${m.readonly ? ' (ro)' : ''}`)
            .join('\n'),
          ``,
        );
      }

      fs.writeFileSync(logFile, logLines.join('\n'));
      logger.debug({ logFile, verbose: isVerbose }, 'Container log written');

      if (code !== 0) {
        logger.error(
          {
group: group.name,
|
||||
code,
|
||||
duration,
|
||||
stderr,
|
||||
stdout,
|
||||
logFile,
|
||||
},
|
||||
'Container exited with error',
|
||||
);
|
||||
|
||||
resolve({
|
||||
status: 'error',
|
||||
result: null,
|
||||
error: `Container exited with code ${code}: ${stderr.slice(-200)}`,
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
// Streaming mode: wait for output chain to settle, return completion marker
|
||||
if (onOutput) {
|
||||
outputChain.then(() => {
|
||||
logger.info(
|
||||
{ group: group.name, duration, newSessionId },
|
||||
'Container completed (streaming mode)',
|
||||
);
|
||||
resolve({
|
||||
status: 'success',
|
||||
result: null,
|
||||
newSessionId,
|
||||
});
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
// Legacy mode: parse the last output marker pair from accumulated stdout
|
||||
try {
|
||||
// Extract JSON between sentinel markers for robust parsing
|
||||
const startIdx = stdout.indexOf(OUTPUT_START_MARKER);
|
||||
const endIdx = stdout.indexOf(OUTPUT_END_MARKER);
|
||||
|
||||
let jsonLine: string;
|
||||
if (startIdx !== -1 && endIdx !== -1 && endIdx > startIdx) {
|
||||
jsonLine = stdout
|
||||
.slice(startIdx + OUTPUT_START_MARKER.length, endIdx)
|
||||
.trim();
|
||||
} else {
|
||||
// Fallback: last non-empty line (backwards compatibility)
|
||||
const lines = stdout.trim().split('\n');
|
||||
jsonLine = lines[lines.length - 1];
|
||||
}
|
||||
|
||||
const output: ContainerOutput = JSON.parse(jsonLine);
|
||||
|
||||
logger.info(
|
||||
{
|
||||
group: group.name,
|
||||
duration,
|
||||
status: output.status,
|
||||
hasResult: !!output.result,
|
||||
},
|
||||
'Container completed',
|
||||
);
|
||||
|
||||
resolve(output);
|
||||
} catch (err) {
|
||||
logger.error(
|
||||
{
|
||||
group: group.name,
|
||||
stdout,
|
||||
stderr,
|
||||
error: err,
|
||||
},
|
||||
'Failed to parse container output',
|
||||
);
|
||||
|
||||
resolve({
|
||||
status: 'error',
|
||||
result: null,
|
||||
error: `Failed to parse container output: ${err instanceof Error ? err.message : String(err)}`,
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
container.on('error', (err) => {
|
||||
clearTimeout(timeout);
|
||||
logger.error({ group: group.name, containerName, error: err }, 'Container spawn error');
|
||||
resolve({
|
||||
status: 'error',
|
||||
result: null,
|
||||
error: `Container spawn error: ${err.message}`,
|
||||
});
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
export function writeTasksSnapshot(
|
||||
groupFolder: string,
|
||||
isMain: boolean,
|
||||
tasks: Array<{
|
||||
id: string;
|
||||
groupFolder: string;
|
||||
prompt: string;
|
||||
schedule_type: string;
|
||||
schedule_value: string;
|
||||
status: string;
|
||||
next_run: string | null;
|
||||
}>,
|
||||
): void {
|
||||
// Write filtered tasks to the group's IPC directory
|
||||
const groupIpcDir = resolveGroupIpcPath(groupFolder);
|
||||
fs.mkdirSync(groupIpcDir, { recursive: true });
|
||||
|
||||
// Main sees all tasks, others only see their own
|
||||
const filteredTasks = isMain
|
||||
? tasks
|
||||
: tasks.filter((t) => t.groupFolder === groupFolder);
|
||||
|
||||
const tasksFile = path.join(groupIpcDir, 'current_tasks.json');
|
||||
fs.writeFileSync(tasksFile, JSON.stringify(filteredTasks, null, 2));
|
||||
}
|
||||
|
||||
export interface AvailableGroup {
|
||||
jid: string;
|
||||
name: string;
|
||||
lastActivity: string;
|
||||
isRegistered: boolean;
|
||||
}
|
||||
|
||||
/**
|
||||
* Write available groups snapshot for the container to read.
|
||||
* Only main group can see all available groups (for activation).
|
||||
* Non-main groups only see their own registration status.
|
||||
*/
|
||||
export function writeGroupsSnapshot(
|
||||
groupFolder: string,
|
||||
isMain: boolean,
|
||||
groups: AvailableGroup[],
|
||||
registeredJids: Set<string>,
|
||||
): void {
|
||||
const groupIpcDir = resolveGroupIpcPath(groupFolder);
|
||||
fs.mkdirSync(groupIpcDir, { recursive: true });
|
||||
|
||||
// Main sees all groups; others see nothing (they can't activate groups)
|
||||
const visibleGroups = isMain ? groups : [];
|
||||
|
||||
const groupsFile = path.join(groupIpcDir, 'available_groups.json');
|
||||
fs.writeFileSync(
|
||||
groupsFile,
|
||||
JSON.stringify(
|
||||
{
|
||||
groups: visibleGroups,
|
||||
lastSync: new Date().toISOString(),
|
||||
},
|
||||
null,
|
||||
2,
|
||||
),
|
||||
);
|
||||
}
|
||||
@@ -1,37 +0,0 @@
# Intent: src/container-runner.ts modifications

## What changed
Added a volume mount for Gmail OAuth credentials (`~/.gmail-mcp/`) so the Gmail MCP server inside the container can authenticate with Google.

## Key sections

### buildVolumeMounts()
- Added: Gmail credentials mount after the `.claude` sessions mount:

```
const gmailDir = path.join(homeDir, '.gmail-mcp');
if (fs.existsSync(gmailDir)) {
  mounts.push({
    hostPath: gmailDir,
    containerPath: '/home/node/.gmail-mcp',
    readonly: false, // MCP may need to refresh OAuth tokens
  });
}
```

- Uses `os.homedir()` to resolve the home directory
- Mount is read-write because the Gmail MCP server needs to refresh OAuth tokens
- Mount is conditional — only added if `~/.gmail-mcp/` exists on the host

### Imports
- Added: `os` import for `os.homedir()`

## Invariants
- All existing mounts are unchanged
- Mount ordering is preserved (Gmail added after session mounts, before additional mounts)
- `buildContainerArgs`, `runContainerAgent`, and all other functions are untouched
- Additional mount validation via `validateAdditionalMounts` is unchanged

## Must-keep
- All existing volume mounts (project root, group dir, global, sessions, IPC, agent-runner, additional)
- The mount security model (allowlist validation for additional mounts)
- The `readSecrets` function and stdin-based secret passing
- Container lifecycle (spawn, timeout, output parsing)
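The conditional-mount pattern described above can be factored into a small reusable helper. This is a hedged sketch, not code from the repo: `Mount` and `maybeAddHomeMount` are illustrative names.

```typescript
import fs from 'fs';
import os from 'os';
import path from 'path';

// Illustrative shape matching the mount entries shown above.
interface Mount {
  hostPath: string;
  containerPath: string;
  readonly: boolean;
}

// Hypothetical helper: push a mount only when the directory exists under
// the host home dir, so a missing optional credentials directory never
// produces a dangling bind mount.
function maybeAddHomeMount(
  mounts: Mount[],
  relDir: string,
  containerPath: string,
  readonly = false,
): Mount[] {
  const hostPath = path.join(os.homedir(), relDir);
  if (fs.existsSync(hostPath)) {
    mounts.push({ hostPath, containerPath, readonly });
  }
  return mounts;
}
```

With such a helper, the Gmail change would reduce to `maybeAddHomeMount(mounts, '.gmail-mcp', '/home/node/.gmail-mcp')`, and future optional credential mounts would follow the same one-liner.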
@@ -1,98 +0,0 @@
import { describe, it, expect } from 'vitest';
import fs from 'fs';
import path from 'path';

describe('add-gmail skill package', () => {
  const skillDir = path.resolve(__dirname, '..');

  it('has a valid manifest', () => {
    const manifestPath = path.join(skillDir, 'manifest.yaml');
    expect(fs.existsSync(manifestPath)).toBe(true);

    const content = fs.readFileSync(manifestPath, 'utf-8');
    expect(content).toContain('skill: gmail');
    expect(content).toContain('version: 1.0.0');
    expect(content).toContain('googleapis');
  });

  it('has channel file with self-registration', () => {
    const channelFile = path.join(
      skillDir,
      'add',
      'src',
      'channels',
      'gmail.ts',
    );
    expect(fs.existsSync(channelFile)).toBe(true);

    const content = fs.readFileSync(channelFile, 'utf-8');
    expect(content).toContain('class GmailChannel');
    expect(content).toContain('implements Channel');
    expect(content).toContain("registerChannel('gmail'");
  });

  it('has channel barrel file modification', () => {
    const indexFile = path.join(
      skillDir,
      'modify',
      'src',
      'channels',
      'index.ts',
    );
    expect(fs.existsSync(indexFile)).toBe(true);

    const indexContent = fs.readFileSync(indexFile, 'utf-8');
    expect(indexContent).toContain("import './gmail.js'");
  });

  it('has intent files for modified files', () => {
    expect(
      fs.existsSync(
        path.join(skillDir, 'modify', 'src', 'channels', 'index.ts.intent.md'),
      ),
    ).toBe(true);
  });

  it('has container-runner mount modification', () => {
    const crFile = path.join(
      skillDir,
      'modify',
      'src',
      'container-runner.ts',
    );
    expect(fs.existsSync(crFile)).toBe(true);

    const content = fs.readFileSync(crFile, 'utf-8');
    expect(content).toContain('.gmail-mcp');
  });

  it('has agent-runner Gmail MCP server modification', () => {
    const arFile = path.join(
      skillDir,
      'modify',
      'container',
      'agent-runner',
      'src',
      'index.ts',
    );
    expect(fs.existsSync(arFile)).toBe(true);

    const content = fs.readFileSync(arFile, 'utf-8');
    expect(content).toContain('mcp__gmail__*');
    expect(content).toContain('@gongrzhe/server-gmail-autoauth-mcp');
  });

  it('has test file for the channel', () => {
    const testFile = path.join(
      skillDir,
      'add',
      'src',
      'channels',
      'gmail.test.ts',
    );
    expect(fs.existsSync(testFile)).toBe(true);

    const testContent = fs.readFileSync(testFile, 'utf-8');
    expect(testContent).toContain("describe('GmailChannel'");
  });
});