* feat: add /add-ollama skill for local model inference

  Adds a skill that integrates Ollama as an MCP server, allowing the container agent to offload tasks to local models (summarization, translation, general queries) while keeping Claude as orchestrator.

  Skill contents:
  - ollama-mcp-stdio.ts: stdio MCP server with ollama_list_models and ollama_generate tools
  - ollama-watch.sh: macOS notification watcher for Ollama activity
  - Modifications to index.ts (MCP config) and container-runner.ts (log surfacing)

* chore: rename skill from /add-ollama to /add-ollama-tool

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: gavrielc <gabicohen22@yahoo.com>
Intent: src/container-runner.ts modifications
What changed
Surface Ollama MCP server log lines at info level so they appear in nanoclaw.log for the monitoring watcher script.
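Since the point of the change is that `[OLLAMA]`-tagged lines now land in nanoclaw.log where the watcher can pick them up, a quick grep sanity check illustrates the watcher-facing effect. This is a sketch: the temp-file setup and sample lines are assumptions, not from the repo.

```shell
# Build a small sample log and confirm only [OLLAMA]-tagged lines match.
# The temp file stands in for nanoclaw.log; sample lines are illustrative.
log=$(mktemp)
printf '%s\n' 'debug: container started' '[OLLAMA] generate request' > "$log"
grep -c '\[OLLAMA\]' "$log"   # prints: 1
```

The watcher script can use the same pattern against the live log (e.g. `tail -f` piped into `grep --line-buffered '\[OLLAMA\]'`).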
Key sections
container.stderr handler (inside runContainerAgent)
- Changed: the empty-line check from `if (line)` to `if (!line) continue;`
- Added: `[OLLAMA]` tag detection; lines containing `[OLLAMA]` are logged at `logger.info` instead of `logger.debug`
- All other stderr lines remain at the `logger.debug` level
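The routing above can be sketched as a small pure helper. Names here (`logLevelFor`, `routeStderrChunk`) are invented for illustration; the real logic lives inline in the `container.stderr` handler inside `runContainerAgent`.

```typescript
// Hypothetical sketch of the stderr log routing described above.
type Level = "skip" | "info" | "debug";

function logLevelFor(line: string): Level {
  if (!line) return "skip"; // mirrors: if (!line) continue;
  // Lines tagged by the Ollama MCP server are surfaced at info level
  // so they land in nanoclaw.log for the watcher script.
  if (line.includes("[OLLAMA]")) return "info";
  return "debug"; // all other stderr stays at debug
}

// Example wiring over one chunk of stderr output:
function routeStderrChunk(
  chunk: string,
  logger: { info: (s: string) => void; debug: (s: string) => void },
): void {
  for (const raw of chunk.split("\n")) {
    const level = logLevelFor(raw.trim());
    if (level === "skip") continue;
    logger[level](raw);
  }
}
```

Keeping the decision in a pure function like this makes the info/debug split trivially unit-testable, independent of the container lifecycle.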
Invariants (must-keep)
- Stderr truncation logic unchanged
- Timeout reset logic unchanged (stderr doesn't reset timeout)
- Stdout parsing logic unchanged
- Volume mount logic unchanged
- All other container lifecycle unchanged