fix: collect diagnostics for sub-skills invoked during setup

Previously, sub-skills (e.g. /add-telegram) skipped diagnostics when
called from a parent skill like /setup. This lost channel-level events.
Now all events are collected and shown to the user in a single prompt.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author: NanoClaw User
Date: 2026-03-19 22:20:11 +00:00
Committed by: gavrielc
Parent: 8c1d5598ba
Commit: 1734be7259


@@ -2,11 +2,18 @@
 Follow these steps **after all other skill steps are complete**.
 
-## 1. Skip check
+## 1. Collect all events
 
-If this skill was invoked from within another skill (e.g., `/add-whatsapp` called during `/setup`), **skip diagnostics entirely** — the parent skill handles it. Return without doing anything below.
+Every skill sends its own diagnostics event — even when invoked from a parent skill. A `/setup` that invokes `/add-telegram` produces **two** events: `skill_applied` (for add-telegram) and `setup_complete` (for setup). A `/setup` with no channels produces just `setup_complete`.
+
+Build the list of events to send:
+
+1. If sub-skills were invoked during this run (e.g., `/add-telegram` during `/setup`), add a `skill_applied` event for each.
+2. Add the event for the current (outermost) skill.
+
+**If running a sub-skill standalone** (not called from a parent), there is just one event.
 
-## 2. Determine event type
+## 2. Determine event type (for each event)
 
 - `/setup` → `setup_complete`
 - `/update-nanoclaw` → `update_complete`
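The event-collection rule in this hunk can be sketched as a small loop. This is a hypothetical illustration only — the variable names (`outer_skill`, `sub_skills`) are assumptions, not part of the real skill files; only the event names come from the diff above.

```shell
#!/usr/bin/env bash
# Sketch of the rule: one skill_applied event per sub-skill invoked
# during the run, plus one event for the outermost skill itself.
outer_skill="/setup"
sub_skills=("/add-telegram" "/add-whatsapp")   # empty when run standalone

events=()
for _ in "${sub_skills[@]}"; do
  events+=("skill_applied")                    # one event per sub-skill
done
case "$outer_skill" in
  /setup)           events+=("setup_complete") ;;
  /update-nanoclaw) events+=("update_complete") ;;
esac

printf '%s\n' "${events[@]}"
# skill_applied, skill_applied, setup_complete — three events total
```

Run standalone (empty `sub_skills`), the loop contributes nothing and exactly one event remains, matching the "just one event" case above.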
@@ -58,25 +65,25 @@ Estimate `error_count` from the conversation (how many errors/retries occurred d
 - `breaking_changes_found`: whether breaking changes were detected during the update
 - `breaking_changes_skills_run`: which skills had to be re-run to fix breaking changes
 
-## 4. Dry run
+## 4. Dry run all events
 
-Run with `--dry-run` to get the full payload:
+For **each** event, run with `--dry-run` to get the payload:
 
 ```bash
 npx tsx scripts/send-diagnostics.ts --event <event_type> --success --data '<json>' --dry-run
 ```
 
-Use `--failure` instead of `--success` if the skill failed.
+Use `--failure` instead of `--success` if that particular skill/step failed.
 
-If the command produces no output, the user has opted out permanently — skip the rest.
+If **any** dry-run produces no output, the user has opted out permanently — skip the rest for all events.
 
-## 5. Show the user and ask
+## 5. Show the user and ask once
 
-Show the JSON output and ask:
+Show **all** payloads together and ask **once** (not per-event):
 
 > "Would you like to send anonymous diagnostics to help improve NanoClaw? Here's exactly what would be sent:"
 >
-> (show the JSON)
+> (show all JSON payloads)
 >
 > **Yes** / **No** / **Never ask again**
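Steps 4–5 as changed by this hunk amount to: dry-run every event, abort for all of them on an empty result, and batch the payloads into a single prompt. A minimal sketch, with `dry_run` as a hypothetical stand-in for the real `npx tsx scripts/send-diagnostics.ts ... --dry-run` call (it echoes a payload, or nothing when the user has opted out):

```shell
#!/usr/bin/env bash
# dry_run: stand-in for the real send-diagnostics.ts --dry-run invocation.
dry_run() { printf '{"event":"%s","success":true}' "$1"; }

events=("skill_applied" "setup_complete")
payloads=()
opted_out=0
for evt in "${events[@]}"; do
  out="$(dry_run "$evt")"
  if [ -z "$out" ]; then
    opted_out=1   # empty output = permanent opt-out: skip ALL events
    break
  fi
  payloads+=("$out")
done

# Show all payloads together and ask once, not once per event.
[ "$opted_out" -eq 0 ] && printf '%s\n' "${payloads[@]}"
```

The single `opted_out` flag is what makes the opt-out apply "for all events" rather than just the one whose dry-run was empty.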
@@ -84,11 +91,11 @@ Use AskUserQuestion.
 ## 6. Handle response
 
-- **Yes**: Run the same command without `--dry-run`:
+- **Yes**: Send **all** events (run each command without `--dry-run`):
 
 ```bash
 npx tsx scripts/send-diagnostics.ts --event <event_type> --success --data '<json>'
 ```
 
-Confirm: "Diagnostics sent."
+Confirm: "Diagnostics sent (N events)." or "Diagnostics sent." if only one.
 
 - **No**: Do nothing. User will be asked again next time.
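The new confirmation wording in step 6 pluralizes only when more than one event was sent. A tiny sketch of that rule (the `confirm_message` helper is hypothetical, introduced here purely for illustration):

```shell
#!/usr/bin/env bash
# confirm_message N: the step-6 confirmation string for N sent events.
confirm_message() {
  local n="$1"
  if [ "$n" -gt 1 ]; then
    printf 'Diagnostics sent (%d events).' "$n"
  else
    printf 'Diagnostics sent.'
  fi
}

confirm_message 1   # → Diagnostics sent.
echo
confirm_message 3   # → Diagnostics sent (3 events).
```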