📝 docs: add comprehensive context management guidelines for subagent usage

Add critical sections to CLAUDE.md covering:
- Context preservation rules to prevent bloat
- Agent invocation patterns (sequential/parallel/escalation)
- Result handling and synthesis guidelines
- Edge case handling (failures, timeouts, conflicts)
- Model selection criteria (Haiku vs Sonnet)
- Resume vs fresh agent decision framework
- Result validation and confidence communication
- Debug mode and special scenarios
- Performance degradation handling
- Caching strategies and priority conflict resolution
Author: Lorenz Hilpert
Date: 2025-11-19 14:02:36 +01:00
Parent: fc6d29d62f
Commit: 0f171d265b

CLAUDE.md (+196 lines)

@@ -60,6 +60,202 @@ This file contains meta-instructions for how Claude should work with the ISA-Fro
**Remember: Using subagents is NOT optional - it's mandatory for maintaining context efficiency and search quality.**
## 🔴 CRITICAL: Context Management for Reliable Subagent Usage
**Context bloat kills reliability. You MUST follow these rules:**
### Context Preservation Rules
- **NEVER include full agent results in main conversation** - Summarize findings in 1-2 sentences
- **NEVER repeat information** - Once extracted, don't include raw agent output again
- **NEVER accumulate intermediate steps** - Keep only final answers/decisions
- **DISCARD immediately after use**: Raw JSON responses, full file listings, irrelevant search results
- **KEEP only**: Key findings, extracted values, decision rationale
### Agent Invocation Patterns
| Pattern | When to Use | Rules |
|---------|-----------|-------|
| **Sequential** | Agent 1 results inform Agent 2 | Wait for Agent 1 result before invoking Agent 2 |
| **Parallel** | Independent research needs | Max 2-3 agents in parallel; different domains only |
| **Escalation** | First agent insufficient | Invoke only if the first agent returns "not found" or insufficient results |
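The sequential/parallel decision above can be encoded as a small planning function. This is an illustrative sketch only — `AgentTask` and `choosePattern` are hypothetical names, not part of any real agent API; escalation is omitted because it is decided at runtime, after the first agent's result arrives.

```typescript
// Hypothetical shape of a planned subagent invocation.
type AgentTask = { domain: string; dependsOn?: string };

type Pattern = "sequential" | "parallel";

function choosePattern(tasks: AgentTask[]): Pattern {
  const hasDependency = tasks.some((t) => t.dependsOn !== undefined);
  // "Different domains only": every task must cover a distinct domain.
  const domains = new Set(tasks.map((t) => t.domain));
  const independentDomains = domains.size === tasks.length;
  // Parallel only for 2-3 independent tasks in different domains.
  if (!hasDependency && independentDomains && tasks.length >= 2 && tasks.length <= 3) {
    return "parallel";
  }
  // Any dependency, duplicate domain, or >3 tasks falls back to sequential.
  return "sequential";
}
```

A dependent pair (`dependsOn` set) always plans sequential, matching the "Agent 1 results inform Agent 2" row.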
### Result Handling & Synthesis
**After each agent completes:**
1. Extract the specific answer needed (1-3 key points when possible)
2. Discard raw output from conversation context
3. If synthesizing multiple sources, create brief summary table/list
4. Reference sources only if user asks "where did you find this?"
**If the result can't be summarized in 1-2 sentences:**
- Use **structured formats**: Tables, bullet lists, code blocks (not prose walls)
- Group by category/concept, not by source
- Include only information relevant to the current task
- Ask yourself: "Does the user need all this detail, or am I including 'just in case'?" → If just in case, cut it
**Example - WRONG:**
```
Docs researcher returned: [huge JSON with 100 properties...]
The relevant ones are X, Y, Z...
```
**Example - RIGHT (simple):**
```
docs-researcher found: The API supports async/await with TypeScript strict mode.
```
**Example - RIGHT (complex, structured):**
```
docs-researcher found migration requires 3 steps:
1. Update imports (see migration guide section 2.1)
2. Change type definitions (example in docs)
3. Update tests (patterns shown)
```
### Parallel Agent Execution
Use parallel execution (single message, multiple tool calls) ONLY when:
- Agents are researching **different domains** (e.g., Zod docs + Angular docs)
- Agents have **no dependencies** (neither result informs the other)
- Results will be **independently useful** to the user
NEVER parallelize if one agent's findings should guide the next agent's search.
### Session Coordination
- **One primary task focus** per session phase
- **Related agents run together** (e.g., all docs research at start)
- **Discard intermediate context** between task phases
- **Summarize phase results** before moving to implementation phase
## Edge Cases & Failure Handling
### Agent Failures & Timeouts
| Failure Type | Action | Fallback |
|-------------|--------|----------|
| **Timeout (>2min)** | Retry once with simpler query | Use direct tools if critical |
| **Error/Exception** | Check query syntax, retry with fix | Escalate to advanced agent |
| **Empty result** | Verify target exists first | Try alternative search terms |
| **Conflicting results** | Run third agent as tiebreaker | Present both with confidence levels |
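The "retry once, then fall back" rows of the table can be sketched as a small policy helper. The names here are illustrative, and the primary/fallback callables stand in for whatever agent invocation or direct-tool call applies:

```typescript
// Retry the primary strategy a limited number of times, then fall back.
// In practice: primary = agent query (retried with a simpler query),
// fallback = direct tools or an escalation to an advanced agent.
function withRetryAndFallback<T>(
  primary: () => T,
  fallback: () => T,
  retries: number = 1, // "Retry once" per the table above
): T {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return primary();
    } catch {
      // Swallow and retry; real code would log the failure reason.
    }
  }
  return fallback();
}
```

A transient timeout that succeeds on the retry never touches the fallback; a consistently failing primary does.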
### User Direction Changes
**If user pivots mid-research:**
1. STOP current agent chain immediately
2. Summarize what was found so far (1 sentence)
3. Ask: "Should I continue the original research or focus on [new direction]?"
4. Clear context from abandoned path
### Model Selection (Haiku vs Sonnet)
| Use Haiku for | Use Sonnet for |
|---------------|----------------|
| Single file lookups | Multi-file synthesis |
| Known documentation paths | Complex pattern analysis |
| <5 min expected time | Architectural decisions |
| Well-defined searches | Ambiguous requirements |
### Resume vs Fresh Agent
**Use resume parameter when:**
- Previous agent was interrupted by user
- Need to continue exact same search with more context
- Building on partial results from <5 min ago
**Start fresh when:**
- Different search angle needed
- Previous results >5 min old
- Switching between task types
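The resume-vs-fresh bullets above amount to a small decision function. A hedged sketch, assuming hypothetical field names; "resume" stands for whatever resume parameter the agent runner actually exposes:

```typescript
// Inputs mirroring the bullets above; all names are illustrative.
interface ResumeContext {
  interruptedByUser: boolean; // previous agent was interrupted by the user
  sameSearch: boolean;        // continuing the exact same search with more context
  ageMinutes: number;         // age of the partial results
  switchedTaskType: boolean;  // switching between task types
}

function shouldResume(ctx: ResumeContext): boolean {
  // "Start fresh" conditions win: stale results or a task-type switch.
  if (ctx.ageMinutes >= 5 || ctx.switchedTaskType) return false;
  // Otherwise resume only for an interruption or an exact continuation.
  return ctx.interruptedByUser || ctx.sameSearch;
}
```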
### Result Validation
**Always validate when:**
- Version-specific documentation (check version matches project)
- Third-party APIs (verify against actual response)
- Migration guides (confirm source/target versions)
**Red flags requiring re-verification:**
- "Deprecated" warnings in results
- Dates older than 6 months
- Conflicting information between sources
### Context Overflow Management
**If even structured results exceed reasonable size:**
1. Create an index/TOC of findings
2. Show only the section relevant to immediate task
3. Offer: "I found [X] additional areas. Which would help most?"
4. Store details in agent memory for later retrieval
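The four overflow steps above can be sketched as a single presentation helper. The shapes are hypothetical; a real implementation would additionally persist the unshown sections to agent memory (step 4), which is out of scope here:

```typescript
// Build an index of findings, surface only the relevant section,
// and phrase the offer from step 3. All names are illustrative.
function presentFindings(
  findings: Record<string, string>,
  relevantTo: string,
): { index: string[]; shown: string | undefined; offer: string | null } {
  const index = Object.keys(findings); // step 1: index/TOC of findings
  const shown = findings[relevantTo];  // step 2: only the relevant section
  const remaining = index.filter((k) => k !== relevantTo);
  const offer = remaining.length > 0   // step 3: offer the rest
    ? `I found ${remaining.length} additional areas. Which would help most?`
    : null;
  return { index, shown, offer };
}
```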
### Confidence Communication
**Always indicate confidence level when:**
- Documentation is outdated (>1 year)
- Multiple conflicting sources exist
- Inferring from code (no docs found)
- Using fallback methods
**Format:** `[High confidence]`, `[Medium confidence]`, `[Inferred from code]`
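One possible encoding of the conditions and format above — note the mapping of "outdated / conflicting / fallback" to `[Medium confidence]` is an assumption for illustration, as is every name here:

```typescript
// Conditions mirroring the bullets above; thresholds are illustrative.
interface SourceInfo {
  docAgeMonths: number;       // documentation age
  conflictingSources: boolean;
  inferredFromCode: boolean;  // no docs found
  usedFallback: boolean;      // fallback methods were used
}

function confidenceTag(src: SourceInfo): string {
  if (src.inferredFromCode) return "[Inferred from code]";
  // Assumption: any degraded-source condition drops to Medium.
  if (src.docAgeMonths > 12 || src.conflictingSources || src.usedFallback) {
    return "[Medium confidence]";
  }
  return "[High confidence]";
}
```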
## Debug Mode & Special Scenarios
### When to Show Raw Agent Results
**ONLY expose raw results when:**
- User explicitly asks "show me the raw output"
- Debugging why an implementation isn't working
- Agent results contradict user's expectation significantly
- Need to prove source of information for audit/compliance
**Never for:** Routine queries, successful searches, standard documentation lookups
### Agent Chain Interruption
**If agent chain fails midway (e.g., agent 2 of 5):**
1. Report: "Research stopped at [step] due to [reason]"
2. Show completed findings (structured)
3. Ask: "Continue with partial info or try alternative approach?"
4. Never silently skip failed steps
### Performance Degradation Handling
| Symptom | Likely Cause | Action |
|---------|-------------|--------|
| Agent >3min | Complex search | Switch to simpler query or Haiku model |
| Multiple timeouts | API overload | Wait 30s, retry with rate limiting |
| Consistently empty results | Wrong search domain | Verify project structure first |
### Circular Dependency Detection
**If Agent A needs B's result, and B needs A's:**
1. STOP - this indicates unclear requirements
2. Use AskUserQuestion to clarify which should be determined first
3. Document the decision in comments
### Result Caching Strategy
**Cache and reuse agent results when:**
- Same exact query within 5 minutes
- Documentation lookups (valid for session)
- Project structure analysis (valid until file changes)
**Always re-run when:**
- Error states being debugged
- User explicitly requests "check again"
- Any file modifications occurred
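The caching rules above — a 5-minute TTL for identical queries plus full invalidation on any file change — can be sketched as a small cache class. A minimal illustration, with the clock injected purely so the expiry behavior is testable:

```typescript
// TTL cache for agent results. Illustrative only; "onFileChanged" stands in
// for whatever file-watch hook the session actually has.
class ResultCache {
  private store = new Map<string, { value: string; storedAt: number }>();

  constructor(
    private ttlMs: number = 5 * 60_000, // "same exact query within 5 minutes"
    private now: () => number = Date.now,
  ) {}

  set(query: string, value: string): void {
    this.store.set(query, { value, storedAt: this.now() });
  }

  get(query: string): string | undefined {
    const hit = this.store.get(query);
    if (!hit) return undefined;
    if (this.now() - hit.storedAt > this.ttlMs) {
      this.store.delete(query); // expired entry
      return undefined;
    }
    return hit.value;
  }

  onFileChanged(): void {
    this.store.clear(); // any file modification invalidates all cached results
  }
}
```

"Check again" requests and error-state debugging would simply bypass `get` and re-run the agent.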
### Priority Conflicts
**When user request conflicts with best practices:**
1. Execute user request first (they have context you don't)
2. Note: "[Following user preference over standard pattern]"
3. Document why standard approach might differ
4. Never refuse based on "best practices" alone
<!-- nx configuration start-->
<!-- Leave the start & end comments to automatically receive updates. -->