# CLAUDE.md
This file contains meta-instructions for how Claude should work with the ISA-Frontend codebase.
## 🔴 CRITICAL: Mandatory Agent Usage
You MUST use these subagents for ALL research and knowledge management tasks:
- `docs-researcher`: For ALL documentation (packages, libraries, READMEs)
- `docs-researcher-advanced`: Auto-escalate when `docs-researcher` fails
- `Explore`: For ALL code pattern searches and multi-file analysis
Violations of this rule degrade performance and context quality. NO EXCEPTIONS.
## Communication Guidelines
Keep answers concise and focused:
- Provide direct, actionable responses without unnecessary elaboration
- Skip verbose explanations unless specifically requested
- Focus on what the user needs to know, not everything you know
- Use bullet points and structured formatting for clarity
- Only provide detailed explanations when complexity requires it
## Researching and Investigating the Codebase
🔴 MANDATORY: You MUST use subagents for research. Direct file reading/searching is not allowed, except for reading a single, specific file (see the table below).
### Required Agent Usage

| Task Type | Required Agent | Escalation Path |
|---|---|---|
| Package/Library Documentation | docs-researcher | → docs-researcher-advanced if not found |
| Internal Library READMEs | docs-researcher | Keep context clean |
| Code Pattern Search | Explore | Set thoroughness level |
| Implementation Analysis | Explore | Multiple file analysis |
| Single Specific File | Read tool directly | No agent needed |
### Documentation Research System (Two-Tier)

- ALWAYS start with `docs-researcher` (Haiku, 30-120s) for any documentation need
- Auto-escalate to `docs-researcher-advanced` (Sonnet, 2-7 min) when:
  - Documentation not found
  - Conflicting sources
  - Need code inference
  - Complex architectural questions
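A minimal TypeScript sketch of this two-tier flow, assuming a hypothetical `invokeAgent` helper that stands in for a real Task tool call (the helper and its result shape are illustrative, not an actual API):

```ts
// Assumed result shape, purely illustrative, not a real Task tool API.
interface AgentResult {
  found: boolean;
  conflicting: boolean;
  summary: string;
}

// Stub standing in for a real Task tool invocation.
async function invokeAgent(agent: string, query: string): Promise<AgentResult> {
  return { found: false, conflicting: false, summary: `${agent}: "${query}"` };
}

// Two-tier flow: always start with the cheap Haiku tier, escalate only when needed.
async function researchDocs(query: string): Promise<AgentResult> {
  const fast = await invokeAgent("docs-researcher", query); // Haiku, 30-120s
  if (fast.found && !fast.conflicting) return fast;
  // Escalate on: not found, conflicting sources, code inference, architectural complexity.
  return invokeAgent("docs-researcher-advanced", query); // Sonnet, 2-7 min
}
```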
### Enforcement Examples
❌ WRONG: Read libs/ui/buttons/README.md
✅ RIGHT: Task → docs-researcher → "Find documentation for @isa/ui/buttons"
❌ WRONG: Grep for "signalStore" patterns
✅ RIGHT: Task → Explore → "Find all signalStore implementations"
❌ WRONG: WebSearch for Zod documentation
✅ RIGHT: Task → docs-researcher → "Find Zod validation documentation"
Remember: Using subagents is NOT optional - it's mandatory for maintaining context efficiency and search quality.
## 🔴 CRITICAL: Context Management for Reliable Subagent Usage
Context bloat kills reliability. You MUST follow these rules:
### Context Preservation Rules
- NEVER include full agent results in main conversation - Summarize findings in 1-2 sentences
- NEVER repeat information - Once extracted, don't include raw agent output again
- NEVER accumulate intermediate steps - Keep only final answers/decisions
- DISCARD immediately after use: Raw JSON responses, full file listings, irrelevant search results
- KEEP only: Key findings, extracted values, decision rationale
### Agent Invocation Patterns
| Pattern | When to Use | Rules |
|---|---|---|
| Sequential | Agent 1 results inform Agent 2 | Wait for Agent 1 result before invoking Agent 2 |
| Parallel | Independent research needs | Max 2-3 agents in parallel; different domains only |
| Escalation | First agent insufficient | Invoke only if first agent returns "not found" or insufficient |
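The three patterns, sketched with the same illustrative `invokeAgent` stub used earlier (the queries reuse this document's examples; nothing here is a real Task tool API):

```ts
// Illustrative stub; a real call would go through the Task tool.
async function invokeAgent(agent: string, query: string): Promise<string> {
  return `${agent} result for "${query}"`;
}

// Sequential: Agent 1's result shapes Agent 2's query, so await in between.
async function sequentialExample(): Promise<string> {
  const patterns = await invokeAgent("Explore", "Find all signalStore implementations");
  return invokeAgent("docs-researcher", `Find docs for the patterns found: ${patterns}`);
}

// Parallel: independent domains only, max 2-3 agents, fired together in one message.
async function parallelExample(): Promise<string[]> {
  return Promise.all([
    invokeAgent("docs-researcher", "Find Zod validation documentation"),
    invokeAgent("docs-researcher", "Find Angular signals documentation"),
  ]);
}

// Escalation: invoke the advanced agent only if the first result is insufficient.
async function escalationExample(query: string): Promise<string> {
  const first = await invokeAgent("docs-researcher", query);
  return first.includes("not found")
    ? invokeAgent("docs-researcher-advanced", query)
    : first;
}
```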
### Result Handling & Synthesis
After each agent completes:
- Extract the specific answer needed (1-3 key points when possible)
- Discard raw output from conversation context
- If synthesizing multiple sources, create brief summary table/list
- Reference sources only if user asks "where did you find this?"
If result can't be summarized in 1-2 sentences:
- Use structured formats: Tables, bullet lists, code blocks (not prose walls)
- Group by category/concept, not by source
- Include only information relevant to the current task
- Ask yourself: "Does the user need all this detail, or am I including 'just in case'?" → If just in case, cut it
Example - WRONG:
Docs researcher returned: [huge JSON with 100 properties...]
The relevant ones are X, Y, Z...
Example - RIGHT (simple):
docs-researcher found: The API supports async/await with TypeScript strict mode.
Example - RIGHT (complex, structured):
docs-researcher found migration requires 3 steps:
1. Update imports (see migration guide section 2.1)
2. Change type definitions (example in docs)
3. Update tests (patterns shown)
### Parallel Agent Execution
Use parallel execution (single message, multiple tool calls) ONLY when:
- Agents are researching different domains (e.g., Zod docs + Angular docs)
- Agents have no dependencies (neither result informs the other)
- Results will be independently useful to the user
NEVER parallel if: One agent's findings should guide the next agent's search.
### Session Coordination
- One primary task focus per session phase
- Related agents run together (e.g., all docs research at start)
- Discard intermediate context between task phases
- Summarize phase results before moving to implementation phase
## Edge Cases & Failure Handling

### Agent Failures & Timeouts
| Failure Type | Action | Fallback |
|---|---|---|
| Timeout (>2min) | Retry once with simpler query | Use direct tools if critical |
| Error/Exception | Check query syntax, retry with fix | Escalate to advanced agent |
| Empty result | Verify target exists first | Try alternative search terms |
| Conflicting results | Run third agent as tiebreaker | Present both with confidence levels |
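A sketch of the timeout row's retry logic, again assuming the illustrative `invokeAgent` stub; the query-simplification heuristic is hypothetical:

```ts
// Illustrative stub; a real call would go through the Task tool.
async function invokeAgent(agent: string, query: string): Promise<string> {
  return `${agent} result for "${query}"`;
}

// Reject if the agent exceeds the 2-minute budget from the table above.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) => setTimeout(() => reject(new Error("timeout")), ms)),
  ]);
}

// Timeout handling: retry once with a simpler query, then fall back to direct tools.
async function resilientInvoke(agent: string, query: string): Promise<string | null> {
  try {
    return await withTimeout(invokeAgent(agent, query), 120_000);
  } catch {
    const simpler = query.split(",")[0]; // hypothetical query-simplification heuristic
    try {
      return await withTimeout(invokeAgent(agent, simpler), 120_000);
    } catch {
      return null; // signal the caller to use direct tools if the task is critical
    }
  }
}
```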
### User Direction Changes
If user pivots mid-research:
- STOP current agent chain immediately
- Summarize what was found so far (1 sentence)
- Ask: "Should I continue the original research or focus on [new direction]?"
- Clear context from abandoned path
### Model Selection (Haiku vs Sonnet)
| Use Haiku for | Use Sonnet for |
|---|---|
| Single file lookups | Multi-file synthesis |
| Known documentation paths | Complex pattern analysis |
| <5 min expected time | Architectural decisions |
| Well-defined searches | Ambiguous requirements |
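The table condensed into a hedged heuristic; the task descriptor and thresholds below are one way this document's criteria could be encoded, not a prescribed API:

```ts
// Assumed task descriptor; fields mirror the table's criteria.
interface ResearchTask {
  fileCount: number;        // 1 = single file lookup; >1 = multi-file synthesis
  estimatedMinutes: number; // <5 min expected time favors Haiku
  wellDefined: boolean;     // ambiguous requirements favor Sonnet
  architectural: boolean;   // architectural decisions favor Sonnet
}

// Default to the cheap model; upgrade when any Sonnet criterion applies.
function pickModel(task: ResearchTask): "haiku" | "sonnet" {
  const needsSonnet =
    task.fileCount > 1 ||
    task.architectural ||
    !task.wellDefined ||
    task.estimatedMinutes >= 5;
  return needsSonnet ? "sonnet" : "haiku";
}
```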
### Resume vs Fresh Agent
Use resume parameter when:
- Previous agent was interrupted by user
- Need to continue exact same search with more context
- Building on partial results from <5 min ago
Start fresh when:
- Different search angle needed
- Previous results >5 min old
- Switching between task types
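The same framework as a decision function (the state shape is assumed for illustration):

```ts
// Minimal state needed for the resume-vs-fresh call; shape is assumed.
interface PriorAgentRun {
  ageMinutes: number;       // how old are the partial results?
  sameSearchAngle: boolean; // continuing the exact same search?
  sameTaskType: boolean;    // or did the task type switch?
}

// Resume only when every "start fresh" condition is ruled out.
function shouldResume(prior: PriorAgentRun): boolean {
  if (prior.ageMinutes >= 5) return false;  // previous results >5 min old
  if (!prior.sameSearchAngle) return false; // different search angle needed
  if (!prior.sameTaskType) return false;    // switching between task types
  return true; // interrupted run or fresh partial results on the same search
}
```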
### Result Validation
Always validate when:
- Version-specific documentation (check version matches project)
- Third-party APIs (verify against actual response)
- Migration guides (confirm source/target versions)
Red flags requiring re-verification:
- "Deprecated" warnings in results
- Dates older than 6 months
- Conflicting information between sources
### Context Overflow Management
If even structured results exceed reasonable size:
- Create an index/TOC of findings
- Show only the section relevant to immediate task
- Offer: "I found [X] additional areas. Which would help most?"
- Store details in agent memory for later retrieval
### Confidence Communication
Always indicate confidence level when:
- Documentation is outdated (>1 year)
- Multiple conflicting sources exist
- Inferring from code (no docs found)
- Using fallback methods
Format: [High confidence], [Medium confidence], [Inferred from code]
## Debug Mode & Special Scenarios

### When to Show Raw Agent Results
ONLY expose raw results when:
- User explicitly asks "show me the raw output"
- Debugging why an implementation isn't working
- Agent results contradict user's expectation significantly
- Need to prove source of information for audit/compliance
Never for: Routine queries, successful searches, standard documentation lookups
### Agent Chain Interruption
If agent chain fails midway (e.g., agent 2 of 5):
- Report: "Research stopped at [step] due to [reason]"
- Show completed findings (structured)
- Ask: "Continue with partial info or try alternative approach?"
- Never silently skip failed steps
### Performance Degradation Handling
| Symptom | Likely Cause | Action |
|---|---|---|
| Agent >3min | Complex search | Switch to simpler query or Haiku model |
| Multiple timeouts | API overload | Wait 30s, retry with rate limiting |
| Consistent empties | Wrong domain | Verify project structure first |
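A sketch of the "multiple timeouts" row, again assuming the illustrative stub; the attempt count is arbitrary:

```ts
// Illustrative stub; a real call would go through the Task tool.
async function invokeAgent(agent: string, query: string): Promise<string> {
  return `${agent} result for "${query}"`;
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// API overload handling: wait 30s between attempts to let rate limits recover.
async function retryWithRateLimit(agent: string, query: string, attempts = 3): Promise<string> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await invokeAgent(agent, query);
    } catch (error) {
      lastError = error;
      if (attempt < attempts) await sleep(30_000); // per the table: wait 30s, then retry
    }
  }
  throw lastError;
}
```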
### Circular Dependency Detection
If Agent A needs B's result, and B needs A's:
- STOP - this indicates unclear requirements
- Use AskUserQuestion to clarify which should be determined first
- Document the decision in comments
### Result Caching Strategy
Cache and reuse agent results when:
- Same exact query within 5 minutes
- Documentation lookups (valid for session)
- Project structure analysis (valid until file changes)
Always re-run when:
- Error states being debugged
- User explicitly requests "check again"
- Any file modifications occurred
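A session-scoped cache sketch encoding these rules; the entry shape and invalidation hook are assumptions, not an existing mechanism:

```ts
// Assumed cache entry shape; "kind" selects which validity rule applies.
interface CacheEntry {
  result: string;
  storedAt: number; // epoch ms
  kind: "exact-query" | "docs-lookup" | "project-structure";
}

const sessionCache = new Map<string, CacheEntry>();
const FIVE_MINUTES = 5 * 60 * 1000;

// Returns a cached result only while it is still valid under the rules above.
function getCached(query: string, filesChangedSince: (t: number) => boolean): string | null {
  const entry = sessionCache.get(query);
  if (!entry) return null;
  const age = Date.now() - entry.storedAt;
  if (entry.kind === "exact-query" && age > FIVE_MINUTES) return null; // 5-min TTL
  if (entry.kind === "project-structure" && filesChangedSince(entry.storedAt)) return null;
  return entry.result; // docs lookups remain valid for the whole session
}
```

Active debugging, an explicit "check again", or any file modification should bypass this lookup entirely and re-run the agent.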
### Priority Conflicts
When user request conflicts with best practices:
- Execute user request first (they have context you don't)
- Note: "[Following user preference over standard pattern]"
- Document why standard approach might differ
- Never refuse based on "best practices" alone
## General Guidelines for working with Nx

- When running tasks (for example build, lint, test, e2e, etc.), always prefer running the task through `nx` (i.e. `nx run`, `nx run-many`, `nx affected`) instead of using the underlying tooling directly
- You have access to the Nx MCP server and its tools; use them to help the user
- When answering questions about the repository, use the `nx_workspace` tool first to gain an understanding of the workspace architecture where applicable
- When working in individual projects, use the `nx_project_details` MCP tool to analyze and understand the specific project structure and dependencies
- For questions around Nx configuration or best practices, or if you're unsure, use the `nx_docs` tool to get relevant, up-to-date docs. Always use this instead of assuming things about Nx configuration
- If the user needs help with an Nx configuration or project graph error, use the `nx_workspace` tool to get any errors