Lorenz Hilpert ceaf6dbf3c 📝 docs: add mandatory skill usage guidelines for reliable proactive invocation
Add critical section to CLAUDE.md covering:
- Skill vs Agent vs Direct Tools decision matrix
- Mandatory skill invocation rules with trigger conditions
- Proactive usage framework with right/wrong examples
- Skill chaining and coordination patterns
- Context management for skills (load → apply → unload)
- Failure handling for skill conflicts
- Decision tree for tool selection
2025-11-19 14:31:12 +01:00


CLAUDE.md

This file contains meta-instructions for how Claude should work with the ISA-Frontend codebase.

🔴 CRITICAL: Mandatory Agent Usage

You MUST use these subagents for ALL research and knowledge management tasks:

  • docs-researcher: For ALL documentation (packages, libraries, READMEs)
  • docs-researcher-advanced: Auto-escalate when docs-researcher fails
  • Explore: For ALL code pattern searches and multi-file analysis

Violations of this rule degrade performance and context quality. NO EXCEPTIONS.

Communication Guidelines

Keep answers concise and focused:

  • Provide direct, actionable responses without unnecessary elaboration
  • Skip verbose explanations unless specifically requested
  • Focus on what the user needs to know, not everything you know
  • Use bullet points and structured formatting for clarity
  • Only provide detailed explanations when complexity requires it

🔴 CRITICAL: Mandatory Skill Usage

Skills are project-specific tools that MUST be used proactively for their domains.

Skill vs Agent vs Direct Tools

| Tool Type | Purpose | When to Use | Context Management |
|---|---|---|---|
| Skills | Domain-specific workflows (Angular, testing, architecture) | Writing/reviewing code in skill's domain | Load skill → follow instructions → unload |
| Agents | Research & knowledge gathering | Finding docs, searching code, analysis | Use agent → extract findings → discard output |
| Direct Tools | Single file operations | Reading specific known files | Use tool → process → done |

Mandatory Skill Invocation Rules

ALWAYS invoke skills when:

| Trigger | Required Skill | Why |
|---|---|---|
| Writing Angular templates | angular-template | Modern syntax (@if, @for, @defer) |
| Writing HTML with interactivity | html-template | E2E attributes (data-what, data-which) + ARIA |
| Applying Tailwind classes | tailwind | Design system consistency |
| Writing Angular code | logging | Mandatory logging via @isa/core/logging |
| Creating new library | library-scaffolder | Proper Nx setup + Vitest config |
| Regenerating API clients | swagger-sync-manager | All 10 clients + validation |
| Migrating to standalone | standalone-component-migrator | Complete migration workflow |
| Migrating tests to Vitest | test-migration-specialist | Jest→Vitest conversion |
| Fixing `any` types | type-safety-engineer | Add Zod schemas + type guards |
| Checking architecture | architecture-enforcer | Import boundaries + circular deps |
| Resolving circular deps | circular-dependency-resolver | Graph analysis + fix strategies |
| API changes analysis | api-change-analyzer | Breaking change detection |
| Git workflow | git-workflow | Branch naming + conventional commits |

Proactive Skill Usage Framework

"Proactive" means:

  1. Detect task domain automatically - Don't wait for user to say "use skill X"
  2. Invoke before starting work - Load skill first, then execute
  3. Apply throughout task - Keep skill active for entire domain work
  4. Validate with skill - Use skill to review your own output

Example - WRONG:

User: "Add a new Angular component with a form"
Assistant: [Writes component without skills]
User: "Did you use the Angular template skill?"
Assistant: "Oh sorry, let me reload with the skill"

Example - RIGHT:

User: "Add a new Angular component with a form"
Assistant: [Invokes angular-template skill]
Assistant: [Invokes html-template skill]
Assistant: [Invokes logging skill]
Assistant: [Writes component following all skill guidelines]

Skill Chaining & Coordination

Multiple skills often apply to the same task:

| Task | Required Skill Chain | Order |
|---|---|---|
| New Angular component | angular-template → html-template → logging → tailwind | Template syntax → HTML attributes → Logging → Styling |
| New library | library-scaffolder → architecture-enforcer | Scaffold → Validate structure |
| API sync | api-change-analyzer → swagger-sync-manager | Analyze changes → Regenerate clients |
| Component migration | standalone-component-migrator → test-migration-specialist | Migrate component → Migrate tests |

Skill chaining rules:

  • Load ALL applicable skills at task start (via Skill tool)
  • Skills don't nest - they provide instructions you follow
  • Skills stay active for entire task scope
  • Validate final output against ALL loaded skills

Skill Context Management

Skills expand instructions into your context:

  • DO: Load skill → internalize rules → follow throughout task
  • DON'T: Re-read skill instructions multiple times
  • DON'T: Quote skill instructions back to user
  • DON'T: Keep skill "open" after task completion

After task completion:

  1. Verify work against skill requirements
  2. Summarize what was applied (1 sentence)
  3. Move on (skill context auto-clears next task)

Skill Failure Handling

| Issue | Action |
|---|---|
| Skill not found | Verify skill name; ask user to check available skills |
| Skill conflicts with user request | Note conflict; ask user for preference |
| Multiple skills give conflicting rules | Follow most specific skill for current file type |
| Skill instructions unclear | Use best judgment; document assumption in code comment |

Skills vs Agents - Decision Tree

Is this a code writing/reviewing task?
├─ YES → Check if skill exists for domain
│         ├─ Skill exists → Use Skill
│         └─ No skill → Use direct tools
└─ NO → Is this research/finding information?
          ├─ YES → Use Agent (docs-researcher/Explore)
          └─ NO → Use direct tools (Read/Edit/Bash)
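The same tree can be sketched as a small selector function (illustrative only; the `Task` shape and the skill registry below are hypothetical, not part of the repo):

```typescript
type Task = {
  writesOrReviewsCode: boolean;
  isResearch: boolean;
  domain?: string;
};

// Hypothetical registry; the real skill names come from the table above.
const skillDomains = new Set(["angular-template", "tailwind", "logging"]);

// Encodes the decision tree: code work uses a skill when one covers the
// domain, research goes to an agent, everything else uses direct tools.
function chooseTool(task: Task): "skill" | "agent" | "direct" {
  if (task.writesOrReviewsCode) {
    return task.domain !== undefined && skillDomains.has(task.domain)
      ? "skill"
      : "direct";
  }
  return task.isResearch ? "agent" : "direct";
}
```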

Researching and Investigating the Codebase

🔴 MANDATORY: You MUST use subagents for research. Do NOT read or search files directly when researching.

Required Agent Usage

| Task Type | Required Agent | Escalation Path |
|---|---|---|
| Package/Library Documentation | docs-researcher | docs-researcher-advanced if not found |
| Internal Library READMEs | docs-researcher | Keep context clean |
| Code Pattern Search | Explore | Set thoroughness level |
| Implementation Analysis | Explore | Multiple file analysis |
| Single Specific File | Read tool directly | No agent needed |

Documentation Research System (Two-Tier)

  1. ALWAYS start with docs-researcher (Haiku, 30-120s) for any documentation need
  2. Auto-escalate to docs-researcher-advanced (Sonnet, 2-7min) when:
    • Documentation not found
    • Conflicting sources
    • Need code inference
    • Complex architectural questions

Enforcement Examples

❌ WRONG: Read libs/ui/buttons/README.md
✅ RIGHT: Task → docs-researcher → "Find documentation for @isa/ui/buttons"

❌ WRONG: Grep for "signalStore" patterns
✅ RIGHT: Task → Explore → "Find all signalStore implementations"

❌ WRONG: WebSearch for Zod documentation
✅ RIGHT: Task → docs-researcher → "Find Zod validation documentation"

Remember: Using subagents is NOT optional - it's mandatory for maintaining context efficiency and search quality.

🔴 CRITICAL: Context Management for Reliable Subagent Usage

Context bloat kills reliability. You MUST follow these rules:

Context Preservation Rules

  • NEVER include full agent results in main conversation - Summarize findings in 1-2 sentences
  • NEVER repeat information - Once extracted, don't include raw agent output again
  • NEVER accumulate intermediate steps - Keep only final answers/decisions
  • DISCARD immediately after use: Raw JSON responses, full file listings, irrelevant search results
  • KEEP only: Key findings, extracted values, decision rationale

Agent Invocation Patterns

| Pattern | When to Use | Rules |
|---|---|---|
| Sequential | Agent 1 results inform Agent 2 | Wait for Agent 1's result before invoking Agent 2 |
| Parallel | Independent research needs | Max 2-3 agents in parallel; different domains only |
| Escalation | First agent insufficient | Invoke only if first agent returns "not found" or insufficient results |

Result Handling & Synthesis

After each agent completes:

  1. Extract the specific answer needed (1-3 key points when possible)
  2. Discard raw output from conversation context
  3. If synthesizing multiple sources, create brief summary table/list
  4. Reference sources only if user asks "where did you find this?"

If result can't be summarized in 1-2 sentences:

  • Use structured formats: Tables, bullet lists, code blocks (not prose walls)
  • Group by category/concept, not by source
  • Include only information relevant to the current task
  • Ask yourself: "Does the user need all this detail, or am I including 'just in case'?" → If just in case, cut it

Example - WRONG:

Docs researcher returned: [huge JSON with 100 properties...]
The relevant ones are X, Y, Z...

Example - RIGHT (simple):

docs-researcher found: The API supports async/await with TypeScript strict mode.

Example - RIGHT (complex, structured):

docs-researcher found migration requires 3 steps:
1. Update imports (see migration guide section 2.1)
2. Change type definitions (example in docs)
3. Update tests (patterns shown)

Parallel Agent Execution

Use parallel execution (single message, multiple tool calls) ONLY when:

  • Agents are researching different domains (e.g., Zod docs + Angular docs)
  • Agents have no dependencies (neither result informs the other)
  • Results will be independently useful to the user

NEVER parallel if: One agent's findings should guide the next agent's search.

Session Coordination

  • One primary task focus per session phase
  • Related agents run together (e.g., all docs research at start)
  • Discard intermediate context between task phases
  • Summarize phase results before moving to implementation phase

Edge Cases & Failure Handling

Agent Failures & Timeouts

| Failure Type | Action | Fallback |
|---|---|---|
| Timeout (>2 min) | Retry once with simpler query | Use direct tools if critical |
| Error/Exception | Check query syntax, retry with fix | Escalate to advanced agent |
| Empty result | Verify target exists first | Try alternative search terms |
| Conflicting results | Run third agent as tiebreaker | Present both with confidence levels |

User Direction Changes

If user pivots mid-research:

  1. STOP current agent chain immediately
  2. Summarize what was found so far (1 sentence)
  3. Ask: "Should I continue the original research or focus on [new direction]?"
  4. Clear context from abandoned path

Model Selection (Haiku vs Sonnet)

| Use Haiku for | Use Sonnet for |
|---|---|
| Single file lookups | Multi-file synthesis |
| Known documentation paths | Complex pattern analysis |
| <5 min expected time | Architectural decisions |
| Well-defined searches | Ambiguous requirements |

Resume vs Fresh Agent

Use resume parameter when:

  • Previous agent was interrupted by user
  • Need to continue exact same search with more context
  • Building on partial results from <5 min ago

Start fresh when:

  • Different search angle needed
  • Previous results >5 min old
  • Switching between task types

Result Validation

Always validate when:

  • Version-specific documentation (check version matches project)
  • Third-party APIs (verify against actual response)
  • Migration guides (confirm source/target versions)

Red flags requiring re-verification:

  • "Deprecated" warnings in results
  • Dates older than 6 months
  • Conflicting information between sources

Context Overflow Management

If even structured results exceed reasonable size:

  1. Create an index/TOC of findings
  2. Show only the section relevant to immediate task
  3. Offer: "I found [X] additional areas. Which would help most?"
  4. Store details in agent memory for later retrieval

Confidence Communication

Always indicate confidence level when:

  • Documentation is outdated (>1 year)
  • Multiple conflicting sources exist
  • Inferring from code (no docs found)
  • Using fallback methods

Format: [High confidence], [Medium confidence], [Inferred from code]

Debug Mode & Special Scenarios

When to Show Raw Agent Results

ONLY expose raw results when:

  • User explicitly asks "show me the raw output"
  • Debugging why an implementation isn't working
  • Agent results contradict user's expectation significantly
  • Need to prove source of information for audit/compliance

Never for: Routine queries, successful searches, standard documentation lookups

Agent Chain Interruption

If agent chain fails midway (e.g., agent 2 of 5):

  1. Report: "Research stopped at [step] due to [reason]"
  2. Show completed findings (structured)
  3. Ask: "Continue with partial info or try alternative approach?"
  4. Never silently skip failed steps

Performance Degradation Handling

| Symptom | Likely Cause | Action |
|---|---|---|
| Agent >3 min | Complex search | Switch to simpler query or Haiku model |
| Multiple timeouts | API overload | Wait 30s, retry with rate limiting |
| Consistently empty results | Wrong domain | Verify project structure first |

Circular Dependency Detection

If Agent A needs B's result, and B needs A's:

  1. STOP - this indicates unclear requirements
  2. Use AskUserQuestion to clarify which should be determined first
  3. Document the decision in comments

Result Caching Strategy

Cache and reuse agent results when:

  • Same exact query within 5 minutes
  • Documentation lookups (valid for session)
  • Project structure analysis (valid until file changes)

Always re-run when:

  • Error states being debugged
  • User explicitly requests "check again"
  • Any file modifications occurred
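The rules above describe a small time-boxed cache; a minimal sketch of that policy (class and method names are illustrative, not an actual project API):

```typescript
type CacheEntry = { result: string; storedAt: number };

const TTL_MS = 5 * 60 * 1000; // "same exact query within 5 minutes"

class AgentResultCache {
  private entries = new Map<string, CacheEntry>();

  get(query: string, now: number = Date.now()): string | undefined {
    const entry = this.entries.get(query);
    if (!entry) return undefined;
    // Expire results older than the 5-minute window.
    if (now - entry.storedAt > TTL_MS) {
      this.entries.delete(query);
      return undefined;
    }
    return entry.result;
  }

  set(query: string, result: string, now: number = Date.now()): void {
    this.entries.set(query, { result, storedAt: now });
  }

  // "Any file modifications occurred" → drop every cached result.
  invalidateAll(): void {
    this.entries.clear();
  }
}
```

`invalidateAll()` covers the file-modification rule, while per-query expiry on read enforces the 5-minute reuse window.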

Priority Conflicts

When user request conflicts with best practices:

  1. Execute user request first (they have context you don't)
  2. Note: "[Following user preference over standard pattern]"
  3. Document why standard approach might differ
  4. Never refuse based on "best practices" alone

General Guidelines for working with Nx

  • When running tasks (for example build, lint, test, e2e), always prefer running them through Nx (i.e. `nx run`, `nx run-many`, `nx affected`) instead of using the underlying tooling directly
  • You have access to the Nx MCP server and its tools; use them to help the user
  • When answering questions about the repository, use the `nx_workspace` tool first to gain an understanding of the workspace architecture where applicable
  • When working in individual projects, use the `nx_project_details` MCP tool to analyze and understand the specific project's structure and dependencies
  • For questions about Nx configuration or best practices, or if you're unsure, use the `nx_docs` tool to get relevant, up-to-date docs; always use it instead of assuming things about Nx configuration
  • If the user needs help with an Nx configuration or project graph error, use the `nx_workspace` tool to retrieve any errors