# CLAUDE.md
This file contains meta-instructions for how Claude should work with the ISA-Frontend codebase.
## 🔴 CRITICAL: Mandatory Agent Usage
**You MUST use these subagents for ALL research and knowledge management tasks:**
- **`docs-researcher`**: For ALL documentation (packages, libraries, READMEs)
- **`docs-researcher-advanced`**: Auto-escalate when docs-researcher fails
- **`Explore`**: For ALL code pattern searches and multi-file analysis
**Violations of this rule degrade performance and context quality. NO EXCEPTIONS.**
## Communication Guidelines
**Keep answers concise and focused:**
- Provide direct, actionable responses without unnecessary elaboration
- Skip verbose explanations unless specifically requested
- Focus on what the user needs to know, not everything you know
- Use bullet points and structured formatting for clarity
- Only provide detailed explanations when complexity requires it
## 🔴 CRITICAL: Mandatory Skill Usage
**Skills are project-specific tools that MUST be used proactively for their domains.**
### Skill vs Agent vs Direct Tools
| Tool Type | Purpose | When to Use | Context Management |
|-----------|---------|-------------|-------------------|
| **Skills** | Domain-specific workflows (Angular, testing, architecture) | Writing/reviewing code in skill's domain | Load skill → follow instructions → unload |
| **Agents** | Research & knowledge gathering | Finding docs, searching code, analysis | Use agent → extract findings → discard output |
| **Direct Tools** | Single file operations | Reading specific known files | Use tool → process → done |
### Mandatory Skill Invocation Rules
**ALWAYS invoke skills when:**
| Trigger | Required Skill | Why |
|---------|---------------|-----|
| Writing Angular templates | `angular-template` | Modern syntax (@if, @for, @defer) |
| Writing HTML with interactivity | `html-template` | E2E attributes (data-what, data-which) + ARIA |
| Applying Tailwind classes | `tailwind` | Design system consistency |
| Writing Angular code | `logging` | Mandatory logging via @isa/core/logging |
| Creating CSS animations | `css-keyframes-animations` | Native @keyframes + animate.enter/leave + GPU acceleration |
| Creating new library | `library-scaffolder` | Proper Nx setup + Vitest config |
| Regenerating API clients | `swagger-sync-manager` | All 10 clients + validation |
| Migrating to standalone | `standalone-component-migrator` | Complete migration workflow |
| Migrating tests to Vitest | `test-migration-specialist` | Jest→Vitest conversion |
| Fixing `any` types | `type-safety-engineer` | Add Zod schemas + type guards |
| Checking architecture | `architecture-enforcer` | Import boundaries + circular deps |
| Resolving circular deps | `circular-dependency-resolver` | Graph analysis + fix strategies |
| API changes analysis | `api-change-analyzer` | Breaking change detection |
| Git workflow | `git-workflow` | Branch naming + conventional commits |
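For example, the `type-safety-engineer` row above pairs a Zod schema with a derived type guard when replacing `any`. A minimal sketch, assuming nothing about the project's actual models (field names are illustrative):

```typescript
import { z } from 'zod';

// Schema for a payload that was previously typed as `any`
// (field names are illustrative, not taken from the codebase).
const UserSchema = z.object({
  id: z.string(),
  email: z.string().email(),
  roles: z.array(z.string()).default([]),
});

// The static type is derived from the schema, so it cannot drift.
type User = z.infer<typeof UserSchema>;

// Runtime type guard for boundaries such as HTTP responses or storage reads.
function isUser(value: unknown): value is User {
  return UserSchema.safeParse(value).success;
}
```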
### Proactive Skill Usage Framework
**"Proactive" means:**
1. **Detect task domain automatically** - Don't wait for user to say "use skill X"
2. **Invoke before starting work** - Load skill first, then execute
3. **Apply throughout task** - Keep skill active for entire domain work
4. **Validate with skill** - Use skill to review your own output
**Example - WRONG:**
```
User: "Add a new Angular component with a form"
Assistant: [Writes component without skills]
User: "Did you use the Angular template skill?"
Assistant: "Oh sorry, let me reload with the skill"
```
**Example - RIGHT:**
```
User: "Add a new Angular component with a form"
Assistant: [Invokes angular-template skill]
Assistant: [Invokes html-template skill]
Assistant: [Invokes logging skill]
Assistant: [Writes component following all skill guidelines]
```
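For illustration, a sketch of the kind of output the RIGHT flow could produce under those conventions: modern control flow from `angular-template`, `data-what` E2E attributes and ARIA from `html-template`. The `@isa/core/logging` integration is omitted because its API is project-specific; selector and field names are hypothetical.

```typescript
import { Component } from '@angular/core';
import { FormControl, ReactiveFormsModule, Validators } from '@angular/forms';

@Component({
  selector: 'app-email-form',
  standalone: true,
  imports: [ReactiveFormsModule],
  template: `
    <form data-what="email-form">
      <input type="email" [formControl]="email" data-what="email-input" />
      <!-- Modern control flow (@if) instead of *ngIf -->
      @if (email.invalid && email.touched) {
        <p data-what="email-error" role="alert">Please enter a valid email address.</p>
      }
      <button type="submit" data-what="submit-button" [disabled]="email.invalid">Save</button>
    </form>
  `,
})
export class EmailFormComponent {
  email = new FormControl('', [Validators.required, Validators.email]);
}
```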
### Skill Chaining & Coordination
**Multiple skills often apply to same task:**
| Task | Required Skill Chain | Order |
|------|---------------------|-------|
| New Angular component | `angular-template` → `html-template` → `logging` → `tailwind` | Template syntax → HTML attributes → Logging → Styling |
| Component with animations | `angular-template` → `html-template` → `css-keyframes-animations` → `logging` → `tailwind` | Template → HTML → Animations → Logging → Styling |
| New library | `library-scaffolder` → `architecture-enforcer` | Scaffold → Validate structure |
| API sync | `api-change-analyzer` → `swagger-sync-manager` | Analyze changes → Regenerate clients |
| Component migration | `standalone-component-migrator` → `test-migration-specialist` | Migrate component → Migrate tests |
**Skill chaining rules:**
- Load ALL applicable skills at task start (via Skill tool)
- Skills don't nest - they provide instructions you follow
- Skills stay active for entire task scope
- Validate final output against ALL loaded skills
### Skill Context Management
**Skills expand instructions into your context:**
- **DO**: Load skill → internalize rules → follow throughout task
- **DON'T**: Re-read skill instructions multiple times
- **DON'T**: Quote skill instructions back to user
- **DON'T**: Keep skill "open" after task completion
**After task completion:**
1. Verify work against skill requirements
2. Summarize what was applied (1 sentence)
3. Move on (skill context auto-clears next task)
### Skill Failure Handling
| Issue | Action |
|-------|--------|
| Skill not found | Verify skill name; ask user to check available skills |
| Skill conflicts with user request | Note conflict; ask user for preference |
| Multiple skills give conflicting rules | Follow most specific skill for current file type |
| Skill instructions unclear | Use best judgment; document assumption in code comment |
### Skills vs Agents - Decision Tree
```
Is this a code writing/reviewing task?
├─ YES → Check if skill exists for domain
│   ├─ Skill exists → Use Skill
│   └─ No skill → Use direct tools
└─ NO → Is this research/finding information?
    ├─ YES → Use Agent (docs-researcher/Explore)
    └─ NO → Use direct tools (Read/Edit/Bash)
```
## Researching and Investigating the Codebase
**🔴 MANDATORY: You MUST use subagents for research. Do NOT read or search files directly for research; the only exception is a single, specifically known file (see the table below).**
### Required Agent Usage
| Task Type | Required Agent | Escalation Path |
| --------------------------------- | ------------------ | ----------------------------------------- |
| **Package/Library Documentation** | `docs-researcher` | → `docs-researcher-advanced` if not found |
| **Internal Library READMEs** | `docs-researcher` | Keep context clean |
| **Monorepo Library Overview** | `docs-researcher` | Uses `docs/library-reference.md` |
| **Code Pattern Search** | `Explore` | Set thoroughness level |
| **Implementation Analysis** | `Explore` | Multiple file analysis |
| **Single Specific File** | Read tool directly | No agent needed |
**Note:** The `docs-researcher` agent uses `docs/library-reference.md` as a primary index for discovering monorepo libraries. This file contains descriptions and locations for all 72 libraries, enabling quick library discovery without reading individual READMEs.
### Documentation Research System (Two-Tier)
1. **ALWAYS start with `docs-researcher`** (Haiku, 30-120s) for any documentation need
2. **Auto-escalate to `docs-researcher-advanced`** (Sonnet, 2-7min) when:
   - Documentation not found
   - Conflicting sources
   - Need code inference
   - Complex architectural questions
### Enforcement Examples
```
❌ WRONG: Read libs/ui/buttons/README.md
✅ RIGHT: Task → docs-researcher → "Find documentation for @isa/ui/buttons"
❌ WRONG: Grep for "signalStore" patterns
✅ RIGHT: Task → Explore → "Find all signalStore implementations"
❌ WRONG: WebSearch for Zod documentation
✅ RIGHT: Task → docs-researcher → "Find Zod validation documentation"
```
**Remember: Using subagents is NOT optional - it's mandatory for maintaining context efficiency and search quality.**
## 🔴 CRITICAL: Context Management for Reliable Subagent Usage
**Context bloat kills reliability. You MUST follow these rules:**
### Context Preservation Rules
- **NEVER include full agent results in main conversation** - Summarize findings in 1-2 sentences
- **NEVER repeat information** - Once extracted, don't include raw agent output again
- **NEVER accumulate intermediate steps** - Keep only final answers/decisions
- **DISCARD immediately after use**: Raw JSON responses, full file listings, irrelevant search results
- **KEEP only**: Key findings, extracted values, decision rationale
### Agent Invocation Patterns
| Pattern | When to Use | Rules |
|---------|-----------|-------|
| **Sequential** | Agent 1 results inform Agent 2 | Wait for Agent 1 result before invoking Agent 2 |
| **Parallel** | Independent research needs | Max 2-3 agents in parallel; different domains only |
| **Escalation** | First agent insufficient | Invoke only if first agent returns "not found" or insufficient |
### Result Handling & Synthesis
**After each agent completes:**
1. Extract the specific answer needed (1-3 key points when possible)
2. Discard raw output from conversation context
3. If synthesizing multiple sources, create brief summary table/list
4. Reference sources only if user asks "where did you find this?"
**If result can't be summarized in 1-2 sentences:**
- Use **structured formats**: Tables, bullet lists, code blocks (not prose walls)
- Group by category/concept, not by source
- Include only information relevant to the current task
- Ask yourself: "Does the user need all this detail, or am I including 'just in case'?" → If just in case, cut it
**Example - WRONG:**
```
Docs researcher returned: [huge JSON with 100 properties...]
The relevant ones are X, Y, Z...
```
**Example - RIGHT (simple):**
```
docs-researcher found: The API supports async/await with TypeScript strict mode.
```
**Example - RIGHT (complex, structured):**
```
docs-researcher found migration requires 3 steps:
1. Update imports (see migration guide section 2.1)
2. Change type definitions (example in docs)
3. Update tests (patterns shown)
```
### Parallel Agent Execution
Use parallel execution (single message, multiple tool calls) ONLY when:
- Agents are researching **different domains** (e.g., Zod docs + Angular docs)
- Agents have **no dependencies** (neither result informs the other)
- Results will be **independently useful** to the user
NEVER parallel if: One agent's findings should guide the next agent's search.
### Session Coordination
- **One primary task focus** per session phase
- **Related agents run together** (e.g., all docs research at start)
- **Discard intermediate context** between task phases
- **Summarize phase results** before moving to implementation phase
## Edge Cases & Failure Handling
### Agent Failures & Timeouts
| Failure Type | Action | Fallback |
|-------------|--------|----------|
| **Timeout (>2min)** | Retry once with simpler query | Use direct tools if critical |
| **Error/Exception** | Check query syntax, retry with fix | Escalate to advanced agent |
| **Empty result** | Verify target exists first | Try alternative search terms |
| **Conflicting results** | Run third agent as tiebreaker | Present both with confidence levels |
### User Direction Changes
**If user pivots mid-research:**
1. STOP current agent chain immediately
2. Summarize what was found so far (1 sentence)
3. Ask: "Should I continue the original research or focus on [new direction]?"
4. Clear context from abandoned path
### Model Selection (Haiku vs Sonnet)
| Use Haiku for | Use Sonnet for |
|---------------|----------------|
| Single file lookups | Multi-file synthesis |
| Known documentation paths | Complex pattern analysis |
| <5 min expected time | Architectural decisions |
| Well-defined searches | Ambiguous requirements |
### Resume vs Fresh Agent
**Use resume parameter when:**
- Previous agent was interrupted by user
- Need to continue exact same search with more context
- Building on partial results from <5 min ago
**Start fresh when:**
- Different search angle needed
- Previous results >5 min old
- Switching between task types
### Result Validation
**Always validate when:**
- Version-specific documentation (check version matches project)
- Third-party APIs (verify against actual response)
- Migration guides (confirm source/target versions)
**Red flags requiring re-verification:**
- "Deprecated" warnings in results
- Dates older than 6 months
- Conflicting information between sources
### Context Overflow Management
**If even structured results exceed reasonable size:**
1. Create an index/TOC of findings
2. Show only the section relevant to immediate task
3. Offer: "I found [X] additional areas. Which would help most?"
4. Store details in agent memory for later retrieval
### Confidence Communication
**Always indicate confidence level when:**
- Documentation is outdated (>1 year)
- Multiple conflicting sources exist
- Inferring from code (no docs found)
- Using fallback methods
**Format:** `[High confidence]`, `[Medium confidence]`, `[Inferred from code]`
## Debug Mode & Special Scenarios
### When to Show Raw Agent Results
**ONLY expose raw results when:**
- User explicitly asks "show me the raw output"
- Debugging why an implementation isn't working
- Agent results contradict user's expectation significantly
- Need to prove source of information for audit/compliance
**Never for:** Routine queries, successful searches, standard documentation lookups
### Agent Chain Interruption
**If agent chain fails midway (e.g., agent 2 of 5):**
1. Report: "Research stopped at [step] due to [reason]"
2. Show completed findings (structured)
3. Ask: "Continue with partial info or try alternative approach?"
4. Never silently skip failed steps
### Performance Degradation Handling
| Symptom | Likely Cause | Action |
|---------|-------------|--------|
| Agent >3min | Complex search | Switch to simpler query or Haiku model |
| Multiple timeouts | API overload | Wait 30s, retry with rate limiting |
| Consistently empty results | Wrong domain | Verify project structure first |
### Circular Dependency Detection
**If Agent A needs B's result, and B needs A's:**
1. STOP - this indicates unclear requirements
2. Use AskUserQuestion to clarify which should be determined first
3. Document the decision in comments
### Result Caching Strategy
**Cache and reuse agent results when:**
- Same exact query within 5 minutes
- Documentation lookups (valid for session)
- Project structure analysis (valid until file changes)
**Always re-run when:**
- Error states being debugged
- User explicitly requests "check again"
- Any file modifications occurred
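A toy sketch of that rule, not an actual project utility: a session-scoped cache keyed by the exact query string with a 5-minute expiry.

```typescript
// Hypothetical helper illustrating the caching rule above.
const CACHE_TTL_MS = 5 * 60 * 1000; // "same exact query within 5 minutes"
const resultCache = new Map<string, { result: string; storedAt: number }>();

function getCachedResult(query: string): string | undefined {
  const entry = resultCache.get(query);
  if (!entry || Date.now() - entry.storedAt > CACHE_TTL_MS) {
    return undefined; // expired or never stored → re-run the agent
  }
  return entry.result;
}

function storeResult(query: string, result: string): void {
  resultCache.set(query, { result, storedAt: Date.now() });
}

function invalidateCache(): void {
  resultCache.clear(); // e.g. after any file modification or "check again"
}
```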
### Priority Conflicts
**When user request conflicts with best practices:**
1. Execute user request first (they have context you don't)
2. Note: "[Following user preference over standard pattern]"
3. Document why standard approach might differ
4. Never refuse based on "best practices" alone
## 🔴 CRITICAL: Tool Result Minimization
**Tool results are the #1 source of context bloat. After each tool execution, aggressively minimize what stays in context:**
### Bash Tool Results
**SUCCESS cases:**
- `✓ Command succeeded (exit 0)`
- `✓ npm install completed (23 packages added)`
- `✓ Tests passed: 45/45`
**FAILURE cases:**
- ✅ Keep exit code + error lines only (max 10 lines)
- ✅ Strip ANSI codes, progress bars, verbose output
- ❌ NEVER include full command output for successful operations
**Example transformations:**
```
❌ WRONG: [300 lines of npm install output with dependency tree]
✅ RIGHT: ✓ npm install completed (23 packages added)
❌ WRONG: [50 lines of test output with passing test names]
✅ RIGHT: ✓ Tests passed: 45/45
❌ WRONG: [Build output with webpack chunks and file sizes]
✅ RIGHT: ✓ Build succeeded in 12.3s (3 chunks, 2.1MB)
```
### Edit Tool Results
**SUCCESS cases:**
- `✓ Modified /path/to/file.ts`
- `✓ Updated 3 files: component.ts, service.ts, test.ts`
**FAILURE cases:**
- ✅ Show error message + line number
- ❌ NEVER show full file diffs for successful edits
**ONLY show full diff when:**
- User explicitly asks "what changed?"
- Edit failed and debugging needed
- Major refactoring requiring review
### Write Tool Results
- `✓ Created /path/to/new-file.ts (245 lines)`
- ❌ NEVER echo back full file content after writing
### Read Tool Results
**Extract only relevant sections:**
- ✅ Read file → extract function/class → discard rest
- ✅ Summarize: "File contains 3 components: A, B, C (lines 10-150)"
- ❌ NEVER keep full file in context after extraction
**Show full file ONLY when:**
- User explicitly requests it
- File < 50 lines
- Need complete context for complex refactoring
### Grep/Glob Results
- `Found in 5 files: auth.ts, user.ts, ...`
- ✅ Show matching lines if < 20 results
- ❌ NEVER include full file paths and line numbers for > 20 matches
### Skill Application Results
**After applying skill:**
- ✅ Replace full skill content with: `Applied [skill-name]: [checklist]`
- ✅ Example: `Applied logging: ✓ Factory pattern ✓ Lazy evaluation ✓ Context added`
- ❌ NEVER keep full skill instructions after application
**Skill compression format:**
```
Applied angular-template:
✓ Modern control flow (@if, @for, @switch)
✓ Template references (ng-template)
✓ Lazy loading (@defer)
```
### Agent Results
**Summarization requirements (already covered in previous section):**
- 1-2 sentences for simple queries
- Structured table/list for complex findings
- NEVER include raw JSON or full agent output
### Session Cleanup
**Use `/clear` between tasks when:**
- Switching to unrelated task
- Previous task completed successfully
- Context exceeds 80K tokens
- Starting new feature/bug after finishing previous
**Benefits of `/clear`:**
- Prevents irrelevant context from degrading performance
- Resets working memory for fresh focus
- Maintains only persistent context (CLAUDE.md, skills)
## Implementation Work: Agent vs Direct Implementation
**Context-efficient implementation requires choosing the right execution mode.**
### Decision Matrix
| Task Type | Files | Complexity | Duration | Use |
|-----------|-------|-----------|----------|-----|
| Single file edit | 1 | Low | < 5min | Main agent (direct) + aggressive pruning |
| Bug fix | 1-3 | Variable | < 10min | Main agent (direct) + aggressive pruning |
| New Angular code (component/service/store) | 2-5 | Medium | 10-20min | **angular-developer agent** |
| Test suite | Any | Medium | 10-20min | **test-writer agent** |
| Large refactor | 5+ | High | 20+ min | **refactor-engineer agent** |
| Migration work | 10+ | High | 30+ min | **refactor-engineer agent** |
### 🔴 Proactive Agent Invocation (Automatic)
**You MUST automatically invoke specialized agents when task characteristics match, WITHOUT waiting for explicit user request.**
**Automatic triggers:**
1. **User says: "Create [component/service/store/pipe/directive/guard]..."**
→ AUTOMATICALLY invoke `angular-developer` agent
→ Example: "Create user dashboard component with metrics" → Use angular-developer
2. **User says: "Write tests for..." OR "Add test coverage..."**
→ AUTOMATICALLY invoke `test-writer` agent
→ Example: "Write tests for the checkout service" → Use test-writer
3. **User says: "Refactor all..." OR "Migrate [X] files..." OR "Update pattern across..."**
→ AUTOMATICALLY invoke `refactor-engineer` agent
→ Example: "Migrate all checkout components to standalone" → Use refactor-engineer
4. **Task analysis indicates > 4 files will be touched**
→ AUTOMATICALLY suggest/use appropriate agent
→ Example: User asks to implement feature that needs component + service + store + routes → Use angular-developer
5. **User says: "Remember to..." OR "TODO:" OR "Don't forget..."**
→ AUTOMATICALLY invoke `context-manager` to store task
→ Store immediately in `.claude/context/tasks.json`
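The schema of `.claude/context/tasks.json` is not documented here; a purely hypothetical shape for a stored entry, only to make trigger 5 concrete:

```typescript
// Hypothetical — the real tasks.json schema is project-specific.
interface StoredTask {
  id: string;
  note: string;        // the user's "Remember to..." text, verbatim
  createdAt: string;   // ISO 8601 timestamp
  status: 'open' | 'done';
}

const example: StoredTask = {
  id: 'task-001',
  note: "Don't forget to update the checkout routes after the dashboard lands",
  createdAt: '2025-01-15T10:00:00+01:00',
  status: 'open',
};
```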
**Decision flow:**
```
User request received
Analyze task characteristics:
├─ Keywords match? (create, test, refactor, migrate)
├─ File count estimate? (1 = direct, 2-5 = angular-developer, 5+ = refactor-engineer)
├─ Task type? (implementation vs testing vs refactoring)
└─ Complexity? (simple = direct, medium = agent, high = agent + detailed response)
IF agent match found:
├─ Brief user: "I'll use [agent-name] for this task"
├─ Invoke agent with Task tool
└─ Validate result
ELSE:
└─ Implement directly with aggressive pruning
```
**Communication pattern:**
```
✅ CORRECT:
User: "Create a user profile component with avatar upload"
Assistant: "I'll use the angular-developer agent for this Angular feature implementation."
[Invokes angular-developer agent]
❌ WRONG:
User: "Create a user profile component with avatar upload"
Assistant: "I can help you create a component. What fields should it have?"
[Doesn't invoke agent, implements directly, wastes main context]
```
**Override mechanism:**
If user says "do it directly" or "don't use an agent", honor their preference:
```
User: "Create a simple component, do it directly please"
Assistant: "Understood, implementing directly."
[Does NOT invoke angular-developer]
```
**Response format default:**
- Use `response_format: "concise"` by default (context efficiency)
- Use `response_format: "detailed"` when:
  - User is learning/exploring
  - Debugging complex issues
  - User explicitly asks for details
  - Task is unusual/non-standard
### When Main Agent Implements Directly
**Use direct implementation for simple, focused tasks:**
1. **Apply aggressive pruning** (see Tool Result Minimization above)
2. **Use context-manager proactively**:
   - Store implementation state in MCP memory
   - Enable session resumption without context replay
3. **Compress conversation every 5-7 exchanges**:
   - Summarize: "Completed X, found Y, next: Z"
   - Discard intermediate exploration
4. **Use `/clear` after completion** to reset for next task
**Example flow:**
```
User: "Fix the auth bug in login.ts"
Assistant: [Reads file, identifies issue, applies fix]
Assistant: ✓ Modified login.ts
Assistant: ✓ Tests passed: 12/12
[Context used: ~3,000 tokens]
```
### When to Hand Off to Subagent
**Delegate to specialized agents for complex/multi-file work:**
**Triggers:**
- Task will take > 10 minutes
- Touching > 4 files
- Repetitive work (CRUD generation, migrations)
- User wants to multitask in main thread
- Implementation requires iterative debugging
**Available Implementation Agents:**
#### angular-developer
**Use for:** Angular code implementation (components, services, stores, pipes, directives, guards)
- Auto-loads: angular-template, html-template, logging, tailwind
- Handles: Components, services, stores, pipes, directives, guards, tests
- Output: 2-5 files created/modified
**Briefing template:**
```
Implement Angular [type]:
- Type: [component/service/store/pipe/directive/guard]
- Purpose: [description]
- Location: [path]
- Requirements: [list]
- Integration: [dependencies]
- Data flow: [if applicable]
```
#### test-writer
**Use for:** Test suite generation/expansion
- Auto-loads: test-migration-specialist patterns, Vitest config
- Handles: Unit tests, integration tests, mocking
- Output: Test files with comprehensive coverage
**Briefing template:**
```
Generate tests for:
- Target: [file path]
- Coverage: [unit/integration/e2e]
- Scenarios: [list of test cases]
- Mocking: [dependencies to mock]
```
#### refactor-engineer
**Use for:** Large-scale refactoring/migrations
- Auto-loads: architecture-enforcer, circular-dependency-resolver
- Handles: Multi-file refactoring, pattern migrations, architectural changes
- Output: 5+ files modified, validation report
**Briefing template:**
```
Refactor [scope]:
- Pattern: [old pattern] → [new pattern]
- Files: [list or glob pattern]
- Constraints: [architectural rules]
- Validation: [how to verify success]
```
### Agent Coordination Pattern
**Main agent responsibilities:**
1. **Planning**: Decompose request, choose agent
2. **Briefing**: Provide focused, complete requirements
3. **Validation**: Review agent output (summary only)
4. **Integration**: Ensure changes work together
**Implementation agent responsibilities:**
1. **Execution**: Load skills, implement changes
2. **Testing**: Run tests, fix errors
3. **Reporting**: Return summary + key files modified
4. **Context isolation**: Keep implementation details in own context
**Handoff protocol:**
```
Main Agent:
↓ Brief agent with requirements
Implementation Agent:
↓ Execute (skills loaded, iterative debugging)
↓ Return summary: "✓ Created 3 files, ✓ Tests pass (12/12)"
Main Agent:
↓ Validate summary
↓ Continue with next task
[Implementation details stayed in agent context]
```
### Parallel Work Pattern
**When user has multiple independent tasks:**
1. **Main agent** handles simple task directly
2. **Specialized agent** handles complex task in parallel
3. Both complete, results integrate
**Example:**
```
User: "Fix auth bug AND create new dashboard component"
Main Agent: Fix auth bug directly (simple, 1 file)
angular-developer: Create dashboard (complex, 4 files)
Both complete independently
```
### Context Savings Calculation
**Direct implementation (simple task):**
- Tool results (pruned): ~1,000 tokens
- Conversation: ~2,000 tokens
- Total: ~3,000 tokens
**Subagent delegation (complex task):**
- Briefing: ~1,500 tokens
- Summary result: ~500 tokens
- Total in main context: ~2,000 tokens
- (Agent's 15,000 tokens stay isolated)
**Net savings for complex task: ~13,000 tokens**
## Agent Design Principles (Anthropic Best Practices)
**Based on research from Anthropic's engineering blog, these principles guide our agent architecture.**
### Core Principles
**1. Simplicity First**
- Start with simple solutions before adding agents
- Only use agents when tasks are open-ended with unpredictable steps
- Avoid over-engineering with unnecessary frameworks
**2. Tool Quality > Prompt Quality**
- Anthropic spent MORE time optimizing tools than prompts for SWE-bench
- Small design choices (absolute vs relative paths) eliminate systematic errors
- Invest heavily in tool documentation and testing
**3. Ground Agents in Reality**
- Provide environmental feedback at each step (tool results, test output)
- Build feedback loops with checkpoints
- Validate progress before continuing
### Response Format Control
**All implementation agents support response format parameter:**
```
briefing:
response_format: "concise" # default, ~500 tokens
# or "detailed" # ~2000 tokens with explanations
```
**Concise** (default for context efficiency):
```
✓ Feature created: DashboardComponent
✓ Files: component.ts (150 lines), template (85 lines), tests (12/12 passing)
✓ Skills applied: angular-template, html-template, logging
```
**Detailed** (use when debugging or learning):
```
✓ Feature created: DashboardComponent
Implementation approach:
- Used signalStore for state management (withState + withComputed)
- Implemented reactive data loading with Resource API
- Template uses modern control flow (@if, @for)
Files created:
- component.ts (150 lines): Standalone component with inject() pattern
- component.html (85 lines): Modern syntax with E2E attributes
- component.spec.ts: 12 tests covering rendering, interactions, state
Key decisions:
- Chose Resource API over manual loading for better race condition handling
- Computed signals for derived state (no effects needed)
Integration notes:
- Requires UserMetricsService injection
- Routes need update: path 'dashboard' → DashboardComponent
```
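As a reference for the store pattern named in that report, a minimal sketch using the public `@ngrx/signals` API; the `UserMetric` shape and store name are illustrative, not taken from the codebase:

```typescript
import { computed } from '@angular/core';
import { patchState, signalStore, withComputed, withMethods, withState } from '@ngrx/signals';

interface UserMetric {
  id: string;
  value: number;
}

export const DashboardStore = signalStore(
  { providedIn: 'root' },
  // State lives in signals; no @ngrx/store reducers or effects involved.
  withState({ metrics: [] as UserMetric[], loading: false }),
  // Derived values are computed signals, recalculated only when inputs change.
  withComputed(({ metrics }) => ({
    total: computed(() => metrics().reduce((sum, m) => sum + m.value, 0)),
  })),
  // Methods mutate state via patchState instead of dispatched actions.
  withMethods((store) => ({
    setMetrics(metrics: UserMetric[]): void {
      patchState(store, { metrics, loading: false });
    },
    setLoading(loading: boolean): void {
      patchState(store, { loading });
    },
  })),
);
```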
### Environmental Feedback Pattern
**Agents should report progress at key milestones:**
```
Phase 1: Creating files...
✓ Created dashboard.component.ts (150 lines)
✓ Created dashboard.component.html (85 lines)
Phase 2: Running validation...
→ Running lint... ✓ No errors
→ Running tests... ⚠ 8/12 passing
Phase 3: Fixing failures...
→ Investigating test failures: Mock data missing for UserService
→ Adding mock setup... ✓ Fixed
→ Rerunning tests... ✓ 12/12 passing
Complete! Ready for review.
```
### Tool Documentation Standards
**Every agent must document:**
**When to Use** (boundaries):
```markdown
✅ Creating 2-5 related files (component + service + store)
❌ Single file edits (use main agent directly)
❌ >10 files (use refactor-engineer instead)
```
**Examples** (happy path):
```markdown
Example: "Create login component with form validation and auth service integration"
→ Generates: component.ts, component.html, component.spec.ts, auth.service.ts
```
**Edge Cases** (failure modes):
```markdown
⚠ If auth service exists, will reuse (not recreate)
⚠ If tests fail after 3 attempts, returns partial progress + blocker details
```
### Context Chunking Strategy
**When storing knowledge in `.claude/context/`, prepend contextual headers:**
**Before** (low retrieval accuracy):
```json
{
"description": "Use signalStore() with withState()"
}
```
**After** (high retrieval accuracy):
```json
{
"context": "This pattern is for NgRx Signal Store in ISA-Frontend Angular 20+ monorepo. Replaces @ngrx/store for feature state management.",
"description": "Use signalStore() with withState() for state, withComputed() for derived values, withMethods() for actions. NO effects for state propagation.",
"location": "All libs/**/data-access/ libraries",
"example": "export const UserStore = signalStore(withState({users: []}));"
}
```
**Chunk size**: 200-800 tokens per entry for optimal retrieval.
### Poka-Yoke (Error-Proofing) Design
**Make mistakes harder to make:**
- **Use absolute paths** (not relative): Eliminates path resolution errors
- **Validate inputs early**: Check file existence before operations
- **Provide sensible defaults**: response_format="concise", model="sonnet"
- **Actionable error messages**: "File not found. Did you mean: /path/to/similar-file.ts?"
- **Fail fast with rollback**: Stop on first error, report state, allow retry
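A small Node/TypeScript sketch combining these guards in one place; the helper and its suggestion heuristic are illustrative, not an existing utility:

```typescript
import { existsSync, readdirSync } from 'node:fs';
import { basename, dirname, isAbsolute, join } from 'node:path';

// Hypothetical input guard: absolute paths only, validate early,
// fail fast with an actionable "did you mean" suggestion.
function resolveInputFile(path: string): string {
  if (!isAbsolute(path)) {
    throw new Error(`Expected an absolute path, got "${path}".`);
  }
  if (existsSync(path)) {
    return path;
  }
  const dir = dirname(path);
  const stem = basename(path).toLowerCase().split('.')[0];
  const suggestion = existsSync(dir)
    ? readdirSync(dir).find((name) => name.toLowerCase().includes(stem))
    : undefined;
  throw new Error(
    `File not found: ${path}.` + (suggestion ? ` Did you mean: ${join(dir, suggestion)}?` : ''),
  );
}
```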
### Workflow Pattern: Orchestrator-Workers
**Our architecture uses the Orchestrator-Workers pattern:**
```
Main Agent (Orchestrator)
├─ Plans decomposition
├─ Chooses specialized worker
├─ Provides focused briefing
└─ Validates worker results
Worker Agents (angular-developer, test-writer, refactor-engineer)
├─ Execute with full autonomy
├─ Load relevant skills
├─ Iterate on errors internally
└─ Return concise summary
```
**Benefits:**
- Context isolation (worker details stay in worker context)
- Parallel execution (main + worker simultaneously)
- Specialization (each worker optimized for domain)
- Predictable communication (briefing → execution → summary)
### Token Efficiency Benchmark
**From Anthropic research: 98.7% token reduction via code execution**
Traditional tool-calling approach:
- 150,000 tokens (alternating LLM calls + tool results)
Code execution approach:
- 2,000 tokens (write code → execute in environment)
**ISA-Frontend application:**
- Use Bash for loops, filtering, transformations
- Use Edit for batch file changes (not individual tool calls per file)
- Use Grep/Glob for discovery (not Read every file)
- Prefer consolidated operations over step-by-step tool chains
### Context Strategy Threshold
**Based on Anthropic contextual retrieval research:**
- **< 200K tokens**: Put everything in prompt with caching (90% cost savings)
- **> 200K tokens**: Use retrieval system (our `.claude/context/` approach)
**ISA-Frontend**: ~500K+ tokens (Angular monorepo with 100+ libraries)
- ✅ File-based retrieval is correct choice
- ✅ Contextual headers improve retrieval accuracy by 35-67%
- ✅ Top-20 chunks with dual retrieval (semantic + lexical)
<!-- nx configuration start-->
<!-- Leave the start & end comments to automatically receive updates. -->
# General Guidelines for working with Nx
- When running tasks (for example build, lint, test, e2e, etc.), always prefer running the task through `nx` (i.e. `nx run`, `nx run-many`, `nx affected`) instead of using the underlying tooling directly
- You have access to the Nx MCP server and its tools, use them to help the user
- When answering questions about the repository, use the `nx_workspace` tool first to gain an understanding of the workspace architecture where applicable.
- When working in individual projects, use the `nx_project_details` mcp tool to analyze and understand the specific project structure and dependencies
- For questions around nx configuration, best practices or if you're unsure, use the `nx_docs` tool to get relevant, up-to-date docs. Always use this instead of assuming things about nx configuration
- If the user needs help with an Nx configuration or project graph error, use the `nx_workspace` tool to get any errors
<!-- nx configuration end-->