## Major Changes

**Agent System Overhaul:**
- ✨ Added 3 specialized implementation agents (angular-developer, test-writer, refactor-engineer)
- 🗑️ Removed 7 redundant agents (debugger, error-detective, deployment-engineer, prompt-engineer, search-specialist, technical-writer, ui-ux-designer)
- 📝 Updated all 9 agent descriptions with action-focused, PROACTIVELY-triggered patterns
- 🔧 Net reduction: 16 → 9 agents (44% reduction)

**Description Pattern Standardization:**
- **Agents**: "[Action] + what. Use PROACTIVELY when [specific triggers]. [Features]."
- **Skills**: "This skill should be used when [triggers]. [Capabilities]."
- Removed ambiguous "use proactively" without conditions
- Added measurable triggers (file counts, keywords, thresholds)

**CLAUDE.md Enhancements:**
- 📚 Added "Agent Design Principles" based on Anthropic research
- ⚡ Added "Proactive Agent Invocation" rules for automatic delegation
- 🎯 Added response format control (concise vs detailed)
- 🔄 Added environmental feedback patterns
- 🛡️ Added poka-yoke error-proofing guidelines
- 📊 Added token efficiency benchmarks (98.7% reduction via code execution)
- 🗂️ Added context chunking strategy for retrieval
- 🏗️ Documented Orchestrator-Workers pattern

**Context Management:**
- 🔄 Converted context-manager from MCP memory to file-based (.claude/context/)
- Added implementation-state tracking for session resumption
- Team-shared context in git (not personal MCP storage)

**Skills Updated (5):**
- api-change-analyzer: Condensed, added trigger keywords
- architecture-enforcer: Standardized "This skill should be used when"
- circular-dependency-resolver: Added build failure triggers
- git-workflow: Added missing trigger keywords
- library-scaffolder: Condensed implementation details

## Expected Impact

**Context Efficiency:**
- 15,000-20,000 tokens saved per task (aggressive pruning)
- 25,000-35,000 tokens saved per complex task (agent isolation)
- 2-3x more work capacity per session

**Automatic Invocation:**
- Main agent now auto-invokes specialized agents based on keywords
- Clear boundaries prevent wrong agent selection
- Response format gives user control over detail level

**Based on Anthropic Research:**
- Building Effective Agents
- Writing Tools for Agents
- Code Execution with MCP
- Contextual Retrieval
# CLAUDE.md
This file contains meta-instructions for how Claude should work with the ISA-Frontend codebase.
## 🔴 CRITICAL: Mandatory Agent Usage

You MUST use these subagents for ALL research and knowledge management tasks:

- **`docs-researcher`**: For ALL documentation (packages, libraries, READMEs)
- **`docs-researcher-advanced`**: Auto-escalate when docs-researcher fails
- **`Explore`**: For ALL code pattern searches and multi-file analysis

Violations of this rule degrade performance and context quality. NO EXCEPTIONS.
## Communication Guidelines

Keep answers concise and focused:
- Provide direct, actionable responses without unnecessary elaboration
- Skip verbose explanations unless specifically requested
- Focus on what the user needs to know, not everything you know
- Use bullet points and structured formatting for clarity
- Only provide detailed explanations when complexity requires it
## 🔴 CRITICAL: Mandatory Skill Usage

Skills are project-specific tools that MUST be used proactively for their domains.

### Skill vs Agent vs Direct Tools
| Tool Type | Purpose | When to Use | Context Management |
|---|---|---|---|
| Skills | Domain-specific workflows (Angular, testing, architecture) | Writing/reviewing code in skill's domain | Load skill → follow instructions → unload |
| Agents | Research & knowledge gathering | Finding docs, searching code, analysis | Use agent → extract findings → discard output |
| Direct Tools | Single file operations | Reading specific known files | Use tool → process → done |
### Mandatory Skill Invocation Rules

ALWAYS invoke skills when:
| Trigger | Required Skill | Why |
|---|---|---|
| Writing Angular templates | `angular-template` | Modern syntax (@if, @for, @defer) |
| Writing HTML with interactivity | `html-template` | E2E attributes (data-what, data-which) + ARIA |
| Applying Tailwind classes | `tailwind` | Design system consistency |
| Writing Angular code | `logging` | Mandatory logging via @isa/core/logging |
| Creating new library | `library-scaffolder` | Proper Nx setup + Vitest config |
| Regenerating API clients | `swagger-sync-manager` | All 10 clients + validation |
| Migrating to standalone | `standalone-component-migrator` | Complete migration workflow |
| Migrating tests to Vitest | `test-migration-specialist` | Jest→Vitest conversion |
| Fixing `any` types | `type-safety-engineer` | Add Zod schemas + type guards |
| Checking architecture | `architecture-enforcer` | Import boundaries + circular deps |
| Resolving circular deps | `circular-dependency-resolver` | Graph analysis + fix strategies |
| API changes analysis | `api-change-analyzer` | Breaking change detection |
| Git workflow | `git-workflow` | Branch naming + conventional commits |
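To make the top rows concrete, here is a minimal sketch of a component that satisfies the `angular-template`, `html-template`, and `logging` triggers at once. The `@isa/core/logging` factory name and the exact E2E attribute semantics are assumptions inferred from the table above, not verified APIs:

```typescript
import { Component, signal } from '@angular/core';
// Hypothetical import: the real factory exported by @isa/core/logging may differ.
import { loggerFactory } from '@isa/core/logging';

@Component({
  selector: 'isa-user-list',
  standalone: true,
  // angular-template skill: modern control flow (@if/@for), no *ngIf/*ngFor.
  // html-template skill: data-what/data-which E2E attributes on rendered elements.
  template: `
    @if (users().length > 0) {
      <ul data-what="list" data-which="users">
        @for (user of users(); track user.id) {
          <li>{{ user.name }}</li>
        }
      </ul>
    } @else {
      <p data-what="message" data-which="users-empty">No users found.</p>
    }
  `,
})
export class UserListComponent {
  // logging skill: logger from the shared factory instead of console.log.
  private readonly logger = loggerFactory('UserListComponent');

  readonly users = signal<Array<{ id: number; name: string }>>([]);

  setUsers(users: Array<{ id: number; name: string }>): void {
    this.logger.info('Rendering user list'); // assumed logger method name
    this.users.set(users);
  }
}
```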
### Proactive Skill Usage Framework

"Proactive" means:

- **Detect task domain automatically** - Don't wait for user to say "use skill X"
- **Invoke before starting work** - Load skill first, then execute
- **Apply throughout task** - Keep skill active for entire domain work
- **Validate with skill** - Use skill to review your own output
**Example - WRONG:**

```
User: "Add a new Angular component with a form"
Assistant: [Writes component without skills]
User: "Did you use the Angular template skill?"
Assistant: "Oh sorry, let me reload with the skill"
```

**Example - RIGHT:**

```
User: "Add a new Angular component with a form"
Assistant: [Invokes angular-template skill]
Assistant: [Invokes html-template skill]
Assistant: [Invokes logging skill]
Assistant: [Writes component following all skill guidelines]
```
### Skill Chaining & Coordination

Multiple skills often apply to the same task:

| Task | Required Skill Chain | Order |
|---|---|---|
| New Angular component | `angular-template` → `html-template` → `logging` → `tailwind` | Template syntax → HTML attributes → Logging → Styling |
| New library | `library-scaffolder` → `architecture-enforcer` | Scaffold → Validate structure |
| API sync | `api-change-analyzer` → `swagger-sync-manager` | Analyze changes → Regenerate clients |
| Component migration | `standalone-component-migrator` → `test-migration-specialist` | Migrate component → Migrate tests |
Skill chaining rules:
- Load ALL applicable skills at task start (via Skill tool)
- Skills don't nest - they provide instructions you follow
- Skills stay active for entire task scope
- Validate final output against ALL loaded skills
### Skill Context Management
Skills expand instructions into your context:
- ✅ DO: Load skill → internalize rules → follow throughout task
- ❌ DON'T: Re-read skill instructions multiple times
- ❌ DON'T: Quote skill instructions back to user
- ❌ DON'T: Keep skill "open" after task completion
After task completion:
- Verify work against skill requirements
- Summarize what was applied (1 sentence)
- Move on (skill context auto-clears next task)
### Skill Failure Handling
| Issue | Action |
|---|---|
| Skill not found | Verify skill name; ask user to check available skills |
| Skill conflicts with user request | Note conflict; ask user for preference |
| Multiple skills give conflicting rules | Follow most specific skill for current file type |
| Skill instructions unclear | Use best judgment; document assumption in code comment |
### Skills vs Agents - Decision Tree

```
Is this a code writing/reviewing task?
├─ YES → Check if skill exists for domain
│   ├─ Skill exists → Use Skill
│   └─ No skill → Use direct tools
└─ NO → Is this research/finding information?
    ├─ YES → Use Agent (docs-researcher/Explore)
    └─ NO → Use direct tools (Read/Edit/Bash)
```
## Researching and Investigating the Codebase

🔴 MANDATORY: You MUST use subagents for research. Do NOT read or search files directly for research tasks.
### Required Agent Usage

| Task Type | Required Agent | Escalation Path |
|---|---|---|
| Package/Library Documentation | `docs-researcher` | → `docs-researcher-advanced` if not found |
| Internal Library READMEs | `docs-researcher` | Keep context clean |
| Code Pattern Search | `Explore` | Set thoroughness level |
| Implementation Analysis | `Explore` | Multiple file analysis |
| Single Specific File | Read tool directly | No agent needed |
### Documentation Research System (Two-Tier)

1. ALWAYS start with `docs-researcher` (Haiku, 30-120s) for any documentation need
2. Auto-escalate to `docs-researcher-advanced` (Sonnet, 2-7min) when:
   - Documentation not found
   - Conflicting sources
   - Need code inference
   - Complex architectural questions
### Enforcement Examples
❌ WRONG: Read libs/ui/buttons/README.md
✅ RIGHT: Task → docs-researcher → "Find documentation for @isa/ui/buttons"
❌ WRONG: Grep for "signalStore" patterns
✅ RIGHT: Task → Explore → "Find all signalStore implementations"
❌ WRONG: WebSearch for Zod documentation
✅ RIGHT: Task → docs-researcher → "Find Zod validation documentation"
Remember: Using subagents is NOT optional - it's mandatory for maintaining context efficiency and search quality.
## 🔴 CRITICAL: Context Management for Reliable Subagent Usage

Context bloat kills reliability. You MUST follow these rules:

### Context Preservation Rules
- NEVER include full agent results in main conversation - Summarize findings in 1-2 sentences
- NEVER repeat information - Once extracted, don't include raw agent output again
- NEVER accumulate intermediate steps - Keep only final answers/decisions
- DISCARD immediately after use: Raw JSON responses, full file listings, irrelevant search results
- KEEP only: Key findings, extracted values, decision rationale
### Agent Invocation Patterns
| Pattern | When to Use | Rules |
|---|---|---|
| Sequential | Agent 1 results inform Agent 2 | Wait for Agent 1 result before invoking Agent 2 |
| Parallel | Independent research needs | Max 2-3 agents in parallel; different domains only |
| Escalation | First agent insufficient | Invoke only if first agent returns "not found" or insufficient |
### Result Handling & Synthesis
After each agent completes:
- Extract the specific answer needed (1-3 key points when possible)
- Discard raw output from conversation context
- If synthesizing multiple sources, create brief summary table/list
- Reference sources only if user asks "where did you find this?"
If result can't be summarized in 1-2 sentences:
- Use structured formats: Tables, bullet lists, code blocks (not prose walls)
- Group by category/concept, not by source
- Include only information relevant to the current task
- Ask yourself: "Does the user need all this detail, or am I including 'just in case'?" → If just in case, cut it
**Example - WRONG:**

```
Docs researcher returned: [huge JSON with 100 properties...]
The relevant ones are X, Y, Z...
```

**Example - RIGHT (simple):**

```
docs-researcher found: The API supports async/await with TypeScript strict mode.
```

**Example - RIGHT (complex, structured):**

```
docs-researcher found migration requires 3 steps:
1. Update imports (see migration guide section 2.1)
2. Change type definitions (example in docs)
3. Update tests (patterns shown)
```
### Parallel Agent Execution
Use parallel execution (single message, multiple tool calls) ONLY when:
- Agents are researching different domains (e.g., Zod docs + Angular docs)
- Agents have no dependencies (neither result informs the other)
- Results will be independently useful to the user
NEVER parallel if: One agent's findings should guide the next agent's search.
### Session Coordination
- One primary task focus per session phase
- Related agents run together (e.g., all docs research at start)
- Discard intermediate context between task phases
- Summarize phase results before moving to implementation phase
### Edge Cases & Failure Handling

#### Agent Failures & Timeouts
| Failure Type | Action | Fallback |
|---|---|---|
| Timeout (>2min) | Retry once with simpler query | Use direct tools if critical |
| Error/Exception | Check query syntax, retry with fix | Escalate to advanced agent |
| Empty result | Verify target exists first | Try alternative search terms |
| Conflicting results | Run third agent as tiebreaker | Present both with confidence levels |
#### User Direction Changes
If user pivots mid-research:
- STOP current agent chain immediately
- Summarize what was found so far (1 sentence)
- Ask: "Should I continue the original research or focus on [new direction]?"
- Clear context from abandoned path
#### Model Selection (Haiku vs Sonnet)
| Use Haiku for | Use Sonnet for |
|---|---|
| Single file lookups | Multi-file synthesis |
| Known documentation paths | Complex pattern analysis |
| <5 min expected time | Architectural decisions |
| Well-defined searches | Ambiguous requirements |
#### Resume vs Fresh Agent

Use the `resume` parameter when:
- Previous agent was interrupted by user
- Need to continue exact same search with more context
- Building on partial results from <5 min ago
Start fresh when:
- Different search angle needed
- Previous results >5 min old
- Switching between task types
#### Result Validation
Always validate when:
- Version-specific documentation (check version matches project)
- Third-party APIs (verify against actual response)
- Migration guides (confirm source/target versions)
Red flags requiring re-verification:
- "Deprecated" warnings in results
- Dates older than 6 months
- Conflicting information between sources
#### Context Overflow Management
If even structured results exceed reasonable size:
- Create an index/TOC of findings
- Show only the section relevant to immediate task
- Offer: "I found [X] additional areas. Which would help most?"
- Store details in agent memory for later retrieval
#### Confidence Communication
Always indicate confidence level when:
- Documentation is outdated (>1 year)
- Multiple conflicting sources exist
- Inferring from code (no docs found)
- Using fallback methods
Format: [High confidence], [Medium confidence], [Inferred from code]
### Debug Mode & Special Scenarios

#### When to Show Raw Agent Results
ONLY expose raw results when:
- User explicitly asks "show me the raw output"
- Debugging why an implementation isn't working
- Agent results contradict user's expectation significantly
- Need to prove source of information for audit/compliance
Never for: Routine queries, successful searches, standard documentation lookups
#### Agent Chain Interruption
If agent chain fails midway (e.g., agent 2 of 5):
- Report: "Research stopped at [step] due to [reason]"
- Show completed findings (structured)
- Ask: "Continue with partial info or try alternative approach?"
- Never silently skip failed steps
#### Performance Degradation Handling
| Symptom | Likely Cause | Action |
|---|---|---|
| Agent >3min | Complex search | Switch to simpler query or Haiku model |
| Multiple timeouts | API overload | Wait 30s, retry with rate limiting |
| Consistent empties | Wrong domain | Verify project structure first |
#### Circular Dependency Detection
If Agent A needs B's result, and B needs A's:
- STOP - this indicates unclear requirements
- Use AskUserQuestion to clarify which should be determined first
- Document the decision in comments
#### Result Caching Strategy
Cache and reuse agent results when:
- Same exact query within 5 minutes
- Documentation lookups (valid for session)
- Project structure analysis (valid until file changes)
Always re-run when:
- Error states being debugged
- User explicitly requests "check again"
- Any file modifications occurred
#### Priority Conflicts
When user request conflicts with best practices:
- Execute user request first (they have context you don't)
- Note: "[Following user preference over standard pattern]"
- Document why standard approach might differ
- Never refuse based on "best practices" alone
## 🔴 CRITICAL: Tool Result Minimization

Tool results are the #1 source of context bloat. After each tool execution, aggressively minimize what stays in context:

### Bash Tool Results
**SUCCESS cases:**
- ✅ `✓ Command succeeded (exit 0)`
- ✅ `✓ npm install completed (23 packages added)`
- ✅ `✓ Tests passed: 45/45`

**FAILURE cases:**
- ✅ Keep exit code + error lines only (max 10 lines)
- ✅ Strip ANSI codes, progress bars, verbose output
- ❌ NEVER include full command output for successful operations
Example transformations:
❌ WRONG: [300 lines of npm install output with dependency tree]
✅ RIGHT: ✓ npm install completed (23 packages added)
❌ WRONG: [50 lines of test output with passing test names]
✅ RIGHT: ✓ Tests passed: 45/45
❌ WRONG: [Build output with webpack chunks and file sizes]
✅ RIGHT: ✓ Build succeeded in 12.3s (3 chunks, 2.1MB)
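If this compression were automated, a sketch might look like the following (purely illustrative; in practice the agent performs this reduction while summarizing, and the success heuristics here are assumptions):

```typescript
// Illustrative compressor for command output; pattern matching is an assumption.
function summarizeToolResult(command: string, exitCode: number, output: string): string {
  if (exitCode !== 0) {
    // Failure: keep only the tail of the error output (max 10 lines), ANSI codes stripped.
    const stripped = output.replace(/\x1b\[[0-9;]*m/g, '');
    const errorLines = stripped.trim().split('\n').slice(-10).join('\n');
    return `✗ ${command} failed (exit ${exitCode})\n${errorLines}`;
  }
  // Success: a one-line confirmation is all that stays in context.
  const match = output.match(/added (\d+) packages?/);
  return match
    ? `✓ ${command} completed (${match[1]} packages added)`
    : `✓ ${command} succeeded (exit 0)`;
}
```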
### Edit Tool Results

**SUCCESS cases:**
- ✅ `✓ Modified /path/to/file.ts`
- ✅ `✓ Updated 3 files: component.ts, service.ts, test.ts`

**FAILURE cases:**
- ✅ Show error message + line number
- ❌ NEVER show full file diffs for successful edits
ONLY show full diff when:
- User explicitly asks "what changed?"
- Edit failed and debugging needed
- Major refactoring requiring review
### Write Tool Results

- ✅ `✓ Created /path/to/new-file.ts (245 lines)`
- ❌ NEVER echo back full file content after writing
### Read Tool Results
Extract only relevant sections:
- ✅ Read file → extract function/class → discard rest
- ✅ Summarize: "File contains 3 components: A, B, C (lines 10-150)"
- ❌ NEVER keep full file in context after extraction
Show full file ONLY when:
- User explicitly requests it
- File < 50 lines
- Need complete context for complex refactoring
### Grep/Glob Results

- ✅ `Found in 5 files: auth.ts, user.ts, ...`
- ✅ Show matching lines if < 20 results
- ❌ NEVER include full file paths and line numbers for > 20 matches
### Skill Application Results

After applying a skill:
- ✅ Replace full skill content with: `Applied [skill-name]: [checklist]`
- ✅ Example: `Applied logging: ✓ Factory pattern ✓ Lazy evaluation ✓ Context added`
- ❌ NEVER keep full skill instructions after application

**Skill compression format:**

```
Applied angular-template:
✓ Modern control flow (@if, @for, @switch)
✓ Template references (ng-template)
✓ Lazy loading (@defer)
```
### Agent Results
Summarization requirements (already covered in previous section):
- 1-2 sentences for simple queries
- Structured table/list for complex findings
- NEVER include raw JSON or full agent output
### Session Cleanup

Use `/clear` between tasks when:
- Switching to unrelated task
- Previous task completed successfully
- Context exceeds 80K tokens
- Starting new feature/bug after finishing previous
**Benefits of `/clear`:**
- Prevents irrelevant context from degrading performance
- Resets working memory for fresh focus
- Maintains only persistent context (CLAUDE.md, skills)
## Implementation Work: Agent vs Direct Implementation

Context-efficient implementation requires choosing the right execution mode.

### Decision Matrix
| Task Type | Files | Complexity | Duration | Use |
|---|---|---|---|---|
| Single file edit | 1 | Low | < 5min | Main agent (direct) + aggressive pruning |
| Bug fix | 1-3 | Variable | < 10min | Main agent (direct) + aggressive pruning |
| New Angular code (component/service/store) | 2-5 | Medium | 10-20min | angular-developer agent |
| Test suite | Any | Medium | 10-20min | test-writer agent |
| Large refactor | 5+ | High | 20+ min | refactor-engineer agent |
| Migration work | 10+ | High | 30+ min | refactor-engineer agent |
### 🔴 Proactive Agent Invocation (Automatic)
You MUST automatically invoke specialized agents when task characteristics match, WITHOUT waiting for explicit user request.
**Automatic triggers:**

1. User says: "Create [component/service/store/pipe/directive/guard]..."
   → AUTOMATICALLY invoke `angular-developer` agent
   → Example: "Create user dashboard component with metrics" → Use angular-developer

2. User says: "Write tests for..." OR "Add test coverage..."
   → AUTOMATICALLY invoke `test-writer` agent
   → Example: "Write tests for the checkout service" → Use test-writer

3. User says: "Refactor all..." OR "Migrate [X] files..." OR "Update pattern across..."
   → AUTOMATICALLY invoke `refactor-engineer` agent
   → Example: "Migrate all checkout components to standalone" → Use refactor-engineer

4. Task analysis indicates > 4 files will be touched
   → AUTOMATICALLY suggest/use appropriate agent
   → Example: User asks to implement a feature that needs component + service + store + routes → Use angular-developer

5. User says: "Remember to..." OR "TODO:" OR "Don't forget..."
   → AUTOMATICALLY invoke `context-manager` to store the task
   → Store immediately in `.claude/context/tasks.json`
**Decision flow:**

```
User request received
  ↓
Analyze task characteristics:
  ├─ Keywords match? (create, test, refactor, migrate)
  ├─ File count estimate? (1 = direct, 2-5 = angular-developer, 5+ = refactor-engineer)
  ├─ Task type? (implementation vs testing vs refactoring)
  └─ Complexity? (simple = direct, medium = agent, high = agent + detailed response)
  ↓
IF agent match found:
  ├─ Brief user: "I'll use [agent-name] for this task"
  ├─ Invoke agent with Task tool
  └─ Validate result
ELSE:
  └─ Implement directly with aggressive pruning
```
**Communication pattern:**

✅ CORRECT:

```
User: "Create a user profile component with avatar upload"
Assistant: "I'll use the angular-developer agent for this Angular feature implementation."
[Invokes angular-developer agent]
```

❌ WRONG:

```
User: "Create a user profile component with avatar upload"
Assistant: "I can help you create a component. What fields should it have?"
[Doesn't invoke agent, implements directly, wastes main context]
```
**Override mechanism:**

If the user says "do it directly" or "don't use an agent", honor their preference:

```
User: "Create a simple component, do it directly please"
Assistant: "Understood, implementing directly."
[Does NOT invoke angular-developer]
```
**Response format default:**

- Use `response_format: "concise"` by default (context efficiency)
- Use `response_format: "detailed"` when:
  - User is learning/exploring
  - Debugging complex issues
  - User explicitly asks for details
  - Task is unusual/non-standard
### When Main Agent Implements Directly

Use direct implementation for simple, focused tasks:

- Apply aggressive pruning (see Tool Result Minimization above)
- Use context-manager proactively:
  - Store implementation state in `.claude/context/` (file-based)
  - Enable session resumption without context replay
- Compress conversation every 5-7 exchanges:
  - Summarize: "Completed X, found Y, next: Z"
  - Discard intermediate exploration
- Use `/clear` after completion to reset for the next task
**Example flow:**

```
User: "Fix the auth bug in login.ts"
Assistant: [Reads file, identifies issue, applies fix]
Assistant: ✓ Modified login.ts
Assistant: ✓ Tests passed: 12/12
[Context used: ~3,000 tokens]
```
### When to Hand Off to Subagent
Delegate to specialized agents for complex/multi-file work:
Triggers:
- Task will take > 10 minutes
- Touching > 4 files
- Repetitive work (CRUD generation, migrations)
- User wants to multitask in main thread
- Implementation requires iterative debugging
**Available Implementation Agents:**

#### angular-developer
Use for: Angular code implementation (components, services, stores, pipes, directives, guards)
- Auto-loads: angular-template, html-template, logging, tailwind
- Handles: Components, services, stores, pipes, directives, guards, tests
- Output: 2-5 files created/modified
**Briefing template:**

```
Implement Angular [type]:
- Type: [component/service/store/pipe/directive/guard]
- Purpose: [description]
- Location: [path]
- Requirements: [list]
- Integration: [dependencies]
- Data flow: [if applicable]
```
#### test-writer
Use for: Test suite generation/expansion
- Auto-loads: test-migration-specialist patterns, Vitest config
- Handles: Unit tests, integration tests, mocking
- Output: Test files with comprehensive coverage
**Briefing template:**

```
Generate tests for:
- Target: [file path]
- Coverage: [unit/integration/e2e]
- Scenarios: [list of test cases]
- Mocking: [dependencies to mock]
```
#### refactor-engineer
Use for: Large-scale refactoring/migrations
- Auto-loads: architecture-enforcer, circular-dependency-resolver
- Handles: Multi-file refactoring, pattern migrations, architectural changes
- Output: 5+ files modified, validation report
**Briefing template:**

```
Refactor [scope]:
- Pattern: [old pattern] → [new pattern]
- Files: [list or glob pattern]
- Constraints: [architectural rules]
- Validation: [how to verify success]
```
### Agent Coordination Pattern
Main agent responsibilities:
- Planning: Decompose request, choose agent
- Briefing: Provide focused, complete requirements
- Validation: Review agent output (summary only)
- Integration: Ensure changes work together
Implementation agent responsibilities:
- Execution: Load skills, implement changes
- Testing: Run tests, fix errors
- Reporting: Return summary + key files modified
- Context isolation: Keep implementation details in own context
**Handoff protocol:**

```
Main Agent:
  ↓ Brief agent with requirements
Implementation Agent:
  ↓ Execute (skills loaded, iterative debugging)
  ↓ Return summary: "✓ Created 3 files, ✓ Tests pass (12/12)"
Main Agent:
  ↓ Validate summary
  ↓ Continue with next task

[Implementation details stayed in agent context]
```
### Parallel Work Pattern
When user has multiple independent tasks:
- Main agent handles simple task directly
- Specialized agent handles complex task in parallel
- Both complete, results integrate
**Example:**

```
User: "Fix auth bug AND create new dashboard component"
Main Agent: Fix auth bug directly (simple, 1 file)
angular-developer: Create dashboard (complex, 4 files)
Both complete independently
```
### Context Savings Calculation
Direct implementation (simple task):
- Tool results (pruned): ~1,000 tokens
- Conversation: ~2,000 tokens
- Total: ~3,000 tokens
Subagent delegation (complex task):
- Briefing: ~1,500 tokens
- Summary result: ~500 tokens
- Total in main context: ~2,000 tokens
- (Agent's 15,000 tokens stay isolated)
Net savings for complex task: ~13,000 tokens
## Agent Design Principles (Anthropic Best Practices)

Based on research from Anthropic's engineering blog, these principles guide our agent architecture.

### Core Principles

#### 1. Simplicity First
- Start with simple solutions before adding agents
- Only use agents when tasks are open-ended with unpredictable steps
- Avoid over-engineering with unnecessary frameworks

#### 2. Tool Quality > Prompt Quality
- Anthropic spent MORE time optimizing tools than prompts for SWE-bench
- Small design choices (absolute vs relative paths) eliminate systematic errors
- Invest heavily in tool documentation and testing

#### 3. Ground Agents in Reality
- Provide environmental feedback at each step (tool results, test output)
- Build feedback loops with checkpoints
- Validate progress before continuing
### Response Format Control

All implementation agents support a response format parameter:

```yaml
briefing:
  response_format: "concise"   # default, ~500 tokens
  # or "detailed"              # ~2000 tokens with explanations
```
**Concise (default for context efficiency):**

```
✓ Feature created: DashboardComponent
✓ Files: component.ts (150 lines), template (85 lines), tests (12/12 passing)
✓ Skills applied: angular-template, html-template, logging
```

**Detailed (use when debugging or learning):**

```
✓ Feature created: DashboardComponent

Implementation approach:
- Used signalStore for state management (withState + withComputed)
- Implemented reactive data loading with Resource API
- Template uses modern control flow (@if, @for)

Files created:
- component.ts (150 lines): Standalone component with inject() pattern
- component.html (85 lines): Modern syntax with E2E attributes
- component.spec.ts: 12 tests covering rendering, interactions, state

Key decisions:
- Chose Resource API over manual loading for better race condition handling
- Computed signals for derived state (no effects needed)

Integration notes:
- Requires UserMetricsService injection
- Routes need update: path 'dashboard' → DashboardComponent
```
### Environmental Feedback Pattern

Agents should report progress at key milestones:

```
Phase 1: Creating files...
✓ Created dashboard.component.ts (150 lines)
✓ Created dashboard.component.html (85 lines)

Phase 2: Running validation...
→ Running lint... ✓ No errors
→ Running tests... ⚠ 8/12 passing

Phase 3: Fixing failures...
→ Investigating test failures: Mock data missing for UserService
→ Adding mock setup... ✓ Fixed
→ Rerunning tests... ✓ 12/12 passing

Complete! Ready for review.
```
### Tool Documentation Standards

Every agent must document:

**When to Use (boundaries):**

```
✅ Creating 2-5 related files (component + service + store)
❌ Single file edits (use main agent directly)
❌ >10 files (use refactor-engineer instead)
```

**Examples (happy path):**

```
Example: "Create login component with form validation and auth service integration"
→ Generates: component.ts, component.html, component.spec.ts, auth.service.ts
```

**Edge Cases (failure modes):**

```
⚠ If auth service exists, will reuse (not recreate)
⚠ If tests fail after 3 attempts, returns partial progress + blocker details
```
### Context Chunking Strategy

When storing knowledge in `.claude/context/`, prepend contextual headers:

**Before (low retrieval accuracy):**

```json
{
  "description": "Use signalStore() with withState()"
}
```

**After (high retrieval accuracy):**

```json
{
  "context": "This pattern is for NgRx Signal Store in ISA-Frontend Angular 20+ monorepo. Replaces @ngrx/store for feature state management.",
  "description": "Use signalStore() with withState() for state, withComputed() for derived values, withMethods() for actions. NO effects for state propagation.",
  "location": "All libs/**/data-access/ libraries",
  "example": "export const UserStore = signalStore(withState({users: []}));"
}
```

**Chunk size:** 200-800 tokens per entry for optimal retrieval.
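A small helper along these lines could enforce the header convention when writing chunks (a sketch only; the one-file-per-chunk layout under `.claude/context/` and the field names are assumptions taken from the example above):

```typescript
import { writeFileSync, mkdirSync } from 'node:fs';
import { join } from 'node:path';

// Shape mirrors the "after" example above; field names are assumptions.
interface ContextChunk {
  context: string;      // contextual header: which system/version this applies to
  description: string;  // the actual knowledge being stored
  location?: string;    // where in the repo the pattern applies
  example?: string;     // short code example, kept well under the 800-token ceiling
}

function storeContextChunk(name: string, chunk: ContextChunk): void {
  const dir = join(process.cwd(), '.claude', 'context');
  mkdirSync(dir, { recursive: true });
  // One JSON file per chunk keeps entries individually retrievable.
  writeFileSync(join(dir, `${name}.json`), JSON.stringify(chunk, null, 2));
}

storeContextChunk('signal-store-pattern', {
  context: 'NgRx Signal Store in ISA-Frontend Angular 20+ monorepo.',
  description: 'Use signalStore() with withState(), withComputed(), withMethods().',
  location: 'All libs/**/data-access/ libraries',
});
```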
### Poka-Yoke (Error-Proofing) Design

Make mistakes harder to make:
- ✅ Use absolute paths (not relative): Eliminates path resolution errors
- ✅ Validate inputs early: Check file existence before operations
- ✅ Provide sensible defaults: response_format="concise", model="sonnet"
- ✅ Actionable error messages: "File not found. Did you mean: /path/to/similar-file.ts?"
- ✅ Fail fast with rollback: Stop on first error, report state, allow retry
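As an illustration of the "validate early, fail with an actionable message" rules above (a sketch; the similar-file suggestion heuristic is invented for the example):

```typescript
import { existsSync, readdirSync } from 'node:fs';
import { dirname, basename, join, isAbsolute } from 'node:path';

// Fail fast with an actionable message instead of letting a later step break.
function resolveTargetFile(path: string): string {
  if (!isAbsolute(path)) {
    throw new Error(`Relative path "${path}" rejected: use an absolute path.`);
  }
  if (existsSync(path)) return path;

  // Actionable error: suggest a similarly named file in the same directory.
  const dir = dirname(path);
  const stem = basename(path).toLowerCase().split('.')[0];
  const candidates = existsSync(dir)
    ? readdirSync(dir).filter((f) => f.toLowerCase().includes(stem))
    : [];
  const hint = candidates.length > 0 ? ` Did you mean: ${join(dir, candidates[0])}?` : '';
  throw new Error(`File not found: ${path}.${hint}`);
}
```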
### Workflow Pattern: Orchestrator-Workers

Our architecture uses the Orchestrator-Workers pattern:

```
Main Agent (Orchestrator)
├─ Plans decomposition
├─ Chooses specialized worker
├─ Provides focused briefing
└─ Validates worker results

Worker Agents (angular-developer, test-writer, refactor-engineer)
├─ Execute with full autonomy
├─ Load relevant skills
├─ Iterate on errors internally
└─ Return concise summary
```

**Benefits:**
- Context isolation (worker details stay in worker context)
- Parallel execution (main + worker simultaneously)
- Specialization (each worker optimized for domain)
- Predictable communication (briefing → execution → summary)
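The briefing → execution → summary contract can be made concrete as types (illustrative only; no such interfaces exist in the repo, they simply pin down what each side exchanges):

```typescript
// Hypothetical types making the orchestrator-worker contract explicit.
interface WorkerBriefing {
  agent: 'angular-developer' | 'test-writer' | 'refactor-engineer';
  task: string;                          // focused, complete requirements
  responseFormat: 'concise' | 'detailed';
  constraints?: string[];                // e.g. architectural rules to respect
}

interface WorkerSummary {
  status: 'success' | 'partial' | 'blocked';
  filesModified: string[];               // key files only, never full diffs
  testsPassing?: string;                 // e.g. "12/12"
  blocker?: string;                      // present when status !== 'success'
}
```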
### Token Efficiency Benchmark

From Anthropic research: 98.7% token reduction via code execution.

**Traditional tool-calling approach:**
- 150,000 tokens (alternating LLM calls + tool results)

**Code execution approach:**
- 2,000 tokens (write code → execute in environment)

**ISA-Frontend application:**
- Use Bash for loops, filtering, transformations
- Use Edit for batch file changes (not individual tool calls per file)
- Use Grep/Glob for discovery (not Read every file)
- Prefer consolidated operations over step-by-step tool chains
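For instance, a single script can replace dozens of per-file Read + Edit tool calls, so only a one-line summary ever enters context (a sketch under the assumption that the change is a plain textual import swap; the library paths are hypothetical):

```typescript
import { readFileSync, writeFileSync } from 'node:fs';
import { execSync } from 'node:child_process';

// One consolidated operation: discover files via grep, rewrite them in a loop.
// Replaces N separate Read + Edit tool calls (and their verbose results).
const files = execSync(
  `grep -rl "from '@isa/old-lib'" libs --include='*.ts'`,
  { encoding: 'utf8' },
).trim().split('\n').filter(Boolean);

for (const file of files) {
  const source = readFileSync(file, 'utf8');
  writeFileSync(file, source.replaceAll("from '@isa/old-lib'", "from '@isa/new-lib'"));
}

console.log(`✓ Updated ${files.length} files`); // one line is all that returns
```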
### Context Strategy Threshold

Based on Anthropic contextual retrieval research:

- **< 200K tokens:** Put everything in the prompt with caching (90% cost savings)
- **> 200K tokens:** Use a retrieval system (our `.claude/context/` approach)

**ISA-Frontend: ~500K+ tokens** (Angular monorepo with 100+ libraries)
- ✅ File-based retrieval is the correct choice
- ✅ Contextual headers improve retrieval accuracy by 35-67%
- ✅ Top-20 chunks with dual retrieval (semantic + lexical)
## General Guidelines for working with Nx

- When running tasks (for example build, lint, test, e2e, etc.), always prefer running the task through `nx` (i.e. `nx run`, `nx run-many`, `nx affected`) instead of using the underlying tooling directly
- You have access to the Nx MCP server and its tools; use them to help the user
- When answering questions about the repository, use the `nx_workspace` tool first to gain an understanding of the workspace architecture where applicable
- When working in individual projects, use the `nx_project_details` MCP tool to analyze and understand the specific project structure and dependencies
- For questions around Nx configuration, best practices, or if you're unsure, use the `nx_docs` tool to get relevant, up-to-date docs. Always use this instead of assuming things about Nx configuration
- If the user needs help with an Nx configuration or project graph error, use the `nx_workspace` tool to get any errors