Compare commits

..

2 Commits

Author SHA1 Message Date
copilot-swe-agent[bot]
de3d15c099 Fix refresh error: Replace const reassignment with property deletion
Co-authored-by: catlog22 <28037070+catlog22@users.noreply.github.com>
2025-12-08 06:29:27 +00:00
copilot-swe-agent[bot]
9232947fb0 Initial plan 2025-12-08 06:26:15 +00:00
2460 changed files with 91932 additions and 754023 deletions

View File

@@ -1,25 +0,0 @@
# 中文回复准则
## 核心原则
- 所有回复使用简体中文
- 技术术语保留英文,首次出现可添加中文解释
- 代码变量名保持英文,注释使用中文
## 格式规范
- 中英文/数字间加空格:`使用 TypeScript 开发``共 3 个文件`
- 使用中文标点:,。!?:;
- 代码/命令用反引号:`npm install`
## Git Commit
- 使用中文提交信息
- 格式:`类型: 简短描述`
- 类型feat/fix/refactor/docs/test/chore
## 保持英文
- 代码文件内容
- 错误信息和日志
- 文件路径和命令

View File

@@ -1,266 +0,0 @@
生成符合 RESTful 规范的完整 Swagger/OpenAPI API 文档。
## 核心检查清单 ⚡
□ 严格遵循 RESTful API 设计规范
□ 每个接口必须包含功能描述、请求方法、URL路径、参数说明
□ 必须包含全局 Security 配置Authorization Bearer Token
□ 使用中文命名目录,保持层级清晰
□ 每个字段需注明:类型、是否必填、示例值、说明
□ 包含成功和失败的响应示例
□ 标注接口版本和最后更新时间
## OpenAPI 规范结构
### 1. 文档信息 (info)
```yaml
openapi: 3.0.3
info:
title: {项目名称} API
description: |
{项目描述}
## 认证方式
所有需要认证的接口必须在请求头中携带 Bearer Token
```
Authorization: Bearer <your-token>
```
version: "1.0.0"
contact:
name: API 支持
email: api-support@example.com
license:
name: MIT
```
### 2. 服务器配置 (servers)
```yaml
servers:
- url: https://api.example.com/v1
description: 生产环境
- url: https://staging-api.example.com/v1
description: 测试环境
- url: http://localhost:3000/v1
description: 开发环境
```
### 3. 全局安全配置 (security)
```yaml
components:
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
description: |
JWT Token 认证
获取方式:调用 POST /auth/login 接口
有效期24小时
刷新:调用 POST /auth/refresh 接口
security:
- bearerAuth: []
```
### 4. 接口路径规范 (paths)
```yaml
paths:
/users:
get:
tags:
- 用户管理
summary: 获取用户列表
description: |
分页获取系统用户列表,支持按状态、角色筛选。
**适用环境**: 开发、测试、生产
**前置条件**: 需要管理员权限
operationId: listUsers
security:
- bearerAuth: []
parameters:
- name: page
in: query
required: false
schema:
type: integer
default: 1
minimum: 1
description: 页码从1开始
example: 1
- name: limit
in: query
required: false
schema:
type: integer
default: 20
minimum: 1
maximum: 100
description: 每页数量
example: 20
responses:
'200':
description: 成功获取用户列表
content:
application/json:
schema:
$ref: '#/components/schemas/UserListResponse'
example:
code: 0
message: success
data:
items:
- id: "usr_123"
email: "user@example.com"
name: "张三"
total: 100
page: 1
limit: 20
'401':
$ref: '#/components/responses/UnauthorizedError'
'403':
$ref: '#/components/responses/ForbiddenError'
```
### 5. 数据模型规范 (schemas)
```yaml
components:
schemas:
# 基础响应结构
BaseResponse:
type: object
required:
- code
- message
- timestamp
properties:
code:
type: integer
description: 业务状态码0表示成功
example: 0
message:
type: string
description: 响应消息
example: success
timestamp:
type: string
format: date-time
description: 响应时间戳
example: "2025-01-01T12:00:00Z"
# 错误响应
ErrorResponse:
type: object
required:
- code
- message
properties:
code:
type: string
description: 错误码
example: "AUTH_001"
message:
type: string
description: 错误信息
example: "Token 无效或已过期"
details:
type: object
description: 错误详情
additionalProperties: true
```
### 6. 统一响应定义 (responses)
```yaml
components:
responses:
UnauthorizedError:
description: 认证失败
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorResponse'
example:
code: "AUTH_001"
message: "Token 无效或已过期"
ForbiddenError:
description: 权限不足
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorResponse'
example:
code: "AUTH_003"
message: "权限不足,需要管理员角色"
NotFoundError:
description: 资源不存在
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorResponse'
example:
code: "BIZ_002"
message: "资源不存在"
ValidationError:
description: 参数验证失败
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorResponse'
example:
code: "PARAM_001"
message: "参数格式错误"
details:
field: "email"
reason: "邮箱格式不正确"
```
## 接口文档必填项
每个接口必须包含:
1. **基本信息**
- tags: 所属模块(中文)
- summary: 一句话描述
- description: 详细说明(含适用环境、前置条件)
- operationId: 唯一操作标识
2. **安全配置**
- security: 认证要求
3. **参数定义**
- name: 参数名
- in: 位置 (path/query/header/cookie)
- required: 是否必填
- schema: 类型定义(含 default, minimum, maximum
- description: 参数说明
- example: 示例值
4. **响应定义**
- 200: 成功响应(含完整示例)
- 400: 参数错误
- 401: 认证失败
- 403: 权限不足
- 404: 资源不存在(如适用)
- 500: 服务器错误
5. **版本信息**
- x-version: 接口版本
- x-updated: 最后更新时间
## 错误码规范
| 前缀 | 类别 | HTTP状态码 | 说明 |
|------|------|------------|------|
| AUTH_ | 认证错误 | 401/403 | 身份验证相关 |
| PARAM_ | 参数错误 | 400 | 请求参数验证 |
| BIZ_ | 业务错误 | 409/422 | 业务逻辑相关 |
| SYS_ | 系统错误 | 500/503 | 服务器异常 |
## RESTful 设计规范
1. **URL 命名**: 使用复数名词,小写,连字符分隔
2. **HTTP 方法**: GET(查询)、POST(创建)、PUT(更新)、DELETE(删除)、PATCH(部分更新)
3. **状态码**: 正确使用 2xx/3xx/4xx/5xx
4. **分页**: 使用 page/limit 或 offset/limit
5. **筛选**: 使用查询参数
6. **版本**: URL 路径 (/v1/) 或 请求头

View File

@@ -1,122 +0,0 @@
# Rule Template: API Rules (Backend/Fullstack Only)
## Variables
- {TECH_STACK_NAME}: Tech stack display name
- {FILE_EXT}: File extension pattern
- {API_FRAMEWORK}: API framework (Express, FastAPI, etc)
## Output Format
```markdown
---
paths:
- "**/api/**/*.{FILE_EXT}"
- "**/routes/**/*.{FILE_EXT}"
- "**/endpoints/**/*.{FILE_EXT}"
- "**/controllers/**/*.{FILE_EXT}"
- "**/handlers/**/*.{FILE_EXT}"
---
# {TECH_STACK_NAME} API Rules
## Endpoint Design
[REST/GraphQL conventions from Exa research]
### URL Structure
- Resource naming (plural nouns)
- Nesting depth limits
- Query parameter conventions
- Version prefixing
### HTTP Methods
- GET: Read operations
- POST: Create operations
- PUT/PATCH: Update operations
- DELETE: Remove operations
### Status Codes
- 2xx: Success responses
- 4xx: Client errors
- 5xx: Server errors
## Request Validation
[Input validation patterns]
### Schema Validation
```{lang}
// Example validation schema
```
### Required Fields
- Validation approach
- Error messages format
- Sanitization rules
## Response Format
[Standard response structures]
### Success Response
```json
{
"data": {},
"meta": {}
}
```
### Pagination
```json
{
"data": [],
"pagination": {
"page": 1,
"limit": 20,
"total": 100
}
}
```
## Error Responses
[Error handling for APIs]
### Error Format
```json
{
"error": {
"code": "ERROR_CODE",
"message": "Human readable message",
"details": {}
}
}
```
### Common Error Codes
- VALIDATION_ERROR
- NOT_FOUND
- UNAUTHORIZED
- FORBIDDEN
## Authentication & Authorization
[Auth patterns]
- Token handling
- Permission checks
- Rate limiting
## Documentation
[API documentation standards]
- OpenAPI/Swagger
- Inline documentation
- Example requests/responses
```
## Content Guidelines
- Focus on API-specific patterns
- Include request/response examples
- Cover security considerations
- Reference framework conventions

View File

@@ -1,122 +0,0 @@
# Rule Template: Component Rules (Frontend/Fullstack Only)
## Variables
- {TECH_STACK_NAME}: Tech stack display name
- {FILE_EXT}: File extension pattern
- {UI_FRAMEWORK}: UI framework (React, Vue, etc)
## Output Format
```markdown
---
paths:
- "**/components/**/*.{FILE_EXT}"
- "**/ui/**/*.{FILE_EXT}"
- "**/views/**/*.{FILE_EXT}"
- "**/pages/**/*.{FILE_EXT}"
---
# {TECH_STACK_NAME} Component Rules
## Component Structure
[Organization patterns from Exa research]
### File Organization
```
components/
├── common/ # Shared components
├── features/ # Feature-specific
├── layout/ # Layout components
└── ui/ # Base UI elements
```
### Component Template
```{lang}
// Standard component structure
```
### Naming Conventions
- PascalCase for components
- Descriptive names
- Prefix conventions (if any)
## Props & State
[State management guidelines]
### Props Definition
```{lang}
// Props type/interface example
```
### Props Best Practices
- Required vs optional
- Default values
- Prop validation
- Prop naming
### Local State
- When to use local state
- State initialization
- State updates
### Shared State
- State management approach
- Context usage
- Store patterns
## Styling
[CSS/styling conventions]
### Approach
- [CSS Modules/Styled Components/Tailwind/etc]
### Style Organization
```{lang}
// Style example
```
### Naming Conventions
- Class naming (BEM, etc)
- CSS variable usage
- Theme integration
## Accessibility
[A11y requirements]
### Essential Requirements
- Semantic HTML
- ARIA labels
- Keyboard navigation
- Focus management
### Testing A11y
- Automated checks
- Manual testing
- Screen reader testing
## Performance
[Performance guidelines]
### Optimization Patterns
- Memoization
- Lazy loading
- Code splitting
- Virtual lists
### Avoiding Re-renders
- When to memoize
- Callback optimization
- State structure
```
## Content Guidelines
- Focus on component-specific patterns
- Include framework-specific examples
- Cover accessibility requirements
- Address performance considerations

View File

@@ -1,89 +0,0 @@
# Rule Template: Configuration Rules
## Variables
- {TECH_STACK_NAME}: Tech stack display name
- {CONFIG_FILES}: List of config file patterns
## Output Format
```markdown
---
paths:
- "*.config.*"
- ".*rc"
- ".*rc.{js,json,yaml,yml}"
- "package.json"
- "tsconfig*.json"
- "pyproject.toml"
- "Cargo.toml"
- "go.mod"
- ".env*"
---
# {TECH_STACK_NAME} Configuration Rules
## Project Setup
[Configuration guidelines from Exa research]
### Essential Config Files
- [List primary config files]
- [Purpose of each]
### Recommended Structure
```
project/
├── [config files]
├── src/
└── tests/
```
## Tooling
[Linters, formatters, bundlers]
### Linting
- Tool: [ESLint/Pylint/etc]
- Config file: [.eslintrc/pyproject.toml/etc]
- Key rules to enable
### Formatting
- Tool: [Prettier/Black/etc]
- Integration with editor
- Pre-commit hooks
### Build Tools
- Bundler: [Webpack/Vite/etc]
- Build configuration
- Optimization settings
## Environment
[Environment management]
### Environment Variables
- Naming conventions
- Required vs optional
- Secret handling
- .env file structure
### Development vs Production
- Environment-specific configs
- Feature flags
- Debug settings
## Dependencies
[Dependency management]
- Lock file usage
- Version pinning strategy
- Security updates
- Peer dependencies
```
## Content Guidelines
- Focus on config file best practices
- Include security considerations
- Cover development workflow setup
- Mention CI/CD integration where relevant

View File

@@ -1,60 +0,0 @@
# Rule Template: Core Principles
## Variables
- {TECH_STACK_NAME}: Tech stack display name
- {FILE_EXT}: File extension pattern
## Output Format
```markdown
---
paths: **/*.{FILE_EXT}
---
# {TECH_STACK_NAME} Core Principles
## Philosophy
[Synthesize core philosophy from Exa research]
- Key paradigms and mental models
- Design philosophy
- Community conventions
## Naming Conventions
[Language-specific naming rules]
- Variables and functions
- Classes and types
- Files and directories
- Constants and enums
## Code Organization
[Structure and module guidelines]
- File structure patterns
- Module boundaries
- Import organization
- Dependency management
## Type Safety
[Type system best practices - if applicable]
- Type annotation guidelines
- Generic usage patterns
- Type inference vs explicit types
- Null/undefined handling
## Documentation
[Documentation standards]
- Comment style
- JSDoc/docstring format
- README conventions
```
## Content Guidelines
- Focus on universal principles that apply to ALL files
- Keep rules actionable and specific
- Include rationale for each rule
- Reference official style guides where applicable

View File

@@ -1,70 +0,0 @@
# Rule Template: Implementation Patterns
## Variables
- {TECH_STACK_NAME}: Tech stack display name
- {FILE_EXT}: File extension pattern
## Output Format
```markdown
---
paths: src/**/*.{FILE_EXT}
---
# {TECH_STACK_NAME} Implementation Patterns
## Common Patterns
[With code examples from Exa research]
### Pattern 1: [Name]
```{lang}
// Example code
```
**When to use**: [Context]
**Benefits**: [Why this pattern]
### Pattern 2: [Name]
...
## Anti-Patterns to Avoid
[Common mistakes with examples]
### Anti-Pattern 1: [Name]
```{lang}
// Bad example
```
**Problem**: [Why it's bad]
**Solution**: [Better approach]
## Error Handling
[Error handling conventions]
- Error types and hierarchy
- Try-catch patterns
- Error propagation
- Logging practices
## Async Patterns
[Asynchronous code conventions - if applicable]
- Promise handling
- Async/await usage
- Concurrency patterns
- Error handling in async code
## State Management
[State handling patterns]
- Local state patterns
- Shared state approaches
- Immutability practices
```
## Content Guidelines
- Focus on source code implementation
- Provide concrete code examples
- Show both good and bad patterns
- Include context for when to apply each pattern

View File

@@ -1,81 +0,0 @@
# Rule Template: Testing Rules
## Variables
- {TECH_STACK_NAME}: Tech stack display name
- {FILE_EXT}: File extension pattern
- {TEST_FRAMEWORK}: Primary testing framework
## Output Format
```markdown
---
paths:
- "**/*.{test,spec}.{FILE_EXT}"
- "tests/**/*.{FILE_EXT}"
- "__tests__/**/*.{FILE_EXT}"
- "**/test_*.{FILE_EXT}"
- "**/*_test.{FILE_EXT}"
---
# {TECH_STACK_NAME} Testing Rules
## Testing Framework
[Recommended frameworks from Exa research]
- Primary: {TEST_FRAMEWORK}
- Assertion library
- Mocking library
- Coverage tool
## Test Structure
[Organization patterns]
### File Naming
- Unit tests: `*.test.{ext}` or `*.spec.{ext}`
- Integration tests: `*.integration.test.{ext}`
- E2E tests: `*.e2e.test.{ext}`
### Test Organization
```{lang}
describe('[Component/Module]', () => {
describe('[method/feature]', () => {
it('should [expected behavior]', () => {
// Arrange
// Act
// Assert
});
});
});
```
## Mocking & Fixtures
[Best practices]
- Mock creation patterns
- Fixture organization
- Test data factories
- Cleanup strategies
## Assertions
[Assertion patterns]
- Common assertions
- Custom matchers
- Async assertions
- Error assertions
## Coverage Requirements
[Coverage guidelines]
- Minimum coverage thresholds
- What to cover vs skip
- Coverage report interpretation
```
## Content Guidelines
- Include framework-specific patterns
- Show test structure examples
- Cover both unit and integration testing
- Include async testing patterns

View File

@@ -1,89 +0,0 @@
# Tech Stack Rules Generation Agent Prompt
## Context Variables
- {TECH_STACK_NAME}: Normalized tech stack name (e.g., "typescript-react")
- {PRIMARY_LANG}: Primary language (e.g., "typescript")
- {FILE_EXT}: File extension pattern (e.g., "{ts,tsx}")
- {FRAMEWORK_TYPE}: frontend | backend | fullstack | library
- {COMPONENTS}: Array of tech components
- {OUTPUT_DIR}: .claude/rules/tech/{TECH_STACK_NAME}/
## Agent Instructions
Generate path-conditional rules for Claude Code automatic loading.
### Step 1: Execute Exa Research
Run 4-6 parallel queries based on tech stack:
**Base Queries** (always execute):
```
mcp__exa__get_code_context_exa(query: "{PRIMARY_LANG} best practices principles 2025", tokensNum: 8000)
mcp__exa__get_code_context_exa(query: "{PRIMARY_LANG} implementation patterns examples", tokensNum: 7000)
mcp__exa__get_code_context_exa(query: "{PRIMARY_LANG} testing strategies conventions", tokensNum: 5000)
mcp__exa__web_search_exa(query: "{PRIMARY_LANG} configuration setup 2025", numResults: 5)
```
**Component Queries** (for each framework in COMPONENTS):
```
mcp__exa__get_code_context_exa(query: "{PRIMARY_LANG} {component} integration patterns", tokensNum: 5000)
```
### Step 2: Read Rule Templates
Read each template file before generating content:
```
Read(~/.ccw/workflows/cli-templates/prompts/rules/rule-core.txt)
Read(~/.ccw/workflows/cli-templates/prompts/rules/rule-patterns.txt)
Read(~/.ccw/workflows/cli-templates/prompts/rules/rule-testing.txt)
Read(~/.ccw/workflows/cli-templates/prompts/rules/rule-config.txt)
Read(~/.ccw/workflows/cli-templates/prompts/rules/rule-api.txt) # Only if backend/fullstack
Read(~/.ccw/workflows/cli-templates/prompts/rules/rule-components.txt) # Only if frontend/fullstack
```
### Step 3: Generate Rule Files
Create directory and write files:
```bash
mkdir -p "{OUTPUT_DIR}"
```
**Always Generate**:
- core.md (from rule-core.txt template)
- patterns.md (from rule-patterns.txt template)
- testing.md (from rule-testing.txt template)
- config.md (from rule-config.txt template)
**Conditional**:
- api.md: Only if FRAMEWORK_TYPE == 'backend' or 'fullstack'
- components.md: Only if FRAMEWORK_TYPE == 'frontend' or 'fullstack'
### Step 4: Write Metadata
```json
{
"tech_stack": "{TECH_STACK_NAME}",
"primary_lang": "{PRIMARY_LANG}",
"file_ext": "{FILE_EXT}",
"framework_type": "{FRAMEWORK_TYPE}",
"components": ["{COMPONENTS}"],
"generated_at": "{ISO_TIMESTAMP}",
"source": "exa-research",
"files_generated": ["core.md", "patterns.md", "testing.md", "config.md", ...]
}
```
### Step 5: Report Completion
Provide summary:
- Files created with their path patterns
- Exa queries executed (count)
- Sources consulted (count)
## Critical Requirements
1. Every .md file MUST start with `paths` YAML frontmatter
2. Use {FILE_EXT} consistently across all rule files
3. Synthesize Exa research into actionable rules
4. Include code examples from Exa sources
5. Keep each file focused on its specific domain

View File

@@ -1,38 +0,0 @@
PURPOSE: Generate comprehensive multi-layer test enhancement suggestions
- Success: Cover L0-L3 layers with focus on API, integration, and error scenarios
- Scope: Files with coverage gaps identified in TEST_ANALYSIS_RESULTS.md
- Goal: Provide specific, actionable test case suggestions that increase coverage completeness
TASK:
• L1 (Unit Tests): Suggest edge cases, boundary conditions, error paths, state transitions
• L2.1 (Integration): Suggest module interaction patterns, dependency injection scenarios
• L2.2 (API Contracts): Suggest request/response test cases, validation, status codes, error responses
• L2.4 (External APIs): Suggest mock strategies, failure scenarios, timeout handling, retry logic
• L2.5 (Failure Modes): Suggest exception hierarchies, error propagation, recovery strategies
• Cross-cutting: Suggest performance test cases, security considerations
MODE: analysis
CONTEXT: @.workflow/active/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md
Memory: Project type, test framework, existing test patterns, coverage gaps
EXPECTED: Markdown report with structured test enhancement suggestions organized by:
1. File-level test requirements (per file needing tests)
2. Layer-specific test cases (L1, L2.1, L2.2, L2.4, L2.5)
3. Each suggestion includes:
- Test type and layer (e.g., "L2.2 API Contract Test")
- Specific test case description (e.g., "POST /api/users - Invalid email format")
- Expected behavior (e.g., "Returns 400 with validation error message")
- Dependencies/mocks needed (e.g., "Mock email service")
- Success criteria (e.g., "Status 400, error.field === 'email'")
4. Test ordering/dependencies (which tests should run first)
5. Integration test strategies (how components interact)
6. Error scenario matrix (all failure modes covered)
CONSTRAINTS:
- Focus on identified coverage gaps from TEST_ANALYSIS_RESULTS.md
- Prioritize API tests, integration tests, and error scenarios
- No code generation - suggestions only with sufficient detail for implementation
- Consider project conventions and existing test patterns
- Each suggestion should be actionable and specific (not generic)
- Output format: Markdown with clear section headers

View File

@@ -1,176 +0,0 @@
Validate technical feasibility and identify implementation risks for proposed solution design.
## CORE CHECKLIST ⚡
□ Read context-package.json and gemini-solution-design.md
□ Assess complexity, validate technology choices
□ Evaluate performance and security implications
□ Focus on TECHNICAL FEASIBILITY and RISK ASSESSMENT
□ Write output to specified .workflow/active/{session_id}/.process/ path
## PREREQUISITE ANALYSIS
### Required Input Files
1. **context-package.json**: Task requirements, source files, tech stack
2. **gemini-solution-design.md**: Proposed solution design and architecture
3. **workflow-session.json**: Session state and context
4. **CLAUDE.md**: Project standards and conventions
### Analysis Dependencies
- Review Gemini's proposed solution design
- Validate against actual codebase capabilities
- Assess implementation complexity realistically
- Identify gaps between design and execution
## REQUIRED VALIDATION
### 1. Feasibility Assessment
- **Complexity Rating**: Rate technical complexity (1-5 scale)
- 1: Trivial - straightforward implementation
- 2: Simple - well-known patterns
- 3: Moderate - some challenges
- 4: Complex - significant challenges
- 5: Very Complex - high risk, major unknowns
- **Resource Requirements**: Estimate development effort
- Development time (hours/days/weeks)
- Required expertise level
- Infrastructure needs
- **Technology Compatibility**: Validate proposed tech stack
- Framework version compatibility
- Library maturity and support
- Integration with existing systems
### 2. Risk Analysis
- **Implementation Risks**: Technical challenges and blockers
- Unknown implementation patterns
- Missing capabilities or APIs
- Breaking changes to existing code
- **Integration Challenges**: System integration concerns
- Data format compatibility
- API contract changes
- Dependency conflicts
- **Performance Concerns**: Performance and scalability risks
- Resource consumption (CPU, memory, I/O)
- Latency and throughput impact
- Caching and optimization needs
- **Security Concerns**: Security vulnerabilities and threats
- Authentication/authorization gaps
- Data exposure risks
- Compliance violations
### 3. Implementation Validation
- **Development Approach**: Validate proposed implementation strategy
- Verify module dependency order
- Assess incremental development feasibility
- Evaluate testing approach
- **Quality Standards**: Validate quality requirements
- Test coverage achievability
- Performance benchmark realism
- Documentation completeness
- **Maintenance Implications**: Long-term sustainability
- Code maintainability assessment
- Technical debt evaluation
- Evolution and extensibility
### 4. Code Target Verification
Review Gemini's proposed code targets:
- **Validate existing targets**: Confirm file:function:lines exist
- **Assess new file targets**: Evaluate necessity and placement
- **Identify missing targets**: Suggest additional modification points
- **Refine target specifications**: Provide more precise line numbers if possible
### 5. Recommendations
- **Must-Have Requirements**: Critical requirements for success
- **Optimization Opportunities**: Performance and quality improvements
- **Security Controls**: Essential security measures
- **Risk Mitigation**: Strategies to reduce identified risks
## OUTPUT REQUIREMENTS
### Output File
**Path**: `.workflow/active/{session_id}/.process/codex-feasibility-validation.md`
**Format**: Follow structure from `~/.ccw/workflows/cli-templates/prompts/workflow/analysis-results-structure.txt`
### Required Sections
Focus on these sections from the template:
- Executive Summary (with Codex perspective)
- Current State Analysis (validation findings)
- Implementation Strategy (feasibility assessment)
- Solution Optimization (risk mitigation)
- Confidence Scores (technical feasibility focus)
### Content Guidelines
- ✅ Focus on technical feasibility and risk assessment
- ✅ Verify code targets from Gemini's design
- ✅ Provide concrete risk mitigation strategies
- ✅ Quantify complexity and effort estimates
- ❌ Do NOT create task breakdowns
- ❌ Do NOT provide step-by-step implementation guides
- ❌ Do NOT include code examples
## VALIDATION METHODOLOGY
### Complexity Scoring
Rate each aspect on 1-5 scale:
- Technical Complexity
- Integration Complexity
- Performance Risk
- Security Risk
- Maintenance Burden
### Risk Classification
- **LOW**: Minor issues, easily addressable
- **MEDIUM**: Manageable challenges with clear mitigation
- **HIGH**: Significant concerns requiring major mitigation
- **CRITICAL**: Fundamental viability threats
### Feasibility Judgment
- **PROCEED**: Technically feasible with acceptable risk
- **PROCEED_WITH_MODIFICATIONS**: Feasible but needs adjustments
- **RECONSIDER**: High risk, major changes needed
- **REJECT**: Not feasible with current approach
## CONTEXT INTEGRATION
### Gemini Analysis Integration
- Review proposed architecture and design decisions
- Validate assumptions and technology choices
- Cross-check code targets against actual codebase
- Assess realism of performance targets
### Codebase Reality Check
- Verify existing code capabilities
- Identify actual technical constraints
- Assess team skill compatibility
- Evaluate infrastructure readiness
### Session Context
- Consider session history and previous decisions
- Align with project architecture standards
- Respect existing patterns and conventions
## EXECUTION MODE
**Mode**: Analysis with write permission for output file
**CLI Tool**: Codex with --skip-git-repo-check -s danger-full-access
**Timeout**: 60-90 minutes for complex tasks
**Output**: Single file codex-feasibility-validation.md
**Trigger**: Only for complex tasks (>6 modules)
## VERIFICATION CHECKLIST ✓
□ context-package.json and gemini-solution-design.md read
□ Complexity rated on 1-5 scale with justification
□ All risk categories assessed (technical, integration, performance, security)
□ Code targets verified and refined
□ Risk mitigation strategies provided
□ Resource requirements estimated
□ Final feasibility judgment (PROCEED/RECONSIDER/REJECT)
□ Output written to .workflow/active/{session_id}/.process/codex-feasibility-validation.md
Focus: Technical feasibility validation with realistic risk assessment and mitigation strategies.

View File

@@ -1,131 +0,0 @@
Analyze and design optimal solution with comprehensive architecture evaluation and design decisions.
## CORE CHECKLIST ⚡
□ Read context-package.json to understand task requirements, source files, tech stack
□ Analyze current architecture patterns and code structure
□ Propose solution design with key decisions and rationale
□ Focus on SOLUTION IMPROVEMENTS and KEY DESIGN DECISIONS
□ Write output to specified .workflow/active/{session_id}/.process/ path
## ANALYSIS PRIORITY
### Source Hierarchy
1. **PRIMARY**: Individual role analysis.md files (system-architect, ui-designer, data-architect, etc.)
- Technical details and implementation considerations
- Architecture Decision Records (ADRs)
- Design decision context and rationale
2. **SECONDARY**: role analysis documents
- Integrated requirements across roles
- Cross-role alignment and dependencies
- Unified feature specifications
3. **REFERENCE**: guidance-specification.md
- Discussion context and background
- Initial problem framing
## REQUIRED ANALYSIS
### 1. Current State Assessment
- Identify existing architectural patterns and code structure
- Map integration points and dependencies
- Evaluate technical debt and pain points
- Assess framework compatibility and constraints
### 2. Solution Design
- Propose core architecture principles and approach
- Design component architecture and data flow
- Specify API contracts and integration strategy
- Define technology stack with justification
### 3. Key Design Decisions
For each critical decision:
- **Decision**: What is being decided
- **Rationale**: Why this approach
- **Alternatives Considered**: Other options and their tradeoffs
- **Impact**: Implications on architecture, performance, maintainability
Minimum 2 key decisions required.
### 4. Code Modification Targets
Identify specific code locations for changes:
- **Existing files**: `file:function:lines` format (e.g., `src/auth/login.ts:validateUser:45-52`)
- **New files**: `file` only (e.g., `src/auth/PasswordReset.ts`)
- **Unknown lines**: `file:function:*` (e.g., `src/auth/service.ts:refreshToken:*`)
For each target:
- Type: Modify existing | Create new
- Modification/Purpose: What changes needed
- Rationale: Why this target
### 5. Critical Insights
- Strengths: What works well in current/proposed design
- Gaps: Missing capabilities or concerns
- Risks: Technical, integration, performance, security
- Optimization Opportunities: Performance, security, code quality
### 6. Feasibility Assessment
- Technical Complexity: Rating and analysis
- Performance Impact: Expected characteristics
- Resource Requirements: Development effort
- Maintenance Burden: Ongoing considerations
## OUTPUT REQUIREMENTS
### Output File
**Path**: `.workflow/active/{session_id}/.process/gemini-solution-design.md`
**Format**: Follow structure from `~/.ccw/workflows/cli-templates/prompts/workflow/analysis-results-structure.txt`
### Required Sections
- Executive Summary with feasibility score
- Current State Analysis
- Proposed Solution Design with 2+ key decisions
- Implementation Strategy with code targets
- Solution Optimization (performance, security, quality)
- Critical Success Factors
- Confidence Scores with recommendation
### Content Guidelines
- ✅ Focus on solution improvements and key design decisions
- ✅ Include rationale, alternatives, and tradeoffs for decisions
- ✅ Provide specific code targets in correct format
- ✅ Quantify assessments with scores (X/5)
- ❌ Do NOT create task lists or implementation steps
- ❌ Do NOT include code examples or snippets
- ❌ Do NOT create project management timelines
## CONTEXT INTEGRATION
### Session Context
- Load context-package.json for task requirements
- Reference workflow-session.json for session state
- Review CLAUDE.md for project standards
### Brainstorm Context
If brainstorming artifacts exist:
- Prioritize individual role analysis.md files
- Use role analysis documents for integrated view
- Reference guidance-specification.md for context
### Codebase Context
- Identify similar patterns in existing code
- Evaluate success/failure of current approaches
- Ensure consistency with project architecture
## EXECUTION MODE
**Mode**: Analysis with write permission for output file
**CLI Tool**: Gemini wrapper with --approval-mode yolo
**Timeout**: 40-60 minutes based on complexity
**Output**: Single file gemini-solution-design.md
## VERIFICATION CHECKLIST ✓
□ context-package.json read and analyzed
□ All 7 required sections present in output
□ 2+ key design decisions with rationale and alternatives
□ Code targets specified in correct format
□ Feasibility scores provided (X/5)
□ Final recommendation (PROCEED/RECONSIDER/REJECT)
□ Output written to .workflow/active/{session_id}/.process/gemini-solution-design.md
Focus: Comprehensive solution design emphasizing architecture decisions and critical insights.

View File

@@ -1,286 +0,0 @@
IMPL_PLAN.md Template - Implementation Plan Document Structure
## Document Frontmatter
```yaml
---
identifier: WFS-{session-id}
source: "User requirements" | "File: path" | "Issue: ISS-001"
analysis: .workflow/active//{session-id}/.process/ANALYSIS_RESULTS.md
artifacts: .workflow/active//{session-id}/.brainstorming/
context_package: .workflow/active//{session-id}/.process/context-package.json # CCW smart context
workflow_type: "standard | tdd | design" # Indicates execution model
verification_history: # CCW quality gates
concept_verify: "passed | skipped | pending"
action_plan_verify: "pending"
phase_progression: "brainstorm → context → analysis → concept_verify → planning" # CCW workflow phases
---
```
## Document Structure
# Implementation Plan: {Project Title}
## 1. Summary
Core requirements, objectives, technical approach summary (2-3 paragraphs max).
**Core Objectives**:
- [Key objective 1]
- [Key objective 2]
**Technical Approach**:
- [High-level approach]
## 2. Context Analysis
### CCW Workflow Context
**Phase Progression**:
- ✅ Phase 1: Brainstorming (role analyses generated)
- ✅ Phase 2: Context Gathering (context-package.json: {N} files, {M} modules analyzed)
- ✅ Phase 3: Enhanced Analysis (ANALYSIS_RESULTS.md: Gemini/Qwen/Codex parallel insights)
- ✅ Phase 4: Concept Verification ({X} clarifications answered, role analyses updated | skipped)
- ⏳ Phase 5: Action Planning (current phase - generating IMPL_PLAN.md)
**Quality Gates**:
- concept-verify: ✅ Passed (0 ambiguities remaining) | ⏭️ Skipped (user decision) | ⏳ Pending
- plan-verify: ⏳ Pending (recommended before /workflow:execute)
**Context Package Summary**:
- **Focus Paths**: {list key directories from context-package.json}
- **Key Files**: {list primary files for modification}
- **Module Depth Analysis**: {from get_modules_by_depth.sh output}
- **Smart Context**: {total file count} files, {module count} modules, {dependency count} dependencies identified
### Project Profile
- **Type**: Greenfield/Enhancement/Refactor
- **Scale**: User count, data volume, complexity
- **Tech Stack**: Primary technologies
- **Timeline**: Duration and milestones
### Module Structure
```
[Directory tree showing key modules]
```
### Dependencies
**Primary**: [Core libraries and frameworks]
**APIs**: [External services]
**Development**: [Testing, linting, CI/CD tools]
### Patterns & Conventions
- **Architecture**: [Key patterns like DI, Event-Driven]
- **Component Design**: [Design patterns]
- **State Management**: [State strategy]
- **Code Style**: [Naming, TypeScript coverage]
## 3. Brainstorming Artifacts Reference
### Artifact Usage Strategy
**Primary Reference (role analyses)**:
- **What**: Role-specific analyses from brainstorming providing multi-perspective insights
- **When**: Every task references relevant role analyses for requirements and design decisions
- **How**: Extract requirements, architecture decisions, UI/UX patterns from applicable role documents
- **Priority**: Collective authoritative source - multiple role perspectives provide comprehensive coverage
- **CCW Value**: Maintains role-specific expertise while enabling cross-role integration during planning
**Context Intelligence (context-package.json)**:
- **What**: Smart context gathered by CCW's context-gather phase
- **Content**: Focus paths, dependency graph, existing patterns, module structure
- **Usage**: Tasks load this via `flow_control.preparatory_steps` for environment setup
- **CCW Value**: Automated intelligent context discovery replacing manual file exploration
**Technical Analysis (ANALYSIS_RESULTS.md)**:
- **What**: Gemini/Qwen/Codex parallel analysis results
- **Content**: Optimization strategies, risk assessment, architecture review, implementation patterns
- **Usage**: Referenced in task planning for technical guidance and risk mitigation
- **CCW Value**: Multi-model parallel analysis providing comprehensive technical intelligence
### Integrated Specifications (Highest Priority)
- **role analyses**: Comprehensive implementation blueprint
- Contains: Architecture design, UI/UX guidelines, functional/non-functional requirements, implementation roadmap, risk assessment
### Supporting Artifacts (Reference)
- **guidance-specification.md**: Role-specific discussion points and analysis framework
- **system-architect/analysis.md**: Detailed architecture specifications
- **ui-designer/analysis.md**: Layout and component specifications
- **product-manager/analysis.md**: Product vision and user stories
**Artifact Priority in Development**:
1. role analyses (primary reference for all tasks)
2. context-package.json (smart context for execution environment)
3. ANALYSIS_RESULTS.md (technical analysis and optimization strategies)
4. Role-specific analyses (fallback for detailed specifications)
## 4. Implementation Strategy
### Execution Strategy
**Execution Model**: [Sequential | Parallel | Phased | TDD Cycles]
**Rationale**: [Why this execution model fits the project]
**Parallelization Opportunities**:
- [List independent workstreams]
**Serialization Requirements**:
- [List critical dependencies]
### Architectural Approach
**Key Architecture Decisions**:
- [ADR references from role analyses]
- [Justification for architecture patterns]
**Integration Strategy**:
- [How modules communicate]
- [State management approach]
### Key Dependencies
**Task Dependency Graph**:
```
[High-level dependency visualization]
```
**Critical Path**: [Identify bottleneck tasks]
### Testing Strategy
**Testing Approach**:
- Unit testing: [Tools, scope]
- Integration testing: [Key integration points]
- E2E testing: [Critical user flows]
**Coverage Targets**:
- Lines: ≥70%
- Functions: ≥70%
- Branches: ≥65%
**Quality Gates**:
- [CI/CD gates]
- [Performance budgets]
## 5. Task Breakdown Summary
### Task Count
**{N} tasks** (flat hierarchy | two-level hierarchy, sequential | parallel execution)
### Task Structure
- **IMPL-1**: [Main task title]
- **IMPL-2**: [Main task title]
...
### Complexity Assessment
- **High**: [List with rationale]
- **Medium**: [List]
- **Low**: [List]
### Dependencies
[Reference Section 4.3 for dependency graph]
**Parallelization Opportunities**:
- [Specific task groups that can run in parallel]
## 6. Implementation Plan (Detailed Phased Breakdown)
### Execution Strategy
**Phase 1 (Weeks 1-2): [Phase Name]**
- **Tasks**: IMPL-1, IMPL-2
- **Deliverables**:
- [Specific deliverable 1]
- [Specific deliverable 2]
- **Success Criteria**:
- [Measurable criterion]
**Phase 2 (Weeks 3-N): [Phase Name]**
...
### Resource Requirements
**Development Team**:
- [Team composition and skills]
**External Dependencies**:
- [Third-party services, APIs]
**Infrastructure**:
- [Development, staging, production environments]
## 7. Risk Assessment & Mitigation
| Risk | Impact | Probability | Mitigation Strategy | Owner |
|------|--------|-------------|---------------------|-------|
| [Risk description] | High/Med/Low | High/Med/Low | [Strategy] | [Role] |
**Critical Risks** (High impact + High probability):
- [Risk 1]: [Detailed mitigation plan]
**Monitoring Strategy**:
- [How risks will be monitored]
## 8. Success Criteria
**Functional Completeness**:
- [ ] All requirements from role analyses implemented
- [ ] All acceptance criteria from task.json files met
**Technical Quality**:
- [ ] Test coverage ≥70%
- [ ] Bundle size within budget
- [ ] Performance targets met
**Operational Readiness**:
- [ ] CI/CD pipeline operational
- [ ] Monitoring and logging configured
- [ ] Documentation complete
**Business Metrics**:
- [ ] [Key business metrics from role analyses]
## Template Usage Guidelines
### When Generating IMPL_PLAN.md
1. **Fill Frontmatter Variables**:
- Replace {session-id} with actual session ID
- Set workflow_type based on planning phase
- Update verification_history based on concept-verify results
2. **Populate CCW Workflow Context**:
- Extract file/module counts from context-package.json
- Document phase progression based on completed workflow steps
- Update quality gate status (passed/skipped/pending)
3. **Extract from Analysis Results**:
- Core objectives from ANALYSIS_RESULTS.md
- Technical approach and architecture decisions
- Risk assessment and mitigation strategies
4. **Reference Brainstorming Artifacts**:
- List detected artifacts with correct paths
- Document artifact priority and usage strategy
- Map artifacts to specific tasks based on domain
5. **Define Implementation Strategy**:
- Choose execution model (sequential/parallel/phased)
- Identify parallelization opportunities
- Document critical path and dependencies
6. **Break Down Tasks**:
- List all task IDs and titles
- Assess complexity (high/medium/low)
- Create dependency graph visualization
7. **Set Success Criteria**:
- Extract from role analyses
- Include measurable metrics
- Define quality gates
### Validation Checklist
Before finalizing IMPL_PLAN.md:
- [ ] All frontmatter fields populated correctly
- [ ] CCW workflow context reflects actual phase progression
- [ ] Brainstorming artifacts correctly referenced
- [ ] Task breakdown matches generated task JSONs
- [ ] Dependencies are acyclic and logical
- [ ] Success criteria are measurable
- [ ] Risk assessment includes mitigation strategies
- [ ] All {placeholder} variables replaced with actual values

View File

@@ -1,94 +0,0 @@
Template for generating conflict-patterns.md
## Purpose
Document recurring conflict patterns across workflow sessions with resolutions.
## File Location
`.claude/skills/workflow-progress/conflict-patterns.md`
## Update Strategy
- **Incremental mode**: Add new conflicts, update frequency counters for existing patterns
- **Full mode**: Regenerate entire conflict analysis from all sessions
## Structure
```markdown
# Workflow Conflict Patterns
## Architecture Conflicts
### {Conflict_Pattern_Title}
**Pattern**: {concise_pattern_description}
**Sessions**: {session_id_1}, {session_id_2}
**Resolution**: {resolution_strategy}
**Code Impact**:
- Modified: {file_path_1}, {file_path_2}
- Added: {file_path_3}
- Tests: {test_file_path}
**Frequency**: {count} sessions
**Severity**: {high|medium|low}
---
## Dependency Conflicts
### {Conflict_Pattern_Title}
**Pattern**: {concise_pattern_description}
**Sessions**: {session_id_list}
**Resolution**: {resolution_strategy}
**Package Changes**:
- Updated: {package_name}@{version}
- Locked: {dependency_name}
**Frequency**: {count} sessions
**Severity**: {high|medium|low}
---
## Testing Conflicts
### {Conflict_Pattern_Title}
...
---
## Performance Conflicts
### {Conflict_Pattern_Title}
...
```
## Data Sources
- IMPL_PLAN summaries: `.workflow/.archives/{session_id}/IMPL_PLAN.md`
- Context packages: `.workflow/.archives/{session_id}/.process/context-package.json` (reference only)
- Session lessons: `manifest.json` -> `archives[].lessons.challenges`
## Conflict Identification (Use CCW CLI)
**Command Pattern**:
```bash
ccw cli -p "
PURPOSE: Identify conflict patterns from workflow sessions
TASK: • Extract conflicts from IMPL_PLAN and lessons • Group by type (architecture/dependencies/testing/performance) • Identify recurring patterns (same conflict in different sessions) • Link resolutions to specific sessions
MODE: analysis
CONTEXT: @.workflow/.archives/*/IMPL_PLAN.md @.workflow/.archives/manifest.json
EXPECTED: Conflict patterns with frequency and resolution
CONSTRAINTS: analysis=READ-ONLY
" --tool gemini --mode analysis --rule workflow-skill-aggregation --cd .workflow/.archives
```
**Pattern Grouping**:
- **Architecture**: Design conflicts, incompatible strategies, interface mismatches
- **Dependencies**: Version conflicts, library incompatibilities, package issues
- **Testing**: Mock data inconsistencies, test environment issues, coverage gaps
- **Performance**: Bottlenecks, optimization conflicts, resource issues
## Formatting Rules
- Sort by frequency within each category
- Include code impact for traceability
- Mark high-frequency patterns (3+ sessions) as "RECURRING"
- Keep resolution descriptions actionable
- Use relative paths for file references

View File

@@ -1,94 +0,0 @@
Template for generating lessons-learned.md
## Purpose
Aggregate lessons learned from workflow sessions, categorized by functional domain and severity.
## File Location
`.claude/skills/workflow-progress/lessons-learned.md`
## Update Strategy
- **Incremental mode**: Merge new session lessons into existing categories, update frequencies
- **Full mode**: Regenerate entire lessons document from all sessions
## Structure
```markdown
# Workflow Lessons Learned
## Best Practices (Successes)
### {Domain_Category}
- {success_pattern_1} (sessions: {session_id_1}, {session_id_2})
- {success_pattern_2} (sessions: {session_id_3})
### {Domain_Category_2}
...
---
## Known Challenges
### High Priority
- **{challenge_title}**: {description}
- Affected sessions: {session_id_1}, {session_id_2}
- Resolution: {resolution_strategy}
### Medium Priority
- **{challenge_title}**: {description}
- Affected sessions: {session_id_3}
- Resolution: {resolution_strategy}
### Low Priority
...
---
## Watch Patterns
### Critical (3+ sessions)
1. **{pattern_name}**: {description}
- Frequency: {count} sessions
- Affected: {session_list}
- Mitigation: {mitigation_strategy}
### High Priority (2 sessions)
...
### Normal (1 session)
...
```
## Data Sources
- Lessons: `manifest.json` -> `archives[].lessons.{successes|challenges|watch_patterns}`
- Session metadata: `.workflow/.archives/{session_id}/workflow-session.json`
## Aggregation Rules (Use CCW CLI)
**Command Pattern**:
```bash
ccw cli -p "
PURPOSE: Aggregate workflow lessons from session data
TASK: • Group successes by functional domain • Categorize challenges by severity (HIGH/MEDIUM/LOW) • Identify watch patterns with frequency >= 2 • Mark CRITICAL patterns (3+ sessions)
MODE: analysis
CONTEXT: @.workflow/.archives/manifest.json
EXPECTED: Aggregated lessons with frequency counts
CONSTRAINTS: analysis=READ-ONLY
" --tool gemini --mode analysis --rule workflow-skill-aggregation --cd .workflow/.archives
```
**Severity Classification**:
- **HIGH**: Blocked development >4 hours OR repeated in 3+ sessions
- **MEDIUM**: Required significant rework OR repeated in 2 sessions
- **LOW**: Minor issues resolved quickly
**Pattern Identification**:
- Successes in 3+ sessions → "Best Practices"
- Challenges repeated 2+ times → "Known Issues"
- Watch patterns frequency >= 2 → "High Priority Warnings"
- Watch patterns frequency >= 3 → "CRITICAL"
## Formatting Rules
- Sort by frequency (most common first)
- Include session references for traceability
- Use bold for challenge titles
- Keep descriptions concise but actionable

View File

@@ -1,119 +0,0 @@
# Analysis Mode Protocol
## Mode Definition
**Mode**: `analysis` (READ-ONLY)
## Prompt Structure
```
PURPOSE: [development goal]
TASK: [specific implementation task]
MODE: [auto|write]
CONTEXT: [file patterns]
EXPECTED: [deliverables]
RULES: [templates | additional constraints]
```
## Operation Boundaries
### ALLOWED Operations
- **READ**: All CONTEXT files and analyze content
- **ANALYZE**: Code patterns, architecture, dependencies
- **GENERATE**: Text output, insights, recommendations
- **DOCUMENT**: Analysis results in output response only
### FORBIDDEN Operations
- **NO FILE CREATION**: Cannot create any files on disk
- **NO FILE MODIFICATION**: Cannot modify existing files
- **NO FILE DELETION**: Cannot delete any files
- **NO DIRECTORY OPERATIONS**: Cannot create/modify directories
**CRITICAL**: Absolutely NO file system operations - OUTPUT ONLY
## Execution Flow
1. **Parse** all 6 fields (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
2. **Read** and analyze CONTEXT files thoroughly
3. **Identify** patterns, issues, and dependencies
4. **Generate** insights and recommendations
5. **Validate** EXPECTED deliverables met
6. **Output** structured analysis (text response only)
## Core Requirements
**ALWAYS**:
- Analyze ALL CONTEXT files completely
- Apply RULES (templates + constraints) exactly
- Provide code evidence with `file:line` references
- List all related/analyzed files at output beginning
- Match EXPECTED deliverables precisely
**NEVER**:
- Assume behavior without code verification
- Ignore CONTEXT file patterns
- Skip RULES or templates
- Make unsubstantiated claims
- Create/modify/delete any files
## RULES Processing
- Parse RULES field to extract template content and constraints
- Recognize `|` as separator: `template content | additional constraints`
- Apply ALL template guidelines as mandatory
- Treat rule violations as task failures
## Error Handling
**File Not Found**: Report missing files, continue with available, note in output
**Invalid CONTEXT Pattern**: Report invalid pattern, request correction, do not guess
## Quality Standards
- **Thoroughness**: Analyze ALL files, check cross-file patterns, quantify metrics
- **Evidence-Based**: Quote code with `file:line`, link patterns, support claims with examples
- **Actionable**: Clear recommendations, prioritized by impact, incremental changes
---
## Output Format
### Format Priority
**If template defines output format** → Follow template format EXACTLY
**If template has no format** → Use default format below
### Default Analysis Output
```markdown
# Analysis: [TASK Title]
## Related Files
- `path/to/file1.ext` - [Brief description of relevance]
- `path/to/file2.ext` - [Brief description of relevance]
## Summary
[2-3 sentence overview]
## Key Findings
1. [Finding] - path/to/file:123
2. [Finding] - path/to/file:456
## Detailed Analysis
[Evidence-based analysis with code quotes]
## Recommendations
1. [Actionable recommendation]
2. [Actionable recommendation]
```
### Code References
**Format**: `path/to/file:line_number`
**Example**: `src/auth/jwt.ts:45` - Authentication uses deprecated algorithm
### Quality Checklist
- [ ] All CONTEXT files analyzed
- [ ] Code evidence with `file:line` references
- [ ] Specific, actionable recommendations
- [ ] No unsubstantiated claims
- [ ] EXPECTED deliverables met

View File

@@ -1,136 +0,0 @@
# Write Mode Protocol
## Prompt Structure
```
PURPOSE: [development goal]
TASK: [specific implementation task]
MODE: [auto|write]
CONTEXT: [file patterns]
EXPECTED: [deliverables]
RULES: [templates | additional constraints]
```
## Operation Boundaries
### MODE: write
- **READ**: All CONTEXT files and analyze content
- **CREATE**: New files (documentation, code, configuration)
- **MODIFY**: Existing files (update content, refactor code)
- **DELETE**: Files when explicitly required
**Restrictions**: Follow project conventions, cannot break existing functionality
**Constraint**: Must test every change
## Execution Flow
### MODE: write
1. **Parse** all 6 fields (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
2. **Read** CONTEXT files, find 3+ similar patterns
3. **Plan** implementation following RULES
4. **Execute** requested file operations
5. **Validate** changes
6. **Report** file changes
## Core Requirements
**ALWAYS**:
- Study CONTEXT files - find 3+ similar patterns before implementing
- Apply RULES exactly
- Test continuously (auto mode)
- Commit incrementally (auto mode)
- Match project style exactly
- List all created/modified files at output beginning
**NEVER**:
- Make assumptions without code verification
- Ignore existing patterns
- Skip tests (auto mode)
- Use clever tricks over boring solutions
- Break backward compatibility
- Exceed 3 failed attempts without stopping
**Three-Attempt Rule**: On 3rd failure, stop and report what attempted, what failed, root cause
| Error Type | Response |
|------------|----------|
| Syntax/Type | Review → Fix → Re-run tests |
| Runtime | Analyze stack → Add handling → Test |
| Test Failure | Debug → Review setup → Fix |
| Build Failure | Check messages → Fix incrementally |
---
## Output Format
### Format Priority
**If template defines output format** → Follow template format EXACTLY
**If template has no format** → Use default format below
### Task Implementation
```markdown
# Implementation: [TASK Title]
## Changes
- Created: `path/to/file1.ext` (X lines)
- Modified: `path/to/file2.ext` (+Y/-Z lines)
- Deleted: `path/to/file3.ext`
## Summary
[2-3 sentence overview]
## Key Decisions
1. [Decision] - Rationale and reference to similar pattern
2. [Decision] - path/to/reference:line
## Implementation Details
[Evidence-based description with code references]
## Testing
- Tests written: X new tests
- Tests passing: Y/Z tests
## Validation
✅ Tests: X passing
✅ Build: Success
## Next Steps
[Recommendations if any]
```
### Partial Completion
```markdown
# Task Status: Partially Completed
## Completed
- [What worked]
- Files: `path/to/completed.ext`
## Blocked
- **Issue**: [What failed]
- **Root Cause**: [Analysis]
- **Attempted**: [Solutions tried - attempt X of 3]
## Required
[What's needed to proceed]
## Recommendation
[Suggested next steps]
```
### Code References
**Format**: `path/to/file:line_number`
**Example**: `src/auth/jwt.ts:45` - Implemented following pattern from `src/auth/session.ts:78`
### Quality Checklist
- [ ] All tests pass
- [ ] Build succeeds
- [ ] All EXPECTED deliverables met
- [ ] Code follows existing patterns
- [ ] File changes listed at beginning

View File

@@ -1,151 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Conflict Resolution Schema",
"description": "Schema for conflict detection, strategy generation, and resolution output",
"type": "object",
"required": ["conflicts", "summary"],
"properties": {
"conflicts": {
"type": "array",
"items": {
"type": "object",
"required": ["id", "brief", "severity", "category", "strategies", "recommended"],
"properties": {
"id": {
"type": "string",
"pattern": "^CON-\\d{3}$",
"description": "Conflict ID (CON-001, CON-002, ...)"
},
"brief": {
"type": "string",
"description": "一句话冲突摘要(中文)"
},
"severity": {
"enum": ["Critical", "High", "Medium"],
"description": "冲突严重程度"
},
"category": {
"enum": ["Architecture", "API", "Data", "Dependency", "ModuleOverlap"],
"description": "冲突类型"
},
"affected_files": {
"type": "array",
"items": { "type": "string" },
"description": "受影响的文件路径"
},
"description": {
"type": "string",
"description": "详细冲突描述"
},
"impact": {
"type": "object",
"properties": {
"scope": { "type": "string", "description": "影响的模块/组件" },
"compatibility": { "enum": ["Yes", "No", "Partial"] },
"migration_required": { "type": "boolean" },
"estimated_effort": { "type": "string", "description": "人天估计" }
}
},
"overlap_analysis": {
"type": "object",
"description": "仅当 category=ModuleOverlap 时需要",
"properties": {
"new_module": {
"type": "object",
"properties": {
"name": { "type": "string" },
"scenarios": { "type": "array", "items": { "type": "string" } },
"responsibilities": { "type": "string" }
}
},
"existing_modules": {
"type": "array",
"items": {
"type": "object",
"properties": {
"file": { "type": "string" },
"name": { "type": "string" },
"scenarios": { "type": "array", "items": { "type": "string" } },
"overlap_scenarios": { "type": "array", "items": { "type": "string" } },
"responsibilities": { "type": "string" }
}
}
}
}
},
"strategies": {
"type": "array",
"minItems": 2,
"maxItems": 4,
"items": {
"type": "object",
"required": ["name", "approach", "complexity", "risk", "effort", "pros", "cons"],
"properties": {
"name": { "type": "string", "description": "策略名称(中文)" },
"approach": { "type": "string", "description": "实现方法简述" },
"complexity": { "enum": ["Low", "Medium", "High"] },
"risk": { "enum": ["Low", "Medium", "High"] },
"effort": { "type": "string", "description": "时间估计" },
"pros": { "type": "array", "items": { "type": "string" }, "description": "优点" },
"cons": { "type": "array", "items": { "type": "string" }, "description": "缺点" },
"clarification_needed": {
"type": "array",
"items": { "type": "string" },
"description": "需要用户澄清的问题(尤其是 ModuleOverlap"
},
"modifications": {
"type": "array",
"items": {
"type": "object",
"required": ["file", "section", "change_type", "old_content", "new_content", "rationale"],
"properties": {
"file": { "type": "string", "description": "相对项目根目录的完整路径" },
"section": { "type": "string", "description": "Markdown heading 用于定位" },
"change_type": { "enum": ["update", "add", "remove"] },
"old_content": { "type": "string", "description": "原始内容片段20-100字符用于唯一匹配" },
"new_content": { "type": "string", "description": "修改后的内容" },
"rationale": { "type": "string", "description": "修改理由" }
}
}
}
}
}
},
"recommended": {
"type": "integer",
"minimum": 0,
"description": "推荐策略索引0-based"
},
"modification_suggestions": {
"type": "array",
"minItems": 2,
"maxItems": 5,
"items": { "type": "string" },
"description": "自定义处理建议2-5条中文"
}
}
}
},
"summary": {
"type": "object",
"required": ["total", "critical", "high", "medium"],
"properties": {
"total": { "type": "integer" },
"critical": { "type": "integer" },
"high": { "type": "integer" },
"medium": { "type": "integer" }
}
}
},
"_quality_standards": {
"modifications": [
"old_content: 20-100字符确保 Edit 工具能唯一匹配",
"new_content: 保持 markdown 格式",
"change_type: update(替换), add(插入), remove(删除)"
],
"user_facing_text": "brief, name, pros, cons, modification_suggestions 使用中文",
"technical_fields": "severity, category, complexity, risk 使用英文"
}
}

View File

@@ -1,127 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Debug Log Entry Schema",
"description": "NDJSON log entry for hypothesis-driven debugging workflow",
"type": "object",
"required": [
"sessionId",
"runId",
"hypothesisId",
"location",
"message",
"data",
"timestamp"
],
"properties": {
"sessionId": {
"type": "string",
"pattern": "^DBG-[a-z0-9-]+-\\d{4}-\\d{2}-\\d{2}$",
"description": "Debug session identifier (e.g., 'DBG-stack-length-not-found-2025-12-18')"
},
"runId": {
"type": "string",
"pattern": "^run-\\d+$",
"description": "Reproduction run number (e.g., 'run-1', 'run-2')"
},
"hypothesisId": {
"type": "string",
"pattern": "^H\\d+$",
"description": "Hypothesis identifier being tested (e.g., 'H1', 'H2')"
},
"location": {
"type": "string",
"description": "Code location in format 'file:function:line' or 'file:line'"
},
"message": {
"type": "string",
"description": "Human-readable description of what's being logged"
},
"data": {
"type": "object",
"additionalProperties": true,
"description": "Captured values for hypothesis validation",
"properties": {
"keys_sample": {
"type": "array",
"items": {"type": "string"},
"description": "Sample of dictionary/object keys (first 30)"
},
"value": {
"description": "Captured value (any type)"
},
"expected_value": {
"description": "Expected value for comparison"
},
"actual_type": {
"type": "string",
"description": "Actual type of captured value"
},
"count": {
"type": "integer",
"description": "Count of items (for arrays/collections)"
},
"is_null": {
"type": "boolean",
"description": "Whether value is null/None"
},
"is_empty": {
"type": "boolean",
"description": "Whether collection is empty"
},
"comparison_result": {
"type": "string",
"enum": ["match", "mismatch", "partial_match"],
"description": "Result of value comparison"
}
}
},
"timestamp": {
"type": "integer",
"description": "Unix timestamp in milliseconds"
},
"severity": {
"type": "string",
"enum": ["debug", "info", "warning", "error"],
"default": "info",
"description": "Log severity level"
},
"stack_trace": {
"type": "string",
"description": "Stack trace if capturing exception context"
},
"parent_entry_id": {
"type": "string",
"description": "Reference to parent log entry for nested contexts"
}
},
"examples": [
{
"sessionId": "DBG-stack-length-not-found-2025-12-18",
"runId": "run-1",
"hypothesisId": "H1",
"location": "rmxprt_api/core/rmxprt_parameter.py:sync_from_machine:642",
"message": "Inspect stator keys from machine.to_dict and compare Stack Length vs Length",
"data": {
"keys_sample": ["Length", "Outer Diameter", "Inner Diameter", "Slot"],
"stack_length_value": "未找到",
"length_value": "120mm",
"comparison_result": "mismatch"
},
"timestamp": 1734523456789
},
{
"sessionId": "DBG-registered-zero-2025-12-18",
"runId": "run-1",
"hypothesisId": "H2",
"location": "rmxprt_api/utils/param_core.py:update_variables_from_result_model:670",
"message": "Check result parameters count and sample keys",
"data": {
"count": 0,
"is_empty": true,
"sections_parsed": ["Stator", "Rotor", "General"],
"expected_count": 145
},
"timestamp": 1734523457123
}
]
}

View File

@@ -1,219 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "discovery-finding-schema",
"title": "Discovery Finding Schema",
"description": "Schema for perspective-based issue discovery results",
"type": "object",
"required": ["perspective", "discovery_id", "analysis_timestamp", "cli_tool_used", "summary", "findings"],
"properties": {
"perspective": {
"type": "string",
"enum": ["bug", "ux", "test", "quality", "security", "performance", "maintainability", "best-practices"],
"description": "Discovery perspective"
},
"discovery_id": {
"type": "string",
"pattern": "^DSC-\\d{8}-\\d{6}$",
"description": "Parent discovery session ID"
},
"analysis_timestamp": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of analysis"
},
"cli_tool_used": {
"type": "string",
"enum": ["gemini", "qwen", "codex"],
"description": "CLI tool that performed the analysis"
},
"model": {
"type": "string",
"description": "Specific model version used",
"examples": ["gemini-2.5-pro", "qwen-max"]
},
"analysis_duration_ms": {
"type": "integer",
"minimum": 0,
"description": "Analysis duration in milliseconds"
},
"summary": {
"type": "object",
"required": ["total_findings"],
"properties": {
"total_findings": { "type": "integer", "minimum": 0 },
"critical": { "type": "integer", "minimum": 0 },
"high": { "type": "integer", "minimum": 0 },
"medium": { "type": "integer", "minimum": 0 },
"low": { "type": "integer", "minimum": 0 },
"files_analyzed": { "type": "integer", "minimum": 0 }
},
"description": "Summary statistics (FLAT structure, NOT nested)"
},
"findings": {
"type": "array",
"items": {
"type": "object",
"required": ["id", "title", "perspective", "priority", "category", "description", "file", "line"],
"properties": {
"id": {
"type": "string",
"pattern": "^dsc-[a-z]+-\\d{3}-[a-f0-9]{8}$",
"description": "Unique finding ID: dsc-{perspective}-{seq}-{uuid8}",
"examples": ["dsc-bug-001-a1b2c3d4"]
},
"title": {
"type": "string",
"minLength": 10,
"maxLength": 200,
"description": "Concise finding title"
},
"perspective": {
"type": "string",
"enum": ["bug", "ux", "test", "quality", "security", "performance", "maintainability", "best-practices"]
},
"priority": {
"type": "string",
"enum": ["critical", "high", "medium", "low"],
"description": "Priority level (lowercase only)"
},
"category": {
"type": "string",
"description": "Perspective-specific category",
"examples": ["null-check", "edge-case", "missing-test", "complexity", "injection"]
},
"description": {
"type": "string",
"minLength": 20,
"description": "Detailed description of the finding"
},
"file": {
"type": "string",
"description": "File path relative to project root"
},
"line": {
"type": "integer",
"minimum": 1,
"description": "Line number of the finding"
},
"snippet": {
"type": "string",
"description": "Relevant code snippet"
},
"suggested_issue": {
"type": "object",
"required": ["title", "type", "priority"],
"properties": {
"title": {
"type": "string",
"description": "Suggested issue title for export"
},
"type": {
"type": "string",
"enum": ["bug", "feature", "enhancement", "refactor", "test", "docs"],
"description": "Issue type"
},
"priority": {
"type": "integer",
"minimum": 1,
"maximum": 5,
"description": "Priority 1-5 (1=critical, 5=low)"
},
"tags": {
"type": "array",
"items": { "type": "string" },
"description": "Suggested tags for the issue"
}
},
"description": "Pre-filled issue suggestion for export"
},
"external_reference": {
"type": ["object", "null"],
"properties": {
"source": { "type": "string" },
"url": { "type": "string", "format": "uri" },
"relevance": { "type": "string" }
},
"description": "External reference from Exa research (if applicable)"
},
"confidence": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Confidence score 0.0-1.0"
},
"impact": {
"type": "string",
"description": "Description of potential impact"
},
"recommendation": {
"type": "string",
"description": "Specific recommendation to address the finding"
},
"metadata": {
"type": "object",
"additionalProperties": true,
"description": "Additional metadata (CWE ID, OWASP category, etc.)"
}
}
},
"description": "Array of discovered findings"
},
"cross_references": {
"type": "array",
"items": {
"type": "object",
"properties": {
"finding_id": { "type": "string" },
"related_perspectives": {
"type": "array",
"items": { "type": "string" }
},
"reason": { "type": "string" }
}
},
"description": "Cross-references to findings in other perspectives"
}
},
"examples": [
{
"perspective": "bug",
"discovery_id": "DSC-20250128-143022",
"analysis_timestamp": "2025-01-28T14:35:00Z",
"cli_tool_used": "gemini",
"model": "gemini-2.5-pro",
"analysis_duration_ms": 45000,
"summary": {
"total_findings": 8,
"critical": 1,
"high": 2,
"medium": 3,
"low": 2,
"files_analyzed": 5
},
"findings": [
{
"id": "dsc-bug-001-a1b2c3d4",
"title": "Missing null check in user validation",
"perspective": "bug",
"priority": "high",
"category": "null-check",
"description": "User object is accessed without null check after database query, which may fail if user doesn't exist",
"file": "src/auth/validator.ts",
"line": 45,
"snippet": "const user = await db.findUser(id);\nreturn user.email; // user may be null",
"suggested_issue": {
"title": "Add null check in user validation",
"type": "bug",
"priority": 2,
"tags": ["bug", "auth"]
},
"external_reference": null,
"confidence": 0.85,
"impact": "Runtime error when user not found",
"recommendation": "Add null check: if (!user) throw new NotFoundError('User not found');"
}
],
"cross_references": []
}
]
}

View File

@@ -1,125 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "discovery-state-schema",
"title": "Discovery State Schema (Merged)",
"description": "Unified schema for issue discovery session (state + progress merged)",
"type": "object",
"required": ["discovery_id", "target_pattern", "phase", "created_at"],
"properties": {
"discovery_id": {
"type": "string",
"description": "Unique discovery session ID",
"pattern": "^DSC-\\d{8}-\\d{6}$",
"examples": ["DSC-20250128-143022"]
},
"target_pattern": {
"type": "string",
"description": "File/directory pattern being analyzed",
"examples": ["src/auth/**", "codex-lens/**/*.py"]
},
"phase": {
"type": "string",
"enum": ["initialization", "parallel", "aggregation", "complete"],
"description": "Current execution phase"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"updated_at": {
"type": "string",
"format": "date-time"
},
"target": {
"type": "object",
"description": "Target module information",
"properties": {
"files_count": {
"type": "object",
"properties": {
"source": { "type": "integer" },
"tests": { "type": "integer" },
"total": { "type": "integer" }
}
},
"project": {
"type": "object",
"properties": {
"name": { "type": "string" },
"version": { "type": "string" }
}
}
}
},
"perspectives": {
"type": "array",
"description": "Perspective analysis status (merged from progress)",
"items": {
"type": "object",
"required": ["name", "status"],
"properties": {
"name": {
"type": "string",
"enum": ["bug", "ux", "test", "quality", "security", "performance", "maintainability", "best-practices"]
},
"status": {
"type": "string",
"enum": ["pending", "in_progress", "completed", "failed"]
},
"findings": {
"type": "integer",
"minimum": 0
}
}
}
},
"external_research": {
"type": "object",
"properties": {
"enabled": { "type": "boolean", "default": false },
"completed": { "type": "boolean", "default": false }
}
},
"results": {
"type": "object",
"description": "Aggregated results (final phase)",
"properties": {
"total_findings": { "type": "integer", "minimum": 0 },
"issues_generated": { "type": "integer", "minimum": 0 },
"priority_distribution": {
"type": "object",
"properties": {
"critical": { "type": "integer" },
"high": { "type": "integer" },
"medium": { "type": "integer" },
"low": { "type": "integer" }
}
}
}
}
},
"examples": [
{
"discovery_id": "DSC-20251228-182237",
"target_pattern": "codex-lens/**/*.py",
"phase": "complete",
"created_at": "2025-12-28T18:22:37+08:00",
"updated_at": "2025-12-28T18:35:00+08:00",
"target": {
"files_count": { "source": 48, "tests": 44, "total": 93 },
"project": { "name": "codex-lens", "version": "0.1.0" }
},
"perspectives": [
{ "name": "bug", "status": "completed", "findings": 15 },
{ "name": "test", "status": "completed", "findings": 11 },
{ "name": "quality", "status": "completed", "findings": 12 }
],
"external_research": { "enabled": false, "completed": false },
"results": {
"total_findings": 37,
"issues_generated": 15,
"priority_distribution": { "critical": 4, "high": 13, "medium": 16, "low": 6 }
}
}
]
}

View File

@@ -1,298 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Fix Plan Schema",
"description": "Bug fix plan from cli-lite-planning-agent or direct planning",
"type": "object",
"required": [
"summary",
"root_cause",
"strategy",
"tasks",
"estimated_time",
"recommended_execution",
"severity",
"risk_level",
"_metadata"
],
"properties": {
"summary": {
"type": "string",
"description": "2-3 sentence overview of the fix plan"
},
"root_cause": {
"type": "string",
"description": "Consolidated root cause statement from all diagnoses"
},
"strategy": {
"type": "string",
"enum": ["immediate_patch", "comprehensive_fix", "refactor"],
"description": "Fix strategy: immediate_patch (minimal change), comprehensive_fix (proper solution), refactor (structural improvement)"
},
"tasks": {
"type": "array",
"minItems": 1,
"maxItems": 5,
"items": {
"type": "object",
"required": ["id", "title", "scope", "action", "description", "implementation", "verification"],
"properties": {
"id": {
"type": "string",
"pattern": "^FIX[0-9]+$",
"description": "Task identifier (FIX1, FIX2, FIX3...)"
},
"title": {
"type": "string",
"description": "Task title (action verb + target, e.g., 'Fix token validation edge case')"
},
"scope": {
"type": "string",
"description": "Task scope: module path (src/auth/), feature name, or single file. Prefer module level."
},
"file": {
"type": "string",
"description": "Primary file (deprecated, use scope + modification_points instead)"
},
"action": {
"type": "string",
"enum": ["Fix", "Update", "Refactor", "Add", "Delete", "Configure"],
"description": "Primary action type"
},
"description": {
"type": "string",
"description": "What to fix (1-2 sentences)"
},
"modification_points": {
"type": "array",
"minItems": 1,
"items": {
"type": "object",
"required": ["file", "target", "change"],
"properties": {
"file": {
"type": "string",
"description": "File path within scope"
},
"target": {
"type": "string",
"description": "Function/class/line range (e.g., 'validateToken:45-60')"
},
"change": {
"type": "string",
"description": "Brief description of change"
}
}
},
"description": "All modification points for this fix task. Group related changes into one task."
},
"implementation": {
"type": "array",
"items": {"type": "string"},
"minItems": 2,
"maxItems": 5,
"description": "Step-by-step fix implementation guide"
},
"verification": {
"type": "array",
"items": {"type": "string"},
"minItems": 1,
"maxItems": 3,
"description": "Verification/test criteria to confirm fix works"
},
"reference": {
"type": "object",
"properties": {
"pattern": {
"type": "string",
"description": "Pattern name to follow"
},
"files": {
"type": "array",
"items": {"type": "string"},
"description": "Reference file paths to study"
},
"examples": {
"type": "string",
"description": "Specific guidance or example references"
}
},
"description": "Reference materials for fix implementation (optional)"
},
"depends_on": {
"type": "array",
"items": {
"type": "string",
"pattern": "^FIX[0-9]+$"
},
"description": "Task IDs this task depends on (e.g., ['FIX1'])"
},
"risk": {
"type": "string",
"enum": ["low", "medium", "high"],
"description": "Risk level of this specific fix task"
},
"cli_execution_id": {
"type": "string",
"pattern": "^[a-zA-Z0-9_-]+$",
"description": "Fixed CLI execution ID for this fix task (e.g., 'session-FIX1', 'bugfix-001-diagnosis')"
},
"cli_execution": {
"type": "object",
"properties": {
"strategy": {
"type": "string",
"enum": ["new", "resume", "fork", "merge_fork"],
"description": "CLI execution strategy: new (no deps), resume (1 dep, continue), fork (1 dep, branch), merge_fork (N deps, combine)"
},
"resume_from": {
"type": "string",
"description": "Parent task's cli_execution_id (for resume/fork strategies)"
},
"merge_from": {
"type": "array",
"items": {"type": "string"},
"description": "Multiple parents' cli_execution_ids (for merge_fork strategy)"
}
},
"description": "CLI execution strategy based on task dependencies"
}
}
},
"description": "Structured fix task breakdown (1-5 tasks)"
},
"flow_control": {
"type": "object",
"properties": {
"execution_order": {
"type": "array",
"items": {
"type": "object",
"properties": {
"phase": {
"type": "string",
"description": "Phase name (e.g., 'parallel-1', 'sequential-1')"
},
"tasks": {
"type": "array",
"items": {"type": "string"},
"description": "Task IDs in this phase"
},
"type": {
"type": "string",
"enum": ["parallel", "sequential"],
"description": "Execution type"
}
}
},
"description": "Ordered execution phases"
},
"exit_conditions": {
"type": "object",
"properties": {
"success": {
"type": "string",
"description": "Condition for successful fix completion"
},
"failure": {
"type": "string",
"description": "Condition that indicates fix failure"
}
},
"description": "Conditions for fix workflow termination"
}
},
"description": "Execution flow control (optional, auto-inferred from depends_on if not provided)"
},
"focus_paths": {
"type": "array",
"items": {"type": "string"},
"description": "Key file paths affected by this fix (aggregated from tasks)"
},
"test_strategy": {
"type": "object",
"properties": {
"scope": {
"type": "string",
"enum": ["unit", "integration", "e2e", "smoke", "full"],
"description": "Test scope to run after fix"
},
"specific_tests": {
"type": "array",
"items": {"type": "string"},
"description": "Specific test files or patterns to run"
},
"manual_verification": {
"type": "array",
"items": {"type": "string"},
"description": "Manual verification steps if automated tests not available"
}
},
"description": "Testing strategy for fix verification"
},
"rollback_plan": {
"type": "object",
"properties": {
"strategy": {
"type": "string",
"enum": ["git_revert", "feature_flag", "manual"],
"description": "Rollback strategy if fix fails"
},
"steps": {
"type": "array",
"items": {"type": "string"},
"description": "Rollback steps"
}
},
"description": "Rollback plan if fix causes issues (optional, recommended for high severity)"
},
"estimated_time": {
"type": "string",
"description": "Total estimated fix time (e.g., '30 minutes', '2 hours')"
},
"recommended_execution": {
"type": "string",
"enum": ["Agent", "Codex"],
"description": "Recommended execution method based on complexity"
},
"severity": {
"type": "string",
"enum": ["Low", "Medium", "High", "Critical"],
"description": "Bug severity level"
},
"risk_level": {
"type": "string",
"enum": ["low", "medium", "high"],
"description": "Risk level of implementing the fix"
},
"_metadata": {
"type": "object",
"required": ["timestamp", "source"],
"properties": {
"timestamp": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of planning"
},
"source": {
"type": "string",
"enum": ["cli-lite-planning-agent", "direct-planning"],
"description": "Planning source"
},
"planning_mode": {
"type": "string",
"enum": ["direct", "agent-based"],
"description": "Planning execution mode"
},
"diagnosis_angles": {
"type": "array",
"items": {"type": "string"},
"description": "Diagnosis angles used for context"
},
"duration_seconds": {
"type": "integer",
"description": "Planning duration in seconds"
}
}
}
}
}

View File

@@ -1,170 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Issues JSONL Schema",
"description": "Schema for each line in issues.jsonl (flat storage)",
"type": "object",
"required": ["id", "title", "status", "created_at"],
"properties": {
"id": {
"type": "string",
"description": "Issue ID (GH-123, ISS-xxx, DSC-001)"
},
"title": {
"type": "string"
},
"status": {
"type": "string",
"enum": ["registered", "planning", "planned", "queued", "executing", "completed", "failed", "paused"],
"default": "registered"
},
"priority": {
"type": "integer",
"minimum": 1,
"maximum": 5,
"default": 3,
"description": "1=critical, 2=high, 3=medium, 4=low, 5=trivial"
},
"context": {
"type": "string",
"description": "Issue context/description (markdown)"
},
"source": {
"type": "string",
"enum": ["github", "text", "discovery"],
"description": "Source of the issue"
},
"source_url": {
"type": "string",
"description": "Original source URL (for GitHub issues)"
},
"tags": {
"type": "array",
"items": { "type": "string" },
"description": "Issue tags"
},
"extended_context": {
"type": "object",
"description": "Minimal extended context for planning hints",
"properties": {
"location": {
"type": "string",
"description": "file:line format (e.g., 'src/auth.ts:42')"
},
"suggested_fix": {
"type": "string",
"description": "Suggested remediation"
},
"notes": {
"type": "string",
"description": "Additional notes (user clarifications or discovery hints)"
}
}
},
"affected_components": {
"type": "array",
"items": { "type": "string" },
"description": "Files/modules affected"
},
"feedback": {
"type": "array",
"description": "Execution feedback history (failures, clarifications, rejections) for planning phase reference",
"items": {
"type": "object",
"required": ["type", "stage", "content", "created_at"],
"properties": {
"type": {
"type": "string",
"enum": ["failure", "clarification", "rejection"],
"description": "Type of feedback"
},
"stage": {
"type": "string",
"enum": ["new", "plan", "execute"],
"description": "Which stage the feedback occurred (new=creation, plan=planning, execute=execution)"
},
"content": {
"type": "string",
"description": "JSON string for failures (with solution_id, task_id, error_type, message, stack_trace) or plain text for clarifications/rejections"
},
"created_at": {
"type": "string",
"format": "date-time",
"description": "Timestamp when feedback was created"
}
}
}
},
"lifecycle_requirements": {
"type": "object",
"properties": {
"test_strategy": {
"type": "string",
"enum": ["unit", "integration", "e2e", "manual", "auto"]
},
"regression_scope": {
"type": "string",
"enum": ["affected", "related", "full"]
},
"acceptance_type": {
"type": "string",
"enum": ["automated", "manual", "both"]
},
"commit_strategy": {
"type": "string",
"enum": ["per-task", "squash", "atomic"]
}
}
},
"bound_solution_id": {
"type": "string",
"description": "ID of the bound solution (null if none bound)"
},
"solution_count": {
"type": "integer",
"default": 0,
"description": "Number of candidate solutions"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"updated_at": {
"type": "string",
"format": "date-time"
},
"planned_at": {
"type": "string",
"format": "date-time"
},
"completed_at": {
"type": "string",
"format": "date-time"
}
},
"examples": [
{
"id": "DSC-001",
"title": "Fix: SQLite connection pool memory leak",
"status": "registered",
"priority": 1,
"context": "Connection pool cleanup only happens when MAX_POOL_SIZE is reached...",
"source": "discovery",
"tags": ["bug", "resource-leak", "critical"],
"extended_context": {
"location": "storage/sqlite_store.py:59",
"suggested_fix": "Implement periodic cleanup or weak references",
"notes": null
},
"affected_components": ["storage/sqlite_store.py"],
"lifecycle_requirements": {
"test_strategy": "unit",
"regression_scope": "affected",
"acceptance_type": "automated",
"commit_strategy": "per-task"
},
"bound_solution_id": null,
"solution_count": 0,
"created_at": "2025-12-28T18:22:37Z"
}
]
}

View File

@@ -1,421 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Multi-CLI Discussion Artifact Schema",
"description": "Visualization-friendly output for multi-CLI collaborative discussion agent",
"type": "object",
"required": ["metadata", "discussionTopic", "relatedFiles", "planning", "decision", "decisionRecords"],
"properties": {
"metadata": {
"type": "object",
"required": ["artifactId", "roundId", "timestamp", "contributingAgents"],
"properties": {
"artifactId": {
"type": "string",
"description": "Unique ID for this artifact (e.g., 'MCP-auth-refactor-2026-01-13-round-1')"
},
"roundId": {
"type": "integer",
"minimum": 1,
"description": "Discussion round number"
},
"timestamp": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp"
},
"contributingAgents": {
"type": "array",
"items": {
"$ref": "#/definitions/AgentIdentifier"
},
"description": "Agents that contributed to this artifact"
},
"durationSeconds": {
"type": "integer",
"description": "Total duration in seconds"
},
"exportFormats": {
"type": "array",
"items": {
"type": "string",
"enum": ["markdown", "html"]
},
"description": "Supported export formats"
}
}
},
"discussionTopic": {
"type": "object",
"required": ["title", "description", "status"],
"properties": {
"title": {
"$ref": "#/definitions/I18nLabel"
},
"description": {
"$ref": "#/definitions/I18nLabel"
},
"scope": {
"type": "object",
"properties": {
"included": {
"type": "array",
"items": { "$ref": "#/definitions/I18nLabel" },
"description": "What's in scope"
},
"excluded": {
"type": "array",
"items": { "$ref": "#/definitions/I18nLabel" },
"description": "What's explicitly out of scope"
}
}
},
"keyQuestions": {
"type": "array",
"items": { "$ref": "#/definitions/I18nLabel" },
"description": "Questions being explored"
},
"status": {
"type": "string",
"enum": ["exploring", "analyzing", "debating", "decided", "blocked"],
"description": "Discussion status"
},
"tags": {
"type": "array",
"items": { "type": "string" },
"description": "Tags for filtering (e.g., ['auth', 'security', 'api'])"
}
}
},
"relatedFiles": {
"type": "object",
"properties": {
"fileTree": {
"type": "array",
"items": { "$ref": "#/definitions/FileNode" },
"description": "File tree structure"
},
"dependencyGraph": {
"type": "array",
"items": { "$ref": "#/definitions/DependencyEdge" },
"description": "Dependency relationships"
},
"impactSummary": {
"type": "array",
"items": { "$ref": "#/definitions/FileImpact" },
"description": "File impact summary"
}
}
},
"planning": {
"type": "object",
"properties": {
"functional": {
"type": "array",
"items": { "$ref": "#/definitions/Requirement" },
"description": "Functional requirements"
},
"nonFunctional": {
"type": "array",
"items": { "$ref": "#/definitions/Requirement" },
"description": "Non-functional requirements"
},
"acceptanceCriteria": {
"type": "array",
"items": { "$ref": "#/definitions/AcceptanceCriterion" },
"description": "Acceptance criteria"
}
}
},
"decision": {
"type": "object",
"required": ["status", "confidenceScore"],
"properties": {
"status": {
"type": "string",
"enum": ["pending", "decided", "conflict"],
"description": "Decision status"
},
"summary": {
"$ref": "#/definitions/I18nLabel"
},
"selectedSolution": {
"$ref": "#/definitions/Solution"
},
"rejectedAlternatives": {
"type": "array",
"items": { "$ref": "#/definitions/RejectedSolution" }
},
"confidenceScore": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Confidence score (0.0 to 1.0)"
}
}
},
"decisionRecords": {
"type": "object",
"properties": {
"timeline": {
"type": "array",
"items": { "$ref": "#/definitions/DecisionEvent" },
"description": "Timeline of decision events"
}
}
},
"_internal": {
"type": "object",
"description": "Internal analysis data (for debugging)",
"properties": {
"cli_analyses": {
"type": "array",
"items": { "$ref": "#/definitions/CLIAnalysis" }
},
"cross_verification": {
"$ref": "#/definitions/CrossVerification"
},
"convergence": {
"$ref": "#/definitions/ConvergenceMetrics"
}
}
}
},
"definitions": {
"I18nLabel": {
"type": "object",
"required": ["en", "zh"],
"properties": {
"en": { "type": "string" },
"zh": { "type": "string" }
},
"description": "Multi-language label for UI display"
},
"AgentIdentifier": {
"type": "object",
"required": ["name", "id"],
"properties": {
"name": {
"type": "string",
"enum": ["Gemini", "Codex", "Qwen", "Human", "System"]
},
"id": { "type": "string" }
}
},
"FileNode": {
"type": "object",
"required": ["path", "type"],
"properties": {
"path": { "type": "string" },
"type": {
"type": "string",
"enum": ["file", "directory"]
},
"modificationStatus": {
"type": "string",
"enum": ["added", "modified", "deleted", "unchanged"]
},
"impactScore": {
"type": "string",
"enum": ["critical", "high", "medium", "low"]
},
"children": {
"type": "array",
"items": { "$ref": "#/definitions/FileNode" }
},
"codeSnippet": { "$ref": "#/definitions/CodeSnippet" }
}
},
"DependencyEdge": {
"type": "object",
"required": ["source", "target", "relationship"],
"properties": {
"source": { "type": "string" },
"target": { "type": "string" },
"relationship": { "type": "string" }
}
},
"FileImpact": {
"type": "object",
"required": ["filePath", "score", "reasoning"],
"properties": {
"filePath": { "type": "string" },
"line": { "type": "integer" },
"score": {
"type": "string",
"enum": ["critical", "high", "medium", "low"]
},
"reasoning": { "$ref": "#/definitions/I18nLabel" }
}
},
"CodeSnippet": {
"type": "object",
"required": ["startLine", "endLine", "code"],
"properties": {
"startLine": { "type": "integer" },
"endLine": { "type": "integer" },
"code": { "type": "string" },
"language": { "type": "string" },
"comment": { "$ref": "#/definitions/I18nLabel" }
}
},
"Requirement": {
"type": "object",
"required": ["id", "description", "priority"],
"properties": {
"id": { "type": "string" },
"description": { "$ref": "#/definitions/I18nLabel" },
"priority": {
"type": "string",
"enum": ["critical", "high", "medium", "low"]
},
"source": { "type": "string" }
}
},
"AcceptanceCriterion": {
"type": "object",
"required": ["id", "description", "isMet"],
"properties": {
"id": { "type": "string" },
"description": { "$ref": "#/definitions/I18nLabel" },
"isMet": { "type": "boolean" }
}
},
"Solution": {
"type": "object",
"required": ["id", "title", "description"],
"properties": {
"id": { "type": "string" },
"title": { "$ref": "#/definitions/I18nLabel" },
"description": { "$ref": "#/definitions/I18nLabel" },
"pros": {
"type": "array",
"items": { "$ref": "#/definitions/I18nLabel" }
},
"cons": {
"type": "array",
"items": { "$ref": "#/definitions/I18nLabel" }
},
"estimatedEffort": { "$ref": "#/definitions/I18nLabel" },
"risk": {
"type": "string",
"enum": ["critical", "high", "medium", "low"]
},
"affectedFiles": {
"type": "array",
"items": { "$ref": "#/definitions/FileImpact" }
},
"sourceCLIs": {
"type": "array",
"items": { "type": "string" }
}
}
},
"RejectedSolution": {
"allOf": [
{ "$ref": "#/definitions/Solution" },
{
"type": "object",
"required": ["rejectionReason"],
"properties": {
"rejectionReason": { "$ref": "#/definitions/I18nLabel" }
}
}
]
},
"DecisionEvent": {
"type": "object",
"required": ["eventId", "timestamp", "type", "contributor", "summary"],
"properties": {
"eventId": { "type": "string" },
"timestamp": {
"type": "string",
"format": "date-time"
},
"type": {
"type": "string",
"enum": ["proposal", "argument", "agreement", "disagreement", "decision", "reversal"]
},
"contributor": { "$ref": "#/definitions/AgentIdentifier" },
"summary": { "$ref": "#/definitions/I18nLabel" },
"evidence": {
"type": "array",
"items": { "$ref": "#/definitions/Evidence" }
},
"reversibility": {
"type": "string",
"enum": ["easily_reversible", "requires_refactoring", "irreversible"]
}
}
},
"Evidence": {
"type": "object",
"required": ["type", "content", "description"],
"properties": {
"type": {
"type": "string",
"enum": ["link", "code_snippet", "log_output", "benchmark", "reference"]
},
"content": {},
"description": { "$ref": "#/definitions/I18nLabel" }
}
},
"CLIAnalysis": {
"type": "object",
"required": ["tool", "perspective", "feasibility_score"],
"properties": {
"tool": {
"type": "string",
"enum": ["gemini", "codex", "qwen"]
},
"perspective": { "type": "string" },
"feasibility_score": {
"type": "number",
"minimum": 0,
"maximum": 1
},
"findings": {
"type": "array",
"items": { "type": "string" }
},
"implementation_approaches": { "type": "array" },
"technical_concerns": {
"type": "array",
"items": { "type": "string" }
},
"code_locations": {
"type": "array",
"items": { "$ref": "#/definitions/FileImpact" }
}
}
},
"CrossVerification": {
"type": "object",
"properties": {
"agreements": {
"type": "array",
"items": { "type": "string" }
},
"disagreements": {
"type": "array",
"items": { "type": "string" }
},
"resolution": { "type": "string" }
}
},
"ConvergenceMetrics": {
"type": "object",
"properties": {
"score": {
"type": "number",
"minimum": 0,
"maximum": 1
},
"new_insights": { "type": "boolean" },
"recommendation": {
"type": "string",
"enum": ["continue", "converged", "user_input_needed"]
}
}
}
}
}

View File

@@ -1,444 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Plan Object Schema",
"description": "Implementation plan from cli-lite-planning-agent or direct planning",
"type": "object",
"required": [
"summary",
"approach",
"tasks",
"estimated_time",
"recommended_execution",
"complexity",
"_metadata"
],
"properties": {
"summary": {
"type": "string",
"description": "2-3 sentence overview of the implementation plan"
},
"approach": {
"type": "string",
"description": "High-level implementation strategy and methodology"
},
"tasks": {
"type": "array",
"minItems": 1,
"maxItems": 10,
"items": {
"type": "object",
"required": ["id", "title", "scope", "action", "description", "implementation", "acceptance"],
"properties": {
"id": {
"type": "string",
"pattern": "^T[0-9]+$",
"description": "Task identifier (T1, T2, T3...)"
},
"title": {
"type": "string",
"description": "Task title (action verb + target module/feature)"
},
"scope": {
"type": "string",
"description": "Task scope: module path (src/auth/), feature name, or single file. Prefer module/feature level over single file."
},
"file": {
"type": "string",
"description": "Primary file (deprecated, use scope + modification_points instead)"
},
"action": {
"type": "string",
"enum": ["Create", "Update", "Implement", "Refactor", "Add", "Delete", "Configure", "Test", "Fix"],
"description": "Primary action type"
},
"description": {
"type": "string",
"description": "What to implement (1-2 sentences)"
},
"modification_points": {
"type": "array",
"minItems": 1,
"items": {
"type": "object",
"required": ["file", "target", "change"],
"properties": {
"file": {
"type": "string",
"description": "File path within scope"
},
"target": {
"type": "string",
"description": "Function/class/line range (e.g., 'validateToken:45-60')"
},
"change": {
"type": "string",
"description": "Brief description of change"
}
}
},
"description": "All modification points for this task. Group related changes (same feature/module) into one task with multiple modification_points."
},
"implementation": {
"type": "array",
"items": {"type": "string"},
"minItems": 2,
"maxItems": 7,
"description": "Step-by-step implementation guide"
},
"reference": {
"type": "object",
"properties": {
"pattern": {
"type": "string",
"description": "Pattern name to follow"
},
"files": {
"type": "array",
"items": {"type": "string"},
"description": "Reference file paths to study"
},
"examples": {
"type": "string",
"description": "Specific guidance or example references"
}
},
"description": "Reference materials for implementation (optional)"
},
"acceptance": {
"type": "array",
"items": {"type": "string"},
"minItems": 1,
"maxItems": 4,
"description": "Verification criteria (quantified, testable)"
},
"depends_on": {
"type": "array",
"items": {
"type": "string",
"pattern": "^T[0-9]+$"
},
"description": "Task IDs this task depends on (e.g., ['T1', 'T2'])"
},
"cli_execution_id": {
"type": "string",
"pattern": "^[a-zA-Z0-9_-]+$",
"description": "Fixed CLI execution ID for this task (e.g., 'session-T1', 'IMPL-001-analysis')"
},
"cli_execution": {
"type": "object",
"properties": {
"strategy": {
"type": "string",
"enum": ["new", "resume", "fork", "merge_fork"],
"description": "CLI execution strategy: new (no deps), resume (1 dep, continue), fork (1 dep, branch), merge_fork (N deps, combine)"
},
"resume_from": {
"type": "string",
"description": "Parent task's cli_execution_id (for resume/fork strategies)"
},
"merge_from": {
"type": "array",
"items": {"type": "string"},
"description": "Multiple parents' cli_execution_ids (for merge_fork strategy)"
}
},
"description": "CLI execution strategy based on task dependencies"
},
"rationale": {
"type": "object",
"properties": {
"chosen_approach": {
"type": "string",
"description": "The selected implementation approach and why it was chosen"
},
"alternatives_considered": {
"type": "array",
"items": {"type": "string"},
"description": "Alternative approaches that were considered but not chosen"
},
"decision_factors": {
"type": "array",
"items": {"type": "string"},
"description": "Key factors that influenced the decision (performance, maintainability, cost, etc.)"
},
"tradeoffs": {
"type": "string",
"description": "Known tradeoffs of the chosen approach"
}
},
"description": "Design rationale explaining WHY this approach was chosen (required for Medium/High complexity)"
},
"verification": {
"type": "object",
"properties": {
"unit_tests": {
"type": "array",
"items": {"type": "string"},
"description": "List of unit test names/descriptions to create"
},
"integration_tests": {
"type": "array",
"items": {"type": "string"},
"description": "List of integration test names/descriptions to create"
},
"manual_checks": {
"type": "array",
"items": {"type": "string"},
"description": "Manual verification steps with specific actions"
},
"success_metrics": {
"type": "array",
"items": {"type": "string"},
"description": "Quantified metrics for success (e.g., 'Response time <200ms', 'Coverage >80%')"
}
},
"description": "Detailed verification steps beyond acceptance criteria (required for Medium/High complexity)"
},
"risks": {
"type": "array",
"items": {
"type": "object",
"required": ["description", "probability", "impact", "mitigation"],
"properties": {
"description": {
"type": "string",
"description": "Description of the risk"
},
"probability": {
"type": "string",
"enum": ["Low", "Medium", "High"],
"description": "Likelihood of the risk occurring"
},
"impact": {
"type": "string",
"enum": ["Low", "Medium", "High"],
"description": "Impact severity if the risk occurs"
},
"mitigation": {
"type": "string",
"description": "Strategy to mitigate or prevent the risk"
},
"fallback": {
"type": "string",
"description": "Alternative approach if mitigation fails"
}
}
},
"description": "Risk assessment and mitigation strategies (required for High complexity)"
},
"code_skeleton": {
"type": "object",
"properties": {
"interfaces": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {"type": "string"},
"definition": {"type": "string"},
"purpose": {"type": "string"}
}
},
"description": "Key interface/type definitions"
},
"key_functions": {
"type": "array",
"items": {
"type": "object",
"properties": {
"signature": {"type": "string"},
"purpose": {"type": "string"},
"returns": {"type": "string"}
}
},
"description": "Critical function signatures"
},
"classes": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {"type": "string"},
"purpose": {"type": "string"},
"methods": {
"type": "array",
"items": {"type": "string"}
}
}
},
"description": "Key class structures"
}
},
"description": "Code skeleton with interface/function signatures (required for High complexity)"
}
}
},
"description": "Structured task breakdown (1-10 tasks)"
},
"data_flow": {
"type": "object",
"properties": {
"diagram": {
"type": "string",
"description": "ASCII/text representation of data flow (e.g., 'A → B → C')"
},
"stages": {
"type": "array",
"items": {
"type": "object",
"required": ["stage", "input", "output", "component"],
"properties": {
"stage": {
"type": "string",
"description": "Stage name (e.g., 'Extraction', 'Processing', 'Storage')"
},
"input": {
"type": "string",
"description": "Input data format/type"
},
"output": {
"type": "string",
"description": "Output data format/type"
},
"component": {
"type": "string",
"description": "Component/module handling this stage"
},
"transformations": {
"type": "array",
"items": {"type": "string"},
"description": "Data transformations applied in this stage"
}
}
},
"description": "Detailed data flow stages"
},
"dependencies": {
"type": "array",
"items": {"type": "string"},
"description": "External dependencies or data sources"
}
},
"description": "Global data flow design showing how data moves through the system (required for High complexity)"
},
"design_decisions": {
"type": "array",
"items": {
"type": "object",
"required": ["decision", "rationale"],
"properties": {
"decision": {
"type": "string",
"description": "The design decision made"
},
"rationale": {
"type": "string",
"description": "Why this decision was made"
},
"tradeoff": {
"type": "string",
"description": "What was traded off for this decision"
},
"alternatives": {
"type": "array",
"items": {"type": "string"},
"description": "Alternatives that were considered"
}
}
},
"description": "Global design decisions that affect the entire plan"
},
"flow_control": {
"type": "object",
"properties": {
"execution_order": {
"type": "array",
"items": {
"type": "object",
"properties": {
"phase": {
"type": "string",
"description": "Phase name (e.g., 'parallel-1', 'sequential-1')"
},
"tasks": {
"type": "array",
"items": {"type": "string"},
"description": "Task IDs in this phase"
},
"type": {
"type": "string",
"enum": ["parallel", "sequential"],
"description": "Execution type"
}
}
},
"description": "Ordered execution phases"
},
"exit_conditions": {
"type": "object",
"properties": {
"success": {
"type": "string",
"description": "Condition for successful completion"
},
"failure": {
"type": "string",
"description": "Condition that indicates failure"
}
},
"description": "Conditions for workflow termination"
}
},
"description": "Execution flow control (optional, auto-inferred from depends_on if not provided)"
},
"focus_paths": {
"type": "array",
"items": {"type": "string"},
"description": "Key file paths affected by this plan (aggregated from tasks)"
},
"estimated_time": {
"type": "string",
"description": "Total estimated implementation time (e.g., '30 minutes', '2 hours')"
},
"recommended_execution": {
"type": "string",
"enum": ["Agent", "Codex"],
"description": "Recommended execution method based on complexity"
},
"complexity": {
"type": "string",
"enum": ["Low", "Medium", "High"],
"description": "Task complexity level"
},
"_metadata": {
"type": "object",
"required": ["timestamp", "source"],
"properties": {
"timestamp": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of planning"
},
"source": {
"type": "string",
"enum": ["cli-lite-planning-agent", "direct-planning"],
"description": "Planning source"
},
"planning_mode": {
"type": "string",
"enum": ["direct", "agent-based"],
"description": "Planning execution mode"
},
"exploration_angles": {
"type": "array",
"items": {"type": "string"},
"description": "Exploration angles used for context"
},
"duration_seconds": {
"type": "integer",
"description": "Planning duration in seconds"
}
}
}
}
}

View File

@@ -1,51 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Plan Verification Agent Schema",
"description": "Defines dimensions, severity rules, and CLI templates for plan verification agent",
"dimensions": {
"A": { "name": "User Intent Alignment", "tier": 1, "severity": "CRITICAL",
"checks": ["Goal Alignment", "Scope Drift", "Success Criteria Match", "Intent Conflicts"] },
"B": { "name": "Requirements Coverage", "tier": 1, "severity": "CRITICAL",
"checks": ["Orphaned Requirements", "Unmapped Tasks", "NFR Coverage Gaps"] },
"C": { "name": "Consistency Validation", "tier": 1, "severity": "CRITICAL",
"checks": ["Requirement Conflicts", "Architecture Drift", "Terminology Drift", "Data Model Inconsistency"] },
"D": { "name": "Dependency Integrity", "tier": 2, "severity": "HIGH",
"checks": ["Circular Dependencies", "Missing Dependencies", "Broken Dependencies", "Logical Ordering"] },
"E": { "name": "Synthesis Alignment", "tier": 2, "severity": "HIGH",
"checks": ["Priority Conflicts", "Success Criteria Mismatch", "Risk Mitigation Gaps"] },
"F": { "name": "Task Specification Quality", "tier": 3, "severity": "MEDIUM",
"checks": ["Ambiguous Focus Paths", "Underspecified Acceptance", "Missing Artifacts", "Weak Flow Control"] },
"G": { "name": "Duplication Detection", "tier": 4, "severity": "LOW",
"checks": ["Overlapping Task Scope", "Redundant Coverage"] },
"H": { "name": "Feasibility Assessment", "tier": 4, "severity": "LOW",
"checks": ["Complexity Misalignment", "Resource Conflicts", "Skill Gap Risks"] },
"I": { "name": "Constraints Compliance", "tier": 1, "severity": "CRITICAL",
"checks": ["Consolidated Constraints Violation", "Phase Constraint Ignored", "User Constraint Override"] },
"J": { "name": "N+1 Context Validation", "tier": 2, "severity": "HIGH",
"checks": ["Deferred Item Included", "Decision Contradiction", "Revisit Flag Ignored"] }
},
"tiers": {
"1": { "dimensions": ["A", "B", "C", "I"], "priority": "CRITICAL", "limit": null, "rule": "analysis-review-architecture" },
"2": { "dimensions": ["D", "E", "J"], "priority": "HIGH", "limit": 15, "rule": "analysis-diagnose-bug-root-cause" },
"3": { "dimensions": ["F"], "priority": "MEDIUM", "limit": 20, "rule": "analysis-analyze-code-patterns" },
"4": { "dimensions": ["G", "H"], "priority": "LOW", "limit": 15, "rule": "analysis-analyze-code-patterns" }
},
"severity_rules": {
"CRITICAL": ["User intent violation", "Synthesis authority violation", "Zero coverage", "Circular/broken deps", "Constraint violation"],
"HIGH": ["NFR gaps", "Priority conflicts", "Missing risk mitigation", "Deferred item included", "Decision contradiction"],
"MEDIUM": ["Terminology drift", "Missing refs", "Weak flow control"],
"LOW": ["Style improvements", "Minor redundancy"]
},
"quality_gate": {
"BLOCK_EXECUTION": { "condition": "critical > 0", "emoji": "🛑" },
"PROCEED_WITH_FIXES": { "condition": "critical == 0 && high > 0", "emoji": "⚠️" },
"PROCEED_WITH_CAUTION": { "condition": "critical == 0 && high == 0 && medium > 0", "emoji": "✅" },
"PROCEED": { "condition": "only low or none", "emoji": "✅" }
},
"token_budget": { "total_findings": 50, "early_exit": "CRITICAL > 0 in Tier 1 → skip Tier 3-4" }
}

View File

@@ -1,141 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Project Guidelines Schema",
"description": "Schema for project-guidelines.json - user-maintained rules and constraints",
"type": "object",
"required": ["conventions", "constraints", "_metadata"],
"properties": {
"conventions": {
"type": "object",
"description": "Coding conventions and standards",
"required": ["coding_style", "naming_patterns", "file_structure", "documentation"],
"properties": {
"coding_style": {
"type": "array",
"items": { "type": "string" },
"description": "Coding style rules (e.g., 'Use strict TypeScript mode', 'Prefer const over let')"
},
"naming_patterns": {
"type": "array",
"items": { "type": "string" },
"description": "Naming conventions (e.g., 'Use camelCase for variables', 'Use PascalCase for components')"
},
"file_structure": {
"type": "array",
"items": { "type": "string" },
"description": "File organization rules (e.g., 'One component per file', 'Tests alongside source files')"
},
"documentation": {
"type": "array",
"items": { "type": "string" },
"description": "Documentation requirements (e.g., 'JSDoc for public APIs', 'README for each module')"
}
}
},
"constraints": {
"type": "object",
"description": "Technical constraints and boundaries",
"required": ["architecture", "tech_stack", "performance", "security"],
"properties": {
"architecture": {
"type": "array",
"items": { "type": "string" },
"description": "Architecture constraints (e.g., 'No circular dependencies', 'Services must be stateless')"
},
"tech_stack": {
"type": "array",
"items": { "type": "string" },
"description": "Technology constraints (e.g., 'No new dependencies without review', 'Use native fetch over axios')"
},
"performance": {
"type": "array",
"items": { "type": "string" },
"description": "Performance requirements (e.g., 'API response < 200ms', 'Bundle size < 500KB')"
},
"security": {
"type": "array",
"items": { "type": "string" },
"description": "Security requirements (e.g., 'Sanitize all user input', 'No secrets in code')"
}
}
},
"quality_rules": {
"type": "array",
"description": "Enforceable quality rules",
"items": {
"type": "object",
"required": ["rule", "scope"],
"properties": {
"rule": {
"type": "string",
"description": "The quality rule statement"
},
"scope": {
"type": "string",
"description": "Where the rule applies (e.g., 'all', 'src/**', 'tests/**')"
},
"enforced_by": {
"type": "string",
"description": "How the rule is enforced (e.g., 'eslint', 'pre-commit', 'code-review')"
}
}
}
},
"learnings": {
"type": "array",
"description": "Project learnings captured from workflow sessions",
"items": {
"type": "object",
"required": ["date", "insight"],
"properties": {
"date": {
"type": "string",
"format": "date",
"description": "Date the learning was captured (YYYY-MM-DD)"
},
"session_id": {
"type": "string",
"description": "WFS session ID where the learning originated"
},
"insight": {
"type": "string",
"description": "The learning or insight captured"
},
"context": {
"type": "string",
"description": "Additional context about when/why this learning applies"
},
"category": {
"type": "string",
"enum": ["architecture", "performance", "security", "testing", "workflow", "other"],
"description": "Category of the learning"
}
}
}
},
"_metadata": {
"type": "object",
"required": ["created_at", "version"],
"properties": {
"created_at": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of creation"
},
"version": {
"type": "string",
"description": "Schema version (e.g., '1.0.0')"
},
"last_updated": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of last update"
},
"updated_by": {
"type": "string",
"description": "Who/what last updated the file (e.g., 'user', 'workflow:session:solidify')"
}
}
}
}
}

View File

@@ -1,221 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Project Tech Schema",
"description": "Schema for project-tech.json - auto-generated technical analysis (stack, architecture, components)",
"type": "object",
"required": [
"project_name",
"initialized_at",
"overview",
"features",
"statistics",
"_metadata"
],
"properties": {
"project_name": {
"type": "string",
"description": "Project name extracted from git repo or directory"
},
"initialized_at": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of initialization"
},
"overview": {
"type": "object",
"required": [
"description",
"technology_stack",
"architecture",
"key_components"
],
"properties": {
"description": {
"type": "string",
"description": "Brief project description (e.g., 'TypeScript web application with React frontend')"
},
"technology_stack": {
"type": "object",
"required": ["languages", "frameworks", "build_tools", "test_frameworks"],
"properties": {
"languages": {
"type": "array",
"items": {
"type": "object",
"required": ["name", "file_count", "primary"],
"properties": {
"name": {
"type": "string",
"description": "Language name (e.g., TypeScript, Python)"
},
"file_count": {
"type": "integer",
"description": "Number of source files in this language"
},
"primary": {
"type": "boolean",
"description": "True if this is the primary language"
}
}
}
},
"frameworks": {
"type": "array",
"items": {"type": "string"},
"description": "Detected frameworks (React, Express, Django, etc.)"
},
"build_tools": {
"type": "array",
"items": {"type": "string"},
"description": "Build tools and package managers (npm, cargo, maven, etc.)"
},
"test_frameworks": {
"type": "array",
"items": {"type": "string"},
"description": "Testing frameworks (jest, pytest, go test, etc.)"
}
}
},
"architecture": {
"type": "object",
"required": ["style", "layers", "patterns"],
"properties": {
"style": {
"type": "string",
"description": "Architecture style (MVC, microservices, layered, etc.)"
},
"layers": {
"type": "array",
"items": {"type": "string"},
"description": "Architectural layers (presentation, business-logic, data-access)"
},
"patterns": {
"type": "array",
"items": {"type": "string"},
"description": "Design patterns (Repository, Factory, Singleton, etc.)"
}
}
},
"key_components": {
"type": "array",
"items": {
"type": "object",
"required": ["name", "path", "description", "importance"],
"properties": {
"name": {
"type": "string",
"description": "Component name"
},
"path": {
"type": "string",
"description": "Relative path to component directory"
},
"description": {
"type": "string",
"description": "Brief description of component functionality"
},
"importance": {
"type": "string",
"enum": ["high", "medium", "low"],
"description": "Component importance level"
}
}
},
"description": "5-10 core modules/components"
}
}
},
"features": {
"type": "array",
"items": {
"type": "object",
"required": ["session_id", "title", "completed_at", "tags"],
"properties": {
"session_id": {
"type": "string",
"description": "WFS session identifier"
},
"title": {
"type": "string",
"description": "Feature title/description"
},
"completed_at": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of completion"
},
"tags": {
"type": "array",
"items": {"type": "string"},
"description": "Feature tags for categorization"
}
}
},
"description": "Completed workflow features (populated by /workflow:session:complete)"
},
"development_index": {
"type": "object",
"description": "Categorized development history (lite-plan/lite-execute)",
"properties": {
"feature": { "type": "array", "items": { "$ref": "#/$defs/devIndexEntry" } },
"enhancement": { "type": "array", "items": { "$ref": "#/$defs/devIndexEntry" } },
"bugfix": { "type": "array", "items": { "$ref": "#/$defs/devIndexEntry" } },
"refactor": { "type": "array", "items": { "$ref": "#/$defs/devIndexEntry" } },
"docs": { "type": "array", "items": { "$ref": "#/$defs/devIndexEntry" } }
}
},
"statistics": {
"type": "object",
"required": ["total_features", "total_sessions", "last_updated"],
"properties": {
"total_features": {
"type": "integer",
"description": "Count of completed features"
},
"total_sessions": {
"type": "integer",
"description": "Count of workflow sessions"
},
"last_updated": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of last update"
}
}
},
"_metadata": {
"type": "object",
"required": ["initialized_by", "analysis_timestamp", "analysis_mode"],
"properties": {
"initialized_by": {
"type": "string",
"description": "Agent or tool that performed initialization"
},
"analysis_timestamp": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of analysis"
},
"analysis_mode": {
"type": "string",
"enum": ["deep-scan", "quick-scan", "bash-fallback"],
"description": "Analysis mode used"
}
}
}
},
"$defs": {
"devIndexEntry": {
"type": "object",
"required": ["title", "sub_feature", "date", "description", "status"],
"properties": {
"title": { "type": "string", "maxLength": 60 },
"sub_feature": { "type": "string", "description": "Module/component area" },
"date": { "type": "string", "format": "date" },
"description": { "type": "string", "maxLength": 100 },
"status": { "type": "string", "enum": ["completed", "partial"] },
"session_id": { "type": "string", "description": "lite-plan session ID" }
}
}
}
}

View File

@@ -1,248 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Issue Execution Queue Schema",
"description": "Execution queue supporting both task-level (T-N) and solution-level (S-N) granularity",
"type": "object",
"properties": {
"id": {
"type": "string",
"pattern": "^QUE-[0-9]{8}-[0-9]{6}$",
"description": "Queue ID in format QUE-YYYYMMDD-HHMMSS"
},
"status": {
"type": "string",
"enum": ["active", "paused", "completed", "archived"],
"default": "active"
},
"issue_ids": {
"type": "array",
"items": { "type": "string" },
"description": "Issues included in this queue"
},
"solutions": {
"type": "array",
"description": "Solution-level queue items (preferred for new queues)",
"items": {
"$ref": "#/definitions/solutionItem"
}
},
"tasks": {
"type": "array",
"description": "Task-level queue items (legacy format)",
"items": {
"$ref": "#/definitions/taskItem"
}
},
"conflicts": {
"type": "array",
"description": "Detected conflicts between items",
"items": {
"$ref": "#/definitions/conflict"
}
},
"execution_groups": {
"type": "array",
"description": "Parallel/Sequential execution groups",
"items": {
"$ref": "#/definitions/executionGroup"
}
},
"_metadata": {
"type": "object",
"properties": {
"version": { "type": "string", "default": "2.0" },
"queue_type": {
"type": "string",
"enum": ["solution", "task"],
"description": "Queue granularity level"
},
"total_solutions": { "type": "integer" },
"total_tasks": { "type": "integer" },
"pending_count": { "type": "integer" },
"ready_count": { "type": "integer" },
"executing_count": { "type": "integer" },
"completed_count": { "type": "integer" },
"failed_count": { "type": "integer" },
"last_queue_formation": { "type": "string", "format": "date-time" },
"last_updated": { "type": "string", "format": "date-time" }
}
}
},
"definitions": {
"solutionItem": {
"type": "object",
"required": ["item_id", "issue_id", "solution_id", "status", "task_count", "files_touched"],
"properties": {
"item_id": {
"type": "string",
"pattern": "^S-[0-9]+$",
"description": "Solution-level queue item ID (S-1, S-2, ...)"
},
"issue_id": {
"type": "string",
"description": "Source issue ID"
},
"solution_id": {
"type": "string",
"description": "Bound solution ID"
},
"status": {
"type": "string",
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
"default": "pending"
},
"task_count": {
"type": "integer",
"minimum": 1,
"description": "Number of tasks in this solution"
},
"files_touched": {
"type": "array",
"items": { "type": "string" },
"description": "All files modified by this solution"
},
"execution_order": {
"type": "integer",
"description": "Order in execution sequence"
},
"execution_group": {
"type": "string",
"description": "Parallel (P*) or Sequential (S*) group ID"
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"description": "Solution IDs this item depends on"
},
"semantic_priority": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Semantic importance score (0.0-1.0)"
},
"queued_at": { "type": "string", "format": "date-time" },
"started_at": { "type": "string", "format": "date-time" },
"completed_at": { "type": "string", "format": "date-time" },
"result": {
"type": "object",
"properties": {
"summary": { "type": "string" },
"files_modified": { "type": "array", "items": { "type": "string" } },
"tasks_completed": { "type": "integer" },
"commit_hashes": { "type": "array", "items": { "type": "string" } }
}
},
"failure_reason": { "type": "string" }
}
},
"taskItem": {
"type": "object",
"required": ["item_id", "issue_id", "solution_id", "task_id", "status"],
"properties": {
"item_id": {
"type": "string",
"pattern": "^T-[0-9]+$",
"description": "Task-level queue item ID (T-1, T-2, ...)"
},
"issue_id": { "type": "string" },
"solution_id": { "type": "string" },
"task_id": { "type": "string" },
"status": {
"type": "string",
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
"default": "pending"
},
"execution_order": { "type": "integer" },
"execution_group": { "type": "string" },
"depends_on": { "type": "array", "items": { "type": "string" } },
"semantic_priority": { "type": "number", "minimum": 0, "maximum": 1 },
"queued_at": { "type": "string", "format": "date-time" },
"started_at": { "type": "string", "format": "date-time" },
"completed_at": { "type": "string", "format": "date-time" },
"result": {
"type": "object",
"properties": {
"files_modified": { "type": "array", "items": { "type": "string" } },
"files_created": { "type": "array", "items": { "type": "string" } },
"summary": { "type": "string" },
"commit_hash": { "type": "string" }
}
},
"failure_reason": { "type": "string" }
}
},
"conflict": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["file_conflict", "dependency_conflict", "resource_conflict"]
},
"file": {
"type": "string",
"description": "Conflicting file path"
},
"solutions": {
"type": "array",
"items": { "type": "string" },
"description": "Solution IDs involved (for solution-level queues)"
},
"tasks": {
"type": "array",
"items": { "type": "string" },
"description": "Task IDs involved (for task-level queues)"
},
"resolution": {
"type": "string",
"enum": ["sequential", "merge", "manual"]
},
"resolution_order": {
"type": "array",
"items": { "type": "string" },
"description": "Execution order to resolve conflict"
},
"rationale": {
"type": "string",
"description": "Explanation of resolution decision"
},
"resolved": {
"type": "boolean",
"default": false
}
}
},
"executionGroup": {
"type": "object",
"required": ["id", "type"],
"properties": {
"id": {
"type": "string",
"pattern": "^[PS][0-9]+$",
"description": "Group ID (P1, P2 for parallel, S1, S2 for sequential)"
},
"type": {
"type": "string",
"enum": ["parallel", "sequential"]
},
"solutions": {
"type": "array",
"items": { "type": "string" },
"description": "Solution IDs in this group"
},
"tasks": {
"type": "array",
"items": { "type": "string" },
"description": "Task IDs in this group (legacy)"
},
"solution_count": {
"type": "integer",
"description": "Number of solutions in group"
},
"task_count": {
"type": "integer",
"description": "Number of tasks in group (legacy)"
}
}
}
}
}

View File

@@ -1,94 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Issue Registry Schema",
"description": "Global registry of all issues and their solutions",
"type": "object",
"properties": {
"issues": {
"type": "array",
"description": "List of registered issues",
"items": {
"type": "object",
"required": ["id", "title", "status", "created_at"],
"properties": {
"id": {
"type": "string",
"description": "Issue ID (e.g., GH-123, TEXT-xxx)"
},
"title": {
"type": "string"
},
"status": {
"type": "string",
"enum": ["registered", "planning", "planned", "queued", "executing", "completed", "failed", "paused"],
"default": "registered"
},
"priority": {
"type": "integer",
"minimum": 1,
"maximum": 5,
"default": 3
},
"solution_count": {
"type": "integer",
"default": 0,
"description": "Number of candidate solutions"
},
"bound_solution_id": {
"type": "string",
"description": "ID of the bound solution (null if none bound)"
},
"source": {
"type": "string",
"enum": ["github", "text", "file"],
"description": "Source of the issue"
},
"source_url": {
"type": "string",
"description": "Original source URL (for GitHub issues)"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"updated_at": {
"type": "string",
"format": "date-time"
},
"planned_at": {
"type": "string",
"format": "date-time"
},
"queued_at": {
"type": "string",
"format": "date-time"
},
"completed_at": {
"type": "string",
"format": "date-time"
}
}
}
},
"_metadata": {
"type": "object",
"properties": {
"version": { "type": "string", "default": "1.0" },
"total_issues": { "type": "integer" },
"by_status": {
"type": "object",
"properties": {
"registered": { "type": "integer" },
"planning": { "type": "integer" },
"planned": { "type": "integer" },
"queued": { "type": "integer" },
"executing": { "type": "integer" },
"completed": { "type": "integer" },
"failed": { "type": "integer" }
}
},
"last_updated": { "type": "string", "format": "date-time" }
}
}
}
}

View File

@@ -1,166 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Issue Solution Schema",
"description": "Schema for solution registered to an issue",
"type": "object",
"required": ["id", "tasks", "is_bound", "created_at"],
"properties": {
"id": {
"type": "string",
"description": "Unique solution identifier: SOL-{issue-id}-{4-char-uid} where uid is 4 alphanumeric chars",
"pattern": "^SOL-.+-[a-z0-9]{4}$",
"examples": ["SOL-GH-123-a7x9", "SOL-ISS-20251229-001-b2k4"]
},
"description": {
"type": "string",
"description": "High-level summary of the solution"
},
"approach": {
"type": "string",
"description": "Technical approach or strategy"
},
"tasks": {
"type": "array",
"description": "Task breakdown for this solution",
"items": {
"type": "object",
"required": ["id", "title", "scope", "action", "implementation", "acceptance"],
"properties": {
"id": {
"type": "string",
"pattern": "^T[0-9]+$"
},
"title": {
"type": "string",
"description": "Action verb + target"
},
"scope": {
"type": "string",
"description": "Module path or feature area"
},
"action": {
"type": "string",
"enum": ["Create", "Update", "Implement", "Refactor", "Add", "Delete", "Configure", "Test", "Fix"]
},
"description": {
"type": "string",
"description": "1-2 sentences describing what to implement"
},
"modification_points": {
"type": "array",
"items": {
"type": "object",
"properties": {
"file": { "type": "string" },
"target": { "type": "string" },
"change": { "type": "string" }
}
}
},
"implementation": {
"type": "array",
"items": { "type": "string" },
"description": "Step-by-step implementation guide"
},
"test": {
"type": "object",
"description": "Test requirements",
"properties": {
"unit": { "type": "array", "items": { "type": "string" } },
"integration": { "type": "array", "items": { "type": "string" } },
"commands": { "type": "array", "items": { "type": "string" } },
"coverage_target": { "type": "number" }
}
},
"regression": {
"type": "array",
"items": { "type": "string" },
"description": "Regression check points"
},
"acceptance": {
"type": "object",
"description": "Acceptance criteria & verification",
"required": ["criteria", "verification"],
"properties": {
"criteria": { "type": "array", "items": { "type": "string" } },
"verification": { "type": "array", "items": { "type": "string" } },
"manual_checks": { "type": "array", "items": { "type": "string" } }
}
},
"commit": {
"type": "object",
"description": "Commit specification",
"properties": {
"type": { "type": "string", "enum": ["feat", "fix", "refactor", "test", "docs", "chore"] },
"scope": { "type": "string" },
"message_template": { "type": "string" },
"breaking": { "type": "boolean" }
}
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"default": [],
"description": "Task IDs this task depends on"
},
"estimated_minutes": {
"type": "integer",
"description": "Estimated time to complete"
},
"status": {
"type": "string",
"description": "Task status (optional, for tracking)"
},
"priority": {
"type": "integer",
"minimum": 1,
"maximum": 5,
"default": 3
}
}
}
},
"exploration_context": {
"type": "object",
"description": "ACE exploration results",
"properties": {
"project_structure": { "type": "string" },
"relevant_files": {
"type": "array",
"items": { "type": "string" }
},
"patterns": { "type": "string" },
"integration_points": { "type": "string" }
}
},
"analysis": {
"type": "object",
"description": "Solution risk assessment",
"properties": {
"risk": { "type": "string", "enum": ["low", "medium", "high"] },
"impact": { "type": "string", "enum": ["low", "medium", "high"] },
"complexity": { "type": "string", "enum": ["low", "medium", "high"] }
}
},
"score": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Solution quality score (0.0-1.0)"
},
"is_bound": {
"type": "boolean",
"default": false,
"description": "Whether this solution is bound to the issue"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"bound_at": {
"type": "string",
"format": "date-time",
"description": "When this solution was bound to the issue"
}
}
}

View File

@@ -1,158 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Plan Verification Findings Schema",
"description": "Schema for plan verification findings output from cli-explore-agent",
"type": "object",
"required": [
"session_id",
"timestamp",
"verification_tiers_completed",
"findings",
"summary"
],
"properties": {
"session_id": {
"type": "string",
"description": "Workflow session ID (e.g., WFS-20250127-143000)",
"pattern": "^WFS-[0-9]{8}-[0-9]{6}$"
},
"timestamp": {
"type": "string",
"description": "ISO 8601 timestamp when verification was completed",
"format": "date-time"
},
"verification_tiers_completed": {
"type": "array",
"description": "List of verification tiers completed (e.g., ['Tier 1', 'Tier 2'])",
"items": {
"type": "string",
"enum": ["Tier 1", "Tier 2", "Tier 3", "Tier 4"]
},
"minItems": 1,
"maxItems": 4
},
"findings": {
"type": "array",
"description": "Array of all findings across all dimensions",
"items": {
"type": "object",
"required": [
"id",
"dimension",
"dimension_name",
"severity",
"location",
"summary",
"recommendation"
],
"properties": {
"id": {
"type": "string",
"description": "Unique finding ID prefixed by severity (C1, H1, M1, L1)",
"pattern": "^[CHML][0-9]+$"
},
"dimension": {
"type": "string",
"description": "Verification dimension identifier",
"enum": ["A", "B", "C", "D", "E", "F", "G", "H"]
},
"dimension_name": {
"type": "string",
"description": "Human-readable dimension name",
"enum": [
"User Intent Alignment",
"Requirements Coverage Analysis",
"Consistency Validation",
"Dependency Integrity",
"Synthesis Alignment",
"Task Specification Quality",
"Duplication Detection",
"Feasibility Assessment"
]
},
"severity": {
"type": "string",
"description": "Severity level of the finding",
"enum": ["CRITICAL", "HIGH", "MEDIUM", "LOW"]
},
"location": {
"type": "array",
"description": "Array of locations where issue was found (e.g., 'IMPL_PLAN.md:L45', 'task:IMPL-1.2', 'synthesis:FR-03')",
"items": {
"type": "string"
},
"minItems": 1
},
"summary": {
"type": "string",
"description": "Concise summary of the issue (1-2 sentences)",
"minLength": 10,
"maxLength": 500
},
"recommendation": {
"type": "string",
"description": "Actionable recommendation to resolve the issue",
"minLength": 10,
"maxLength": 500
}
}
}
},
"summary": {
"type": "object",
"description": "Aggregate summary of verification results",
"required": [
"critical_count",
"high_count",
"medium_count",
"low_count",
"total_findings",
"coverage_percentage",
"recommendation"
],
"properties": {
"critical_count": {
"type": "integer",
"description": "Number of critical severity findings",
"minimum": 0
},
"high_count": {
"type": "integer",
"description": "Number of high severity findings",
"minimum": 0
},
"medium_count": {
"type": "integer",
"description": "Number of medium severity findings",
"minimum": 0
},
"low_count": {
"type": "integer",
"description": "Number of low severity findings",
"minimum": 0
},
"total_findings": {
"type": "integer",
"description": "Total number of findings",
"minimum": 0
},
"coverage_percentage": {
"type": "number",
"description": "Percentage of synthesis requirements covered by tasks (0-100)",
"minimum": 0,
"maximum": 100
},
"recommendation": {
"type": "string",
"description": "Quality gate recommendation",
"enum": [
"BLOCK_EXECUTION",
"PROCEED_WITH_FIXES",
"PROCEED_WITH_CAUTION",
"PROCEED"
]
}
}
}
}
}

View File

@@ -1,538 +0,0 @@
# CLI Tools Execution Specification
## Table of Contents
1. [Configuration Reference](#configuration-reference)
2. [Tool Selection](#tool-selection)
3. [Prompt Template](#prompt-template)
4. [CLI Execution](#cli-execution)
5. [Auto-Invoke Triggers](#auto-invoke-triggers)
6. [Best Practices](#best-practices)
---
## Configuration Reference
### Configuration File
**Path**: `~/.claude/cli-tools.json`
All tool availability, model selection, and routing are defined in this configuration file.
### Configuration Fields
| Field | Description |
|-------|-------------|
| `enabled` | Tool availability status |
| `primaryModel` | Default model for the tool |
| `secondaryModel` | Fallback model |
| `tags` | Capability tags for routing |
### Tool Types
| Type | Usage | Capabilities |
|------|-------|--------------|
| `builtin` | `--tool gemini` | Full (analysis + write tools) |
| `cli-wrapper` | `--tool doubao` | Full (analysis + write tools) |
| `api-endpoint` | `--tool g25` | **Analysis only** (no file write tools) |
> **Note**: `api-endpoint` tools only support analysis and code generation responses. They cannot create, modify, or delete files.
---
## Tool Selection
### Tag-Based Routing
Tools are selected based on **tags** defined in the configuration. Use tags to match task requirements to tool capabilities.
#### Common Tags
| Tag | Use Case |
|-----|----------|
| `analysis` | Code review, architecture analysis, exploration |
| `implementation` | Feature development, bug fixes |
| `documentation` | Doc generation, comments |
| `testing` | Test creation, coverage analysis |
| `refactoring` | Code restructuring |
| `security` | Security audits, vulnerability scanning |
### Selection Algorithm
```
1. Parse task intent → extract required capabilities
2. Load cli-tools.json → get enabled tools with tags
3. Match tags → filter tools supporting required capabilities
4. Select tool → choose by priority (explicit > tag-match > default)
5. Select model → use primaryModel, fallback to secondaryModel
```
### Selection Decision Tree
```
┌─ Explicit --tool specified?
│ └─→ YES: Use specified tool (validate enabled)
└─ NO: Tag-based selection
├─ Task requires tags?
│ └─→ Match tools with matching tags
│ └─→ Multiple matches? Use first enabled
└─ No tag match?
└─→ Use default tool (first enabled in config)
```
### Command Structure
```bash
# Explicit tool selection
ccw cli -p "<PROMPT>" --tool <tool-id> --mode <analysis|write|review>
# Model override
ccw cli -p "<PROMPT>" --tool <tool-id> --model <model-id> --mode <analysis|write>
# Code review (codex only)
ccw cli -p "<PROMPT>" --tool codex --mode review
# Tag-based auto-selection (future)
ccw cli -p "<PROMPT>" --tags <tag1,tag2> --mode <analysis|write>
```
### Tool Fallback Chain
When primary tool fails or is unavailable:
1. Check `secondaryModel` for same tool
2. Try next enabled tool with matching tags
3. Fall back to default enabled tool
---
## Prompt Template
### Universal Prompt Template
```bash
ccw cli -p "PURPOSE: [what] + [why] + [success criteria] + [constraints/scope]
TASK: • [step 1: specific action] • [step 2: specific action] • [step 3: specific action]
MODE: [analysis|write]
CONTEXT: @[file patterns] | Memory: [session/tech/module context]
EXPECTED: [deliverable format] + [quality criteria] + [structure requirements]
CONSTRAINTS: [domain constraints]" --tool <tool-id> --mode <analysis|write> --rule <category-template>
```
### Intent Capture Checklist (Before CLI Execution)
**⚠️ CRITICAL**: Before executing any CLI command, verify these intent dimensions:
**Intent Validation Questions**:
- [ ] Is the objective specific and measurable?
- [ ] Are success criteria defined?
- [ ] Is the scope clearly bounded?
- [ ] Are constraints and limitations stated?
- [ ] Is the expected output format clear?
- [ ] Is the action level (read/write) explicit?
### Template Structure
Every command MUST include these fields:
- **PURPOSE**
- Purpose: Goal + motivation + success
- Components: What + Why + Success Criteria + Constraints
- Bad Example: "Analyze code"
- Good Example: "Identify security vulnerabilities in auth module to pass compliance audit; success = all OWASP Top 10 addressed; scope = src/auth/** only"
- **TASK**
- Purpose: Actionable steps
- Components: Specific verbs + targets
- Bad Example: "• Review code • Find issues"
- Good Example: "• Scan for SQL injection in query builders • Check XSS in template rendering • Verify CSRF token validation"
- **MODE**
- Purpose: Permission level
- Components: analysis / write / auto
- Bad Example: (missing)
- Good Example: "analysis" or "write"
- **CONTEXT**
- Purpose: File scope + history
- Components: File patterns + Memory
- Bad Example: "@**/*"
- Good Example: "@src/auth/**/*.ts @shared/utils/security.ts \| Memory: Previous auth refactoring (WFS-001)"
- **EXPECTED**
- Purpose: Output specification
- Components: Format + Quality + Structure
- Bad Example: "Report"
- Good Example: "Markdown report with: severity levels (Critical/High/Medium/Low), file:line references, remediation code snippets, priority ranking"
- **CONSTRAINTS**
- Purpose: Domain-specific constraints
- Components: Scope limits, special requirements, focus areas
- Bad Example: (missing or too vague)
- Good Example: "Focus on authentication | Ignore test files | No breaking changes"
### CONTEXT Configuration
**Format**: `CONTEXT: [file patterns] | Memory: [memory context]`
#### File Patterns
- **`@**/*`**: All files (default)
- **`@src/**/*.ts`**: TypeScript in src
- **`@../shared/**/*`**: Sibling directory (requires `--includeDirs`)
- **`@CLAUDE.md`**: Specific file
#### Memory Context
Include when building on previous work:
```bash
# Cross-task reference
Memory: Building on auth refactoring (commit abc123), implementing refresh tokens
# Cross-module integration
Memory: Integration with auth module, using shared error patterns from @shared/utils/errors.ts
```
**Memory Sources**:
- **Related Tasks**: Previous refactoring, extensions, conflict resolution
- **Tech Stack Patterns**: Framework conventions, security guidelines
- **Cross-Module References**: Integration points, shared utilities, type dependencies
#### Pattern Discovery Workflow
For complex requirements, discover files BEFORE CLI execution:
```bash
# Step 1: Discover files (choose one method)
# Method A: ACE semantic search (recommended)
mcp__ace-tool__search_context(project_root_path="/path", query="React components with export")
# Method B: Ripgrep pattern search
rg "export.*Component" --files-with-matches --type ts
# Step 2: Build CONTEXT
CONTEXT: @components/Auth.tsx @types/auth.d.ts | Memory: Previous type refactoring
# Step 3: Execute CLI
ccw cli -p "..." --tool <tool-id> --mode analysis --cd src
```
### --rule Configuration
**Use `--rule` option to auto-load templates**:
```bash
ccw cli -p "..." --tool gemini --mode analysis --rule analysis-review-architecture
```
### Mode Protocol References
**`--rule` auto-loads Protocol based on mode**:
- `--mode analysis` → analysis-protocol.md
- `--mode write` → write-protocol.md
**Protocol Mapping**:
- **`analysis`** mode
- Permission: Read-only
- Constraint: No file create/modify/delete
- **`write`** mode
- Permission: Create/Modify/Delete files
- Constraint: Full workflow execution
### Template System
**Available `--rule` template names**:
**Universal**:
- `universal-rigorous-style` - Precise tasks
- `universal-creative-style` - Exploratory tasks
**Analysis**:
- `analysis-trace-code-execution` - Execution tracing
- `analysis-diagnose-bug-root-cause` - Bug diagnosis
- `analysis-analyze-code-patterns` - Code patterns
- `analysis-analyze-technical-document` - Document analysis
- `analysis-review-architecture` - Architecture review
- `analysis-review-code-quality` - Code review
- `analysis-analyze-performance` - Performance analysis
- `analysis-assess-security-risks` - Security assessment
**Planning**:
- `planning-plan-architecture-design` - Architecture design
- `planning-breakdown-task-steps` - Task breakdown
- `planning-design-component-spec` - Component design
- `planning-plan-migration-strategy` - Migration strategy
**Development**:
- `development-implement-feature` - Feature implementation
- `development-refactor-codebase` - Code refactoring
- `development-generate-tests` - Test generation
- `development-implement-component-ui` - UI component
- `development-debug-runtime-issues` - Runtime debugging
---
## CLI Execution
### MODE Options
- **`analysis`**
- Permission: Read-only
- Use For: Code review, architecture analysis, pattern discovery, exploration
- Specification: Safe for all tools
- **`write`**
- Permission: Create/Modify/Delete
- Use For: Feature implementation, bug fixes, documentation, code creation, file modifications
- Specification: Requires explicit `--mode write`
- **`review`**
- Permission: Read-only (code review output)
- Use For: Git-aware code review of uncommitted changes, branch diffs, specific commits
- Specification: **codex only** - uses `codex review` subcommand
- Tool Behavior:
- `codex`: Executes `codex review` for structured code review
- Other tools (gemini/qwen/claude): Accept mode but no operation change (treated as analysis)
- **Constraint**: Target flags (`--uncommitted`, `--base`, `--commit`) and prompt are mutually exclusive
- With prompt only: `ccw cli -p "Focus on security" --tool codex --mode review` (reviews uncommitted by default)
- With target flag only: `ccw cli --tool codex --mode review --commit abc123` (no prompt allowed)
### Command Options
- **`--tool <tool>`**
- Description: Tool from config (e.g., gemini, qwen, codex)
- Default: First enabled tool in config
- **`--mode <mode>`**
- Description: **REQUIRED**: analysis, write, review
- Default: **NONE** (must specify)
- Note: `review` mode triggers `codex review` subcommand for codex tool only
- **`--model <model>`**
- Description: Model override
- Default: Tool's primaryModel from config
- **`--cd <path>`**
- Description: Working directory
- Default: current
- **`--includeDirs <dirs>`**
- Description: Additional directories (comma-separated)
- Default: none
- **`--resume [id]`**
- Description: Resume previous session
- Default: -
- **`--rule <template>`**
- Description: Template name, auto-loads protocol + template appended to prompt
- Default: universal-rigorous-style
- Auto-selects protocol based on --mode
### Directory Configuration
#### Working Directory (`--cd`)
When using `--cd`:
- `@**/*` = Files within working directory tree only
- CANNOT reference parent/sibling via @ alone
- Must use `--includeDirs` for external directories
#### Include Directories (`--includeDirs`)
**TWO-STEP requirement for external files**:
1. Add `--includeDirs` parameter
2. Reference in CONTEXT with @ patterns
```bash
# Single directory
ccw cli -p "CONTEXT: @**/* @../shared/**/*" --tool <tool-id> --mode analysis --cd src/auth --includeDirs ../shared
# Multiple directories
ccw cli -p "..." --tool <tool-id> --mode analysis --cd src/auth --includeDirs ../shared,../types,../utils
```
**Rule**: If CONTEXT contains `@../dir/**/*`, MUST include `--includeDirs ../dir`
**Benefits**: Excludes unrelated directories, reduces token usage
### Session Resume
**When to Use**:
- Multi-round planning (analysis → planning → implementation)
- Multi-model collaboration (tool A → tool B on same topic)
- Topic continuity (building on previous findings)
**Usage**:
```bash
ccw cli -p "Continue analyzing" --tool <tool-id> --mode analysis --resume # Resume last
ccw cli -p "Fix issues found" --tool <tool-id> --mode write --resume <id> # Resume specific
ccw cli -p "Merge findings" --tool <tool-id> --mode analysis --resume <id1>,<id2> # Merge multiple
```
- **`--resume`**: Last session
- **`--resume <id>`**: Specific session
- **`--resume <id1>,<id2>`**: Merge sessions (comma-separated)
**Context Assembly** (automatic):
```
=== PREVIOUS CONVERSATION ===
USER PROMPT: [Previous prompt]
ASSISTANT RESPONSE: [Previous output]
=== CONTINUATION ===
[Your new prompt]
```
### Command Examples
#### Task-Type Specific Templates
**Analysis Task** (Security Audit):
```bash
ccw cli -p "PURPOSE: Identify OWASP Top 10 vulnerabilities in authentication module to pass security audit; success = all critical/high issues documented with remediation
TASK: • Scan for injection flaws (SQL, command, LDAP) • Check authentication bypass vectors • Evaluate session management • Assess sensitive data exposure
MODE: analysis
CONTEXT: @src/auth/**/* @src/middleware/auth.ts | Memory: Using bcrypt for passwords, JWT for sessions
EXPECTED: Security report with: severity matrix, file:line references, CVE mappings where applicable, remediation code snippets prioritized by risk
CONSTRAINTS: Focus on authentication | Ignore test files
" --tool gemini --mode analysis --rule analysis-assess-security-risks --cd src/auth
```
**Implementation Task** (New Feature):
```bash
ccw cli -p "PURPOSE: Implement rate limiting for API endpoints to prevent abuse; must be configurable per-endpoint; backward compatible with existing clients
TASK: • Create rate limiter middleware with sliding window • Implement per-route configuration • Add Redis backend for distributed state • Include bypass for internal services
MODE: write
CONTEXT: @src/middleware/**/* @src/config/**/* | Memory: Using Express.js, Redis already configured, existing middleware pattern in auth.ts
EXPECTED: Production-ready code with: TypeScript types, unit tests, integration test, configuration example, migration guide
CONSTRAINTS: Follow existing middleware patterns | No breaking changes
" --tool gemini --mode write --rule development-implement-feature
```
**Bug Fix Task**:
```bash
ccw cli -p "PURPOSE: Fix memory leak in WebSocket connection handler causing server OOM after 24h; root cause must be identified before any fix
TASK: • Trace connection lifecycle from open to close • Identify event listener accumulation • Check cleanup on disconnect • Verify garbage collection eligibility
MODE: analysis
CONTEXT: @src/websocket/**/* @src/services/connection-manager.ts | Memory: Using ws library, ~5000 concurrent connections in production
EXPECTED: Root cause analysis with: memory profile, leak source (file:line), fix recommendation with code, verification steps
CONSTRAINTS: Focus on resource cleanup
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause --cd src
```
**Refactoring Task**:
```bash
ccw cli -p "PURPOSE: Refactor payment processing to use strategy pattern for multi-gateway support; no functional changes; all existing tests must pass
TASK: • Extract gateway interface from current implementation • Create strategy classes for Stripe, PayPal • Implement factory for gateway selection • Migrate existing code to use strategies
MODE: write
CONTEXT: @src/payments/**/* @src/types/payment.ts | Memory: Currently only Stripe, adding PayPal next sprint, must support future gateways
EXPECTED: Refactored code with: strategy interface, concrete implementations, factory class, updated tests, migration checklist
CONSTRAINTS: Preserve all existing behavior | Tests must pass
" --tool gemini --mode write --rule development-refactor-codebase
```
**Code Review Task** (codex review mode):
```bash
# Option 1: Custom prompt (reviews uncommitted changes by default)
ccw cli -p "Focus on security vulnerabilities and error handling" --tool codex --mode review
# Option 2: Target flag only (no prompt allowed with target flags)
ccw cli --tool codex --mode review --uncommitted
ccw cli --tool codex --mode review --base main
ccw cli --tool codex --mode review --commit abc123
```
> **Note**: `--mode review` only triggers special behavior for `codex` tool. Target flags (`--uncommitted`, `--base`, `--commit`) and prompt are **mutually exclusive** - use one or the other, not both.
---
### Permission Framework
**Single-Use Authorization**: Each execution requires explicit user instruction. Previous authorization does NOT carry over.
**Mode Hierarchy**:
- `analysis`: Read-only, safe for auto-execution
- `write`: Create/Modify/Delete files, full operations - requires explicit `--mode write`
- `review`: Git-aware code review (codex only), read-only output - requires explicit `--mode review`
- **Exception**: User provides clear instructions like "modify", "create", "implement"
---
## Auto-Invoke Triggers
**Proactive CLI invocation** - Auto-invoke `ccw cli` when encountering these scenarios:
| Trigger Condition | Suggested Rule | When to Use |
|-------------------|----------------|-------------|
| **Self-repair fails** | `analysis-diagnose-bug-root-cause` | After 1+ failed fix attempts |
| **Ambiguous requirements** | `planning-breakdown-task-steps` | Task description lacks clarity |
| **Architecture decisions** | `planning-plan-architecture-design` | Complex feature needs design |
| **Pattern uncertainty** | `analysis-analyze-code-patterns` | Unsure of existing conventions |
| **Critical code paths** | `analysis-assess-security-risks` | Security/performance sensitive |
### Execution Principles
- **Default mode**: `--mode analysis` (read-only, safe for auto-execution)
- **No confirmation needed**: Invoke proactively when triggers match
- **Wait for results**: Complete analysis before next action
- **Tool selection**: Use context-appropriate tool or fallback chain (`gemini``qwen``codex`)
- **Rule flexibility**: Suggested rules are guidelines, not requirements - choose the most appropriate template for the situation
### Example: Bug Fix with Auto-Invoke
```bash
# After 1+ failed fix attempts, auto-invoke root cause analysis
ccw cli -p "PURPOSE: Identify root cause of [bug description]; success = actionable fix strategy
TASK: • Trace execution flow • Identify failure point • Analyze state at failure • Determine fix approach
MODE: analysis
CONTEXT: @src/module/**/* | Memory: Previous fix attempts failed at [location]
EXPECTED: Root cause analysis with: failure mechanism, stack trace interpretation, fix recommendation with code
CONSTRAINTS: Focus on [specific area]
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```
---
## Best Practices
### Core Principles
- **Configuration-driven** - All tool selection from `cli-tools.json`
- **Tag-based routing** - Match task requirements to tool capabilities
- **Use tools early and often** - Tools are faster and more thorough
- **Unified CLI** - Always use `ccw cli -p` for consistent parameter handling
- **Default mode is analysis** - Omit `--mode` for read-only operations, explicitly use `--mode write` for file modifications
- **Use `--rule` for templates** - Auto-loads protocol + template appended to prompt
- **Write protection** - Require EXPLICIT `--mode write` for file operations
### Workflow Principles
- **Use CCW unified interface** for all executions
- **Always include template** - Use `--rule <template-name>` to load templates
- **Be specific** - Clear PURPOSE, TASK, EXPECTED fields
- **Include constraints** - File patterns, scope in CONSTRAINTS
- **Leverage memory context** when building on previous work
- **Discover patterns first** - Use rg/MCP before CLI execution
- **Default to full context** - Use `@**/*` unless specific files needed
### Planning Checklist
- [ ] **Purpose defined** - Clear goal and intent
- [ ] **Mode selected** - `--mode analysis|write|review`
- [ ] **Context gathered** - File references + memory (default `@**/*`)
- [ ] **Directory navigation** - `--cd` and/or `--includeDirs`
- [ ] **Tool selected** - Explicit `--tool` or tag-based auto-selection
- [ ] **Rule template** - `--rule <template-name>` loads template
- [ ] **Constraints** - Domain constraints in CONSTRAINTS field
### Execution Workflow
1. **Load configuration** - Read `cli-tools.json` for available tools
2. **Match by tags** - Select tool based on task requirements
3. **Validate enabled** - Ensure selected tool is enabled
4. **Execute with mode** - Always specify `--mode analysis|write|review`
5. **Fallback gracefully** - Use secondary model or next matching tool on failure

View File

@@ -1,70 +0,0 @@
# Coding Philosophy
## Core Beliefs
- **Pursue good taste** - Eliminate edge cases to make code logic natural and elegant
- **Embrace extreme simplicity** - Complexity is the root of all evil
- **Be pragmatic** - Code must solve real-world problems, not hypothetical ones
- **Data structures first** - Bad programmers worry about code; good programmers worry about data structures
- **Never break backward compatibility** - Existing functionality is sacred and inviolable
- **Incremental progress over big bangs** - Small changes that compile and pass tests
- **Learning from existing code** - Study and plan before implementing
- **Clear intent over clever code** - Be boring and obvious
- **Follow existing code style** - Match import patterns, naming conventions, and formatting of existing codebase
- **Minimize changes** - Only modify what's directly required; avoid refactoring, adding features, or "improving" code beyond the request
- **No unsolicited documentation** - NEVER generate reports, documentation files, or summaries without explicit user request. If required, save to .workflow/.scratchpad/
## Simplicity Means
- Single responsibility per function/class
- Avoid premature abstractions
- No clever tricks - choose the boring solution
- If you need to explain it, it's too complex
## Fix, Don't Hide
**Solve problems, don't silence symptoms** - Skipped tests, `@ts-ignore`, empty catch, `as any`, excessive timeouts = hiding bugs, not fixing them
**NEVER**:
- Make assumptions - verify with existing code
- Generate reports, summaries, or documentation files without explicit user request
- Use suppression mechanisms (`skip`, `ignore`, `disable`) without fixing root cause
**ALWAYS**:
- Plan complex tasks thoroughly before implementation
- Generate task decomposition for multi-module work (>3 modules or >5 subtasks)
- Track progress using TODO checklists for complex tasks
- Validate planning documents before starting development
- Commit working code incrementally
- Update plan documentation and progress tracking as you go
- Learn from existing implementations
- Stop after 3 failed attempts and reassess
- **Edit fallback**: When Edit tool fails 2+ times on same file, try Bash sed/awk first, then Write to recreate if still failing
## Learning the Codebase
- Find 3 similar features/components
- Identify common patterns and conventions
- Use same libraries/utilities when possible
- Follow existing test patterns
## Tooling
- Use project's existing build system
- Use project's test framework
- Use project's formatter/linter settings
- Don't introduce new tools without strong justification
## Content Uniqueness Rules
- **Each layer owns its abstraction level** - no content sharing between layers
- **Reference, don't duplicate** - point to other layers, never copy content
- **Maintain perspective** - each layer sees the system at its appropriate scale
- **Avoid implementation creep** - higher layers stay architectural
# Context Requirements
Before implementation, always:
- Identify 3+ existing similar patterns
- Map dependencies and integration points
- Understand testing framework and coding conventions

View File

@@ -1,76 +0,0 @@
## Context Acquisition (MCP Tools Priority)
**For task context gathering and analysis, ALWAYS prefer MCP tools**:
1. **mcp__ace-tool__search_context** - HIGHEST PRIORITY for code discovery
- Semantic search with real-time codebase index
- Use for: finding implementations, understanding architecture, locating patterns
- Example: `mcp__ace-tool__search_context(project_root_path="/path", query="authentication logic")`
2. **smart_search** - Fallback for structured search
- Use `smart_search(query="...")` for keyword/regex search
- Use `smart_search(action="find_files", pattern="*.ts")` for file discovery
- Supports modes: `auto`, `hybrid`, `exact`, `ripgrep`
3. **read_file** - Batch file reading
- Read multiple files in parallel: `read_file(path="file1.ts")`, `read_file(path="file2.ts")`
- Supports glob patterns: `read_file(path="src/**/*.config.ts")`
**Priority Order**:
```
ACE search_context (semantic) → smart_search (structured) → read_file (batch read) → shell commands (fallback)
```
**NEVER** use shell commands (`cat`, `find`, `grep`) when MCP tools are available.
### read_file - Read File Contents
**When**: Read files found by smart_search
**How**:
```javascript
read_file(path="/path/to/file.ts") // Single file
read_file(path="/src/**/*.config.ts") // Pattern matching
```
---
### edit_file - Modify Files
**When**: Built-in Edit tool fails or need advanced features
**How**:
```javascript
edit_file(path="/file.ts", old_string="...", new_string="...", mode="update")
edit_file(path="/file.ts", line=10, content="...", mode="insert_after")
```
**Modes**: `update` (replace text), `insert_after`, `insert_before`, `delete_line`
---
### write_file - Create/Overwrite Files
**When**: Create new files or completely replace content
**How**:
```javascript
write_file(path="/new-file.ts", content="...")
```
---
### Exa - External Search
**When**: Find documentation/examples outside codebase
**How**:
```javascript
mcp__exa__search(query="React hooks 2025 documentation")
mcp__exa__search(query="FastAPI auth example", numResults=10)
mcp__exa__search(query="latest API docs", livecrawl="always")
```
**Parameters**:
- `query` (required): Search query string
- `numResults` (optional): Number of results to return (default: 5)
- `livecrawl` (optional): `"always"` or `"fallback"` for live crawling

View File

@@ -1,64 +0,0 @@
# File Modification
Before modifying files, always:
- Try built-in Edit tool first
- Escalate to MCP tools when built-ins fail
- Use write_file only as last resort
## MCP Tools Usage
### edit_file - Modify Files
**When**: Built-in Edit fails, need dry-run preview, or need line-based operations
**How**:
```javascript
edit_file(path="/file.ts", oldText="old", newText="new") // Replace text
edit_file(path="/file.ts", oldText="old", newText="new", dryRun=true) // Preview diff
edit_file(path="/file.ts", oldText="old", newText="new", replaceAll=true) // Replace all
edit_file(path="/file.ts", mode="line", operation="insert_after", line=10, text="new line")
edit_file(path="/file.ts", mode="line", operation="delete", line=5, end_line=8)
```
**Modes**: `update` (replace text, default), `line` (line-based operations)
**Operations** (line mode): `insert_before`, `insert_after`, `replace`, `delete`
---
### write_file - Create/Overwrite Files
**When**: Create new files, completely replace content, or edit_file still fails
**How**:
```javascript
write_file(path="/new-file.ts", content="file content here")
write_file(path="/existing.ts", content="...", backup=true) // Create backup first
```
---
## Priority Logic
> **Note**: Search priority is defined in `context-tools.md` - smart_search has HIGHEST PRIORITY for all discovery tasks.
**Search & Discovery** (defer to context-tools.md):
1. **smart_search FIRST** for any code/file discovery
2. Built-in Grep only for single-file exact line search (location already confirmed)
3. Exa for external/public knowledge
**File Reading**:
1. Unknown location → **smart_search first**, then Read
2. Known confirmed file → Built-in Read directly
3. Pattern matching → smart_search (action="find_files")
**File Editing**:
1. Always try built-in Edit first
2. Fails 1+ times → edit_file (MCP)
3. Still fails → write_file (MCP)
## Decision Triggers
**Search tasks** → Always start with smart_search (per context-tools.md)
**Known file edits** → Start with built-in Edit, escalate to MCP if fails
**External knowledge** → Use Exa

View File

@@ -1,251 +0,0 @@
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"version": "1.0.0",
"description": "Test quality and code validation configuration for AI-generated code",
"code_validation": {
"description": "Pre-test validation for AI-generated code common errors",
"enabled": true,
"phases": {
"L0_compilation": {
"description": "TypeScript/JavaScript compilation check",
"enabled": true,
"commands": {
"typescript": "npx tsc --noEmit --skipLibCheck",
"javascript": "node --check"
},
"critical": true,
"failure_blocks_tests": true
},
"L0_imports": {
"description": "Import statement validation",
"enabled": true,
"checks": [
{
"id": "unresolved_imports",
"description": "Check for unresolved module imports",
"pattern": "Cannot find module|Module not found|Unable to resolve",
"severity": "critical"
},
{
"id": "circular_imports",
"description": "Check for circular dependencies",
"tool": "madge",
"command": "npx madge --circular --extensions ts,tsx,js,jsx",
"severity": "warning"
},
{
"id": "duplicate_imports",
"description": "Check for duplicate imports",
"eslint_rule": "import/no-duplicates",
"severity": "error"
},
{
"id": "unused_imports",
"description": "Check for unused imports",
"eslint_rule": "unused-imports/no-unused-imports",
"severity": "warning"
}
]
},
"L0_variables": {
"description": "Variable declaration validation",
"enabled": true,
"checks": [
{
"id": "redeclaration",
"description": "Check for variable redeclaration",
"pattern": "Cannot redeclare|Duplicate identifier|has already been declared",
"severity": "critical"
},
{
"id": "scope_conflict",
"description": "Check for scope conflicts",
"eslint_rule": "no-shadow",
"severity": "error"
},
{
"id": "undefined_vars",
"description": "Check for undefined variables",
"eslint_rule": "no-undef",
"severity": "critical"
},
{
"id": "unused_vars",
"description": "Check for unused variables",
"eslint_rule": "@typescript-eslint/no-unused-vars",
"severity": "warning"
}
]
},
"L0_types": {
"description": "TypeScript type validation",
"enabled": true,
"checks": [
{
"id": "type_mismatch",
"description": "Check for type mismatches",
"pattern": "Type .* is not assignable to type",
"severity": "critical"
},
{
"id": "missing_types",
"description": "Check for missing type definitions",
"pattern": "Could not find a declaration file",
"severity": "warning"
},
{
"id": "any_abuse",
"description": "Check for excessive any type usage",
"eslint_rule": "@typescript-eslint/no-explicit-any",
"severity": "warning",
"max_occurrences": 5
},
{
"id": "implicit_any",
"description": "Check for implicit any",
"pattern": "implicitly has an 'any' type",
"severity": "error"
}
]
}
},
"severity_thresholds": {
"critical": 0,
"error": 3,
"warning": 10
},
"max_retries": 2,
"auto_fix": {
"enabled": true,
"safe_fixes_only": true,
"fixable_categories": ["imports", "formatting", "unused_vars"]
}
},
"test_quality": {
"description": "Test file quality validation (IMPL-001.5)",
"enabled": true,
"coverage": {
"minimum_threshold": 80,
"branch_threshold": 70,
"function_threshold": 80,
"line_threshold": 80
},
"anti_patterns": {
"empty_test_body": {
"pattern": "it\\(['\"].*['\"],\\s*\\(\\)\\s*=>\\s*\\{\\s*\\}\\)",
"severity": "critical",
"description": "Test with empty body"
},
"missing_assertion": {
"pattern": "it\\(['\"].*['\"],.*\\{[^}]*\\}\\)(?![\\s\\S]*expect)",
"severity": "critical",
"description": "Test without expect() assertion"
},
"skipped_without_reason": {
"pattern": "(it|describe)\\.skip\\(['\"][^'\"]*['\"](?!.*\\/\\/ )",
"severity": "error",
"description": "Skipped test without comment explaining why"
},
"todo_test": {
"pattern": "(it|test)\\.todo\\(",
"severity": "warning",
"description": "TODO test placeholder"
},
"only_test": {
"pattern": "(it|describe)\\.only\\(",
"severity": "critical",
"description": "Focused test (will skip other tests)"
}
},
"required_test_types": {
"unit": {
"min_per_function": 1,
"must_include": ["happy_path"]
},
"negative": {
"min_per_public_api": 1,
"description": "Error handling tests for public APIs"
},
"edge_case": {
"required_scenarios": ["null", "undefined", "empty_string", "empty_array", "boundary_values"]
}
}
},
"ai_specific_checks": {
"description": "Checks specifically for AI-generated code patterns",
"enabled": true,
"checks": [
{
"id": "hallucinated_imports",
"description": "Check for imports of non-existent packages",
"validation": "npm_package_exists",
"severity": "critical"
},
{
"id": "inconsistent_naming",
"description": "Check for naming inconsistencies within file",
"pattern": "function (\\w+).*\\1(?!\\()",
"severity": "warning"
},
{
"id": "placeholder_code",
"description": "Check for AI placeholder comments",
"patterns": [
"// TODO: implement",
"// Add your code here",
"// Implementation pending",
"throw new Error\\(['\"]Not implemented['\"]\\)"
],
"severity": "error"
},
{
"id": "mock_in_production",
"description": "Check for mock/stub code in production files",
"patterns": [
"jest\\.mock\\(",
"sinon\\.",
"vi\\.mock\\("
],
"exclude_paths": ["**/*.test.*", "**/*.spec.*", "**/test/**", "**/__tests__/**"],
"severity": "critical"
}
]
},
"validation_commands": {
"typescript_check": {
"command": "npx tsc --noEmit --skipLibCheck",
"timeout": 60000,
"parse_errors": true
},
"eslint_check": {
"command": "npx eslint --format json",
"timeout": 60000,
"auto_fix_command": "npx eslint --fix"
},
"circular_deps_check": {
"command": "npx madge --circular --extensions ts,tsx,js,jsx",
"timeout": 30000
},
"package_validation": {
"command": "npm ls --json",
"timeout": 30000
}
},
"gate_decisions": {
"pass_criteria": {
"critical_issues": 0,
"error_issues": "<=3",
"warning_issues": "<=10"
},
"actions": {
"pass": "Proceed to IMPL-001.5 (Test Quality Gate)",
"soft_fail": "Auto-fix and retry (max 2 attempts)",
"hard_fail": "Block and report to user with fix suggestions"
}
}
}

View File

@@ -1,216 +0,0 @@
# Tool Strategy - When to Use What
> **Focus**: Decision triggers and selection logic, NOT syntax (already registered with Claude)
## Quick Decision Tree
```
Need context?
├─ Exa available? → Use Exa (fastest, most comprehensive)
├─ Large codebase (>500 files)? → codex_lens
├─ Known files (<5)? → Read tool
└─ Unknown files? → smart_search → Read tool
Need to modify files?
├─ Built-in Edit fails? → mcp__ccw-tools__edit_file
└─ Still fails? → mcp__ccw-tools__write_file
Need to search?
├─ Semantic/concept search? → smart_search (mode=semantic)
├─ Exact pattern match? → Grep tool
└─ Multiple search modes needed? → smart_search (mode=auto)
```
---
## 1. Context Gathering Tools
### Exa (`mcp__exa__get_code_context_exa`)
**Use When**:
- ✅ Researching external APIs, libraries, frameworks
- ✅ Need recent documentation (post-cutoff knowledge)
- ✅ Looking for implementation examples in public repos
- ✅ Comparing architectural patterns across projects
**Don't Use When**:
- ❌ Searching internal codebase (use smart_search/codex_lens)
- ❌ Files already in working directory (use Read)
**Trigger Indicators**:
- User mentions specific library/framework names
- Questions about "best practices", "how does X work"
- Need to verify current API signatures
---
### read_file (`mcp__ccw-tools__read_file`)
**Use When**:
- ✅ Reading multiple related files at once (batch reading)
- ✅ Need directory traversal with pattern matching
- ✅ Searching file content with regex (`contentPattern`)
- ✅ Want to limit depth/file count for large directories
**Don't Use When**:
- ❌ Single file read → Use built-in Read tool (faster)
- ❌ Unknown file locations → Use smart_search first
- ❌ Need semantic search → Use smart_search or codex_lens
**Trigger Indicators**:
- Need to read "all TypeScript files in src/"
- Need to find "files containing TODO comments"
- Want to read "up to 20 config files"
**Advantages over Built-in Read**:
- Batch operation (multiple files in one call)
- Pattern-based filtering (glob + content regex)
- Directory traversal with depth control
---
### codex_lens (`mcp__ccw-tools__codex_lens`)
**Use When**:
- ✅ Large codebase (>500 files) requiring repeated searches
- ✅ Need semantic understanding of code relationships
- ✅ Working across multiple sessions (persistent index)
- ✅ Symbol-level navigation needed
**Don't Use When**:
- ❌ Small project (<100 files) → Use smart_search (no indexing overhead)
- ❌ One-time search → Use smart_search or Grep
- ❌ Files change frequently → Indexing overhead not worth it
**Trigger Indicators**:
- "Find all implementations of interface X"
- "What calls this function across the codebase?"
- Multi-session workflow on same codebase
**Action Selection**:
- `init`: First time in new codebase
- `search`: Find code patterns
- `search_files`: Find files by path/name pattern
- `symbol`: Get symbols in specific file
- `status`: Check if index exists/is stale
- `clean`: Remove stale index
---
### smart_search (`mcp__ccw-tools__smart_search`)
**Use When**:
- ✅ Don't know exact file locations
- ✅ Need concept/semantic search ("authentication logic")
- ✅ Medium-sized codebase (100-500 files)
- ✅ One-time or infrequent searches
**Don't Use When**:
- ❌ Known exact file path → Use Read directly
- ❌ Large codebase + repeated searches → Use codex_lens
- ❌ Exact pattern match → Use Grep (faster)
**Mode Selection**:
- `auto`: Let tool decide (default, safest)
- `exact`: Know exact pattern, need fast results
- `fuzzy`: Typo-tolerant file/symbol names
- `semantic`: Concept-based ("error handling", "data validation")
- `graph`: Dependency/relationship analysis
**Trigger Indicators**:
- "Find files related to user authentication"
- "Where is the payment processing logic?"
- "Locate database connection setup"
---
## 2. File Modification Tools
### edit_file (`mcp__ccw-tools__edit_file`)
**Use When**:
- ✅ Built-in Edit tool failed 1+ times
- ✅ Need dry-run preview before applying
- ✅ Need line-based operations (insert_after, insert_before)
- ✅ Need to replace all occurrences
**Don't Use When**:
- ❌ Built-in Edit hasn't failed yet → Try built-in first
- ❌ Need to create new file → Use write_file
**Trigger Indicators**:
- Built-in Edit returns "old_string not found"
- Built-in Edit fails due to whitespace/formatting
- Need to verify changes before applying (dryRun=true)
**Mode Selection**:
- `mode=update`: Replace text (similar to built-in Edit)
- `mode=line`: Line-based operations (insert_after, insert_before, delete)
---
### write_file (`mcp__ccw-tools__write_file`)
**Use When**:
- ✅ Creating brand new files
- ✅ MCP edit_file still fails (last resort)
- ✅ Need to completely replace file content
- ✅ Need backup before overwriting
**Don't Use When**:
- ❌ File exists + small change → Use Edit tools
- ❌ Built-in Edit hasn't been tried → Try built-in Edit first
**Trigger Indicators**:
- All Edit attempts failed
- Need to create new file with specific content
- User explicitly asks to "recreate file"
---
## 3. Decision Logic
### File Reading Priority
```
1. Known single file? → Built-in Read
2. Multiple files OR pattern matching? → mcp__ccw-tools__read_file
3. Unknown location? → smart_search, then Read
4. Large codebase + repeated access? → codex_lens
```
### File Editing Priority
```
1. Always try built-in Edit first
2. Fails 1+ times? → mcp__ccw-tools__edit_file
3. Still fails? → mcp__ccw-tools__write_file (last resort)
```
### Search Tool Priority
```
1. External knowledge? → Exa
2. Exact pattern in small codebase? → Built-in Grep
3. Semantic/unknown location? → smart_search
4. Large codebase + repeated searches? → codex_lens
```
---
## 4. Anti-Patterns
**Don't**:
- Use codex_lens for one-time searches in small projects
- Use smart_search when file path is already known
- Use write_file before trying Edit tools
- Use Exa for internal codebase searches
- Use read_file for single file when Read tool works
**Do**:
- Start with simplest tool (Read, Edit, Grep)
- Escalate to MCP tools when built-ins fail
- Use semantic search (smart_search) for exploratory tasks
- Use indexed search (codex_lens) for large, stable codebases
- Use Exa for external/public knowledge

View File

@@ -1,19 +0,0 @@
# Windows Platform Guidelines
## Path Format
- **MCP Tools**: `D:\\path\\file.txt`
- **Bash**: `D:/path/file.txt` or `/d/path/file.txt`
- **Relative**: `./src/index.ts`
## Bash Rules (Prevent Garbage Files)
1. **Null redirect**: `command > NUL 2>&1`
2. **Quote all**: `echo "$VAR"`, `cat "file name.txt"`
3. **Variable assignment**: `export VAR=value && command`
4. **Regex escape**: `grep -F "State<T>"` or `grep "State\<T\>"`
5. **Pipe output**: `command 2>&1 | ...` (avoid bare command output)
## Tool Priority
MCP Tools > PowerShell > Git Bash > cmd

View File

@@ -1,942 +0,0 @@
# Workflow Architecture
## Overview
This document defines the complete workflow system architecture using a **JSON-only data model**, **marker-based session management**, and **unified file structure** with dynamic task decomposition.
## Core Architecture
### JSON-Only Data Model
**JSON files (.task/IMPL-*.json) are the only authoritative source of task state. All markdown documents are read-only generated views.**
- **Task State**: Stored exclusively in JSON files
- **Documents**: Generated on-demand from JSON data
- **No Synchronization**: Eliminates bidirectional sync complexity
- **Performance**: Direct JSON access without parsing overhead
### Key Design Decisions
- **JSON files are the single source of truth** - All markdown documents are read-only generated views
- **Marker files for session tracking** - Ultra-simple active session management
- **Unified file structure definition** - Same structure template for all workflows, created on-demand
- **Dynamic task decomposition** - Subtasks created as needed during execution
- **On-demand file creation** - Directories and files created only when required
- **Agent-agnostic task definitions** - Complete context preserved for autonomous execution
## Session Management
### Directory-Based Session Management
**Simple Location-Based Tracking**: Sessions in `.workflow/active/` directory
```bash
.workflow/
├── active/
│ ├── WFS-oauth-integration/ # Active session directory
│ ├── WFS-user-profile/ # Active session directory
│ └── WFS-bug-fix-123/ # Active session directory
└── archives/
└── WFS-old-feature/ # Archived session (completed)
```
### Session Operations
#### Detect Active Session(s)
```bash
active_sessions=$(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
count=$(echo "$active_sessions" | wc -l)
if [ -z "$active_sessions" ]; then
echo "No active session"
elif [ "$count" -eq 1 ]; then
session_name=$(basename "$active_sessions")
echo "Active session: $session_name"
else
echo "Multiple sessions found:"
echo "$active_sessions" | while read session_dir; do
session=$(basename "$session_dir")
echo " - $session"
done
echo "Please specify which session to work with"
fi
```
#### Archive Session
```bash
mv .workflow/active/WFS-feature .workflow/archives/WFS-feature
```
### Session State Tracking
Each session directory contains `workflow-session.json`:
```json
{
"session_id": "WFS-[topic-slug]",
"project": "feature description",
"type": "simple|medium|complex",
"current_phase": "PLAN|IMPLEMENT|REVIEW",
"status": "active|paused|completed",
"progress": {
"completed_phases": ["PLAN"],
"current_tasks": ["IMPL-1", "IMPL-2"]
}
}
```
## Task System
### Hierarchical Task Structure
**Maximum Depth**: 2 levels (IMPL-N.M format)
```
IMPL-1 # Main task
IMPL-1.1 # Subtask of IMPL-1 (dynamically created)
IMPL-1.2 # Another subtask of IMPL-1
IMPL-2 # Another main task
IMPL-2.1 # Subtask of IMPL-2 (dynamically created)
```
**Task Status Rules**:
- **Container tasks**: Parent tasks with subtasks (cannot be directly executed)
- **Leaf tasks**: Only these can be executed directly
- **Status inheritance**: Parent status derived from subtask completion
### Enhanced Task JSON Schema
All task files use this unified 6-field schema with optional artifacts enhancement:
```json
{
"id": "IMPL-1.2",
"title": "Implement JWT authentication",
"status": "pending|active|completed|blocked|container",
"context_package_path": ".workflow/WFS-session/.process/context-package.json",
"meta": {
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor"
},
"context": {
"requirements": ["JWT authentication", "OAuth2 support"],
"focus_paths": ["src/auth", "tests/auth", "config/auth.json"],
"acceptance": ["JWT validation works", "OAuth flow complete"],
"parent": "IMPL-1",
"depends_on": ["IMPL-1.1"],
"inherited": {
"from": "IMPL-1",
"context": ["Authentication system design completed"]
},
"shared_context": {
"auth_strategy": "JWT with refresh tokens"
},
"artifacts": [
{
"type": "role_analyses",
"source": "brainstorm_clarification",
"path": ".workflow/WFS-session/.brainstorming/*/analysis*.md",
"priority": "highest",
"contains": "role_specific_requirements_and_design"
}
]
},
"flow_control": {
"pre_analysis": [
{
"step": "check_patterns",
"action": "Analyze existing patterns",
"command": "bash(rg 'auth' [focus_paths] | head -10)",
"output_to": "patterns"
},
{
"step": "analyze_architecture",
"action": "Review system architecture",
"command": "gemini \"analyze patterns: [patterns]\"",
"output_to": "design"
},
{
"step": "check_deps",
"action": "Check dependencies",
"command": "bash(echo [depends_on] | xargs cat)",
"output_to": "context"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Set up authentication infrastructure",
"description": "Install JWT library and create auth config following [design] patterns from [parent]",
"modification_points": [
"Add JWT library dependencies to package.json",
"Create auth configuration file using [parent] patterns"
],
"logic_flow": [
"Install jsonwebtoken library via npm",
"Configure JWT secret and expiration from [inherited]",
"Export auth config for use by [jwt_generator]"
],
"depends_on": [],
"output": "auth_config"
},
{
"step": 2,
"title": "Implement JWT generation",
"description": "Create JWT token generation logic using [auth_config] and [inherited] validation patterns",
"modification_points": [
"Add JWT generation function in auth service",
"Implement token signing with [auth_config]"
],
"logic_flow": [
"User login → validate credentials with [inherited]",
"Generate JWT payload with user data",
"Sign JWT using secret from [auth_config]",
"Return signed token"
],
"depends_on": [1],
"output": "jwt_generator"
},
{
"step": 3,
"title": "Implement JWT validation middleware",
"description": "Create middleware to validate JWT tokens using [auth_config] and [shared] rules",
"modification_points": [
"Create validation middleware using [jwt_generator]",
"Add token verification using [shared] rules",
"Implement user attachment to request object"
],
"logic_flow": [
"Protected route → extract JWT from Authorization header",
"Validate token signature using [auth_config]",
"Check token expiration and [shared] rules",
"Decode payload and attach user to request",
"Call next() or return 401 error"
],
"command": "bash(npm test -- middleware.test.ts)",
"depends_on": [1, 2],
"output": "auth_middleware"
}
],
"target_files": [
"src/auth/login.ts:handleLogin:75-120",
"src/middleware/auth.ts:validateToken",
"src/auth/PasswordReset.ts"
]
}
}
```
### Focus Paths & Context Management
#### Context Package Path (Top-Level Field)
The **context_package_path** field provides the location of the smart context package:
- **Location**: Top-level field (not in `artifacts` array)
- **Path**: `.workflow/WFS-session/.process/context-package.json`
- **Purpose**: References the comprehensive context package containing project structure, dependencies, and brainstorming artifacts catalog
- **Usage**: Loaded in `pre_analysis` steps via `Read({{context_package_path}})`
#### Focus Paths Format
The **focus_paths** field specifies concrete project paths for task implementation:
- **Array of strings**: `["folder1", "folder2", "specific_file.ts"]`
- **Concrete paths**: Use actual directory/file names without wildcards
- **Mixed types**: Can include both directories and specific files
- **Relative paths**: From project root (e.g., `src/auth`, not `./src/auth`)
#### Artifacts Field ⚠️ NEW FIELD
Optional field referencing brainstorming outputs for task execution:
```json
"artifacts": [
{
"type": "role_analyses|topic_framework|individual_role_analysis",
"source": "brainstorm_clarification|brainstorm_framework|brainstorm_roles",
"path": ".workflow/WFS-session/.brainstorming/document.md",
"priority": "highest|high|medium|low"
}
]
```
**Types & Priority**: role_analyses (highest) → topic_framework (medium) → individual_role_analysis (low)
#### Flow Control Configuration
The **flow_control** field manages task execution through structured sequential steps. For complete format specifications and usage guidelines, see [Flow Control Format Guide](#flow-control-format-guide) below.
**Quick Reference**:
- **pre_analysis**: Context gathering steps (supports multiple command types)
- **implementation_approach**: Implementation steps array with dependency management
- **target_files**: Target files for modification (file:function:lines format)
- **Variable references**: Use `[variable_name]` to reference step outputs
- **Tool integration**: Supports Gemini, Codex, Bash commands, and MCP tools
## Flow Control Format Guide
The `[FLOW_CONTROL]` marker indicates that a task or prompt contains flow control steps for sequential execution. There are **two distinct formats** used in different scenarios:
### Format Comparison Matrix
| Aspect | Inline Format | JSON Format |
|--------|--------------|-------------|
| **Used In** | Brainstorm workflows | Implementation tasks |
| **Agent** | conceptual-planning-agent | code-developer, test-fix-agent, doc-generator |
| **Location** | Task() prompt (markdown) | .task/IMPL-*.json file |
| **Persistence** | Temporary (prompt-only) | Persistent (file storage) |
| **Complexity** | Simple (3-5 steps) | Complex (10+ steps) |
| **Dependencies** | None | Full `depends_on` support |
| **Purpose** | Load brainstorming context | Implement task with preparation |
### Inline Format (Brainstorm)
**Marker**: `[FLOW_CONTROL]` written directly in Task() prompt
**Structure**: Markdown list format
**Used By**: Brainstorm commands (`auto-parallel.md`, role commands)
**Agent**: `conceptual-planning-agent`
**Example**:
```markdown
[FLOW_CONTROL]
### Flow Control Steps
**AGENT RESPONSIBILITY**: Execute these pre_analysis steps sequentially with context accumulation:
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework
2. **load_role_template**
- Action: Load role-specific planning template
- Command: bash($(cat "~/.ccw/workflows/cli-templates/planning-roles/{role}.md"))
- Output: role_template
3. **load_session_metadata**
- Action: Load session metadata and topic description
- Command: bash(cat .workflow/WFS-{session}/workflow-session.json 2>/dev/null || echo '{}')
- Output: session_metadata
```
**Characteristics**:
- 3-5 simple context loading steps
- Written directly in prompt (not persistent)
- No dependency management between steps
- Used for temporary context preparation
- Variables: `[variable_name]` for output references
### JSON Format (Implementation)
**Marker**: `[FLOW_CONTROL]` used in TodoWrite or documentation to indicate task has flow control
**Structure**: Complete JSON structure in task file
**Used By**: Implementation tasks (IMPL-*.json)
**Agents**: `code-developer`, `test-fix-agent`, `doc-generator`
**Example**:
```json
"flow_control": {
"pre_analysis": [
{
"step": "load_role_analyses",
"action": "Load role analysis documents from brainstorming",
"commands": [
"bash(ls .workflow/WFS-{session}/.brainstorming/*/analysis*.md 2>/dev/null || echo 'not found')",
"Glob(.workflow/WFS-{session}/.brainstorming/*/analysis*.md)",
"Read(each discovered role analysis file)"
],
"output_to": "role_analyses",
"on_error": "skip_optional"
},
{
"step": "local_codebase_exploration",
"action": "Explore codebase using local search",
"commands": [
"bash(rg '^(function|class|interface).*auth' --type ts -n --max-count 15)",
"bash(find . -name '*auth*' -type f | grep -v node_modules | head -10)"
],
"output_to": "codebase_structure"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Setup infrastructure",
"description": "Install JWT library and create config following [role_analyses]",
"modification_points": [
"Add JWT library dependencies to package.json",
"Create auth configuration file"
],
"logic_flow": [
"Install jsonwebtoken library via npm",
"Configure JWT secret from [role_analyses]",
"Export auth config for use by [jwt_generator]"
],
"depends_on": [],
"output": "auth_config"
},
{
"step": 2,
"title": "Implement JWT generation",
"description": "Create JWT token generation logic using [auth_config]",
"modification_points": [
"Add JWT generation function in auth service",
"Implement token signing with [auth_config]"
],
"logic_flow": [
"User login → validate credentials",
"Generate JWT payload with user data",
"Sign JWT using secret from [auth_config]",
"Return signed token"
],
"depends_on": [1],
"output": "jwt_generator"
}
],
"target_files": [
"src/auth/login.ts:handleLogin:75-120",
"src/middleware/auth.ts:validateToken"
]
}
```
**Characteristics**:
- Persistent storage in .task/IMPL-*.json files
- Complete dependency management (`depends_on` arrays)
- Two-phase structure: `pre_analysis` + `implementation_approach`
- Error handling strategies (`on_error` field)
- Target file specifications
- Variables: `[variable_name]` for cross-step references
### JSON Format Field Specifications
#### pre_analysis Field
**Purpose**: Context gathering phase before implementation
**Structure**: Array of step objects with sequential execution
**Step Fields**:
- **step**: Step identifier (string, e.g., "load_role_analyses")
- **action**: Human-readable description of the step
- **command** or **commands**: Single command string or array of command strings
- **output_to**: Variable name for storing step output
- **on_error**: Error handling strategy (`skip_optional`, `fail`, `retry_once`, `manual_intervention`)
**Command Types Supported**:
- **Bash commands**: `bash(command)` - Any shell command
- **Tool calls**: `Read(file)`, `Glob(pattern)`, `Grep(pattern)`
- **MCP tools**: `mcp__exa__get_code_context_exa()`, `mcp__exa__web_search_exa()`
- **CLI commands**: `gemini`, `qwen`, `codex --full-auto exec`
**Example**:
```json
{
"step": "load_context",
"action": "Load project context and patterns",
"commands": [
"bash(ccw tool exec get_modules_by_depth '{}')",
"Read(CLAUDE.md)"
],
"output_to": "project_structure",
"on_error": "skip_optional"
}
```
#### implementation_approach Field
**Purpose**: Define implementation steps with dependency management
**Structure**: Array of step objects (NOT object format)
**Step Fields (All Required)**:
- **step**: Unique step number (1, 2, 3, ...) - serves as step identifier
- **title**: Brief step title
- **description**: Comprehensive implementation description with context variable references
- **modification_points**: Array of specific code modification targets
- **logic_flow**: Array describing business logic execution sequence
- **depends_on**: Array of step numbers this step depends on (e.g., `[1]`, `[1, 2]`) - empty array `[]` for independent steps
- **output**: Output variable name that can be referenced by subsequent steps via `[output_name]`
**Optional Fields**:
- **command**: Command for step execution (supports any shell command or CLI tool)
- When omitted: Agent interprets modification_points and logic_flow to execute
- When specified: Command executes the step directly
**Execution Modes**:
- **Default (without command)**: Agent executes based on modification_points and logic_flow
- **With command**: Specified command handles execution
**Command Field Usage**:
- **Default approach**: Omit command field - let agent execute autonomously
- **CLI tools (codex/gemini/qwen)**: Add ONLY when user explicitly requests CLI tool usage
- **Simple commands**: Can include bash commands, test commands, validation scripts
- **Complex workflows**: Use command for multi-step operations or tool coordination
**Command Format Examples** (only when explicitly needed):
```json
// Simple Bash
"command": "bash(npm install package)"
"command": "bash(npm test)"
// Validation
"command": "bash(test -f config.ts && grep -q 'JWT_SECRET' config.ts)"
// Codex (user requested)
"command": "codex -C path --full-auto exec \"task\" --skip-git-repo-check -s danger-full-access"
// Codex Resume (user requested, maintains context)
"command": "codex --full-auto exec \"task\" resume --last --skip-git-repo-check -s danger-full-access"
// Gemini (user requested)
"command": "gemini \"analyze [context]\""
// Qwen (fallback for Gemini)
"command": "qwen \"analyze [context]\""
```
**Example Step**:
```json
{
"step": 2,
"title": "Implement JWT generation",
"description": "Create JWT token generation logic using [auth_config]",
"modification_points": [
"Add JWT generation function in auth service",
"Implement token signing with [auth_config]"
],
"logic_flow": [
"User login → validate credentials",
"Generate JWT payload with user data",
"Sign JWT using secret from [auth_config]",
"Return signed token"
],
"depends_on": [1],
"output": "jwt_generator"
}
```
#### target_files Field
**Purpose**: Specify files to be modified or created
**Format**: Array of strings
- **Existing files**: `"file:function:lines"` (e.g., `"src/auth/login.ts:handleLogin:75-120"`)
- **New files**: `"path/to/NewFile.ts"` (file path only)
### Tool Reference
**Available Command Types**:
**Gemini CLI**:
```bash
gemini "prompt"
gemini --approval-mode yolo "prompt" # For write mode
```
**Qwen CLI** (Gemini fallback):
```bash
qwen "prompt"
qwen --approval-mode yolo "prompt" # For write mode
```
**Codex CLI**:
```bash
codex -C directory --full-auto exec "task" --skip-git-repo-check -s danger-full-access
codex --full-auto exec "task" resume --last --skip-git-repo-check -s danger-full-access
```
**Built-in Tools**:
- `Read(file_path)` - Read file contents
- `Glob(pattern)` - Find files by pattern
- `Grep(pattern)` - Search content with regex
- `bash(command)` - Execute bash command
**MCP Tools**:
- `mcp__exa__get_code_context_exa(query="...")` - Get code context from Exa
- `mcp__exa__web_search_exa(query="...")` - Web search via Exa
**Bash Commands**:
```bash
bash(rg 'pattern' src/)
bash(find . -name "*.ts")
bash(npm test)
bash(git log --oneline | head -5)
```
### Variable System & Context Flow
**Variable Reference Syntax**:
Both formats use `[variable_name]` syntax for referencing outputs from previous steps.
**Variable Types**:
- **Step outputs**: `[step_output_name]` - Reference any pre_analysis step output
- **Task properties**: `[task_property]` - Reference any task context field
- **Previous results**: `[analysis_result]` - Reference accumulated context
- **Implementation outputs**: Reference outputs from previous implementation steps
**Examples**:
```json
// Reference pre_analysis output
"description": "Install JWT library following [role_analyses]"
// Reference previous step output
"description": "Create middleware using [auth_config] and [jwt_generator]"
// Reference task context
"command": "bash(cd [focus_paths] && npm test)"
```
**Context Accumulation Process**:
1. **Structure Analysis**: `get_modules_by_depth.sh` → project hierarchy
2. **Pattern Analysis**: Tool-specific commands → existing patterns
3. **Dependency Mapping**: Previous task summaries → inheritance context
4. **Task Context Generation**: Combined analysis → task.context fields
**Context Inheritance Rules**:
- **Parent → Child**: Container tasks pass context via `context.inherited`
- **Dependency → Dependent**: Previous task summaries via `context.depends_on`
- **Session → Task**: Global session context included in all tasks
- **Module → Feature**: Module patterns inform feature implementation
### Agent Processing Rules
**conceptual-planning-agent** (Inline Format):
- Parses markdown list from prompt
- Executes 3-5 simple loading steps
- No dependency resolution needed
- Accumulates context in variables
- Used only in brainstorm workflows
**code-developer, test-fix-agent** (JSON Format):
- Loads complete task JSON from file
- Executes `pre_analysis` steps sequentially
- Processes `implementation_approach` with dependency resolution
- Handles complex variable substitution
- Updates task status in JSON file
### Usage Guidelines
**Use Inline Format When**:
- Running brainstorm workflows
- Need 3-5 simple context loading steps
- No persistence required
- No dependencies between steps
- Temporary context preparation
**Use JSON Format When**:
- Implementing features or tasks
- Need 10+ complex execution steps
- Require dependency management
- Need persistent task definitions
- Complex variable flow between steps
- Error handling strategies needed
### Variable Reference Syntax
Both formats use `[variable_name]` syntax for referencing outputs:
**Inline Format**:
```markdown
2. **analyze_context**
- Action: Analyze using [topic_framework] and [role_template]
- Output: analysis_results
```
**JSON Format**:
```json
{
"step": 2,
"description": "Implement following [role_analyses] and [codebase_structure]",
"depends_on": [1],
"output": "implementation"
}
```
### Task Validation Rules
1. **ID Uniqueness**: All task IDs must be unique
2. **Hierarchical Format**: Must follow IMPL-N[.M] pattern (maximum 2 levels)
3. **Parent References**: All parent IDs must exist as JSON files
4. **Status Consistency**: Status values from defined enumeration
5. **Required Fields**: All 5 core fields must be present (id, title, status, meta, context, flow_control)
6. **Focus Paths Structure**: context.focus_paths must contain concrete paths (no wildcards)
7. **Flow Control Format**: pre_analysis must be array with required fields
8. **Dependency Integrity**: All task-level depends_on references must exist as JSON files
9. **Artifacts Structure**: context.artifacts (optional) must use valid type, priority, and path format
10. **Implementation Steps Array**: implementation_approach must be array of step objects
11. **Step Number Uniqueness**: All step numbers within a task must be unique and sequential (1, 2, 3, ...)
12. **Step Dependencies**: All step-level depends_on numbers must reference valid steps within same task
13. **Step Sequence**: Step numbers should match array order (first item step=1, second item step=2, etc.)
14. **Step Required Fields**: Each step must have step, title, description, modification_points, logic_flow, depends_on, output
15. **Step Optional Fields**: command field is optional - when omitted, agent executes based on modification_points and logic_flow
## Workflow Structure
### Unified File Structure
All workflows use the same file structure definition regardless of complexity. **Directories and files are created on-demand as needed**, not all at once during initialization.
#### Complete Structure Reference
```
.workflow/
├── [.scratchpad/] # Non-session-specific outputs (created when needed)
│ ├── analyze-*-[timestamp].md # One-off analysis results
│ ├── chat-*-[timestamp].md # Standalone chat sessions
│ ├── plan-*-[timestamp].md # Ad-hoc planning notes
│ ├── bug-index-*-[timestamp].md # Quick bug analyses
│ ├── code-analysis-*-[timestamp].md # Standalone code analysis
│ ├── execute-*-[timestamp].md # Ad-hoc implementation logs
│ └── codex-execute-*-[timestamp].md # Multi-stage execution logs
├── [design-run-*/] # Standalone UI design outputs (created when needed)
│ └── (timestamped)/ # Timestamped design runs without session
│ ├── .intermediates/ # Intermediate analysis files
│ │ ├── style-analysis/ # Style analysis data
│ │ │ ├── computed-styles.json # Extracted CSS values
│ │ │ └── design-space-analysis.json # Design directions
│ │ └── layout-analysis/ # Layout analysis data
│ │ ├── dom-structure-{target}.json # DOM extraction
│ │ └── inspirations/ # Layout research
│ │ └── {target}-layout-ideas.txt
│ ├── style-extraction/ # Final design systems
│ │ ├── style-1/ # design-tokens.json, style-guide.md
│ │ └── style-N/
│ ├── layout-extraction/ # Layout templates
│ │ └── layout-templates.json
│ ├── prototypes/ # Generated HTML/CSS prototypes
│ │ ├── {target}-style-{s}-layout-{l}.html # Final prototypes
│ │ ├── compare.html # Interactive matrix view
│ │ └── index.html # Navigation page
│ └── .run-metadata.json # Run configuration
├── active/ # Active workflow sessions
│ └── WFS-[topic-slug]/
│ ├── workflow-session.json # Session metadata and state (REQUIRED)
│ ├── [.brainstorming/] # Optional brainstorming phase (created when needed)
│ ├── [.chat/] # CLI interaction sessions (created when analysis is run)
│ │ ├── chat-*.md # Saved chat sessions
│ │ └── analysis-*.md # Analysis results
│ ├── [.process/] # Planning analysis results (created by /workflow:plan)
│ │ └── ANALYSIS_RESULTS.md # Analysis results and planning artifacts
│ ├── IMPL_PLAN.md # Planning document (REQUIRED)
│ ├── TODO_LIST.md # Progress tracking (REQUIRED)
│ ├── [.summaries/] # Task completion summaries (created when tasks complete)
│ │ ├── IMPL-*-summary.md # Main task summaries
│ │ └── IMPL-*.*-summary.md # Subtask summaries
│ ├── [.review/] # Code review results (created by review commands)
│ │ ├── review-metadata.json # Review configuration and scope
│ │ ├── review-state.json # Review state machine
│ │ ├── review-progress.json # Real-time progress tracking
│ │ ├── dimensions/ # Per-dimension analysis results
│ │ ├── iterations/ # Deep-dive iteration results
│ │ ├── reports/ # Human-readable reports and CLI outputs
│ │ ├── REVIEW-SUMMARY.md # Final consolidated summary
│ │ └── dashboard.html # Interactive review dashboard
│ ├── [design-*/] # UI design outputs (created by ui-design workflows)
│ │ ├── .intermediates/ # Intermediate analysis files
│ │ │ ├── style-analysis/ # Style analysis data
│ │ │ │ ├── computed-styles.json # Extracted CSS values
│ │ │ │ └── design-space-analysis.json # Design directions
│ │ │ └── layout-analysis/ # Layout analysis data
│ │ │ ├── dom-structure-{target}.json # DOM extraction
│ │ │ └── inspirations/ # Layout research
│ │ │ └── {target}-layout-ideas.txt
│ │ ├── style-extraction/ # Final design systems
│ │ │ ├── style-1/ # design-tokens.json, style-guide.md
│ │ │ └── style-N/
│ │ ├── layout-extraction/ # Layout templates
│ │ │ └── layout-templates.json
│ │ ├── prototypes/ # Generated HTML/CSS prototypes
│ │ │ ├── {target}-style-{s}-layout-{l}.html # Final prototypes
│ │ │ ├── compare.html # Interactive matrix view
│ │ │ └── index.html # Navigation page
│ │ └── .run-metadata.json # Run configuration
│ └── .task/ # Task definitions (REQUIRED)
│ ├── IMPL-*.json # Main task definitions
│ └── IMPL-*.*.json # Subtask definitions (created dynamically)
└── archives/ # Completed workflow sessions
└── WFS-[completed-topic]/ # Archived session directories
```
#### Creation Strategy
- **Initial Setup**: Create only `workflow-session.json`, `IMPL_PLAN.md`, `TODO_LIST.md`, and `.task/` directory
- **On-Demand Creation**: Other directories created when first needed
- **Dynamic Files**: Subtask JSON files created during task decomposition
- **Scratchpad Usage**: `.scratchpad/` created when CLI commands run without active session
- **Design Usage**: `design-{timestamp}/` created by UI design workflows in `.workflow/` directly for standalone design runs
- **Review Usage**: `.review/` created by review commands (`/workflow:review-module-cycle`, `/workflow:review-session-cycle`) for comprehensive code quality analysis
- **Intermediate Files**: `.intermediates/` contains analysis data (style/layout) separate from final deliverables
- **Layout Templates**: `layout-extraction/layout-templates.json` contains structural templates for UI assembly
#### Scratchpad Directory (.scratchpad/)
**Purpose**: Centralized location for non-session-specific CLI outputs
**When to Use**:
1. **No Active Session**: CLI analysis/chat commands run without an active workflow session
2. **Unrelated Analysis**: Quick analysis not related to current active session
3. **Exploratory Work**: Ad-hoc investigation before creating formal workflow
4. **One-Off Queries**: Standalone questions or debugging without workflow context
**Output Routing Logic**:
- **IF** active session exists in `.workflow/active/` AND command is session-relevant:
- Save to `.workflow/active/WFS-[id]/.chat/[command]-[timestamp].md`
- **ELSE** (no session OR one-off analysis):
- Save to `.workflow/.scratchpad/[command]-[description]-[timestamp].md`
**File Naming Pattern**: `[command-type]-[brief-description]-[timestamp].md`
**Examples**:
*Workflow Commands (lightweight):*
- `/workflow:lite-plan "feature idea"` (exploratory) → `.scratchpad/lite-plan-feature-idea-20250105-143110.md`
- `/workflow:lite-fix "bug description"` (bug fixing) → `.scratchpad/lite-fix-bug-20250105-143130.md`
> **Note**: Direct CLI commands (`/cli:analyze`, `/cli:execute`, etc.) have been replaced by semantic invocation and workflow commands.
**Maintenance**:
- Periodically review and clean up old scratchpad files
- Promote useful analyses to formal workflow sessions if needed
- No automatic cleanup - manual management recommended
### File Naming Conventions
#### Session Identifiers
**Format**: `WFS-[topic-slug]`
**WFS Prefix Meaning**:
- `WFS` = **W**ork**F**low **S**ession
- Identifies directories as workflow session containers
- Distinguishes workflow sessions from other project directories
**Naming Rules**:
- Convert topic to lowercase with hyphens (e.g., "User Auth System" → `WFS-user-auth-system`)
- Add `-NNN` suffix only if conflicts exist (e.g., `WFS-payment-integration-002`)
- Maximum length: 50 characters including WFS- prefix
#### Document Naming
- `workflow-session.json` - Session state (required)
- `IMPL_PLAN.md` - Planning document (required)
- `TODO_LIST.md` - Progress tracking (auto-generated when needed)
- Chat sessions: `chat-analysis-*.md`
- Task summaries: `IMPL-[task-id]-summary.md`
### Document Templates
#### TODO_LIST.md Template
```markdown
# Tasks: [Session Topic]
## Task Progress
**IMPL-001**: [Main Task Group] → [📋](./.task/IMPL-001.json)
- [ ] **IMPL-001.1**: [Subtask] → [📋](./.task/IMPL-001.1.json)
- [x] **IMPL-001.2**: [Subtask] → [📋](./.task/IMPL-001.2.json) | [](./.summaries/IMPL-001.2-summary.md)
- [x] **IMPL-002**: [Simple Task] → [📋](./.task/IMPL-002.json) | [](./.summaries/IMPL-002-summary.md)
## Status Legend
- `▸` = Container task (has subtasks)
- `- [ ]` = Pending leaf task
- `- [x]` = Completed leaf task
- Maximum 2 levels: Main tasks and subtasks only
```
## Operations Guide
### Session Management
```bash
# Create minimal required structure
mkdir -p .workflow/active/WFS-topic-slug/.task
echo '{"session_id":"WFS-topic-slug",...}' > .workflow/active/WFS-topic-slug/workflow-session.json
echo '# Implementation Plan' > .workflow/active/WFS-topic-slug/IMPL_PLAN.md
echo '# Tasks' > .workflow/active/WFS-topic-slug/TODO_LIST.md
```
### Task Operations
```bash
# Create task
echo '{"id":"IMPL-1","title":"New task",...}' > .task/IMPL-1.json
# Update task status
jq '.status = "active"' .task/IMPL-1.json > temp && mv temp .task/IMPL-1.json
# Generate TODO list from JSON state
generate_todo_list_from_json .task/
```
### Directory Creation (On-Demand)
```bash
mkdir -p .brainstorming # When brainstorming is initiated
mkdir -p .chat # When analysis commands are run
mkdir -p .summaries # When first task completes
```
### Session Consistency Checks & Recovery
```bash
# Validate session directory structure
if [ -d ".workflow/active/" ]; then
for session_dir in .workflow/active/WFS-*; do
if [ ! -f "$session_dir/workflow-session.json" ]; then
echo "⚠️ Missing workflow-session.json in $session_dir"
fi
done
fi
```
**Recovery Strategies**:
- **Missing Session File**: Recreate workflow-session.json from template
- **Corrupted Session File**: Restore from template with basic metadata
- **Broken Task Hierarchy**: Reconstruct parent-child relationships from task JSON files
- **Orphaned Sessions**: Move incomplete sessions to archives/
## Complexity Classification
### Task Complexity Rules
**Complexity is determined by task count and decomposition needs:**
| Complexity | Task Count | Hierarchy Depth | Decomposition Behavior |
|------------|------------|----------------|----------------------|
| **Simple** | <5 tasks | 1 level (IMPL-N) | Direct execution, minimal decomposition |
| **Medium** | 5-15 tasks | 2 levels (IMPL-N.M) | Moderate decomposition, context coordination |
| **Complex** | >15 tasks | 2 levels (IMPL-N.M) | Frequent decomposition, multi-agent orchestration |
### Workflow Characteristics & Tool Guidance
#### Simple Workflows
- **Examples**: Bug fixes, small feature additions, configuration changes
- **Task Decomposition**: Usually single-level tasks, minimal breakdown needed
- **Agent Coordination**: Direct execution without complex orchestration
- **Tool Strategy**: `bash()` commands, `grep()` for pattern matching
#### Medium Workflows
- **Examples**: New features, API endpoints with integration, database schema changes
- **Task Decomposition**: Two-level hierarchy when decomposition is needed
- **Agent Coordination**: Context coordination between related tasks
- **Tool Strategy**: `gemini` for pattern analysis, `codex --full-auto` for implementation
#### Complex Workflows
- **Examples**: Major features, architecture refactoring, security implementations, multi-service deployments
- **Task Decomposition**: Frequent use of two-level hierarchy with dynamic subtask creation
- **Agent Coordination**: Multi-agent orchestration with deep context analysis
- **Tool Strategy**: `gemini` for architecture analysis, `codex --full-auto` for complex problem solving, `bash()` commands for flexible analysis
### Assessment & Upgrades
- **During Creation**: System evaluates requirements and assigns complexity
- **During Execution**: Can upgrade (Simple→Medium→Complex) but never downgrade
- **Override Allowed**: Users can specify higher complexity manually
## Agent Integration
### Agent Assignment
Based on task type and title keywords:
- **Planning tasks** → @action-planning-agent
- **Implementation** → @code-developer (code + tests)
- **Test execution/fixing** → @test-fix-agent
- **Review** → @universal-executor (optional, only when explicitly requested)
### Execution Context
Agents receive complete task JSON plus workflow context:
```json
{
"task": { /* complete task JSON */ },
"workflow": {
"session": "WFS-user-auth",
"phase": "IMPLEMENT"
}
}
```

View File

@@ -1,44 +0,0 @@
# Claude Instructions
- **Coding Philosophy**: @~/.ccw/workflows/coding-philosophy.md
## CLI Endpoints
- **CLI Tools Usage**: @~/.ccw/workflows/cli-tools-usage.md
- **CLI Endpoints Config**: @~/.claude/cli-tools.json
**Strictly follow the cli-tools.json configuration**
Available CLI endpoints are dynamically defined by the config file
## Tool Execution
- **Context Requirements**: @~/.ccw/workflows/context-tools.md
- **File Modification**: @~/.ccw/workflows/file-modification.md
### Agent Calls
- **Always use `run_in_background: false`** for Task tool agent calls: `Task({ subagent_type: "xxx", prompt: "...", run_in_background: false })` to ensure synchronous execution and immediate result visibility
- **TaskOutput usage**: Only use `TaskOutput({ task_id: "xxx", block: false })` + sleep loop to poll completion status. NEVER read intermediate output during agent/CLI execution - wait for final result only
### CLI Tool Calls (ccw cli)
- **Default: Use Bash `run_in_background: true`** - Unless otherwise specified, always execute CLI calls in background using Bash tool's background mode:
```
Bash({
command: "ccw cli -p '...' --tool gemini",
run_in_background: true // Bash tool parameter, not ccw cli parameter
})
```
- **After CLI call**: Stop output immediately - let CLI execute in background. **DO NOT use TaskOutput polling** - wait for hook callback to receive results
### CLI Analysis Calls
- **Wait for results**: MUST wait for CLI analysis to complete before taking any write action. Do NOT proceed with fixes while analysis is running
- **Value every call**: Each CLI invocation is valuable and costly. NEVER waste analysis results:
- Aggregate multiple analysis results before proposing solutions
### CLI Auto-Invoke Triggers
- **Reference**: See `cli-tools-usage.md` → [Auto-Invoke Triggers](#auto-invoke-triggers) for full specification
- **Key scenarios**: Self-repair fails, ambiguous requirements, architecture decisions, pattern uncertainty, critical code paths
- **Principles**: Default `--mode analysis`, no confirmation needed, wait for completion, flexible rule selection
## Code Diagnostics
- **Prefer `mcp__ide__getDiagnostics`** for code error checking over shell-based TypeScript compilation

View File

@@ -55,17 +55,6 @@ color: yellow
**Step-by-step execution**:
```
0. Load planning notes → Extract phase-level constraints (NEW)
Commands: Read('.workflow/active/{session-id}/planning-notes.md')
Output: Consolidated constraints from all workflow phases
Structure:
- User Intent: Original GOAL, KEY_CONSTRAINTS
- Context Findings: Critical files, architecture notes, constraints
- Conflict Decisions: Resolved conflicts, modified artifacts
- Consolidated Constraints: Numbered list of ALL constraints (Phase 1-3)
USAGE: This is the PRIMARY source of constraints. All task generation MUST respect these constraints.
1. Load session metadata → Extract user input
- User description: Original task/feature requirements
- Project scope: User-specified boundaries and goals
@@ -186,7 +175,7 @@ if (contextPackage.brainstorm_artifacts?.role_analyses?.length > 0) {
- Add quantified requirements and measurable acceptance criteria
3. Create IMPL_PLAN.md
- Load template: Read(~/.ccw/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
- Load template: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
- Follow template structure and validation checklist
- Populate all 8 sections with synthesized context
- Document CCW workflow phase progression
@@ -214,13 +203,7 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
"id": "IMPL-N",
"title": "Descriptive task name",
"status": "pending|active|completed|blocked",
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json",
"cli_execution_id": "WFS-{session}-IMPL-N",
"cli_execution": {
"strategy": "new|resume|fork|merge_fork",
"resume_from": "parent-cli-id",
"merge_from": ["id1", "id2"]
}
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
}
```
@@ -233,50 +216,6 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
- `title`: Descriptive task name summarizing the work
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
- `cli_execution_id`: Unique CLI conversation ID (format: `{session_id}-{task_id}`)
- `cli_execution`: CLI execution strategy based on task dependencies
- `strategy`: Execution pattern (`new`, `resume`, `fork`, `merge_fork`)
- `resume_from`: Parent task's cli_execution_id (for resume/fork)
- `merge_from`: Array of parent cli_execution_ids (for merge_fork)
**CLI Execution Strategy Rules** (MANDATORY - apply to all tasks):
| Dependency Pattern | Strategy | CLI Command Pattern |
|--------------------|----------|---------------------|
| No `depends_on` | `new` | `--id {cli_execution_id}` |
| 1 parent, parent has 1 child | `resume` | `--resume {resume_from}` |
| 1 parent, parent has N children | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| N parents | `merge_fork` | `--resume {merge_from.join(',')} --id {cli_execution_id}` |
**Strategy Selection Algorithm**:
```javascript
function computeCliStrategy(task, allTasks) {
const deps = task.context?.depends_on || []
const childCount = allTasks.filter(t =>
t.context?.depends_on?.includes(task.id)
).length
if (deps.length === 0) {
return { strategy: "new" }
} else if (deps.length === 1) {
const parentTask = allTasks.find(t => t.id === deps[0])
const parentChildCount = allTasks.filter(t =>
t.context?.depends_on?.includes(deps[0])
).length
if (parentChildCount === 1) {
return { strategy: "resume", resume_from: parentTask.cli_execution_id }
} else {
return { strategy: "fork", resume_from: parentTask.cli_execution_id }
}
} else {
const mergeFrom = deps.map(depId =>
allTasks.find(t => t.id === depId).cli_execution_id
)
return { strategy: "merge_fork", merge_from: mergeFrom }
}
}
```
#### Meta Object
@@ -286,13 +225,7 @@ function computeCliStrategy(task, allTasks) {
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
"execution_group": "parallel-abc123|null",
"module": "frontend|backend|shared|null",
"execution_config": {
"method": "agent|cli",
"cli_tool": "codex|gemini|qwen|auto|null",
"enable_resume": true,
"previous_cli_id": "string|null"
}
"module": "frontend|backend|shared|null"
}
}
```
@@ -302,32 +235,6 @@ function computeCliStrategy(task, allTasks) {
- `agent`: Assigned agent for execution
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
- `execution_config`: CLI execution settings (MUST align with userConfig from task-generate-agent)
- `method`: Execution method - `agent` (direct) or `cli` (CLI only). Only two values in final task JSON.
- `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, `auto`, or `null` (for agent-only)
- `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
- `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)
**execution_config Alignment Rules** (MANDATORY):
```
userConfig.executionMethod → meta.execution_config
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
Execution: Agent executes pre_analysis, then directly implements implementation_approach
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Execution: Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" →
Per-task decision: set method to "agent" OR "cli" per task based on complexity
- Simple tasks (≤3 files, straightforward logic) → { method: "agent", cli_tool: null, enable_resume: false }
- Complex tasks (>3 files, complex logic, refactoring) → { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Final task JSON always has method = "agent" or "cli", never "hybrid"
```
**IMPORTANT**: implementation_approach steps do NOT contain `command` fields. Execution routing is controlled by task-level `meta.execution_config.method` only.
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
@@ -346,7 +253,7 @@ userConfig.executionMethod → meta.execution_config
- `test_framework`: Existing test framework from project (required for test tasks)
- `coverage_target`: Target code coverage percentage (optional)
**Note**: CLI tool usage for test-fix tasks is now controlled via task-level `meta.execution_config.method`, not via `meta.use_codex`.
**Note**: CLI tool usage for test-fix tasks is now controlled via `flow_control.implementation_approach` steps with `command` fields, not via `meta.use_codex`.
#### Context Object
@@ -485,7 +392,7 @@ userConfig.executionMethod → meta.execution_config
// Pattern: Project structure analysis
{
"step": "analyze_project_architecture",
"commands": ["bash(ccw tool exec get_modules_by_depth '{}')"],
"commands": ["bash(~/.claude/scripts/get_modules_by_depth.sh)"],
"output_to": "project_architecture"
},
@@ -502,14 +409,14 @@ userConfig.executionMethod → meta.execution_config
// Pattern: Gemini CLI deep analysis
{
"step": "gemini_analyze_[aspect]",
"command": "ccw cli -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY' --tool gemini --mode analysis --cd [path]",
"command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
"output_to": "analysis_result"
},
// Pattern: Qwen CLI analysis (fallback/alternative)
{
"step": "qwen_analyze_[aspect]",
"command": "ccw cli -p '[similar to gemini pattern]' --tool qwen --mode analysis --cd [path]",
"command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
"output_to": "analysis_result"
},
@@ -550,52 +457,65 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
4. **Command Composition Patterns**:
- **Single command**: `bash([simple_search])`
- **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
- **CLI analysis**: `ccw cli -p '[prompt]' --tool gemini --mode analysis --cd [path]`
- **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
- **MCP integration**: `mcp__[tool]__[function]([params])`
**Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
##### Implementation Approach
**Execution Control**:
**Execution Modes**:
The `implementation_approach` defines sequential implementation steps. Execution routing is controlled by **task-level `meta.execution_config.method`**, NOT by step-level `command` fields.
The `implementation_approach` supports **two execution modes** based on the presence of the `command` field:
**Two Execution Modes**:
1. **Agent Mode** (`meta.execution_config.method = "agent"`):
1. **Default Mode (Agent Execution)** - `command` field **omitted**:
- Agent interprets `modification_points` and `logic_flow` autonomously
- Direct agent execution with full context awareness
- No external tool overhead
- **Use for**: Standard implementation tasks where agent capability is sufficient
- **Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`
2. **CLI Mode** (`meta.execution_config.method = "cli"`):
- Agent executes `pre_analysis`, then hands off full context to CLI via `buildCliHandoffPrompt()`
- CLI tool specified in `meta.execution_config.cli_tool` (codex/gemini/qwen)
- Leverages specialized CLI tools for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when userConfig.executionMethod = "cli"
2. **CLI Mode (Command Execution)** - `command` field **included**:
- Specified command executes the step directly
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
- **Required fields**: Same as default mode **PLUS** `command`
- **Command patterns**:
- `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
- `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
- `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
**Step Schema** (same for both modes):
```json
{
"step": 1,
"title": "Step title",
"description": "What to implement (may use [variable] placeholders from pre_analysis)",
"modification_points": ["Quantified changes: [list with counts]"],
"logic_flow": ["Implementation sequence"],
"depends_on": [0],
"output": "variable_name"
}
```
**Semantic CLI Tool Selection**:
**Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`
Agent determines CLI tool usage per-step based on user semantics and task nature.
**IMPORTANT**: Do NOT add `command` field to implementation_approach steps. Execution routing is determined by task-level `meta.execution_config.method` only.
**Source**: Scan `metadata.task_description` from context-package.json for CLI tool preferences.
**Example**:
**User Semantic Triggers** (patterns to detect in task_description):
- "use Codex/codex" → Add `command` field with Codex CLI
- "use Gemini/gemini" → Add `command` field with Gemini CLI
- "use Qwen/qwen" → Add `command` field with Qwen CLI
- "CLI execution" / "automated" → Infer appropriate CLI tool
**Task-Based Selection** (when no explicit user preference):
- **Implementation/coding**: Codex preferred for autonomous development
- **Analysis/exploration**: Gemini preferred for large context analysis
- **Documentation**: Gemini/Qwen with write mode (`--approval-mode yolo`)
- **Testing**: Depends on complexity - simple=agent, complex=Codex
**Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
- Agent orchestrates task execution
- When step has `command` field, agent executes it via Bash
- When step has no `command` field, agent implements directly
- This maintains agent control while leveraging CLI tool power
**Key Principle**: The `command` field is **optional**. Agent decides based on user semantics and task complexity.
**Examples**:
```json
[
// === DEFAULT MODE: Agent Execution (no command field) ===
{
"step": 1,
"title": "Load and analyze role analyses",
@@ -632,6 +552,18 @@ The `implementation_approach` defines sequential implementation steps. Execution
],
"depends_on": [1],
"output": "implementation"
},
// === CLI MODE: Command Execution (optional command field) ===
{
"step": 3,
"title": "Execute implementation using CLI tool",
"description": "Use Codex/Gemini for complex autonomous execution",
"command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
"modification_points": ["[Same as default mode]"],
"logic_flow": ["[Same as default mode]"],
"depends_on": [1, 2],
"output": "cli_implementation"
}
]
```
@@ -661,7 +593,7 @@ The `implementation_approach` defines sequential implementation steps. Execution
**Template-Based Generation**:
```
1. Load template: Read(~/.ccw/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
1. Load template: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
2. Populate all sections following template structure
3. Complete template validation checklist
4. Generate at .workflow/active/{session_id}/IMPL_PLAN.md
@@ -749,18 +681,16 @@ Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
### 2.4 Complexity & Structure Selection
**Task Division Strategy**: Minimize task count while avoiding single-task overload. Group similar tasks to share context; subdivide only when exceeding 3-5 modification areas.
Use `analysis_results.complexity` or task count to determine structure:
**Single Module Mode**:
- **Simple Tasks** (≤4 tasks): Flat structure
- **Medium Tasks** (5-8 tasks): Flat structure
- **Complex Tasks** (>8 tasks): Re-scope required (maximum 8 tasks hard limit)
- **Simple Tasks** (≤5 tasks): Flat structure
- **Medium Tasks** (6-12 tasks): Flat structure
- **Complex Tasks** (>12 tasks): Re-scope required (maximum 12 tasks hard limit)
**Multi-Module Mode** (N+1 parallel planning):
- **Per-module limit**: ≤6 tasks per module
- **Total limit**: No total limit (each module independently capped at 6 tasks)
- **Per-module limit**: ≤9 tasks per module
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md
@@ -821,60 +751,28 @@ Use `analysis_results.complexity` or task count to determine structure:
- Proper linking between documents
- Consistent navigation and references
### 3.3 N+1 Context Recording
**Purpose**: Record decisions and deferred items for N+1 planning continuity.
**When**: After task generation, update `## N+1 Context` in planning-notes.md.
**What to Record**:
- **Decisions**: Architecture/technology choices with rationale (mark `Revisit?` if may change)
- **Deferred**: Items explicitly moved to N+1 with reason
**Example**:
```markdown
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| JWT over Session | Stateless scaling | No |
| CROSS::B::api → IMPL-B1 | B1 defines base | Yes |
### Deferred
- [ ] Rate limiting - Requires Redis (N+1)
- [ ] API versioning - Low priority
```
### 3.4 Guidelines Checklist
### 3.3 Guidelines Checklist
**ALWAYS:**
- **Load planning-notes.md FIRST**: Read planning-notes.md before context-package.json. Use its Consolidated Constraints as primary constraint source for all task generation
- **Record N+1 Context**: Update `## N+1 Context` section with key decisions and deferred items
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.ccw/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use provided context package: Extract all information from structured context
- Respect memory-first rule: Use provided content (already loaded from memory/file)
- Follow 6-field schema: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Assign CLI execution IDs**: Every task MUST have `cli_execution_id` (format: `{session_id}-{task_id}`)
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 8 tasks (single module) or 6 tasks per module (multi-module), request re-scope if exceeded
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
- Use session paths: Construct all paths using provided session_id
- Link documents properly: Use correct linking format (📋 for JSON, ✅ for summaries)
- Run validation checklist: Verify all quantification requirements before finalizing task JSONs
- Apply 举一反三 principle: Adapt pre-analysis patterns to task-specific needs dynamically
- Follow template validation: Complete IMPL_PLAN.md template validation checklist before finalization
**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**NEVER:**
- Load files directly (use provided context package instead)
- Assume default locations (always use session_id in paths)
- Create circular dependencies in task.depends_on
- Exceed 8 tasks (single module) or 6 tasks per module (multi-module) without re-scoping
- Exceed 12 tasks without re-scoping
- Skip artifact integration when artifacts_inventory is provided
- Ignore MCP capabilities when available
- Use fixed pre-analysis steps without task-specific adaptation

View File

@@ -1,391 +0,0 @@
---
name: cli-discuss-agent
description: |
Multi-CLI collaborative discussion agent with cross-verification and solution synthesis.
Orchestrates 5-phase workflow: Context Prep → CLI Execution → Cross-Verify → Synthesize → Output
color: magenta
allowed-tools: mcp__ace-tool__search_context(*), Bash(*), Read(*), Write(*), Glob(*), Grep(*)
---
You are a specialized CLI discussion agent that orchestrates multiple CLI tools to analyze tasks, cross-verify findings, and synthesize structured solutions.
## Core Capabilities
1. **Multi-CLI Orchestration** - Invoke Gemini, Codex, Qwen for diverse perspectives
2. **Cross-Verification** - Compare findings, identify agreements/disagreements
3. **Solution Synthesis** - Merge approaches, score and rank by consensus
4. **Context Enrichment** - ACE semantic search for supplementary context
**Discussion Modes**:
- `initial` → First round, establish baseline analysis (parallel execution)
- `iterative` → Build on previous rounds with user feedback (parallel + resume)
- `verification` → Cross-verify specific approaches (serial execution)
---
## 5-Phase Execution Workflow
```
Phase 1: Context Preparation
↓ Parse input, enrich with ACE if needed, create round folder
Phase 2: Multi-CLI Execution
↓ Build prompts, execute CLIs with fallback chain, parse outputs
Phase 3: Cross-Verification
↓ Compare findings, identify agreements/disagreements, resolve conflicts
Phase 4: Solution Synthesis
↓ Extract approaches, merge similar, score and rank top 3
Phase 5: Output Generation
↓ Calculate convergence, generate questions, write synthesis.json
```
---
## Input Schema
**From orchestrator** (may be JSON strings):
- `task_description` - User's task or requirement
- `round_number` - Current discussion round (1, 2, 3...)
- `session` - `{ id, folder }` for output paths
- `ace_context` - `{ relevant_files[], detected_patterns[], architecture_insights }`
- `previous_rounds` - Array of prior SynthesisResult (optional)
- `user_feedback` - User's feedback from last round (optional)
- `cli_config` - `{ tools[], timeout, fallback_chain[], mode }` (optional)
- `tools`: Default `['gemini', 'codex']` or `['gemini', 'codex', 'claude']`
- `fallback_chain`: Default `['gemini', 'codex', 'claude']`
- `mode`: `'parallel'` (default) or `'serial'`
---
## Output Schema
**Output Path**: `{session.folder}/rounds/{round_number}/synthesis.json`
```json
{
"round": 1,
"solutions": [
{
"name": "Solution Name",
"source_cli": ["gemini", "codex"],
"feasibility": 0.85,
"effort": "low|medium|high",
"risk": "low|medium|high",
"summary": "Brief analysis summary",
"implementation_plan": {
"approach": "High-level technical approach",
"tasks": [
{
"id": "T1",
"name": "Task name",
"depends_on": [],
"files": [{"file": "path", "line": 10, "action": "modify|create|delete"}],
"key_point": "Critical consideration for this task"
},
{
"id": "T2",
"name": "Second task",
"depends_on": ["T1"],
"files": [{"file": "path2", "line": 1, "action": "create"}],
"key_point": null
}
],
"execution_flow": "T1 → T2 → T3 (T2,T3 can parallel after T1)",
"milestones": ["Interface defined", "Core logic complete", "Tests passing"]
},
"dependencies": {
"internal": ["@/lib/module"],
"external": ["npm:package@version"]
},
"technical_concerns": ["Potential blocker 1", "Risk area 2"]
}
],
"convergence": {
"score": 0.75,
"new_insights": true,
"recommendation": "converged|continue|user_input_needed"
},
"cross_verification": {
"agreements": ["point 1"],
"disagreements": ["point 2"],
"resolution": "how resolved"
},
"clarification_questions": ["question 1?"]
}
```
**Schema Fields**:
| Field | Purpose |
|-------|---------|
| `feasibility` | Quantitative viability score (0-1) |
| `summary` | Narrative analysis summary |
| `implementation_plan.approach` | High-level technical strategy |
| `implementation_plan.tasks[]` | Discrete implementation tasks |
| `implementation_plan.tasks[].depends_on` | Task dependencies (IDs) |
| `implementation_plan.tasks[].key_point` | Critical consideration for task |
| `implementation_plan.execution_flow` | Visual task sequence |
| `implementation_plan.milestones` | Key checkpoints |
| `technical_concerns` | Specific risks/blockers |
**Note**: Solutions ranked by internal scoring (array order = priority). `pros/cons` merged into `summary` and `technical_concerns`.
---
## Phase 1: Context Preparation
**Parse input** (handle JSON strings from orchestrator):
```javascript
const ace_context = typeof input.ace_context === 'string'
? JSON.parse(input.ace_context) : input.ace_context || {}
const previous_rounds = typeof input.previous_rounds === 'string'
? JSON.parse(input.previous_rounds) : input.previous_rounds || []
```
**ACE Supplementary Search** (when needed):
```javascript
// Trigger conditions:
// - Round > 1 AND relevant_files < 5
// - Previous solutions reference unlisted files
if (shouldSupplement) {
mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: `Implementation patterns for ${task_keywords}`
})
}
```
**Create round folder**:
```bash
mkdir -p {session.folder}/rounds/{round_number}
```
---
## Phase 2: Multi-CLI Execution
### Available CLI Tools
三方 CLI 工具:
- **gemini** - Google Gemini (deep code analysis perspective)
- **codex** - OpenAI Codex (implementation verification perspective)
- **claude** - Anthropic Claude (architectural analysis perspective)
### Execution Modes
**Parallel Mode** (default, faster):
```
┌─ gemini ─┐
│ ├─→ merge results → cross-verify
└─ codex ──┘
```
- Execute multiple CLIs simultaneously
- Merge outputs after all complete
- Use when: time-sensitive, independent analysis needed
**Serial Mode** (for cross-verification):
```
gemini → (output) → codex → (verify) → claude
```
- Each CLI receives prior CLI's output
- Explicit verification chain
- Use when: deep verification required, controversial solutions
**Mode Selection**:
```javascript
const execution_mode = cli_config.mode || 'parallel'
// parallel: Promise.all([cli1, cli2, cli3])
// serial: await cli1 → await cli2(cli1.output) → await cli3(cli2.output)
```
### CLI Prompt Template
```bash
ccw cli -p "
PURPOSE: Analyze task from {perspective} perspective, verify technical feasibility
TASK:
• Analyze: \"{task_description}\"
• Examine codebase patterns and architecture
• Identify implementation approaches with trade-offs
• Provide file:line references for integration points
MODE: analysis
CONTEXT: @**/* | Memory: {ace_context_summary}
{previous_rounds_section}
{cross_verify_section}
EXPECTED: JSON with feasibility_score, findings, implementation_approaches, technical_concerns, code_locations
CONSTRAINTS:
- Specific file:line references
- Quantify effort estimates
- Concrete pros/cons
" --tool {tool} --mode analysis {resume_flag}
```
### Resume Mechanism
**Session Resume** - Continue from previous CLI session:
```bash
# Resume last session
ccw cli -p "Continue analysis..." --tool gemini --resume
# Resume specific session
ccw cli -p "Verify findings..." --tool codex --resume <session-id>
# Merge multiple sessions
ccw cli -p "Synthesize all..." --tool claude --resume <id1>,<id2>
```
**When to Resume**:
- Round > 1: Resume previous round's CLI session for context
- Cross-verification: Resume primary CLI session for secondary to verify
- User feedback: Resume with new constraints from user input
**Context Assembly** (automatic):
```
=== PREVIOUS CONVERSATION ===
USER PROMPT: [Previous CLI prompt]
ASSISTANT RESPONSE: [Previous CLI output]
=== CONTINUATION ===
[New prompt with updated context]
```
### Fallback Chain
Execute primary tool → On failure, try next in chain:
```
gemini → codex → claude → degraded-analysis
```
### Cross-Verification Mode
Second+ CLI receives prior analysis for verification:
```json
{
"cross_verification": {
"agrees_with": ["verified point 1"],
"disagrees_with": ["challenged point 1"],
"additions": ["new insight 1"]
}
}
```
---
## Phase 3: Cross-Verification
**Compare CLI outputs**:
1. Group similar findings across CLIs
2. Identify multi-CLI agreements (2+ CLIs agree)
3. Identify disagreements (conflicting conclusions)
4. Generate resolution based on evidence weight
**Output**:
```json
{
"agreements": ["Approach X proposed by gemini, codex"],
"disagreements": ["Effort estimate differs: gemini=low, codex=high"],
"resolution": "Resolved using code evidence from gemini"
}
```
---
## Phase 4: Solution Synthesis
**Extract and merge approaches**:
1. Collect implementation_approaches from all CLIs
2. Normalize names, merge similar approaches
3. Combine pros/cons/affected_files from multiple sources
4. Track source_cli attribution
**Internal scoring** (used for ranking, not exported):
```
score = (source_cli.length × 20) // Multi-CLI consensus
+ effort_score[effort] // low=30, medium=20, high=10
+ risk_score[risk] // low=30, medium=20, high=5
+ (pros.length - cons.length) × 5 // Balance
+ min(affected_files.length × 3, 15) // Specificity
```
**Output**: Top 3 solutions, ranked in array order (highest score first)
---
## Phase 5: Output Generation
### Convergence Calculation
```
score = agreement_ratio × 0.5 // agreements / (agreements + disagreements)
+ avg_feasibility × 0.3 // average of CLI feasibility_scores
+ stability_bonus × 0.2 // +0.2 if no new insights vs previous rounds
recommendation:
- score >= 0.8 → "converged"
- disagreements > 3 → "user_input_needed"
- else → "continue"
```
### Clarification Questions
Generate from:
1. Unresolved disagreements (max 2)
2. Technical concerns raised (max 2)
3. Trade-off decisions needed
**Max 4 questions total**
### Write Output
```javascript
Write({
file_path: `${session.folder}/rounds/${round_number}/synthesis.json`,
content: JSON.stringify(artifact, null, 2)
})
```
---
## Error Handling
**CLI Failure**: Try fallback chain → Degraded analysis if all fail
**Parse Failure**: Extract bullet points from raw output as fallback
**Timeout**: Return partial results with timeout flag
---
## Quality Standards
| Criteria | Good | Bad |
|----------|------|-----|
| File references | `src/auth/login.ts:45` | "update relevant files" |
| Effort estimate | `low` / `medium` / `high` | "some time required" |
| Pros/Cons | Concrete, specific | Generic, vague |
| Solution source | Multi-CLI consensus | Single CLI only |
| Convergence | Score with reasoning | Binary yes/no |
---
## Key Reminders
**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Execute multiple CLIs for cross-verification
2. Parse CLI outputs with fallback extraction
3. Include file:line references in affected_files
4. Calculate convergence score accurately
5. Write synthesis.json to round folder
6. Use `run_in_background: false` for CLI calls
7. Limit solutions to top 3
8. Limit clarification questions to 4
**NEVER**:
1. Execute implementation code (analysis only)
2. Return without writing synthesis.json
3. Skip cross-verification phase
4. Generate more than 4 clarification questions
5. Ignore previous round context
6. Assume solution without multi-CLI validation

View File

@@ -14,13 +14,13 @@ You are an intelligent CLI execution specialist that autonomously orchestrates c
2. **Qwen (Fallback)** - Same capabilities as Gemini, use when unavailable
3. **Codex (Alternative)** - Development, implementation & automation
**Templates**: `~/.ccw/workflows/cli-templates/prompts/`
**Templates**: `~/.claude/workflows/cli-templates/prompts/`
- `analysis/` - pattern.txt, architecture.txt, code-execution-tracing.txt, security.txt, quality.txt
- `development/` - feature.txt, refactor.txt, testing.txt, bug-diagnosis.txt
- `planning/` - task-breakdown.txt, architecture-planning.txt
- `memory/` - claude-module-unified.txt
**Reference**: See `~/.ccw/workflows/intelligent-tools-strategy.md` for complete usage guide
**Reference**: See `~/.claude/workflows/intelligent-tools-strategy.md` for complete usage guide
## 5-Phase Execution Workflow
@@ -61,38 +61,13 @@ Score = 0
**Extract Keywords**: domains (auth, api, database, ui), technologies (react, typescript, node), actions (implement, refactor, test)
**Plan Context Loading** (when executing from plan.json):
```javascript
// Load task-specific context from plan fields
const task = plan.tasks.find(t => t.id === taskId)
const context = {
// Base context
scope: task.scope,
modification_points: task.modification_points,
implementation: task.implementation,
// Medium/High complexity: WHY + HOW to verify
rationale: task.rationale?.chosen_approach, // Why this approach
verification: task.verification?.success_metrics, // How to verify success
// High complexity: risks + code skeleton
risks: task.risks?.map(r => r.mitigation), // Risk mitigations to follow
code_skeleton: task.code_skeleton, // Interface/function signatures
// Global context
data_flow: plan.data_flow?.diagram // Data flow overview
}
```
---
## Phase 2: Context Discovery
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
**1. Project Structure**:
```bash
ccw tool exec get_modules_by_depth '{}'
~/.claude/scripts/get_modules_by_depth.sh
```
**2. Content Search**:
@@ -125,11 +100,11 @@ CONTEXT: @**/*
# Specific patterns
CONTEXT: @CLAUDE.md @src/**/* @*.ts
# Cross-directory (requires --includeDirs)
# Cross-directory (requires --include-directories)
CONTEXT: @**/* @../shared/**/* @../types/**/*
```
**2. Template Selection** (`~/.ccw/workflows/cli-templates/prompts/`):
**2. Template Selection** (`~/.claude/workflows/cli-templates/prompts/`):
```
analyze → analysis/code-execution-tracing.txt | analysis/pattern.txt
execute → development/feature.txt
@@ -137,10 +112,9 @@ plan → planning/architecture-planning.txt | planning/task-breakdown.txt
bug-fix → development/bug-diagnosis.txt
```
**3. CONSTRAINTS Field**:
- Use `--rule <template>` option to auto-load protocol + template (appended to prompt)
- Template names: `category-function` format (e.g., `analysis-code-patterns`, `development-feature`)
- NEVER escape: `\"`, `\'` breaks shell parsing
**3. RULES Field**:
- Use `$(cat ~/.claude/workflows/cli-templates/prompts/{path}.txt)` directly
- NEVER escape: `\$`, `\"`, `\'` breaks command substitution
**4. Structured Prompt**:
```bash
@@ -149,31 +123,7 @@ TASK: {specific_task_with_details}
MODE: {analysis|write|auto}
CONTEXT: {structured_file_references}
EXPECTED: {clear_output_expectations}
CONSTRAINTS: {constraints}
```
**5. Plan-Aware Prompt Enhancement** (when executing from plan.json):
```bash
# Include rationale in PURPOSE (Medium/High)
PURPOSE: {task.description}
Approach: {task.rationale.chosen_approach}
Decision factors: {task.rationale.decision_factors.join(', ')}
# Include code skeleton in TASK (High)
TASK: {task.implementation.join('\n')}
Key interfaces: {task.code_skeleton.interfaces.map(i => i.signature)}
Key functions: {task.code_skeleton.key_functions.map(f => f.signature)}
# Include verification in EXPECTED
EXPECTED: {task.acceptance.join(', ')}
Success metrics: {task.verification.success_metrics.join(', ')}
# Include risk mitigations in CONSTRAINTS (High)
CONSTRAINTS: {constraints}
Risk mitigations: {task.risks.map(r => r.mitigation).join('; ')}
# Include data flow context (High)
Memory: Data flow: {plan.data_flow.diagram}
RULES: $(cat {selected_template}) | {constraints}
```
---
@@ -184,7 +134,7 @@ Memory: Data flow: {plan.data_flow.diagram}
```
analyze|plan → gemini (qwen fallback) + mode=analysis
execute (simple|medium) → gemini (qwen fallback) + mode=write
execute (complex) → codex + mode=write
execute (complex) → codex + mode=auto
discuss → multi (gemini + codex parallel)
```
@@ -194,45 +144,46 @@ discuss → multi (gemini + codex parallel)
- Codex: `gpt-5` (default), `gpt5-codex` (large context)
- **Position**: `-m` after prompt, before flags
### Command Templates (CCW Unified CLI)
### Command Templates
**Gemini/Qwen (Analysis)**:
```bash
ccw cli -p "
cd {dir} && gemini -p "
PURPOSE: {goal}
TASK: {task}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {output}
CONSTRAINTS: {constraints}
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {dir}
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
" -m gemini-2.5-pro
# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
# Qwen fallback: Replace 'gemini' with 'qwen'
```
**Gemini/Qwen (Write)**:
```bash
ccw cli -p "..." --tool gemini --mode write --cd {dir}
cd {dir} && gemini -p "..." --approval-mode yolo
```
**Codex (Write)**:
**Codex (Auto)**:
```bash
ccw cli -p "..." --tool codex --mode write --cd {dir}
codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access
# Resume: Add 'resume --last' after prompt
codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
```
**Cross-Directory** (Gemini/Qwen):
```bash
ccw cli -p "CONTEXT: @**/* @../shared/**/*" --tool gemini --mode analysis --cd src/auth --includeDirs ../shared
cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared
```
**Directory Scope**:
- `@` only references current directory + subdirectories
- External dirs: MUST use `--includeDirs` + explicit CONTEXT reference
- External dirs: MUST use `--include-directories` + explicit CONTEXT reference
**Timeout**: Simple 20min | Medium 40min | Complex 60min (Codex ×1.5)
**Bash Tool**: Use `run_in_background=false` for all CLI calls to ensure foreground execution
---
## Phase 5: Output Routing
@@ -252,25 +203,11 @@ find .workflow/active/ -name 'WFS-*' -type d
**Timestamp**: {iso_timestamp} | **Session**: {session_id} | **Task**: {task_id}
## Phase 1: Intent {intent} | Complexity {complexity} | Keywords {keywords}
[Medium/High] Rationale: {task.rationale.chosen_approach}
[High] Risks: {task.risks.map(r => `${r.description} → ${r.mitigation}`).join('; ')}
## Phase 2: Files ({N}) | Patterns {patterns} | Dependencies {deps}
[High] Data Flow: {plan.data_flow.diagram}
## Phase 3: Enhanced Prompt
{full_prompt}
[High] Code Skeleton:
- Interfaces: {task.code_skeleton.interfaces.map(i => i.name).join(', ')}
- Functions: {task.code_skeleton.key_functions.map(f => f.signature).join('; ')}
## Phase 4: Tool {tool} | Command {cmd} | Result {status} | Duration {time}
## Phase 5: Log {path} | Summary {summary_path}
[Medium/High] Verification Checklist:
- Unit Tests: {task.verification.unit_tests.join(', ')}
- Success Metrics: {task.verification.success_metrics.join(', ')}
## Next Steps: {actions}
```
@@ -308,7 +245,7 @@ Codex unavailable → Gemini/Qwen write mode
## Templates Reference
**Location**: `~/.ccw/workflows/cli-templates/prompts/`
**Location**: `~/.claude/workflows/cli-templates/prompts/`
**Analysis** (`analysis/`):
- `pattern.txt` - Code pattern analysis

View File

@@ -67,7 +67,7 @@ Phase 4: Output Generation
```bash
# Project structure
ccw tool exec get_modules_by_depth '{}'
~/.claude/scripts/get_modules_by_depth.sh
# Pattern discovery (adapt based on language)
rg "^export (class|interface|function) " --type ts -n
@@ -78,14 +78,14 @@ rg "^import .* from " -n | head -30
### Gemini Semantic Analysis (deep-scan, dependency-map)
```bash
ccw cli -p "
cd {dir} && gemini -p "
PURPOSE: {from prompt}
TASK: {from prompt}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {from prompt}
RULES: {from prompt, if template specified} | analysis=READ-ONLY
" --tool gemini --mode analysis --cd {dir}
"
```
**Fallback Chain**: Gemini → Qwen → Codex → Bash-only
@@ -165,8 +165,7 @@ Brief summary:
## Key Reminders
**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Read schema file FIRST before generating any output (if schema specified)
1. Read schema file FIRST before generating any output (if schema specified)
2. Copy field names EXACTLY from schema (case-sensitive)
3. Verify root structure matches schema (array vs object)
4. Match nested/flat structures as schema requires
@@ -175,9 +174,6 @@ Brief summary:
7. Include file:line references in findings
8. Attribute discovery source (bash/gemini)
**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**NEVER**:
1. Modify any files (read-only agent)
2. Skip schema reading step when schema is specified

View File

@@ -1,226 +1,128 @@
---
name: cli-lite-planning-agent
description: |
Generic planning agent for lite-plan, collaborative-plan, and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans. Used by lite-plan workflow for Medium/High complexity tasks.
Core capabilities:
- Schema-driven output (plan-json-schema or fix-plan-json-schema)
- Task decomposition with dependency analysis
- CLI execution ID assignment for fork/merge strategies
- Multi-angle context integration (explorations or diagnoses)
- Process documentation (planning-context.md) for collaborative workflows
- Task decomposition (1-10 tasks with IDs: T1, T2...)
- Dependency analysis (depends_on references)
- Flow control (parallel/sequential phases)
- Multi-angle exploration context integration
color: cyan
---
You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.
You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate planObject for downstream execution.
**CRITICAL**: After generating plan.json, you MUST execute internal **Plan Quality Check** (Phase 5) using CLI analysis to validate and auto-fix plan quality before returning to orchestrator. Quality dimensions: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance.
## Output Schema
## Output Artifacts
**Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`
The agent produces different artifacts based on workflow context:
**planObject Structure**:
```javascript
{
summary: string, // 2-3 sentence overview
approach: string, // High-level strategy
tasks: [TaskObject], // 1-10 structured tasks
flow_control: { // Execution phases
execution_order: [{ phase, tasks, type }],
exit_conditions: { success, failure }
},
focus_paths: string[], // Affected files (aggregated)
estimated_time: string,
recommended_execution: "Agent" | "Codex",
complexity: "Low" | "Medium" | "High",
_metadata: { timestamp, source, planning_mode, exploration_angles, duration_seconds }
}
```
### Standard Output (lite-plan, lite-fix)
| Artifact | Description |
|----------|-------------|
| `plan.json` | Structured plan following plan-json-schema.json |
### Extended Output (collaborative-plan sub-agents)
When invoked with `process_docs: true` in input context:
| Artifact | Description |
|----------|-------------|
| `planning-context.md` | Evidence paths + synthesized understanding (insights, decisions, approach) |
| `sub-plan.json` | Sub-plan following plan-json-schema.json with source_agent metadata |
**planning-context.md format**:
```markdown
# Planning Context: {focus_area}
## Source Evidence
- `exploration-{angle}.json` - {key finding}
- `{file}:{line}` - {what this proves}
## Understanding
- Current state: {analysis}
- Proposed approach: {strategy}
## Key Decisions
- Decision: {what} | Rationale: {why} | Evidence: {file ref}
**TaskObject Structure**:
```javascript
{
id: string, // T1, T2, T3...
title: string, // Action verb + target
file: string, // Target file path
action: string, // Create|Update|Implement|Refactor|Add|Delete|Configure|Test|Fix
description: string, // What to implement (1-2 sentences)
modification_points: [{ // Precise changes (optional)
file: string,
target: string, // function:lineRange
change: string
}],
implementation: string[], // 2-7 actionable steps
reference: { // Pattern guidance (optional)
pattern: string,
files: string[],
examples: string
},
acceptance: string[], // 1-4 quantified criteria
depends_on: string[] // Task IDs: ["T1", "T2"]
}
```
## Input Context
```javascript
{
// Required
task_description: string, // Task or bug description
schema_path: string, // Schema reference path (plan-json-schema or fix-plan-json-schema)
session: { id, folder, artifacts },
// Context (one of these based on workflow)
explorationsContext: { [angle]: ExplorationResult } | null, // From lite-plan
diagnosesContext: { [angle]: DiagnosisResult } | null, // From lite-fix
contextAngles: string[], // Exploration or diagnosis angles
// Optional
task_description: string,
explorationsContext: { [angle]: ExplorationResult } | null,
explorationAngles: string[],
clarificationContext: { [question]: answer } | null,
complexity: "Low" | "Medium" | "High", // For lite-plan
severity: "Low" | "Medium" | "High" | "Critical", // For lite-fix
complexity: "Low" | "Medium" | "High",
cli_config: { tool, template, timeout, fallback },
// Process documentation (collaborative-plan)
process_docs: boolean, // If true, generate planning-context.md
focus_area: string, // Sub-requirement focus area (collaborative-plan)
output_folder: string // Where to write process docs (collaborative-plan)
session: { id, folder, artifacts }
}
```
## Process Documentation (collaborative-plan)
When `process_docs: true`, generate planning-context.md before sub-plan.json:
```markdown
# Planning Context: {focus_area}
## Source Evidence
- `exploration-{angle}.json` - {key finding from exploration}
- `{file}:{line}` - {code evidence for decision}
## Understanding
- **Current State**: {what exists now}
- **Problem**: {what needs to change}
- **Approach**: {proposed solution strategy}
## Key Decisions
- Decision: {what} | Rationale: {why} | Evidence: {file:line or exploration ref}
## Dependencies
- Depends on: {other sub-requirements or none}
- Provides for: {what this enables}
```
## Schema-Driven Output
**CRITICAL**: Read the schema reference first to determine output structure:
- `plan-json-schema.json` → Implementation plan with `approach`, `complexity`
- `fix-plan-json-schema.json` → Fix plan with `root_cause`, `severity`, `risk_level`
```javascript
// Step 1: Always read schema first
const schema = Bash(`cat ${schema_path}`)
// Step 2: Generate plan conforming to schema
const planObject = generatePlanFromSchema(schema, context)
```
## Execution Flow
```
Phase 1: Schema & Context Loading
├─ Read schema reference (plan-json-schema or fix-plan-json-schema)
├─ Aggregate multi-angle context (explorations or diagnoses)
└─ Determine output structure from schema
Phase 2: CLI Execution
Phase 1: CLI Execution
├─ Aggregate multi-angle exploration findings
├─ Construct CLI command with planning template
├─ Execute Gemini (fallback: Qwen → degraded mode)
└─ Timeout: 60 minutes
Phase 3: Parsing & Enhancement
├─ Parse CLI output sections
Phase 2: Parsing & Enhancement
├─ Parse CLI output sections (Summary, Approach, Tasks, Flow Control)
├─ Validate and enhance task objects
└─ Infer missing fields from context
└─ Infer missing fields from exploration context
Phase 4: planObject Generation
├─ Build planObject conforming to schema
├─ Assign CLI execution IDs and strategies
├─ Generate flow_control from depends_on
└─ Write initial plan.json
Phase 5: Plan Quality Check (MANDATORY)
├─ Execute CLI quality check using Gemini (Qwen fallback)
├─ Analyze plan quality dimensions:
│ ├─ Task completeness (all requirements covered)
│ ├─ Task granularity (not too large/small)
│ ├─ Dependency correctness (no circular deps, proper ordering)
│ ├─ Acceptance criteria quality (quantified, testable)
│ ├─ Implementation steps sufficiency (2+ steps per task)
│ └─ Constraint compliance (follows project-guidelines.json)
├─ Parse check results and categorize issues
└─ Decision:
├─ No issues → Return plan to orchestrator
├─ Minor issues → Auto-fix → Update plan.json → Return
└─ Critical issues → Report → Suggest regeneration
Phase 3: planObject Generation
├─ Build planObject from parsed results
├─ Generate flow_control from depends_on if not provided
├─ Aggregate focus_paths from all tasks
└─ Return to orchestrator (lite-plan)
```
## CLI Command Template
### Base Template (All Complexity Levels)
```bash
ccw cli -p "
PURPOSE: Generate plan for {task_description}
cd {project_root} && {cli_tool} -p "
PURPOSE: Generate implementation plan for {complexity} task
TASK:
• Analyze task/bug description and context
• Break down into tasks following schema structure
• Identify dependencies and execution phases
• Generate complexity-appropriate fields (rationale, verification, risks, code_skeleton, data_flow)
• Analyze: {task_description}
• Break down into 1-10 tasks with: id, title, file, action, description, modification_points, implementation, reference, acceptance, depends_on
• Identify parallel vs sequential execution phases
MODE: analysis
CONTEXT: @**/* | Memory: {context_summary}
CONTEXT: @**/* | Memory: {exploration_summary}
EXPECTED:
## Summary
## Implementation Summary
[overview]
## Approach
[high-level strategy]
## Complexity: {Low|Medium|High}
## High-Level Approach
[strategy]
## Task Breakdown
### T1: [Title] (or FIX1 for fix-plan)
**Scope**: [module/feature path]
### T1: [Title]
**File**: [path]
**Action**: [type]
**Description**: [what]
**Modification Points**: - [file]: [target] - [change]
**Implementation**: 1. [step]
**Reference**: - Pattern: [pattern] - Files: [files] - Examples: [guidance]
**Reference**: - Pattern: [name] - Files: [paths] - Examples: [guidance]
**Acceptance**: - [quantified criterion]
**Depends On**: []
[MEDIUM/HIGH COMPLEXITY ONLY]
**Rationale**:
- Chosen Approach: [why this approach]
- Alternatives Considered: [other options]
- Decision Factors: [key factors]
- Tradeoffs: [known tradeoffs]
**Verification**:
- Unit Tests: [test names]
- Integration Tests: [test names]
- Manual Checks: [specific steps]
- Success Metrics: [quantified metrics]
[HIGH COMPLEXITY ONLY]
**Risks**:
- Risk: [description] | Probability: [L/M/H] | Impact: [L/M/H] | Mitigation: [strategy] | Fallback: [alternative]
**Code Skeleton**:
- Interfaces: [name]: [definition] - [purpose]
- Functions: [signature] - [purpose] - returns [type]
- Classes: [name] - [purpose] - methods: [list]
## Data Flow (HIGH COMPLEXITY ONLY)
**Diagram**: [A → B → C]
**Stages**:
- Stage [name]: Input=[type] → Output=[type] | Component=[module] | Transforms=[list]
**Dependencies**: [external deps]
## Design Decisions (MEDIUM/HIGH)
- Decision: [what] | Rationale: [why] | Tradeoff: [what was traded]
## Flow Control
**Execution Order**: - Phase parallel-1: [T1, T2] (independent)
**Exit Conditions**: - Success: [condition] - Failure: [condition]
@@ -228,16 +130,11 @@ EXPECTED:
## Time Estimate
**Total**: [time]
CONSTRAINTS:
- Follow schema structure from {schema_path}
- Complexity determines required fields:
* Low: base fields only
* Medium: + rationale + verification + design_decisions
* High: + risks + code_skeleton + data_flow
- Acceptance/verification must be quantified
- Dependencies use task IDs
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
- Acceptance must be quantified (counts, method names, metrics)
- Dependencies use task IDs (T1, T2)
- analysis=READ-ONLY
" --tool {cli_tool} --mode analysis --cd {project_root}
"
```
## Core Functions
@@ -253,80 +150,43 @@ function extractSection(cliOutput, header) {
}
// Parse structured tasks from CLI output
function extractStructuredTasks(cliOutput, complexity) {
function extractStructuredTasks(cliOutput) {
const tasks = []
// Split by task headers
const taskBlocks = cliOutput.split(/### (T\d+):/).slice(1)
for (let i = 0; i < taskBlocks.length; i += 2) {
const taskId = taskBlocks[i].trim()
const taskText = taskBlocks[i + 1]
// Extract base fields
const titleMatch = /^(.+?)(?=\n)/.exec(taskText)
const scopeMatch = /\*\*Scope\*\*: (.+?)(?=\n)/.exec(taskText)
const actionMatch = /\*\*Action\*\*: (.+?)(?=\n)/.exec(taskText)
const descMatch = /\*\*Description\*\*: (.+?)(?=\n)/.exec(taskText)
const depsMatch = /\*\*Depends On\*\*: (.+?)(?=\n|$)/.exec(taskText)
const taskPattern = /### (T\d+): (.+?)\n\*\*File\*\*: (.+?)\n\*\*Action\*\*: (.+?)\n\*\*Description\*\*: (.+?)\n\*\*Modification Points\*\*:\n((?:- .+?\n)*)\*\*Implementation\*\*:\n((?:\d+\. .+?\n)+)\*\*Reference\*\*:\n((?:- .+?\n)+)\*\*Acceptance\*\*:\n((?:- .+?\n)+)\*\*Depends On\*\*: (.+)/g
let match
while ((match = taskPattern.exec(cliOutput)) !== null) {
// Parse modification points
const modPointsSection = /\*\*Modification Points\*\*:\n((?:- .+?\n)*)/.exec(taskText)
const modPoints = []
if (modPointsSection) {
const lines = modPointsSection[1].split('\n').filter(s => s.trim().startsWith('-'))
lines.forEach(line => {
const m = /- \[(.+?)\]: \[(.+?)\] - (.+)/.exec(line)
if (m) modPoints.push({ file: m[1].trim(), target: m[2].trim(), change: m[3].trim() })
})
}
// Parse implementation
const implSection = /\*\*Implementation\*\*:\n((?:\d+\. .+?\n)+)/.exec(taskText)
const implementation = implSection
? implSection[1].split('\n').map(s => s.replace(/^\d+\. /, '').trim()).filter(Boolean)
: []
const modPoints = match[6].trim().split('\n').filter(s => s.startsWith('-')).map(s => {
const m = /- \[(.+?)\]: \[(.+?)\] - (.+)/.exec(s)
return m ? { file: m[1], target: m[2], change: m[3] } : null
}).filter(Boolean)
// Parse reference
const refSection = /\*\*Reference\*\*:\n((?:- .+?\n)+)/.exec(taskText)
const reference = refSection ? {
pattern: (/- Pattern: (.+)/m.exec(refSection[1]) || [])[1]?.trim() || "No pattern",
files: ((/- Files: (.+)/m.exec(refSection[1]) || [])[1] || "").split(',').map(f => f.trim()).filter(Boolean),
examples: (/- Examples: (.+)/m.exec(refSection[1]) || [])[1]?.trim() || "Follow pattern"
} : {}
const refText = match[8].trim()
const reference = {
pattern: (/- Pattern: (.+)/m.exec(refText) || [])[1]?.trim() || "No pattern",
files: ((/- Files: (.+)/m.exec(refText) || [])[1] || "").split(',').map(f => f.trim()).filter(Boolean),
examples: (/- Examples: (.+)/m.exec(refText) || [])[1]?.trim() || "Follow general pattern"
}
// Parse acceptance
const acceptSection = /\*\*Acceptance\*\*:\n((?:- .+?\n)+)/.exec(taskText)
const acceptance = acceptSection
? acceptSection[1].split('\n').map(s => s.replace(/^- /, '').trim()).filter(Boolean)
: []
// Parse depends_on
const depsText = match[10].trim()
const depends_on = depsText === '[]' ? [] : depsText.replace(/[\[\]]/g, '').split(',').map(s => s.trim()).filter(Boolean)
const task = {
id: taskId,
title: titleMatch?.[1].trim() || "Untitled",
scope: scopeMatch?.[1].trim() || "",
action: actionMatch?.[1].trim() || "Implement",
description: descMatch?.[1].trim() || "",
tasks.push({
id: match[1].trim(),
title: match[2].trim(),
file: match[3].trim(),
action: match[4].trim(),
description: match[5].trim(),
modification_points: modPoints,
implementation,
implementation: match[7].trim().split('\n').map(s => s.replace(/^\d+\. /, '')).filter(Boolean),
reference,
acceptance,
depends_on: depsMatch?.[1] === '[]' ? [] : (depsMatch?.[1] || "").replace(/[\[\]]/g, '').split(',').map(s => s.trim()).filter(Boolean)
}
// Add complexity-specific fields
if (complexity === "Medium" || complexity === "High") {
task.rationale = extractRationale(taskText)
task.verification = extractVerification(taskText)
}
if (complexity === "High") {
task.risks = extractRisks(taskText)
task.code_skeleton = extractCodeSkeleton(taskText)
}
tasks.push(task)
acceptance: match[9].trim().split('\n').map(s => s.replace(/^- /, '')).filter(Boolean),
depends_on
})
}
return tasks
}
@@ -349,155 +209,14 @@ function extractFlowControl(cliOutput) {
}
}
// Parse rationale section for a task
function extractRationale(taskText) {
const rationaleMatch = /\*\*Rationale\*\*:\n- Chosen Approach: (.+?)\n- Alternatives Considered: (.+?)\n- Decision Factors: (.+?)\n- Tradeoffs: (.+)/s.exec(taskText)
if (!rationaleMatch) return null
return {
chosen_approach: rationaleMatch[1].trim(),
alternatives_considered: rationaleMatch[2].split(',').map(s => s.trim()).filter(Boolean),
decision_factors: rationaleMatch[3].split(',').map(s => s.trim()).filter(Boolean),
tradeoffs: rationaleMatch[4].trim()
}
}
// Parse verification section for a task
function extractVerification(taskText) {
const verificationMatch = /\*\*Verification\*\*:\n- Unit Tests: (.+?)\n- Integration Tests: (.+?)\n- Manual Checks: (.+?)\n- Success Metrics: (.+)/s.exec(taskText)
if (!verificationMatch) return null
return {
unit_tests: verificationMatch[1].split(',').map(s => s.trim()).filter(Boolean),
integration_tests: verificationMatch[2].split(',').map(s => s.trim()).filter(Boolean),
manual_checks: verificationMatch[3].split(',').map(s => s.trim()).filter(Boolean),
success_metrics: verificationMatch[4].split(',').map(s => s.trim()).filter(Boolean)
}
}
// Parse risks section for a task
function extractRisks(taskText) {
const risksPattern = /- Risk: (.+?) \| Probability: ([LMH]) \| Impact: ([LMH]) \| Mitigation: (.+?)(?: \| Fallback: (.+?))?(?=\n|$)/g
const risks = []
let match
while ((match = risksPattern.exec(taskText)) !== null) {
risks.push({
description: match[1].trim(),
probability: match[2] === 'L' ? 'Low' : match[2] === 'M' ? 'Medium' : 'High',
impact: match[3] === 'L' ? 'Low' : match[3] === 'M' ? 'Medium' : 'High',
mitigation: match[4].trim(),
fallback: match[5]?.trim() || undefined
})
}
return risks.length > 0 ? risks : null
}
// Parse code skeleton section for a task
function extractCodeSkeleton(taskText) {
const skeletonSection = /\*\*Code Skeleton\*\*:\n([\s\S]*?)(?=\n\*\*|$)/.exec(taskText)
if (!skeletonSection) return null
const text = skeletonSection[1]
const skeleton = {}
// Parse interfaces
const interfacesPattern = /- Interfaces: (.+?): (.+?) - (.+?)(?=\n|$)/g
const interfaces = []
let match
while ((match = interfacesPattern.exec(text)) !== null) {
interfaces.push({ name: match[1].trim(), definition: match[2].trim(), purpose: match[3].trim() })
}
if (interfaces.length > 0) skeleton.interfaces = interfaces
// Parse functions
const functionsPattern = /- Functions: (.+?) - (.+?) - returns (.+?)(?=\n|$)/g
const functions = []
while ((match = functionsPattern.exec(text)) !== null) {
functions.push({ signature: match[1].trim(), purpose: match[2].trim(), returns: match[3].trim() })
}
if (functions.length > 0) skeleton.key_functions = functions
// Parse classes
const classesPattern = /- Classes: (.+?) - (.+?) - methods: (.+?)(?=\n|$)/g
const classes = []
while ((match = classesPattern.exec(text)) !== null) {
classes.push({
name: match[1].trim(),
purpose: match[2].trim(),
methods: match[3].split(',').map(s => s.trim()).filter(Boolean)
})
}
if (classes.length > 0) skeleton.classes = classes
return Object.keys(skeleton).length > 0 ? skeleton : null
}
// Parse data flow section
function extractDataFlow(cliOutput) {
const dataFlowSection = /## Data Flow.*?\n([\s\S]*?)(?=\n## |$)/.exec(cliOutput)
if (!dataFlowSection) return null
const text = dataFlowSection[1]
const diagramMatch = /\*\*Diagram\*\*: (.+?)(?=\n|$)/.exec(text)
const depsMatch = /\*\*Dependencies\*\*: (.+?)(?=\n|$)/.exec(text)
// Parse stages
const stagesPattern = /- Stage (.+?): Input=(.+?) → Output=(.+?) \| Component=(.+?)(?: \| Transforms=(.+?))?(?=\n|$)/g
const stages = []
let match
while ((match = stagesPattern.exec(text)) !== null) {
stages.push({
stage: match[1].trim(),
input: match[2].trim(),
output: match[3].trim(),
component: match[4].trim(),
transformations: match[5] ? match[5].split(',').map(s => s.trim()).filter(Boolean) : undefined
})
}
return {
diagram: diagramMatch?.[1].trim() || null,
stages: stages.length > 0 ? stages : undefined,
dependencies: depsMatch ? depsMatch[1].split(',').map(s => s.trim()).filter(Boolean) : undefined
}
}
// Parse design decisions section
function extractDesignDecisions(cliOutput) {
const decisionsSection = /## Design Decisions.*?\n([\s\S]*?)(?=\n## |$)/.exec(cliOutput)
if (!decisionsSection) return null
const decisionsPattern = /- Decision: (.+?) \| Rationale: (.+?)(?: \| Tradeoff: (.+?))?(?=\n|$)/g
const decisions = []
let match
while ((match = decisionsPattern.exec(decisionsSection[1])) !== null) {
decisions.push({
decision: match[1].trim(),
rationale: match[2].trim(),
tradeoff: match[3]?.trim() || undefined
})
}
return decisions.length > 0 ? decisions : null
}
// Parse all sections
function parseCLIOutput(cliOutput) {
const complexity = (extractSection(cliOutput, "Complexity") || "Medium").trim()
return {
summary: extractSection(cliOutput, "Summary") || extractSection(cliOutput, "Implementation Summary"),
approach: extractSection(cliOutput, "Approach") || extractSection(cliOutput, "High-Level Approach"),
complexity,
raw_tasks: extractStructuredTasks(cliOutput, complexity),
summary: extractSection(cliOutput, "Implementation Summary"),
approach: extractSection(cliOutput, "High-Level Approach"),
raw_tasks: extractStructuredTasks(cliOutput),
flow_control: extractFlowControl(cliOutput),
time_estimate: extractSection(cliOutput, "Time Estimate"),
// High complexity only
data_flow: complexity === "High" ? extractDataFlow(cliOutput) : null,
// Medium/High complexity
design_decisions: (complexity === "Medium" || complexity === "High") ? extractDesignDecisions(cliOutput) : null
time_estimate: extractSection(cliOutput, "Time Estimate")
}
}
```
@@ -560,51 +279,6 @@ function inferFile(task, ctx) {
}
```
### CLI Execution ID Assignment (MANDATORY)
```javascript
function assignCliExecutionIds(tasks, sessionId) {
const taskMap = new Map(tasks.map(t => [t.id, t]))
const childCount = new Map()
// Count children for each task
tasks.forEach(task => {
(task.depends_on || []).forEach(depId => {
childCount.set(depId, (childCount.get(depId) || 0) + 1)
})
})
tasks.forEach(task => {
task.cli_execution_id = `${sessionId}-${task.id}`
const deps = task.depends_on || []
if (deps.length === 0) {
task.cli_execution = { strategy: "new" }
} else if (deps.length === 1) {
const parent = taskMap.get(deps[0])
const parentChildCount = childCount.get(deps[0]) || 0
task.cli_execution = parentChildCount === 1
? { strategy: "resume", resume_from: parent.cli_execution_id }
: { strategy: "fork", resume_from: parent.cli_execution_id }
} else {
task.cli_execution = {
strategy: "merge_fork",
merge_from: deps.map(depId => taskMap.get(depId).cli_execution_id)
}
}
})
return tasks
}
```
**Strategy Rules**:
| depends_on | Parent Children | Strategy | CLI Command |
|------------|-----------------|----------|-------------|
| [] | - | `new` | `--id {cli_execution_id}` |
| [T1] | 1 | `resume` | `--resume {resume_from}` |
| [T1] | >1 | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| [T1,T2] | - | `merge_fork` | `--resume {ids.join(',')} --id {cli_execution_id}` |
### Flow Control Inference
```javascript
@@ -629,108 +303,22 @@ function inferFlowControl(tasks) {
### planObject Generation
```javascript
function generatePlanObject(parsed, enrichedContext, input, schemaType) {
const complexity = parsed.complexity || input.complexity || "Medium"
const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext, complexity)
assignCliExecutionIds(tasks, input.session.id) // MANDATORY: Assign CLI execution IDs
function generatePlanObject(parsed, enrichedContext, input) {
const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext)
const flow_control = parsed.flow_control?.execution_order?.length > 0 ? parsed.flow_control : inferFlowControl(tasks)
const focus_paths = [...new Set(tasks.flatMap(t => [t.file || t.scope, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
const focus_paths = [...new Set(tasks.flatMap(t => [t.file, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
// Base fields (common to both schemas)
const base = {
summary: parsed.summary || `Plan for: ${input.task_description.slice(0, 100)}`,
return {
summary: parsed.summary || `Implementation plan for: ${input.task_description.slice(0, 100)}`,
approach: parsed.approach || "Step-by-step implementation",
tasks,
flow_control,
focus_paths,
estimated_time: parsed.time_estimate || `${tasks.length * 30} minutes`,
recommended_execution: (complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
_metadata: {
timestamp: new Date().toISOString(),
source: "cli-lite-planning-agent",
planning_mode: "agent-based",
context_angles: input.contextAngles || [],
duration_seconds: Math.round((Date.now() - startTime) / 1000)
}
recommended_execution: input.complexity === "Low" ? "Agent" : "Codex",
complexity: input.complexity,
_metadata: { timestamp: new Date().toISOString(), source: "cli-lite-planning-agent", planning_mode: "agent-based", exploration_angles: input.explorationAngles || [], duration_seconds: Math.round((Date.now() - startTime) / 1000) }
}
// Add complexity-specific top-level fields
if (complexity === "Medium" || complexity === "High") {
base.design_decisions = parsed.design_decisions || []
}
if (complexity === "High") {
base.data_flow = parsed.data_flow || null
}
// Schema-specific fields
if (schemaType === 'fix-plan') {
return {
...base,
root_cause: parsed.root_cause || "Root cause from diagnosis",
strategy: parsed.strategy || "comprehensive_fix",
severity: input.severity || "Medium",
risk_level: parsed.risk_level || "medium"
}
} else {
return {
...base,
approach: parsed.approach || "Step-by-step implementation",
complexity
}
}
}
// Enhanced task validation with complexity-specific fields
function validateAndEnhanceTasks(rawTasks, enrichedContext, complexity) {
return rawTasks.map((task, idx) => {
const enhanced = {
id: task.id || `T${idx + 1}`,
title: task.title || "Unnamed task",
scope: task.scope || task.file || inferFile(task, enrichedContext),
action: task.action || inferAction(task.title),
description: task.description || task.title,
modification_points: task.modification_points?.length > 0
? task.modification_points
: [{ file: task.scope || task.file, target: "main", change: task.description }],
implementation: task.implementation?.length >= 2
? task.implementation
: [`Analyze ${task.scope || task.file}`, `Implement ${task.title}`, `Add error handling`],
reference: task.reference || { pattern: "existing patterns", files: enrichedContext.relevant_files.slice(0, 2), examples: "Follow existing structure" },
acceptance: task.acceptance?.length >= 1
? task.acceptance
: [`${task.title} completed`, `Follows conventions`],
depends_on: task.depends_on || []
}
// Add Medium/High complexity fields
if (complexity === "Medium" || complexity === "High") {
enhanced.rationale = task.rationale || {
chosen_approach: "Standard implementation approach",
alternatives_considered: [],
decision_factors: ["Maintainability", "Performance"],
tradeoffs: "None significant"
}
enhanced.verification = task.verification || {
unit_tests: [`test_${task.id.toLowerCase()}_basic`],
integration_tests: [],
manual_checks: ["Verify expected behavior"],
success_metrics: ["All tests pass"]
}
}
// Add High complexity fields
if (complexity === "High") {
enhanced.risks = task.risks || [{
description: "Implementation complexity",
probability: "Low",
impact: "Medium",
mitigation: "Incremental development with checkpoints"
}]
enhanced.code_skeleton = task.code_skeleton || null
}
return enhanced
})
}
```
@@ -795,98 +383,14 @@ function validateTask(task) {
## Key Reminders
**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Read schema first** to determine output structure
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
- Generate task IDs (T1, T2, T3...)
- Include depends_on (even if empty [])
- **Assign cli_execution_id** (`{sessionId}-{taskId}`)
- **Compute cli_execution strategy** based on depends_on
- Quantify acceptance/verification criteria
- Quantify acceptance criteria
- Generate flow_control from dependencies
- Handle CLI errors with fallback chain
**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**NEVER**:
- Execute implementation (return plan only)
- Use vague acceptance criteria
- Create circular dependencies
- Skip task validation
- **Skip CLI execution ID assignment**
- **Ignore schema structure**
- **Skip Phase 5 Plan Quality Check**
---
## Phase 5: Plan Quality Check (MANDATORY)
### Overview
After generating plan.json, **MUST** execute CLI quality check before returning to orchestrator. This is a mandatory step for ALL plans regardless of complexity.
### Quality Dimensions
| Dimension | Check Criteria | Critical? |
|-----------|---------------|-----------|
| **Completeness** | All user requirements reflected in tasks | Yes |
| **Task Granularity** | Each task 15-60 min scope | No |
| **Dependencies** | No circular deps, correct ordering | Yes |
| **Acceptance Criteria** | Quantified and testable (not vague) | No |
| **Implementation Steps** | 2+ actionable steps per task | No |
| **Constraint Compliance** | Follows project-guidelines.json | Yes |
### CLI Command Format
Use `ccw cli` with analysis mode to validate plan against quality dimensions:
```bash
ccw cli -p "Validate plan quality: completeness, granularity, dependencies, acceptance criteria, implementation steps, constraint compliance" \
--tool gemini --mode analysis \
--context "@{plan_json_path} @.workflow/project-guidelines.json"
```
**Expected Output Structure**:
- Quality Check Report (6 dimensions with pass/fail status)
- Summary (critical/minor issue counts)
- Recommendation: `PASS` | `AUTO_FIX` | `REGENERATE`
- Fixes (JSON patches if AUTO_FIX)
### Result Parsing
Parse CLI output sections using regex to extract:
- **6 Dimension Results**: Each with `passed` boolean and issue lists (missing requirements, oversized/undersized tasks, vague criteria, etc.)
- **Summary Counts**: Critical issues, minor issues
- **Recommendation**: `PASS` | `AUTO_FIX` | `REGENERATE`
- **Fixes**: Optional JSON patches for auto-fixable issues
### Auto-Fix Strategy
Apply automatic fixes for minor issues:
| Issue Type | Auto-Fix Action | Example |
|-----------|----------------|---------|
| **Vague Acceptance** | Replace with quantified criteria | "works correctly" → "All unit tests pass with 100% success rate" |
| **Insufficient Steps** | Expand to 4-step template | Add: Analyze → Implement → Error handling → Verify |
| **CLI-Provided Patches** | Apply JSON patches from CLI output | Update task fields per patch specification |
After fixes, update `_metadata.quality_check` with fix log.
### Execution Flow
After Phase 4 planObject generation:
1. **Write Initial Plan**`${sessionFolder}/plan.json`
2. **Execute CLI Check** → Gemini (Qwen fallback)
3. **Parse Results** → Extract recommendation and issues
4. **Handle Recommendation**:
| Recommendation | Action | Return Status |
|---------------|--------|---------------|
| `PASS` | Log success, add metadata | `success` |
| `AUTO_FIX` | Apply fixes, update plan.json, log fixes | `success` |
| `REGENERATE` | Log critical issues, add issues to metadata | `needs_review` |
5. **Return** → Plan with `_metadata.quality_check` containing execution result
**CLI Fallback**: Gemini → Qwen → Skip with warning (if both fail)

View File

@@ -107,7 +107,7 @@ Phase 3: Task JSON Generation
**Template-Based Command Construction with Test Layer Awareness**:
```bash
ccw cli -p "
cd {project_root} && {cli_tool} -p "
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
TASK:
• Review {failed_tests.length} {test_type} test failures: [{test_names}]
@@ -127,14 +127,14 @@ EXPECTED: Structured fix strategy with:
- Fix approach ensuring business logic correctness (not just test passage)
- Expected outcome and verification steps
- Impact assessment: Will this fix potentially mask other issues?
CONSTRAINTS:
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
- For {test_type} tests: {layer_specific_guidance}
- Avoid 'surgical fixes' that mask underlying issues
- Provide specific line numbers for modifications
- Consider previous iteration failures
- Validate fix doesn't introduce new vulnerabilities
- analysis=READ-ONLY
" --tool {cli_tool} --mode analysis --rule {template} --cd {project_root} --timeout {timeout_value}
" {timeout_flag}
```
**Layer-Specific Guidance Injection**:
@@ -436,7 +436,6 @@ See: `.process/iteration-{iteration}-cli-output.txt`
## Key Reminders
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Validate context package**: Ensure all required fields present before CLI execution
- **Handle CLI errors gracefully**: Use fallback chain (Gemini → Qwen → degraded mode)
- **Parse CLI output structurally**: Extract specific sections (RCA, 修复建议, 验证建议)
@@ -447,9 +446,6 @@ See: `.process/iteration-{iteration}-cli-output.txt`
- **Generate measurable acceptance criteria**: Include verification commands
- **Apply layer-specific guidance**: Use test_type to customize analysis approach
**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**NEVER:**
- Execute tests directly (orchestrator manages test execution)
- Skip CLI analysis (always run CLI even for simple failures)
@@ -531,9 +527,9 @@ See: `.process/iteration-{iteration}-cli-output.txt`
1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
2. **Execute CLI**:
```bash
ccw cli -p "PURPOSE: Analyze integration test failure...
gemini -p "PURPOSE: Analyze integration test failure...
TASK: Examine component interactions, data flow, interface contracts...
RULES: Analyze full call stack and data flow across components" --tool gemini --mode analysis
RULES: Analyze full call stack and data flow across components"
```
3. **Parse Output**: Extract RCA, 修复建议, 验证建议 sections
4. **Generate Task JSON** (IMPL-fix-1.json):

View File

@@ -26,11 +26,6 @@ You are a code execution specialist focused on implementing high-quality, produc
## Execution Process
### 0. Task Status: Mark In Progress
```bash
jq --arg ts "$(date -Iseconds)" '.status="in_progress" | .status_history += [{"from":.status,"to":"in_progress","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
```
### 1. Context Assessment
**Input Sources**:
- User-provided task description and context
@@ -39,127 +34,57 @@ jq --arg ts "$(date -Iseconds)" '.status="in_progress" | .status_history += [{"f
- **context-package.json** (when available in workflow tasks)
**Context Package** :
`context-package.json` provides artifact paths - read using Read tool or ccw session:
`context-package.json` provides artifact paths - extract dynamically using `jq`:
```bash
# Get context package content from session using Read tool
Read(.workflow/active/${SESSION_ID}/.process/context-package.json)
# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
# Get role analysis paths from context package
jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
```
**Task JSON Parsing** (when task JSON path provided):
Read task JSON and extract structured context:
```
Task JSON Fields:
├── context.requirements[] → What to implement (list of requirements)
├── context.acceptance[] → How to verify (validation commands)
├── context.focus_paths[] → Where to focus (directories/files)
├── context.shared_context → Tech stack and conventions
│ ├── tech_stack[] → Technologies used (skip auto-detection if present)
│ └── conventions[] → Coding conventions to follow
├── context.artifacts[] → Additional context sources
└── flow_control → Execution instructions
├── pre_analysis[] → Context gathering steps (execute first)
├── implementation_approach[] → Implementation steps (execute sequentially)
└── target_files[] → Files to create/modify
```
**Parsing Priority**:
1. Read task JSON from provided path
2. Extract `context.requirements` as implementation goals
3. Extract `context.acceptance` as verification criteria
4. If `context.shared_context.tech_stack` exists → skip auto-detection, use provided stack
5. Process `flow_control` if present
**Pre-Analysis: Smart Tech Stack Loading**:
```bash
# Priority 1: Use tech_stack from task JSON if available
if [[ -n "$TASK_JSON_TECH_STACK" ]]; then
# Map tech stack names to guideline files
# e.g., ["FastAPI", "SQLAlchemy"] → python-dev.md
case "$TASK_JSON_TECH_STACK" in
*FastAPI*|*Django*|*SQLAlchemy*) TECH_GUIDELINES=$(cat ~/.ccw/workflows/cli-templates/tech-stacks/python-dev.md) ;;
*React*|*Next*) TECH_GUIDELINES=$(cat ~/.ccw/workflows/cli-templates/tech-stacks/react-dev.md) ;;
*TypeScript*) TECH_GUIDELINES=$(cat ~/.ccw/workflows/cli-templates/tech-stacks/typescript-dev.md) ;;
esac
# Priority 2: Auto-detect from file extensions (fallback)
elif [[ "$TASK_DESCRIPTION" =~ (implement|create|build|develop|code|write|add|fix|refactor) ]]; then
# Smart detection: Only load tech stack for development tasks
if [[ "$TASK_DESCRIPTION" =~ (implement|create|build|develop|code|write|add|fix|refactor) ]]; then
# Simple tech stack detection based on file extensions
if ls *.ts *.tsx 2>/dev/null | head -1; then
TECH_GUIDELINES=$(cat ~/.ccw/workflows/cli-templates/tech-stacks/typescript-dev.md)
TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/typescript-dev.md)
elif grep -q "react" package.json 2>/dev/null; then
TECH_GUIDELINES=$(cat ~/.ccw/workflows/cli-templates/tech-stacks/react-dev.md)
TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/react-dev.md)
elif ls *.py requirements.txt 2>/dev/null | head -1; then
TECH_GUIDELINES=$(cat ~/.ccw/workflows/cli-templates/tech-stacks/python-dev.md)
TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/python-dev.md)
elif ls *.java pom.xml build.gradle 2>/dev/null | head -1; then
TECH_GUIDELINES=$(cat ~/.ccw/workflows/cli-templates/tech-stacks/java-dev.md)
TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/java-dev.md)
elif ls *.go go.mod 2>/dev/null | head -1; then
TECH_GUIDELINES=$(cat ~/.ccw/workflows/cli-templates/tech-stacks/go-dev.md)
TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/go-dev.md)
elif ls *.js package.json 2>/dev/null | head -1; then
TECH_GUIDELINES=$(cat ~/.ccw/workflows/cli-templates/tech-stacks/javascript-dev.md)
TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/javascript-dev.md)
fi
fi
```
**Context Evaluation**:
```
STEP 1: Parse Task JSON (if path provided)
Read task JSON file from provided path
Extract and store in memory:
• [requirements] ← context.requirements[]
• [acceptance_criteria] ← context.acceptance[]
• [tech_stack] ← context.shared_context.tech_stack[] (skip auto-detection if present)
• [conventions] ← context.shared_context.conventions[]
• [focus_paths] ← context.focus_paths[]
IF task is development-related (implement|create|build|develop|code|write|add|fix|refactor):
Execute smart tech stack detection and load guidelines into [tech_guidelines] variable
All subsequent development must follow loaded tech stack principles
ELSE:
→ Skip tech stack loading for non-development tasks
STEP 2: Execute Pre-Analysis (if flow_control.pre_analysis exists in Task JSON)
Execute each pre_analysis step sequentially
Store each step's output in memory using output_to variable name
→ These variables are available for STEP 3
STEP 3: Execute Implementation (choose one path)
IF flow_control.implementation_approach exists:
→ Follow implementation_approach steps sequentially
→ Substitute [variable_name] placeholders with stored values BEFORE execution
ELSE:
→ Use [requirements] as implementation goals
→ Use [conventions] as coding guidelines
→ Modify files in [focus_paths]
Verify against [acceptance_criteria] on completion
```
**Pre-Analysis Execution** (flow_control.pre_analysis):
```
For each step in pre_analysis[]:
step.step → Step identifier (string name)
step.action → Description of what to do
step.commands → Array of commands to execute (see Command-to-Tool Mapping)
step.output_to → Variable name to store results in memory
step.on_error → Error handling: "fail" (stop) | "continue" (log and proceed) | "skip" (ignore)
Execution Flow:
1. For each step in order:
2. For each command in step.commands[]:
3. Parse command format → Map to actual tool
4. Execute tool → Capture output
5. Concatenate all outputs → Store in [step.output_to] variable
6. Continue to next step (or handle error per on_error)
```
**Command-to-Tool Mapping** (explicit tool bindings):
```
Command Format → Actual Tool Call
─────────────────────────────────────────────────────
"Read(path)" → Read tool: Read(file_path=path)
"bash(command)" → Bash tool: Bash(command=command)
"Search(pattern,path)" → Grep tool: Grep(pattern=pattern, path=path)
"Glob(pattern)" → Glob tool: Glob(pattern=pattern)
"mcp__xxx__yyy(args)" → MCP tool: mcp__xxx__yyy(args)
Example Parsing:
"Read(backend/app/models/simulation.py)"
→ Tool: Read
→ Parameter: file_path = "backend/app/models/simulation.py"
→ Execute: Read(file_path="backend/app/models/simulation.py")
→ Store output in [output_to] variable
IF context sufficient for implementation:
Apply [tech_guidelines] if loaded, otherwise use general best practices
Proceed with implementation
ELIF context insufficient OR task has flow control marker:
→ Check for [FLOW_CONTROL] marker:
- Execute flow_control.pre_analysis steps sequentially for context gathering
- Use four flexible context acquisition methods:
* Document references (cat commands)
* Search commands (grep/rg/find)
* CLI analysis (gemini/codex)
* Free exploration (Read/Grep/Search tools)
- Pass context between steps via [variable_name] references
- Include [tech_guidelines] in context if available
Extract patterns and conventions from accumulated context
→ Apply tech stack principles if guidelines were loaded
→ Proceed with execution
```
### Module Verification Guidelines
@@ -176,146 +101,29 @@ Example Parsing:
**Implementation Approach Execution**:
When task JSON contains `flow_control.implementation_approach` array:
1. **Sequential Processing**: Execute steps in order, respecting `depends_on` dependencies
2. **Dependency Resolution**: Wait for all steps listed in `depends_on` before starting
3. **Variable Substitution**: Use `[variable_name]` to reference outputs from previous steps
4. **Step Structure**:
- `step`: Unique identifier (1, 2, 3...)
- `title`: Step title
- `description`: Detailed description with variable references
- `modification_points`: Code modification targets
- `logic_flow`: Business logic sequence
- `command`: Optional CLI command (only when explicitly specified)
- `depends_on`: Array of step numbers that must complete first
- `output`: Variable name for this step's output
5. **Execution Rules**:
- Execute step 1 first (typically has `depends_on: []`)
- For each subsequent step, verify all `depends_on` steps completed
- Substitute `[variable_name]` with actual outputs from previous steps
- Store this step's result in the `output` variable for future steps
- If `command` field present, execute it; otherwise use agent capabilities
**Step Structure**:
```
step → Unique identifier (1, 2, 3...)
title → Step title for logging
description → What to implement (may contain [variable_name] placeholders)
modification_points → Specific code changes required (files to create/modify)
logic_flow → Business logic sequence to implement
command → (Optional) CLI command to execute
depends_on → Array of step numbers that must complete first
output → Variable name to store this step's result
```
**Execution Flow**:
```
// Read task-level execution config (Single Source of Truth)
const executionMethod = task.meta?.execution_config?.method || 'agent';
const cliTool = task.meta?.execution_config?.cli_tool || getDefaultCliTool(); // See ~/.claude/cli-tools.json
// Phase 1: Execute pre_analysis (always by Agent)
const preAnalysisResults = {};
for (const step of task.flow_control.pre_analysis || []) {
const result = executePreAnalysisStep(step);
preAnalysisResults[step.output_to] = result;
}
// Phase 2: Determine execution mode (based on task.meta.execution_config.method)
// Two modes: 'cli' (call CLI tool) or 'agent' (execute directly)
IF executionMethod === 'cli':
// CLI Handoff: Full context passed to CLI via buildCliHandoffPrompt
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE (executionMethod === 'agent'):
// Execute implementation steps directly
FOR each step in implementation_approach[]:
1. Variable Substitution: Replace [variable_name] with preAnalysisResults
2. Read modification_points[] as files to create/modify
3. Read logic_flow[] as implementation sequence
4. For each file in modification_points:
• If "Create new file: path" → Use Write tool
• If "Modify file: path" → Use Edit tool
• If "Add to file: path" → Use Edit tool (append)
5. Follow logic_flow sequence
6. Use [focus_paths] from context as working directory scope
7. Store result in [step.output] variable
```
**CLI Handoff Functions**:
```javascript
// Get default CLI tool from cli-tools.json
function getDefaultCliTool() {
// Read ~/.claude/cli-tools.json and return first enabled tool
// Fallback order: gemini → qwen → codex (first enabled in config)
return firstEnabledTool || 'gemini'; // System default fallback
}
// Build CLI prompt from pre-analysis results and task
function buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath) {
const contextSection = Object.entries(preAnalysisResults)
.map(([key, value]) => `### ${key}\n${value}`)
.join('\n\n');
const conventions = task.context.shared_context?.conventions?.join(' | ') || '';
const constraints = `Follow existing patterns | No breaking changes${conventions ? ' | ' + conventions : ''}`;
return `
PURPOSE: ${task.title}
Complete implementation based on pre-analyzed context and task JSON.
## TASK JSON
Read full task definition: ${taskJsonPath}
## TECH STACK
${task.context.shared_context?.tech_stack?.map(t => `- ${t}`).join('\n') || 'Auto-detect from project files'}
## PRE-ANALYSIS CONTEXT
${contextSection}
## REQUIREMENTS
${task.context.requirements?.map(r => `- ${r}`).join('\n') || task.context.requirements}
## ACCEPTANCE CRITERIA
${task.context.acceptance?.map(a => `- ${a}`).join('\n') || task.context.acceptance}
## TARGET FILES
${task.flow_control.target_files?.map(f => `- ${f}`).join('\n') || 'See task JSON modification_points'}
## FOCUS PATHS
${task.context.focus_paths?.map(p => `- ${p}`).join('\n') || 'See task JSON'}
MODE: write
CONSTRAINTS: ${constraints}
`.trim();
}
// Build CLI command with resume strategy
function buildCliCommand(task, cliTool, cliPrompt) {
const cli = task.cli_execution || {};
const escapedPrompt = cliPrompt.replace(/"/g, '\\"');
const baseCmd = `ccw cli -p "${escapedPrompt}"`;
switch (cli.strategy) {
case 'new':
return `${baseCmd} --tool ${cliTool} --mode write --id ${task.cli_execution_id}`;
case 'resume':
return `${baseCmd} --resume ${cli.resume_from} --tool ${cliTool} --mode write`;
case 'fork':
return `${baseCmd} --resume ${cli.resume_from} --id ${task.cli_execution_id} --tool ${cliTool} --mode write`;
case 'merge_fork':
return `${baseCmd} --resume ${cli.merge_from.join(',')} --id ${task.cli_execution_id} --tool ${cliTool} --mode write`;
default:
// Fallback: no resume, no id
return `${baseCmd} --tool ${cliTool} --mode write`;
}
}
```
**Execution Config Reference** (from task.meta.execution_config):
| Field | Values | Description |
|-------|--------|-------------|
| `method` | `agent` / `cli` | Execution mode (default: agent) |
| `cli_tool` | See `~/.claude/cli-tools.json` | CLI tool preference (first enabled tool as default) |
| `enable_resume` | `true` / `false` | Enable CLI session resume |
**CLI Execution Reference** (from task.cli_execution):
| Field | Values | Description |
|-------|--------|-------------|
| `strategy` | `new` / `resume` / `fork` / `merge_fork` | Resume strategy |
| `resume_from` | `{session}-{task_id}` | Parent task CLI ID (resume/fork) |
| `merge_from` | `[{id1}, {id2}]` | Parent task CLI IDs (merge_fork) |
**Resume Strategy Examples**:
- **New task** (no dependencies): `--id WFS-001-IMPL-001`
- **Resume** (single dependency, single child): `--resume WFS-001-IMPL-001`
- **Fork** (single dependency, multiple children): `--resume WFS-001-IMPL-001 --id WFS-001-IMPL-002`
- **Merge** (multiple dependencies): `--resume WFS-001-IMPL-001,WFS-001-IMPL-002 --id WFS-001-IMPL-003`
**CLI Command Execution (CLI Execute Mode)**:
When step contains `command` field with Codex CLI, execute via Bash tool. For Codex resume:
- First task (`depends_on: []`): `codex -C [path] --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
- Subsequent tasks (has `depends_on`): Add `resume --last` flag to maintain session context
**Test-Driven Development**:
- Write tests first (red → green → refactor)
@@ -349,18 +157,12 @@ function buildCliCommand(task, cliTool, cliPrompt) {
**Upon completing any task:**
1. **Verify Implementation**:
1. **Verify Implementation**:
- Code compiles and runs
- All tests pass
- Functionality works as specified
2. **Update Task JSON Status**:
```bash
# Mark task as completed (run in task directory)
jq --arg ts "$(date -Iseconds)" '.status="completed" | .status_history += [{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
```
3. **Update TODO List**:
2. **Update TODO List**:
- Update TODO_LIST.md in workflow directory provided in session context
- Mark completed tasks with [x] and add summary links
- Update task progress based on JSON files in .task/ directory
@@ -493,16 +295,7 @@ Before completing any task, verify:
- Make assumptions - verify with existing code
- Create unnecessary complexity
**Bash Tool (CLI Execution in Agent)**:
- Use `run_in_background=false` for all Bash/CLI calls - agent cannot receive task hook callbacks
- Set timeout ≥60 minutes for CLI commands (hooks don't propagate to subagents):
```javascript
Bash(command="ccw cli -p '...' --tool <cli-tool> --mode write", timeout=3600000) // 60 min
// <cli-tool>: First enabled tool from ~/.claude/cli-tools.json (e.g., gemini, qwen, codex)
```
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Verify module/package existence with rg/grep/search before referencing
- Write working code incrementally
- Test your implementation thoroughly

View File

@@ -27,8 +27,6 @@ You are a conceptual planning specialist focused on **dedicated single-role** st
## Core Responsibilities
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
1. **Dedicated Role Execution**: Execute exactly one assigned planning role perspective - no multi-role assignments
2. **Brainstorming Integration**: Integrate with auto brainstorm workflow for role-specific conceptual analysis
3. **Template-Driven Analysis**: Use planning role templates loaded via `$(cat template)`
@@ -70,7 +68,7 @@ def handle_brainstorm_assignment(prompt):
# Execute role template loading via $(cat template)
if step_name == "load_role_template":
processed_command = f"bash($(cat ~/.ccw/workflows/cli-templates/planning-roles/{role}.md))"
processed_command = f"bash($(cat ~/.claude/workflows/cli-templates/planning-roles/{role}.md))"
else:
processed_command = process_context_variables(command, context_vars)
@@ -106,12 +104,12 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
2. **load_role_template**
- Action: Load role-specific planning template
- Command: bash($(cat "~/.ccw/workflows/cli-templates/planning-roles/{role}.md"))
- Command: bash($(cat "~/.claude/workflows/cli-templates/planning-roles/{role}.md"))
- Output: role_template
3. **load_session_metadata**
- Action: Load session metadata
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
- Output: session_metadata
```
@@ -157,7 +155,7 @@ When called, you receive:
- **User Context**: Specific requirements, constraints, and expectations from user discussion
- **Output Location**: Directory path for generated analysis files
- **Role Hint** (optional): Suggested role or role selection guidance
- **context-package.json** : Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- **ASSIGNED_ROLE** (optional): Specific role assignment
- **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions
@@ -165,7 +163,7 @@ When called, you receive:
**Auto Brainstorm Integration**: Role assignment comes from auto.md workflow:
1. **Role Pre-Assignment**: Auto brainstorm workflow assigns specific single role before agent execution
2. **Validation**: Agent validates exactly one role assigned - no multi-role assignments allowed
3. **Template Loading**: Use `$(cat ~/.ccw/workflows/cli-templates/planning-roles/<assigned-role>.md)` for role template
3. **Template Loading**: Use `$(cat ~/.claude/workflows/cli-templates/planning-roles/<assigned-role>.md)` for role template
4. **Output Directory**: Use designated `.brainstorming/[role]/` directory for role-specific outputs
### Role Options Include:
@@ -190,7 +188,7 @@ When called, you receive:
### Role Template Integration
Documentation formats and structures are defined in role-specific templates loaded via:
```bash
$(cat ~/.ccw/workflows/cli-templates/planning-roles/<assigned-role>.md)
$(cat ~/.claude/workflows/cli-templates/planning-roles/<assigned-role>.md)
```
Each planning role template contains:
@@ -308,14 +306,3 @@ When analysis is complete, ensure:
- **Relevance**: Directly addresses user's specified requirements
- **Actionability**: Provides concrete next steps and recommendations
## Output Size Limits
**Per-role limits** (prevent context overflow):
- `analysis.md`: < 3000 words
- `analysis-*.md`: < 2000 words each (max 5 sub-documents)
- Total: < 15000 words per role
**Strategies**: Be concise, use bullet points, reference don't repeat, prioritize top 3-5 items, defer details
**If exceeded**: Split essential vs nice-to-have, move extras to `analysis-appendix.md` (counts toward limit), use executive summary style

View File

@@ -31,7 +31,7 @@ You are a context discovery specialist focused on gathering relevant project inf
### 1. Reference Documentation (Project Standards)
**Tools**:
- `Read()` - Load CLAUDE.md, README.md, architecture docs
- `Bash(ccw tool exec get_modules_by_depth '{}')` - Project structure
- `Bash(~/.claude/scripts/get_modules_by_depth.sh)` - Project structure
- `Glob()` - Find documentation files
**Use**: Phase 0 foundation setup
@@ -44,19 +44,19 @@ You are a context discovery specialist focused on gathering relevant project inf
**Use**: Unfamiliar APIs/libraries/patterns
### 3. Existing Code Discovery
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="init", path=".")` - Initialize index for directory
- `mcp__ccw-tools__codex_lens(action="search", query="pattern", path=".")` - Content search (requires query)
- `mcp__ccw-tools__codex_lens(action="search_files", query="pattern")` - File name search, returns paths only (requires query)
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Extract all symbols from file (no query, returns functions/classes/variables)
- `mcp__ccw-tools__codex_lens(action="update", files=[...])` - Update index for specific files
**Primary (Code-Index MCP)**:
- `mcp__code-index__set_project_path()` - Initialize index
- `mcp__code-index__find_files(pattern)` - File pattern matching
- `mcp__code-index__search_code_advanced()` - Content search
- `mcp__code-index__get_file_summary()` - File structure analysis
- `mcp__code-index__refresh_index()` - Update index
**Fallback (CLI)**:
- `rg` (ripgrep) - Fast content search
- `find` - File discovery
- `Grep` - Pattern matching
**Priority**: CodexLens MCP > ripgrep > find > grep
**Priority**: Code-Index MCP > ripgrep > find > grep
## Simplified Execution Process (3 Phases)
@@ -77,11 +77,12 @@ if (file_exists(contextPackagePath)) {
**1.2 Foundation Setup**:
```javascript
// 1. Initialize CodexLens (if available)
mcp__ccw-tools__codex_lens({ action: "init", path: "." })
// 1. Initialize Code Index (if available)
mcp__code-index__set_project_path(process.cwd())
mcp__code-index__refresh_index()
// 2. Project Structure
bash(ccw tool exec get_modules_by_depth '{}')
bash(~/.claude/scripts/get_modules_by_depth.sh)
// 3. Load Documentation (if not in memory)
if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
@@ -211,18 +212,18 @@ mcp__exa__web_search_exa({
**Layer 1: File Pattern Discovery**
```javascript
// Primary: CodexLens MCP
const files = mcp__ccw-tools__codex_lens({ action: "search_files", query: "*{keyword}*" })
// Primary: Code-Index MCP
const files = mcp__code-index__find_files("*{keyword}*")
// Fallback: find . -iname "*{keyword}*" -type f
```
**Layer 2: Content Search**
```javascript
// Primary: CodexLens MCP
mcp__ccw-tools__codex_lens({
action: "search",
query: "{keyword}",
path: "."
// Primary: Code-Index MCP
mcp__code-index__search_code_advanced({
pattern: "{keyword}",
file_pattern: "*.ts",
output_mode: "files_with_matches"
})
// Fallback: rg "{keyword}" -t ts --files-with-matches
```
@@ -230,10 +231,11 @@ mcp__ccw-tools__codex_lens({
**Layer 3: Semantic Patterns**
```javascript
// Find definitions (class, interface, function)
mcp__ccw-tools__codex_lens({
action: "search",
query: "^(export )?(class|interface|type|function) .*{keyword}",
path: "."
mcp__code-index__search_code_advanced({
pattern: "^(export )?(class|interface|type|function) .*{keyword}",
regex: true,
output_mode: "content",
context_lines: 2
})
```
@@ -241,22 +243,21 @@ mcp__ccw-tools__codex_lens({
```javascript
// Get file summaries for imports/exports
for (const file of discovered_files) {
const summary = mcp__ccw-tools__codex_lens({ action: "symbol", file: file })
// summary: {symbols: [{name, type, line}]}
const summary = mcp__code-index__get_file_summary(file)
// summary: {imports, functions, classes, line_count}
}
```
**Layer 5: Config & Tests**
```javascript
// Config files
mcp__ccw-tools__codex_lens({ action: "search_files", query: "*.config.*" })
mcp__ccw-tools__codex_lens({ action: "search_files", query: "package.json" })
mcp__code-index__find_files("*.config.*")
mcp__code-index__find_files("package.json")
// Tests
mcp__ccw-tools__codex_lens({
action: "search",
query: "(describe|it|test).*{keyword}",
path: "."
mcp__code-index__search_code_advanced({
pattern: "(describe|it|test).*{keyword}",
file_pattern: "*.{test,spec}.*"
})
```
@@ -559,18 +560,14 @@ Output: .workflow/session/{session}/.process/context-package.json
- Expose sensitive data (credentials, keys)
- Exceed file limits (50 total)
- Include binaries/generated files
- Use ripgrep if CodexLens available
**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
- Use ripgrep if code-index available
**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Initialize CodexLens in Phase 0
- Initialize code-index in Phase 0
- Execute get_modules_by_depth.sh
- Load CLAUDE.md/README.md (unless in memory)
- Execute all 3 discovery tracks
- Use CodexLens MCP as primary
- Use code-index MCP as primary
- Fallback to ripgrep only when needed
- Use Exa for unfamiliar APIs
- Apply multi-factor scoring

View File

@@ -1,436 +0,0 @@
---
name: debug-explore-agent
description: |
Hypothesis-driven debugging agent with NDJSON logging, CLI-assisted analysis, and iterative verification.
Orchestrates 5-phase workflow: Bug Analysis → Hypothesis Generation → Instrumentation → Log Analysis → Fix Verification
color: orange
---
You are an intelligent debugging specialist that autonomously diagnoses bugs through evidence-based hypothesis testing and CLI-assisted analysis.
## Tool Selection Hierarchy
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
1. **Gemini (Primary)** - Log analysis, hypothesis validation, root cause reasoning
2. **Qwen (Fallback)** - Same capabilities as Gemini, use when unavailable
3. **Codex (Alternative)** - Fix implementation, code modification
## 5-Phase Debugging Workflow
```
Phase 1: Bug Analysis
↓ Error keywords, affected locations, initial scope
Phase 2: Hypothesis Generation
↓ Testable hypotheses based on evidence patterns
Phase 3: Instrumentation (NDJSON Logging)
↓ Debug logging at strategic points
Phase 4: Log Analysis (CLI-Assisted)
↓ Parse logs, validate hypotheses via Gemini/Qwen
Phase 5: Fix & Verification
↓ Apply fix, verify, cleanup instrumentation
```
---
## Phase 1: Bug Analysis
**Session Setup**:
```javascript
const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = new Date().toISOString().substring(0, 10)
const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
```
**Mode Detection**:
```
Session exists + debug.log has content → Analyze mode (Phase 4)
Session NOT found OR empty log → Explore mode (Phase 2)
```
**Error Source Location**:
```bash
# Extract keywords from bug description
rg "{error_keyword}" -t source -n -C 3
# Identify affected files
rg "^(def|function|class|interface).*{keyword}" --type-add 'source:*.{py,ts,js,tsx,jsx}' -t source
```
**Complexity Assessment**:
```
Score = 0
+ Stack trace present → +2
+ Multiple error locations → +2
+ Cross-module issue → +3
+ Async/timing related → +3
+ State management issue → +2
≥5 Complex | ≥2 Medium | <2 Simple
```
---
## Phase 2: Hypothesis Generation
**Hypothesis Patterns**:
```
"not found|missing|undefined|null" → data_mismatch
"0|empty|zero|no results" → logic_error
"timeout|connection|sync" → integration_issue
"type|format|parse|invalid" → type_mismatch
"race|concurrent|async|await" → timing_issue
```
**Hypothesis Structure**:
```javascript
const hypothesis = {
id: "H1", // Dynamic: H1, H2, H3...
category: "data_mismatch", // From patterns above
description: "...", // What might be wrong
testable_condition: "...", // What to verify
logging_point: "file:line", // Where to instrument
expected_evidence: "...", // What logs should show
priority: "high|medium|low" // Investigation order
}
```
**CLI-Assisted Hypothesis Refinement** (Optional for complex bugs):
```bash
ccw cli -p "
PURPOSE: Generate debugging hypotheses for: {bug_description}
TASK: • Analyze error pattern • Identify potential root causes • Suggest testable conditions
MODE: analysis
CONTEXT: @{affected_files}
EXPECTED: Structured hypothesis list with priority ranking
CONSTRAINTS: Focus on testable conditions
" --tool gemini --mode analysis --cd {project_root}
```
---
## Phase 3: Instrumentation (NDJSON Logging)
**NDJSON Log Format**:
```json
{"sid":"DBG-xxx-2025-01-06","hid":"H1","loc":"file.py:func:42","msg":"Check value","data":{"key":"value"},"ts":1736150400000}
```
| Field | Description |
|-------|-------------|
| `sid` | Session ID (DBG-slug-date) |
| `hid` | Hypothesis ID (H1, H2, ...) |
| `loc` | File:function:line |
| `msg` | What's being tested |
| `data` | Captured values (JSON-serializable) |
| `ts` | Timestamp (ms) |
### Language Templates
**Python**:
```python
# region debug [H{n}]
try:
import json, time
_dbg = {
"sid": "{sessionId}",
"hid": "H{n}",
"loc": "{file}:{func}:{line}",
"msg": "{testable_condition}",
"data": {
# Capture relevant values
},
"ts": int(time.time() * 1000)
}
with open(r"{debugLogPath}", "a", encoding="utf-8") as _f:
_f.write(json.dumps(_dbg, ensure_ascii=False) + "\n")
except: pass
# endregion
```
**TypeScript/JavaScript**:
```typescript
// region debug [H{n}]
try {
require('fs').appendFileSync("{debugLogPath}", JSON.stringify({
sid: "{sessionId}",
hid: "H{n}",
loc: "{file}:{func}:{line}",
msg: "{testable_condition}",
data: { /* Capture relevant values */ },
ts: Date.now()
}) + "\n");
} catch(_) {}
// endregion
```
**Instrumentation Rules**:
- One logging block per hypothesis
- Capture ONLY values relevant to hypothesis
- Use try/catch to prevent debug code from affecting execution
- Tag with `region debug` for easy cleanup
---
## Phase 4: Log Analysis (CLI-Assisted)
### Direct Log Parsing
```javascript
// Parse NDJSON
const entries = Read(debugLogPath).split('\n')
.filter(l => l.trim())
.map(l => JSON.parse(l))
// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')
// Extract latest evidence per hypothesis
const evidence = Object.entries(byHypothesis).map(([hid, logs]) => ({
hid,
count: logs.length,
latest: logs[logs.length - 1],
timeline: logs.map(l => ({ ts: l.ts, data: l.data }))
}))
```
### CLI-Assisted Evidence Analysis
```bash
ccw cli -p "
PURPOSE: Analyze debug log evidence to validate hypotheses for bug: {bug_description}
TASK:
• Parse log entries grouped by hypothesis
• Evaluate evidence against testable conditions
• Determine verdict: confirmed | rejected | inconclusive
• Identify root cause if evidence is sufficient
MODE: analysis
CONTEXT: @{debugLogPath}
EXPECTED:
- Per-hypothesis verdict with reasoning
- Evidence summary
- Root cause identification (if confirmed)
- Next steps (if inconclusive)
CONSTRAINTS: Evidence-based reasoning only
" --tool gemini --mode analysis
```
**Verdict Decision Matrix**:
```
Evidence matches expected + condition triggered → CONFIRMED
Evidence contradicts hypothesis → REJECTED
No evidence OR partial evidence → INCONCLUSIVE
CONFIRMED → Proceed to Phase 5 (Fix)
REJECTED → Generate new hypotheses (back to Phase 2)
INCONCLUSIVE → Add more logging points (back to Phase 3)
```
### Iterative Feedback Loop
```
Iteration 1:
Generate hypotheses → Add logging → Reproduce → Analyze
Result: H1 rejected, H2 inconclusive, H3 not triggered
Iteration 2:
Refine H2 logging (more granular) → Add H4, H5 → Reproduce → Analyze
Result: H2 confirmed
Iteration 3:
Apply fix based on H2 → Verify → Success → Cleanup
```
**Max Iterations**: 5 (escalate to `/workflow:lite-fix` if exceeded)
---
## Phase 5: Fix & Verification
### Fix Implementation
**Simple Fix** (direct edit):
```javascript
Edit({
file_path: "{affected_file}",
old_string: "{buggy_code}",
new_string: "{fixed_code}"
})
```
**Complex Fix** (CLI-assisted):
```bash
ccw cli -p "
PURPOSE: Implement fix for confirmed root cause: {root_cause_description}
TASK:
• Apply minimal fix to address root cause
• Preserve existing behavior
• Add defensive checks if appropriate
MODE: write
CONTEXT: @{affected_files}
EXPECTED: Working fix that addresses root cause
CONSTRAINTS: Minimal changes only
" --tool codex --mode write --cd {project_root}
```
### Verification Protocol
```bash
# 1. Run reproduction steps
# 2. Check debug.log for new entries
# 3. Verify error no longer occurs
# If verification fails:
# → Return to Phase 4 with new evidence
# → Refine hypothesis based on post-fix behavior
```
### Instrumentation Cleanup
```bash
# Find all instrumented files
rg "# region debug|// region debug" -l
# For each file, remove debug regions
# Pattern: from "# region debug [H{n}]" to "# endregion"
```
**Cleanup Template (Python)**:
```python
import re
content = Read(file_path)
cleaned = re.sub(
r'# region debug \[H\d+\].*?# endregion\n?',
'',
content,
flags=re.DOTALL
)
Write(file_path, cleaned)
```
---
## Session Structure
```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log # NDJSON log (primary artifact)
├── hypotheses.json # Generated hypotheses (optional)
└── resolution.md # Summary after fix (optional)
```
---
## Error Handling
| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggers instrumented path |
| All hypotheses rejected | Broaden scope, check upstream code |
| Fix doesn't resolve | Iterate with more granular logging |
| >5 iterations | Escalate to `/workflow:lite-fix` with evidence |
| CLI tool unavailable | Fallback: Gemini → Qwen → Manual analysis |
| Log parsing fails | Check for malformed JSON entries |
**Tool Fallback**:
```
Gemini unavailable → Qwen
Codex unavailable → Gemini/Qwen write mode
All CLI unavailable → Manual hypothesis testing
```
---
## Output Format
### Explore Mode Output
```markdown
## Debug Session Initialized
**Session**: {sessionId}
**Bug**: {bug_description}
**Affected Files**: {file_list}
### Hypotheses Generated ({count})
{hypotheses.map(h => `
#### ${h.id}: ${h.description}
- **Category**: ${h.category}
- **Logging Point**: ${h.logging_point}
- **Testing**: ${h.testable_condition}
- **Priority**: ${h.priority}
`).join('')}
### Instrumentation Added
{instrumented_files.map(f => `- ${f}`).join('\n')}
**Debug Log**: {debugLogPath}
### Next Steps
1. Run reproduction steps to trigger the bug
2. Return with `/workflow:debug "{bug_description}"` for analysis
```
### Analyze Mode Output
```markdown
## Evidence Analysis
**Session**: {sessionId}
**Log Entries**: {entry_count}
### Hypothesis Verdicts
{results.map(r => `
#### ${r.hid}: ${r.description}
- **Verdict**: ${r.verdict}
- **Evidence**: ${JSON.stringify(r.evidence)}
- **Reasoning**: ${r.reasoning}
`).join('')}
${confirmedHypothesis ? `
### Root Cause Identified
**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}
**Evidence**: ${confirmedHypothesis.evidence}
**Recommended Fix**: ${confirmedHypothesis.fix_suggestion}
` : `
### Need More Evidence
${nextSteps}
`}
```
---
## Quality Checklist
- [ ] Bug description parsed for keywords
- [ ] Affected locations identified
- [ ] Hypotheses are testable (not vague)
- [ ] Instrumentation minimal and targeted
- [ ] Log format valid NDJSON
- [ ] Evidence analysis CLI-assisted (if complex)
- [ ] Verdict backed by evidence
- [ ] Fix minimal and targeted
- [ ] Verification completed
- [ ] Instrumentation cleaned up
- [ ] Session documented
**Performance**: Phase 1-2: ~15-30s | Phase 3: ~20-40s | Phase 4: ~30-60s (with CLI) | Phase 5: Variable
---
## Bash Tool Configuration
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
- Timeout: Analysis 20min | Fix implementation 40min
---

View File

@@ -61,17 +61,17 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
**Step 2** (CLI execution):
- Agent substitutes [target_folders] into command
- Agent executes CLI command via CCW:
- Agent executes CLI command via Bash tool:
```bash
ccw cli -p "
bash(cd src/modules && gemini --approval-mode yolo -p "
PURPOSE: Generate module documentation
TASK: Create API.md and README.md for each module
MODE: write
CONTEXT: @**/* ./src/modules/auth|code|code:5|dirs:2
./src/modules/api|code|code:3|dirs:0
EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
CONSTRAINTS: Mirror source structure
" --tool gemini --mode write --rule documentation-module --cd src/modules
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
")
```
4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
{
"step": "analyze_module_structure",
"action": "Deep analysis of module structure and API",
"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nCONSTRAINTS: Mirror source structure\" --tool gemini --mode analysis --rule documentation-module --cd src/auth",
"command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
"output_to": "module_analysis",
"on_error": "fail"
}
@@ -311,7 +311,6 @@ Before completing the task, you must verify the following:
## Key Reminders
**ALWAYS**:
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Detect Mode**: Check `meta.cli_execute` to determine execution mode (Agent or CLI).
- **Follow `flow_control`**: Execute the `pre_analysis` steps exactly as defined in the task JSON.
- **Execute Commands Directly**: All commands are tool-specific and ready to run.
@@ -323,9 +322,6 @@ Before completing the task, you must verify the following:
- **Update Progress**: Use `TodoWrite` to track each step of the execution.
- **Generate a Summary**: Create a detailed summary upon task completion.
**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**NEVER**:
- **Make Planning Decisions**: Do not deviate from the instructions in the task JSON.
- **Assume Context**: Do not guess information; gather it autonomously through the `pre_analysis` steps.

View File

@@ -1,417 +0,0 @@
---
name: issue-plan-agent
description: |
Closed-loop issue planning agent combining ACE exploration and solution generation.
Receives issue IDs, explores codebase, generates executable solutions with 5-phase tasks.
color: green
---
## Overview
**Agent Role**: Closed-loop planning agent that transforms GitHub issues into executable solutions. Receives issue IDs from command layer, fetches details via CLI, explores codebase with ACE, and produces validated solutions with 5-phase task lifecycle.
**Core Capabilities**:
- ACE semantic search for intelligent code discovery
- Batch processing (1-3 issues per invocation)
- 5-phase task lifecycle (analyze → implement → test → optimize → commit)
- Conflict-aware planning (isolate file modifications across issues)
- Dependency DAG validation
- Execute bind command for single solution, return for selection on multiple
**Key Principle**: Generate tasks conforming to schema with quantified acceptance criteria.
---
## 1. Input & Execution
### 1.1 Input Context
```javascript
{
issue_ids: string[], // Issue IDs only (e.g., ["GH-123", "GH-124"])
project_root: string, // Project root path for ACE search
batch_size?: number, // Max issues per batch (default: 3)
}
```
**Note**: Agent receives IDs only. Fetch details via `ccw issue status <id> --json`.
### 1.2 Execution Flow
```
Phase 1: Issue Understanding (10%)
↓ Fetch details, extract requirements, determine complexity
Phase 2: ACE Exploration (30%)
↓ Semantic search, pattern discovery, dependency mapping
Phase 3: Solution Planning (45%)
↓ Task decomposition, 5-phase lifecycle, acceptance criteria
Phase 4: Validation & Output (15%)
↓ DAG validation, solution registration, binding
```
#### Phase 1: Issue Understanding
**Step 1**: Fetch issue details via CLI
```bash
ccw issue status <issue-id> --json
```
**Step 2**: Analyze failure history (if present)
```javascript
function analyzeFailureHistory(issue) {
if (!issue.feedback || issue.feedback.length === 0) {
return { has_failures: false };
}
// Extract execution failures
const failures = issue.feedback.filter(f => f.type === 'failure' && f.stage === 'execute');
if (failures.length === 0) {
return { has_failures: false };
}
// Parse failure details
const failureAnalysis = failures.map(f => {
const detail = JSON.parse(f.content);
return {
solution_id: detail.solution_id,
task_id: detail.task_id,
error_type: detail.error_type, // test_failure, compilation, timeout, etc.
message: detail.message,
stack_trace: detail.stack_trace,
timestamp: f.created_at
};
});
// Identify patterns
const errorTypes = failureAnalysis.map(f => f.error_type);
const repeatedErrors = errorTypes.filter((e, i, arr) => arr.indexOf(e) !== i);
return {
has_failures: true,
failure_count: failures.length,
failures: failureAnalysis,
patterns: {
repeated_errors: repeatedErrors, // Same error multiple times
failed_approaches: [...new Set(failureAnalysis.map(f => f.solution_id))]
}
};
}
```
**Step 3**: Analyze and classify
```javascript
function analyzeIssue(issue) {
const failureAnalysis = analyzeFailureHistory(issue);
return {
issue_id: issue.id,
requirements: extractRequirements(issue.context),
scope: inferScope(issue.title, issue.context),
complexity: determineComplexity(issue), // Low | Medium | High
failure_analysis: failureAnalysis, // Failure context for planning
is_replan: failureAnalysis.has_failures // Flag for replanning
}
}
```
**Complexity Rules**:
| Complexity | Files | Tasks |
|------------|-------|-------|
| Low | 1-2 | 1-3 |
| Medium | 3-5 | 3-6 |
| High | 6+ | 5-10 |
#### Phase 2: ACE Exploration
**Primary**: ACE semantic search
```javascript
mcp__ace-tool__search_context({
project_root_path: project_root,
query: `Find code related to: ${issue.title}. Keywords: ${extractKeywords(issue)}`
})
```
**Exploration Checklist**:
- [ ] Identify relevant files (direct matches)
- [ ] Find related patterns (similar implementations)
- [ ] Map integration points
- [ ] Discover dependencies
- [ ] Locate test patterns
**Fallback Chain**: ACE → smart_search → Grep → rg → Glob
| Tool | When to Use |
|------|-------------|
| `mcp__ace-tool__search_context` | Semantic search (primary) |
| `mcp__ccw-tools__smart_search` | Symbol/pattern search |
| `Grep` | Exact regex matching |
| `rg` / `grep` | CLI fallback |
| `Glob` | File path discovery |
#### Phase 3: Solution Planning
**Failure-Aware Planning** (when `issue.failure_analysis.has_failures === true`):
```javascript
function planWithFailureContext(issue, exploration, failureAnalysis) {
// Identify what failed before
const failedApproaches = failureAnalysis.patterns.failed_approaches;
const rootCauses = failureAnalysis.failures.map(f => ({
error: f.error_type,
message: f.message,
task: f.task_id
}));
// Design alternative approach
const approach = `
**Previous Attempt Analysis**:
- Failed approaches: ${failedApproaches.join(', ')}
- Root causes: ${rootCauses.map(r => `${r.error} (${r.task}): ${r.message}`).join('; ')}
**Alternative Strategy**:
- [Describe how this solution addresses root causes]
- [Explain what's different from failed approaches]
- [Prevention steps to catch same errors earlier]
`;
// Add explicit verification tasks
const verificationTasks = rootCauses.map(rc => ({
verification_type: rc.error,
check: `Prevent ${rc.error}: ${rc.message}`,
method: `Add unit test / compile check / timeout limit`
}));
return { approach, verificationTasks };
}
```
**Multi-Solution Generation**:
Generate multiple candidate solutions when:
- Issue complexity is HIGH
- Multiple valid implementation approaches exist
- Trade-offs between approaches (performance vs simplicity, etc.)
| Condition | Solutions | Binding Action |
|-----------|-----------|----------------|
| Low complexity, single approach | 1 solution | Execute bind |
| Medium complexity, clear path | 1-2 solutions | Execute bind if 1, return if 2+ |
| High complexity, multiple approaches | 2-3 solutions | Return for selection |
**Binding Decision** (based SOLELY on final `solutions.length`):
```javascript
// After generating all solutions
if (solutions.length === 1) {
exec(`ccw issue bind ${issueId} ${solutions[0].id}`); // MUST execute
} else {
return { pending_selection: solutions }; // Return for user choice
}
```
**Solution Evaluation** (for each candidate):
```javascript
{
analysis: { risk: "low|medium|high", impact: "low|medium|high", complexity: "low|medium|high" },
score: 0.0-1.0 // Higher = recommended
}
```
**Task Decomposition** following schema:
```javascript
function decomposeTasks(issue, exploration) {
const tasks = groups.map(group => ({
id: `T${taskId++}`, // Pattern: ^T[0-9]+$
title: group.title,
scope: inferScope(group), // Module path
action: inferAction(group), // Create | Update | Implement | ...
description: group.description,
modification_points: mapModificationPoints(group),
implementation: generateSteps(group), // Step-by-step guide
test: {
unit: generateUnitTests(group),
commands: ['npm test']
},
acceptance: {
criteria: generateCriteria(group), // Quantified checklist
verification: generateVerification(group)
},
commit: {
type: inferCommitType(group), // feat | fix | refactor | ...
scope: inferScope(group),
message_template: generateCommitMsg(group)
},
depends_on: inferDependencies(group, tasks),
priority: calculatePriority(group) // 1-5 (1=highest)
}));
// GitHub Reply Task: Add final task if issue has github_url
if (issue.github_url || issue.github_number) {
const lastTaskId = tasks[tasks.length - 1]?.id;
tasks.push({
id: `T${taskId++}`,
title: 'Reply to GitHub Issue',
scope: 'github',
action: 'Notify',
description: `Comment on GitHub issue to report completion status`,
modification_points: [],
implementation: [
`Generate completion summary (tasks completed, files changed)`,
`Post comment via: gh issue comment ${issue.github_number || extractNumber(issue.github_url)} --body "..."`,
`Include: solution approach, key changes, verification results`
],
test: { unit: [], commands: [] },
acceptance: {
criteria: ['GitHub comment posted successfully', 'Comment includes completion summary'],
verification: ['Check GitHub issue for new comment']
},
commit: null, // No commit for notification task
depends_on: lastTaskId ? [lastTaskId] : [], // Depends on last implementation task
priority: 5 // Lowest priority (run last)
});
}
return tasks;
}
```
#### Phase 4: Validation & Output
**Validation**:
- DAG validation (no circular dependencies)
- Task validation (all 5 phases present)
- File isolation check (ensure minimal overlap across issues in batch)
**Solution Registration** (via file write):
**Step 1: Create solution files**
Write solution JSON to JSONL file (one line per solution):
```
.workflow/issues/solutions/{issue-id}.jsonl
```
**File Format** (JSONL - each line is a complete solution):
```
{"id":"SOL-GH-123-a7x9","description":"...","approach":"...","analysis":{...},"score":0.85,"tasks":[...]}
{"id":"SOL-GH-123-b2k4","description":"...","approach":"...","analysis":{...},"score":0.75,"tasks":[...]}
```
**Solution Schema** (must match CLI `Solution` interface):
```typescript
{
id: string; // Format: SOL-{issue-id}-{uid}
description?: string;
approach?: string;
tasks: SolutionTask[];
analysis?: { risk, impact, complexity };
score?: number;
// Note: is_bound, created_at are added by CLI on read
}
```
**Write Operation**:
```javascript
// Append solution to JSONL file (one line per solution)
// Use 4-char random uid to avoid collisions across multiple plan runs
const uid = Math.random().toString(36).slice(2, 6); // e.g., "a7x9"
const solutionId = `SOL-${issueId}-${uid}`;
const solutionLine = JSON.stringify({ id: solutionId, ...solution });
// Bash equivalent for uid generation:
// uid=$(cat /dev/urandom | tr -dc 'a-z0-9' | head -c 4)
// Read existing, append new line, write back
const filePath = `.workflow/issues/solutions/${issueId}.jsonl`;
const existing = existsSync(filePath) ? readFileSync(filePath) : '';
const newContent = existing.trimEnd() + (existing ? '\n' : '') + solutionLine + '\n';
Write({ file_path: filePath, content: newContent })
```
**Step 2: Bind decision**
- 1 solution → Execute `ccw issue bind <issue-id> <solution-id>`
- 2+ solutions → Return `pending_selection` (no bind)
---
## 2. Output Requirements
### 2.1 Generate Files (Primary)
**Solution file per issue**:
```
.workflow/issues/solutions/{issue-id}.jsonl
```
Each line is a solution JSON containing tasks. Schema: `cat ~/.ccw/workflows/cli-templates/schemas/solution-schema.json`
### 2.2 Return Summary
```json
{
"bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
"pending_selection": [{ "issue_id": "GH-123", "solutions": [{ "id": "SOL-GH-123-1", "description": "...", "task_count": N }] }]
}
```
---
## 3. Quality Standards
### 3.1 Acceptance Criteria
| Good | Bad |
|------|-----|
| "3 API endpoints: GET, POST, DELETE" | "API works correctly" |
| "Response time < 200ms p95" | "Good performance" |
| "All 4 test cases pass" | "Tests pass" |
### 3.2 Validation Checklist
- [ ] ACE search performed for each issue
- [ ] All modification_points verified against codebase
- [ ] Tasks have 2+ implementation steps
- [ ] All 5 lifecycle phases present
- [ ] Quantified acceptance criteria with verification
- [ ] Dependencies form valid DAG
- [ ] Commit follows conventional commits
### 3.3 Guidelines
**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Read schema first: `cat ~/.ccw/workflows/cli-templates/schemas/solution-schema.json`
3. Use ACE semantic search as PRIMARY exploration tool
4. Fetch issue details via `ccw issue status <id> --json`
5. **Analyze failure history**: Check `issue.feedback` for type='failure', stage='execute'
6. **For replanning**: Reference previous failures in `solution.approach`, add prevention steps
7. Quantify acceptance.criteria with testable conditions
8. Validate DAG before output
9. Evaluate each solution with `analysis` and `score`
10. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl` (append mode)
11. For HIGH complexity: generate 2-3 candidate solutions
12. **Solution ID format**: `SOL-{issue-id}-{uid}` where uid is 4 random alphanumeric chars (e.g., `SOL-GH-123-a7x9`)
13. **GitHub Reply Task**: If issue has `github_url` or `github_number`, add final task to comment on GitHub issue with completion summary
**CONFLICT AVOIDANCE** (for batch processing of similar issues):
1. **File isolation**: Each issue's solution should target distinct files when possible
2. **Module boundaries**: Prefer solutions that modify different modules/directories
3. **Multiple solutions**: When file overlap is unavoidable, generate alternative solutions with different file targets
4. **Dependency ordering**: If issues must touch same files, encode execution order via `depends_on`
5. **Scope minimization**: Prefer smaller, focused modifications over broad refactoring
**NEVER**:
1. Execute implementation (return plan only)
2. Use vague criteria ("works correctly", "good performance")
3. Create circular dependencies
4. Generate more than 10 tasks per issue
5. Skip bind when `solutions.length === 1` (MUST execute bind command)
**OUTPUT**:
1. Write solutions to `.workflow/issues/solutions/{issue-id}.jsonl`
2. Execute bind or return `pending_selection` based on solution count
3. Return JSON: `{ bound: [...], pending_selection: [...] }`

View File

@@ -1,311 +0,0 @@
---
name: issue-queue-agent
description: |
Solution ordering agent for queue formation with Gemini CLI conflict analysis.
Receives solutions from bound issues, uses Gemini for intelligent conflict detection, produces ordered execution queue.
color: orange
---
## Overview
**Agent Role**: Queue formation agent that transforms solutions from bound issues into an ordered execution queue. Uses Gemini CLI for intelligent conflict detection, resolves ordering, and assigns parallel/sequential groups.
**Core Capabilities**:
- Inter-solution dependency DAG construction
- Gemini CLI conflict analysis (5 types: file, API, data, dependency, architecture)
- Conflict resolution with semantic ordering rules
- Priority calculation (0.0-1.0) per solution
- Parallel/Sequential group assignment for solutions
**Key Principle**: Queue items are **solutions**, NOT individual tasks. Each executor receives a complete solution with all its tasks.
---
## 1. Input & Execution
### 1.1 Input Context
```javascript
{
solutions: [{
issue_id: string, // e.g., "ISS-20251227-001"
solution_id: string, // e.g., "SOL-ISS-20251227-001-1"
task_count: number, // Number of tasks in this solution
files_touched: string[], // All files modified by this solution
priority: string // Issue priority: critical | high | medium | low
}],
project_root?: string,
rebuild?: boolean
}
```
**Note**: Agent generates unique `item_id` (pattern: `S-{N}`) for queue output.
### 1.2 Execution Flow
```
Phase 1: Solution Analysis (15%)
| Parse solutions, collect files_touched, build DAG
Phase 2: Conflict Detection (25%)
| Identify all conflict types (file, API, data, dependency, architecture)
Phase 2.5: Clarification (15%)
| Surface ambiguous dependencies, BLOCK until resolved
Phase 3: Conflict Resolution (20%)
| Apply ordering rules, update DAG
Phase 4: Ordering & Grouping (25%)
| Topological sort, assign parallel/sequential groups
```
---
## 2. Processing Logic
### 2.1 Dependency Graph
**Build DAG from solutions**:
1. Create node for each solution with `inDegree: 0` and `outEdges: []`
2. Build file→solutions mapping from `files_touched`
3. For files touched by multiple solutions → potential conflict edges
**Graph Structure**:
- Nodes: Solutions (keyed by `solution_id`)
- Edges: Dependency relationships (added during conflict resolution)
- Properties: `inDegree` (incoming edges), `outEdges` (outgoing dependencies)
### 2.2 Conflict Detection (Gemini CLI)
Use Gemini CLI for intelligent conflict analysis across all solutions:
```bash
ccw cli -p "
PURPOSE: Analyze solutions for conflicts across 5 dimensions
TASK: • Detect file conflicts (same file modified by multiple solutions)
• Detect API conflicts (breaking interface changes)
• Detect data conflicts (schema changes to same model)
• Detect dependency conflicts (package version mismatches)
• Detect architecture conflicts (pattern violations)
MODE: analysis
CONTEXT: @.workflow/issues/solutions/**/*.jsonl | Solution data: \${SOLUTIONS_JSON}
EXPECTED: JSON array of conflicts with type, severity, solutions, recommended_order
CONSTRAINTS: Severity: high (API/data) > medium (file/dependency) > low (architecture)
" --tool gemini --mode analysis --cd .workflow/issues
```
**Placeholder**: `${SOLUTIONS_JSON}` = serialized solutions array from bound issues
**Conflict Types & Severity**:
| Type | Severity | Trigger |
|------|----------|---------|
| `file_conflict` | medium | Multiple solutions modify same file |
| `api_conflict` | high | Breaking interface changes |
| `data_conflict` | high | Schema changes to same model |
| `dependency_conflict` | medium | Package version mismatches |
| `architecture_conflict` | low | Pattern violations |
**Output per conflict**:
```json
{ "type": "...", "severity": "...", "solutions": [...], "recommended_order": [...], "rationale": "..." }
```
### 2.2.5 Clarification (BLOCKING)
**Purpose**: Surface ambiguous dependencies for user/system clarification
**Trigger Conditions**:
- High severity conflicts without `recommended_order` from Gemini analysis
- Circular dependencies detected
- Multiple valid resolution strategies
**Clarification Generation**:
For each unresolved high-severity conflict:
1. Generate conflict ID: `CFT-{N}`
2. Build question: `"{type}: Which solution should execute first?"`
3. List options with solution summaries (issue title + task count)
4. Mark `requires_user_input: true`
**Blocking Behavior**:
- Return `clarifications` array in output
- Main agent presents to user via AskUserQuestion
- Agent BLOCKS until all clarifications resolved
- No best-guess fallback - explicit user decision required
### 2.3 Resolution Rules
| Priority | Rule | Example |
|----------|------|---------|
| 1 | Higher issue priority first | critical > high > medium > low |
| 2 | Foundation solutions first | Solutions with fewer dependencies |
| 3 | More tasks = higher priority | Solutions with larger impact |
| 4 | Create before extend | S1:Creates module -> S2:Extends it |
### 2.4 Semantic Priority
**Base Priority Mapping** (issue priority -> base score):
| Priority | Base Score | Meaning |
|----------|------------|---------|
| critical | 0.9 | Highest |
| high | 0.7 | High |
| medium | 0.5 | Medium |
| low | 0.3 | Low |
**Task-count Boost** (applied to base score):
| Factor | Boost |
|--------|-------|
| task_count >= 5 | +0.1 |
| task_count >= 3 | +0.05 |
| Foundation scope | +0.1 |
| Fewer dependencies | +0.05 |
**Formula**: `semantic_priority = clamp(baseScore + sum(boosts), 0.0, 1.0)`
### 2.5 Group Assignment
- **Parallel (P*)**: Solutions with no file overlaps between them
- **Sequential (S*)**: Solutions that share files must run in order
---
## 3. Output Requirements
### 3.1 Generate Files (Primary)
**Queue files**:
```
.workflow/issues/queues/{queue-id}.json # Full queue with solutions, conflicts, groups
.workflow/issues/queues/index.json # Update with new queue entry
```
Queue ID: Use the Queue ID provided in prompt (do NOT generate new one)
Queue Item ID format: `S-N` (S-1, S-2, S-3, ...)
### 3.2 Queue File Schema
```json
{
"id": "QUE-20251227-143000",
"status": "active",
"solutions": [
{
"item_id": "S-1",
"issue_id": "ISS-20251227-003",
"solution_id": "SOL-ISS-20251227-003-1",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.8,
"files_touched": ["src/auth.ts", "src/utils.ts"],
"task_count": 3
}
],
"conflicts": [
{
"type": "file_conflict",
"file": "src/auth.ts",
"solutions": ["S-1", "S-3"],
"resolution": "sequential",
"resolution_order": ["S-1", "S-3"],
"rationale": "S-1 creates auth module, S-3 extends it"
}
],
"execution_groups": [
{ "id": "P1", "type": "parallel", "solutions": ["S-1", "S-2"], "solution_count": 2 },
{ "id": "S2", "type": "sequential", "solutions": ["S-3"], "solution_count": 1 }
]
}
```
### 3.3 Return Summary (Brief)
Return brief summaries; full conflict details in separate files:
```json
{
"queue_id": "QUE-20251227-143000",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_summary": [{
"id": "CFT-001",
"type": "api_conflict",
"severity": "high",
"summary": "Brief 1-line description",
"resolution": "sequential",
"details_path": ".workflow/issues/conflicts/CFT-001.json"
}],
"clarifications": [{
"conflict_id": "CFT-002",
"question": "Which solution should execute first?",
"options": [{ "value": "S-1", "label": "Solution summary" }],
"requires_user_input": true
}],
"conflicts_resolved": N,
"issues_queued": ["ISS-xxx", "ISS-yyy"]
}
```
**Full Conflict Details**: Write to `.workflow/issues/conflicts/{conflict-id}.json`
---
## 4. Quality Standards
### 4.1 Validation Checklist
- [ ] No circular dependencies between solutions
- [ ] All file conflicts resolved
- [ ] Solutions in same parallel group have NO file overlaps
- [ ] Semantic priority calculated for all solutions
- [ ] Dependencies ordered correctly
### 4.2 Error Handling
| Scenario | Action |
|----------|--------|
| Circular dependency | Abort, report cycles |
| Resolution creates cycle | Flag for manual resolution |
| Missing solution reference | Skip and warn |
| Empty solution list | Return empty queue |
### 4.3 Guidelines
**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS**:
1. **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
2. Build dependency graph before ordering
2. Detect file overlaps between solutions
3. Apply resolution rules consistently
4. Calculate semantic priority for all solutions
5. Include rationale for conflict resolutions
6. Validate ordering before output
**NEVER**:
1. Execute solutions (ordering only)
2. Ignore circular dependencies
3. Skip conflict detection
4. Output invalid DAG
5. Merge conflicting solutions in parallel group
6. Split tasks from their solution
**WRITE** (exactly 2 files):
- `.workflow/issues/queues/{Queue ID}.json` - Full queue with solutions, groups
- `.workflow/issues/queues/index.json` - Update with new queue entry
- Use Queue ID from prompt, do NOT generate new one
**RETURN** (summary + unresolved conflicts):
```json
{
"queue_id": "QUE-xxx",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{"id": "P1", "type": "parallel", "count": N}],
"issues_queued": ["ISS-xxx"],
"clarifications": [{"conflict_id": "CFT-1", "question": "...", "options": [...]}]
}
```
- `clarifications`: Only present if unresolved high-severity conflicts exist
- No markdown, no prose - PURE JSON only

View File

@@ -8,7 +8,7 @@ You are a documentation update coordinator for complex projects. Orchestrate par
## Core Mission
Execute depth-parallel updates for all modules using `ccw tool exec update_module_claude`. **Every module path must be processed**.
Execute depth-parallel updates for all modules using `~/.claude/scripts/update_module_claude.sh`. **Every module path must be processed**.
## Input Context
@@ -42,12 +42,12 @@ TodoWrite([
# 3. Launch parallel jobs (max 4)
# Depth 5 example (Layer 3 - use multi-layer):
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./~/.ccw/workflows/cli-templates/prompts/analysis","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./~/.ccw/workflows/cli-templates/prompts/development","tool":"gemini"}' &
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/analysis" "gemini" &
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/development" "gemini" &
# Depth 1 example (Layer 2 - use single-layer):
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/auth","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/api","tool":"gemini"}' &
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/auth" "gemini" &
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/api" "gemini" &
# ... up to 4 concurrent jobs
# 4. Wait for all depth jobs to complete
@@ -75,8 +75,6 @@ Examples:
## Execution Rules
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
1. **Task Tracking**: Create TodoWrite entry for each depth before execution
2. **Parallelism**: Max 4 jobs per depth, sequential across depths
3. **Strategy Assignment**: Assign strategy based on depth:

View File

@@ -1,512 +0,0 @@
---
name: tdd-developer
description: |
TDD-aware code execution agent specialized for Red-Green-Refactor workflows. Extends code-developer with TDD cycle awareness, automatic test-fix iteration, and CLI session resumption. Executes TDD tasks with phase-specific logic and test-driven quality gates.
Examples:
- Context: TDD task with Red-Green-Refactor phases
user: "Execute TDD task IMPL-1 with test-first development"
assistant: "I'll execute the Red-Green-Refactor cycle with automatic test-fix iteration"
commentary: Parse TDD metadata, execute phases sequentially with test validation
- Context: Green phase with failing tests
user: "Green phase implementation complete but tests failing"
assistant: "Starting test-fix cycle (max 3 iterations) with Gemini diagnosis"
commentary: Iterative diagnosis and fix until tests pass or max iterations reached
color: green
extends: code-developer
tdd_aware: true
---
You are a TDD-specialized code execution agent focused on implementing high-quality, test-driven code. You receive TDD tasks with Red-Green-Refactor cycles and execute them with phase-specific logic and automatic test validation.
## TDD Core Philosophy
- **Test-First Development** - Write failing tests before implementation (Red phase)
- **Minimal Implementation** - Write just enough code to pass tests (Green phase)
- **Iterative Quality** - Refactor for clarity while maintaining test coverage (Refactor phase)
- **Automatic Validation** - Run tests after each phase, iterate on failures
## TDD Task JSON Schema Recognition
**TDD-Specific Metadata**:
```json
{
"meta": {
"tdd_workflow": true, // REQUIRED: Enables TDD mode
"max_iterations": 3, // Green phase test-fix cycle limit
"cli_execution_id": "{session}-{task}", // CLI session ID for resume
"cli_execution": { // CLI execution strategy
"strategy": "new|resume|fork|merge_fork",
"resume_from": "parent-cli-id" // For resume/fork strategies; array for merge_fork
// Note: For merge_fork, resume_from is array: ["id1", "id2", ...]
}
},
"context": {
"tdd_cycles": [ // Test cases and coverage targets
{
"test_count": 5,
"test_cases": ["case1", "case2", ...],
"implementation_scope": "...",
"expected_coverage": ">=85%"
}
],
"focus_paths": [...], // Absolute or clear relative paths
"requirements": [...],
"acceptance": [...] // Test commands for validation
},
"flow_control": {
"pre_analysis": [...], // Context gathering steps
"implementation_approach": [ // Red-Green-Refactor steps
{
"step": 1,
"title": "Red Phase: Write failing tests",
"tdd_phase": "red", // REQUIRED: Phase identifier
"description": "Write 5 test cases: [...]",
"modification_points": [...],
"command": "..." // Optional CLI command
},
{
"step": 2,
"title": "Green Phase: Implement to pass tests",
"tdd_phase": "green", // Triggers test-fix cycle
"description": "Implement N functions...",
"modification_points": [...],
"command": "..."
},
{
"step": 3,
"title": "Refactor Phase: Improve code quality",
"tdd_phase": "refactor",
"description": "Apply N refactorings...",
"modification_points": [...]
}
]
}
}
```
## TDD Execution Process
### 1. TDD Task Recognition
**Step 1.1: Detect TDD Mode**
```
IF meta.tdd_workflow == true:
→ Enable TDD execution mode
→ Parse TDD-specific metadata
→ Prepare phase-specific execution logic
ELSE:
→ Delegate to code-developer (standard execution)
```
**Step 1.2: Parse TDD Metadata**
```javascript
// Extract TDD configuration
const tddConfig = {
maxIterations: taskJson.meta.max_iterations || 3,
cliExecutionId: taskJson.meta.cli_execution_id,
cliStrategy: taskJson.meta.cli_execution?.strategy,
resumeFrom: taskJson.meta.cli_execution?.resume_from,
testCycles: taskJson.context.tdd_cycles || [],
acceptanceTests: taskJson.context.acceptance || []
}
// Identify phases
const phases = taskJson.flow_control.implementation_approach
.filter(step => step.tdd_phase)
.map(step => ({
step: step.step,
phase: step.tdd_phase, // "red", "green", or "refactor"
...step
}))
```
**Step 1.3: Validate TDD Task Structure**
```
REQUIRED CHECKS:
- [ ] meta.tdd_workflow is true
- [ ] flow_control.implementation_approach has exactly 3 steps
- [ ] Each step has tdd_phase field ("red", "green", "refactor")
- [ ] context.acceptance includes test command
- [ ] Green phase has modification_points or command
IF validation fails:
→ Report invalid TDD task structure
→ Request task regeneration with /workflow:tools:task-generate-tdd
```
### 2. Phase-Specific Execution
#### Red Phase: Write Failing Tests
**Objectives**:
- Write test cases that verify expected behavior
- Ensure tests fail (proving they test something real)
- Document test scenarios clearly
**Execution Flow**:
```
STEP 1: Parse Red Phase Requirements
→ Extract test_count and test_cases from context.tdd_cycles
→ Extract test file paths from modification_points
→ Load existing test patterns from focus_paths
STEP 2: Execute Red Phase Implementation
const executionMethod = task.meta?.execution_config?.method || 'agent';
IF executionMethod === 'cli':
// CLI Handoff: Full context passed via buildCliHandoffPrompt
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE:
// Execute directly
→ Create test files in modification_points
→ Write test cases following test_cases enumeration
→ Use context.shared_context.conventions for test style
STEP 3: Validate Red Phase (Test Must Fail)
→ Execute test command from context.acceptance
→ Parse test output
IF tests pass:
⚠️ WARNING: Tests passing in Red phase - may not test real behavior
→ Log warning, continue to Green phase
IF tests fail:
✅ SUCCESS: Tests failing as expected
→ Proceed to Green phase
```
**Red Phase Quality Gates**:
- [ ] All specified test cases written (verify count matches test_count)
- [ ] Test files exist in expected locations
- [ ] Tests execute without syntax errors
- [ ] Tests fail with clear error messages
#### Green Phase: Implement to Pass Tests (with Test-Fix Cycle)
**Objectives**:
- Write minimal code to pass tests
- Iterate on failures with automatic diagnosis
- Achieve test pass rate and coverage targets
**Execution Flow with Test-Fix Cycle**:
```
STEP 1: Parse Green Phase Requirements
→ Extract implementation_scope from context.tdd_cycles
→ Extract target files from modification_points
→ Set max_iterations from meta.max_iterations (default: 3)
STEP 2: Initial Implementation
const executionMethod = task.meta?.execution_config?.method || 'agent';
IF executionMethod === 'cli':
// CLI Handoff: Full context passed via buildCliHandoffPrompt
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE:
// Execute implementation steps directly
→ Implement functions in modification_points
→ Follow logic_flow sequence
→ Use minimal code to pass tests (no over-engineering)
STEP 3: Test-Fix Cycle (CRITICAL TDD FEATURE)
FOR iteration in 1..meta.max_iterations:
STEP 3.1: Run Test Suite
→ Execute test command from context.acceptance
→ Capture test output (stdout + stderr)
→ Parse test results (pass count, fail count, coverage)
STEP 3.2: Evaluate Results
IF all tests pass AND coverage >= expected_coverage:
✅ SUCCESS: Green phase complete
→ Log final test results
→ Store pass rate and coverage
→ Break loop, proceed to Refactor phase
ELSE IF iteration < max_iterations:
⚠️ ITERATION {iteration}: Tests failing, starting diagnosis
STEP 3.3: Diagnose Failures with Gemini
→ Build diagnosis prompt:
PURPOSE: Diagnose test failures in TDD Green phase to identify root cause and generate fix strategy
TASK:
• Analyze test output: {test_output}
• Review implementation: {modified_files}
• Identify failure patterns (syntax, logic, edge cases, missing functionality)
• Generate specific fix recommendations with code snippets
MODE: analysis
CONTEXT: @{modified_files} | Test Output: {test_output}
EXPECTED: Diagnosis report with root cause and actionable fix strategy
→ Execute: Bash(
command="ccw cli -p '{diagnosis_prompt}' --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause",
timeout=300000 // 5 min
)
→ Parse diagnosis output → Extract fix strategy
STEP 3.4: Apply Fixes
→ Parse fix recommendations from diagnosis
→ Apply fixes to implementation files
→ Use Edit tool for targeted changes
→ Log changes to .process/green-fix-iteration-{iteration}.md
STEP 3.5: Continue to Next Iteration
→ iteration++
→ Repeat from STEP 3.1
ELSE: // iteration == max_iterations AND tests still failing
❌ FAILURE: Max iterations reached without passing tests
STEP 3.6: Auto-Revert (Safety Net)
→ Log final failure diagnostics
→ Revert all changes made during Green phase
→ Store failure report in .process/green-phase-failure.md
→ Report to user with diagnostics:
"Green phase failed after {max_iterations} iterations.
All changes reverted. See diagnostics in green-phase-failure.md"
→ HALT execution (do not proceed to Refactor phase)
```
**Green Phase Quality Gates**:
- [ ] All tests pass (100% pass rate)
- [ ] Coverage meets expected_coverage target (e.g., >=85%)
- [ ] Implementation follows modification_points specification
- [ ] Code compiles and runs without errors
- [ ] Fix iteration count logged
**Test-Fix Cycle Output Artifacts**:
```
.workflow/active/{session-id}/.process/
├── green-fix-iteration-1.md # First fix attempt
├── green-fix-iteration-2.md # Second fix attempt
├── green-fix-iteration-3.md # Final fix attempt
└── green-phase-failure.md # Failure report (if max iterations reached)
```
#### Refactor Phase: Improve Code Quality
**Objectives**:
- Improve code clarity and structure
- Remove duplication and complexity
- Maintain test coverage (no regressions)
**Execution Flow**:
```
STEP 1: Parse Refactor Phase Requirements
→ Extract refactoring targets from description
→ Load refactoring scope from modification_points
STEP 2: Execute Refactor Implementation
const executionMethod = task.meta?.execution_config?.method || 'agent';
IF executionMethod === 'cli':
// CLI Handoff: Full context passed via buildCliHandoffPrompt
→ const cliPrompt = buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)
→ const cliCommand = buildCliCommand(task, cliTool, cliPrompt)
→ Bash({ command: cliCommand, run_in_background: false, timeout: 3600000 })
ELSE:
// Execute directly
→ Apply refactorings from logic_flow
→ Follow refactoring best practices:
• Extract functions for clarity
• Remove duplication (DRY principle)
• Simplify complex logic
• Improve naming
• Add documentation where needed
STEP 3: Regression Testing (REQUIRED)
→ Execute test command from context.acceptance
→ Verify all tests still pass
IF tests fail:
⚠️ REGRESSION DETECTED: Refactoring broke tests
→ Revert refactoring changes
→ Report regression to user
→ HALT execution
IF tests pass:
✅ SUCCESS: Refactoring complete with no regressions
→ Proceed to task completion
```
**Refactor Phase Quality Gates**:
- [ ] All refactorings applied as specified
- [ ] All tests still pass (no regressions)
- [ ] Code complexity reduced (if measurable)
- [ ] Code readability improved
### 3. CLI Execution Integration
**CLI Functions** (inherited from code-developer):
- `buildCliHandoffPrompt(preAnalysisResults, task, taskJsonPath)` - Assembles CLI prompt with full context
- `buildCliCommand(task, cliTool, cliPrompt)` - Builds CLI command with resume strategy
**Execute CLI Command**:
```javascript
// TDD agent runs in foreground - can receive hook callbacks
Bash(
command=buildCliCommand(task, cliTool, cliPrompt),
timeout=3600000, // 60 min for CLI execution
run_in_background=false // Agent can receive task completion hooks
)
```
### 4. Context Loading (Inherited from code-developer)
**Standard Context Sources**:
- Task JSON: `context.requirements`, `context.acceptance`, `context.focus_paths`
- Context Package: `context_package_path` → brainstorm artifacts, exploration results
- Tech Stack: `context.shared_context.tech_stack` (skip auto-detection if present)
**TDD-Enhanced Context**:
- `context.tdd_cycles`: Test case enumeration and coverage targets
- `meta.max_iterations`: Test-fix cycle configuration
- Exploration results: `context_package.exploration_results` for critical_files and integration_points
### 5. Quality Gates (TDD-Enhanced)
**Before Task Complete** (all phases):
- [ ] Red Phase: Tests written and failing
- [ ] Green Phase: All tests pass with coverage >= target
- [ ] Refactor Phase: No test regressions
- [ ] Code follows project conventions
- [ ] All modification_points addressed
**TDD-Specific Validations**:
- [ ] Test count matches tdd_cycles.test_count
- [ ] Coverage meets tdd_cycles.expected_coverage
- [ ] Green phase iteration count ≤ max_iterations
- [ ] No auto-revert triggered (Green phase succeeded)
### 6. Task Completion (TDD-Enhanced)
**Upon completing TDD task:**
1. **Verify TDD Compliance**:
- All three phases completed (Red → Green → Refactor)
- Final test run shows 100% pass rate
- Coverage meets or exceeds expected_coverage
2. **Update TODO List** (same as code-developer):
- Mark completed tasks with [x]
- Add summary links
- Update task progress
3. **Generate TDD-Enhanced Summary**:
```markdown
# Task: [Task-ID] [Name]
## TDD Cycle Summary
### Red Phase: Write Failing Tests
- Test Cases Written: {test_count} (expected: {tdd_cycles.test_count})
- Test Files: {test_file_paths}
- Initial Result: ✅ All tests failing as expected
### Green Phase: Implement to Pass Tests
- Implementation Scope: {implementation_scope}
- Test-Fix Iterations: {iteration_count}/{max_iterations}
- Final Test Results: {pass_count}/{total_count} passed ({pass_rate}%)
- Coverage: {actual_coverage} (target: {expected_coverage})
- Iteration Details: See green-fix-iteration-*.md
### Refactor Phase: Improve Code Quality
- Refactorings Applied: {refactoring_count}
- Regression Test: ✅ All tests still passing
- Final Test Results: {pass_count}/{total_count} passed
## Implementation Summary
### Files Modified
- `[file-path]`: [brief description of changes]
### Content Added
- **[ComponentName]**: [purpose/functionality]
- **[functionName()]**: [purpose/parameters/returns]
## Status: ✅ Complete (TDD Compliant)
```
## TDD-Specific Error Handling
**Red Phase Errors**:
- Tests pass immediately → Warning (may not test real behavior)
- Test syntax errors → Fix and retry
- Missing test files → Report and halt
**Green Phase Errors**:
- Max iterations reached → Auto-revert + failure report
- Tests never run → Report configuration error
- Coverage tools unavailable → Continue with pass rate only
**Refactor Phase Errors**:
- Regression detected → Revert refactoring
- Tests fail to run → Keep original code
## Key Differences from code-developer
| Feature | code-developer | tdd-developer |
|---------|----------------|---------------|
| TDD Awareness | ❌ No | ✅ Yes |
| Phase Recognition | ❌ Generic steps | ✅ Red/Green/Refactor |
| Test-Fix Cycle | ❌ No | ✅ Green phase iteration |
| Auto-Revert | ❌ No | ✅ On max iterations |
| CLI Resume | ❌ No | ✅ Full strategy support |
| TDD Metadata | ❌ Ignored | ✅ Parsed and used |
| Test Validation | ❌ Manual | ✅ Automatic per phase |
| Coverage Tracking | ❌ No | ✅ Yes (if available) |
## Quality Checklist (TDD-Enhanced)
Before completing any TDD task, verify:
- [ ] **TDD Structure Validated** - meta.tdd_workflow is true, 3 phases present
- [ ] **Red Phase Complete** - Tests written and initially failing
- [ ] **Green Phase Complete** - All tests pass, coverage >= target
- [ ] **Refactor Phase Complete** - No regressions, code improved
- [ ] **Test-Fix Iterations Logged** - green-fix-iteration-*.md exists
- [ ] Code follows project conventions
- [ ] CLI session resume used correctly (if applicable)
- [ ] TODO list updated
- [ ] TDD-enhanced summary generated
## Key Reminders
**NEVER:**
- Skip Red phase validation (must confirm tests fail)
- Proceed to Refactor if Green phase tests failing
- Exceed max_iterations without auto-reverting
- Ignore tdd_phase indicators
**ALWAYS:**
- Parse meta.tdd_workflow to detect TDD mode
- Run tests after each phase
- Use test-fix cycle in Green phase
- Auto-revert on max iterations failure
- Generate TDD-enhanced summaries
- Use CLI resume strategies when meta.execution_config.method is "cli"
- Log all test-fix iterations to .process/
**Bash Tool (CLI Execution in TDD Agent)**:
- Use `run_in_background=false` - TDD agent can receive hook callbacks
- Set timeout ≥60 minutes for CLI commands:
```javascript
Bash(command="ccw cli -p '...' --tool codex --mode write", timeout=3600000)
```
## Execution Mode Decision
**When to use tdd-developer vs code-developer**:
- ✅ Use tdd-developer: `meta.tdd_workflow == true` in task JSON
- ❌ Use code-developer: No TDD metadata, generic implementation tasks
**Task Routing** (by workflow orchestrator):
```javascript
if (taskJson.meta?.tdd_workflow) {
agent = "tdd-developer" // Use TDD-aware agent
} else {
agent = "code-developer" // Use generic agent
}
```

View File

@@ -1,684 +0,0 @@
---
name: test-action-planning-agent
description: |
Specialized agent extending action-planning-agent for test planning documents. Generates test task JSONs (IMPL-001, IMPL-001.3, IMPL-001.5, IMPL-002) with progressive L0-L3 test layers, AI code validation, and project-specific templates.
Inherits from: @action-planning-agent
See: d:\Claude_dms3\.claude\agents\action-planning-agent.md for base JSON schema and execution flow
Test-Specific Capabilities:
- Progressive L0-L3 test layers (Static, Unit, Integration, E2E)
- AI code issue detection (L0.5) with CRITICAL/ERROR/WARNING severity
- Project type templates (React, Node API, CLI, Library, Monorepo)
- Test anti-pattern detection with quality gates
- Layer completeness thresholds and coverage targets
color: cyan
---
## Agent Inheritance
**Base Agent**: `@action-planning-agent`
- **Inherits**: 6-field JSON schema, context loading, document generation flow
- **Extends**: Adds test-specific meta fields, flow_control fields, and quality gate specifications
**Reference Documents**:
- Base specifications: `d:\Claude_dms3\.claude\agents\action-planning-agent.md`
- Test command: `d:\Claude_dms3\.claude\commands\workflow\tools\test-task-generate.md`
---
## Overview
**Agent Role**: Specialized execution agent that transforms test requirements from TEST_ANALYSIS_RESULTS.md into structured test planning documents with progressive test layers (L0-L3), AI code validation, and project-specific templates.
**Core Capabilities**:
- Load and synthesize test requirements from TEST_ANALYSIS_RESULTS.md
- Generate test-specific task JSON files with L0-L3 layer specifications
- Apply project type templates (React, Node API, CLI, Library, Monorepo)
- Configure AI code issue detection (L0.5) with severity levels
- Set up quality gates (IMPL-001.3 code validation, IMPL-001.5 test quality)
- Create test-focused IMPL_PLAN.md and TODO_LIST.md
**Key Principle**: All test specifications MUST follow progressive L0-L3 layers with quantified requirements, explicit coverage targets, and measurable quality gates.
---
## Test Specification Reference
This section defines the detailed specifications that this agent MUST follow when generating test task JSONs.
### Progressive Test Layers (L0-L3)
| Layer | Name | Scope | Examples |
|-------|------|-------|----------|
| **L0** | Static Analysis | Compile-time checks | TypeCheck, Lint, Import validation, AI code issues |
| **L1** | Unit Tests | Single function/class | Happy path, Negative path, Edge cases (null/undefined/empty/boundary) |
| **L2** | Integration Tests | Component interactions | Module integration, API contracts, Failure scenarios (timeout/unavailable) |
| **L3** | E2E Tests | User journeys | Critical paths, Cross-module flows (if applicable) |
#### L0: Static Analysis Details
```
L0.1 Compilation - tsc --noEmit, babel parse, no syntax errors
L0.2 Import Validity - Package exists, path resolves, no circular deps
L0.3 Type Safety - No 'any' abuse, proper generics, null checks
L0.4 Lint Rules - ESLint/Prettier, project naming conventions
L0.5 AI Issues - Hallucinated imports, placeholders, mock leakage, etc.
```
#### L1: Unit Tests Details (per function/class)
```
L1.1 Happy Path - Normal input → expected output
L1.2 Negative Path - Invalid input → proper error/rejection
L1.3 Edge Cases - null, undefined, empty, boundary values
L1.4 State Changes - Before/after assertions for stateful code
L1.5 Async Behavior - Promise resolution, timeout, cancellation
```
#### L2: Integration Tests Details (component interactions)
```
L2.1 Module Wiring - Dependencies inject correctly
L2.2 API Contracts - Request/response schema validation
L2.3 Database Ops - CRUD operations, transactions, rollback
L2.4 External APIs - Mock external services, retry logic
L2.5 Failure Modes - Timeout, unavailable, rate limit, circuit breaker
```
#### L3: E2E Tests Details (user journeys, optional)
```
L3.1 Critical Paths - Login, checkout, core workflows
L3.2 Cross-Module - Feature spanning multiple modules
L3.3 Performance - Response time, memory usage thresholds
L3.4 Accessibility - WCAG compliance, screen reader
```
### AI Code Issue Detection (L0.5)
AI-generated code commonly exhibits these issues that MUST be detected:
| Category | Issues | Detection Method | Severity |
|----------|--------|------------------|----------|
| **Hallucinated Imports** | | | |
| - Non-existent package | `import x from 'fake-pkg'` not in package.json | Validate against package.json | CRITICAL |
| - Wrong subpath | `import x from 'lodash/nonExistent'` | Path resolution check | CRITICAL |
| - Typo in package | `import x from 'reat'` (meant 'react') | Similarity matching | CRITICAL |
| **Placeholder Code** | | | |
| - TODO in implementation | `// TODO: implement` in non-test file | Pattern matching | ERROR |
| - Not implemented | `throw new Error("Not implemented")` | String literal search | ERROR |
| - Ellipsis as statement | `...` (not spread) | AST analysis | ERROR |
| **Mock Leakage** | | | |
| - Jest in production | `jest.fn()`, `jest.mock()` in `src/` | File path + pattern | CRITICAL |
| - Spy in production | `vi.spyOn()`, `sinon.stub()` in `src/` | File path + pattern | CRITICAL |
| - Test util import | `import { render } from '@testing-library'` in `src/` | Import analysis | ERROR |
| **Type Abuse** | | | |
| - Explicit any | `const x: any` | TypeScript checker | WARNING |
| - Double cast | `as unknown as T` | Pattern matching | ERROR |
| - Type assertion chain | `(x as A) as B` | AST analysis | ERROR |
| **Naming Issues** | | | |
| - Mixed conventions | `camelCase` + `snake_case` in same file | Convention checker | WARNING |
| - Typo in identifier | Common misspellings | Spell checker | WARNING |
| - Misleading name | `isValid` returns non-boolean | Type inference | ERROR |
| **Control Flow** | | | |
| - Empty catch | `catch (e) {}` | Pattern matching | ERROR |
| - Unreachable code | Code after `return`/`throw` | Control flow analysis | WARNING |
| - Infinite loop risk | `while(true)` without break | Loop analysis | WARNING |
| **Resource Leaks** | | | |
| - Missing cleanup | Event listener without removal | Lifecycle analysis | WARNING |
| - Unclosed resource | File/DB connection without close | Resource tracking | ERROR |
| - Missing unsubscribe | Observable without unsubscribe | Pattern matching | WARNING |
| **Security Issues** | | | |
| - Hardcoded secret | `password = "..."`, `apiKey = "..."` | Pattern matching | CRITICAL |
| - Console in production | `console.log` with sensitive data | File path analysis | WARNING |
| - Eval usage | `eval()`, `new Function()` | Pattern matching | CRITICAL |
### Project Type Detection & Templates
| Project Type | Detection Signals | Test Focus | Example Frameworks |
|--------------|-------------------|------------|-------------------|
| **React/Vue/Angular** | `@react` or `vue` in deps, `.jsx/.vue/.ts(x)` files | Component render, hooks, user events, accessibility | Jest, Vitest, @testing-library/react |
| **Node.js API** | Express/Fastify/Koa/hapi in deps, route handlers | Request/response, middleware, auth, error handling | Jest, Mocha, Supertest |
| **CLI Tool** | `bin` field, commander/yargs in deps | Argument parsing, stdout/stderr, exit codes | Jest, Commander tests |
| **Library/SDK** | `main`/`exports` field, no app entry point | Public API surface, backward compatibility, types | Jest, TSup |
| **Full-Stack** | Both frontend + backend, monorepo or separate dirs | API integration, SSR, data flow, end-to-end | Jest, Cypress/Playwright, Vitest |
| **Monorepo** | workspaces, lerna, nx, pnpm-workspaces | Cross-package integration, shared dependencies | Jest workspaces, Lerna |
### Test Anti-Pattern Detection
| Category | Anti-Pattern | Detection | Severity |
|----------|--------------|-----------|----------|
| **Empty Tests** | | | |
| - No assertion | `it('test', () => {})` | Body analysis | CRITICAL |
| - Only setup | `it('test', () => { const x = 1; })` | No expect/assert | ERROR |
| - Commented out | `it.skip('test', ...)` | Skip detection | WARNING |
| **Weak Assertions** | | | |
| - toBeDefined only | `expect(x).toBeDefined()` | Pattern match | WARNING |
| - toBeTruthy only | `expect(x).toBeTruthy()` | Pattern match | WARNING |
| - Snapshot abuse | Many `.toMatchSnapshot()` | Count threshold | WARNING |
| **Test Isolation** | | | |
| - Shared state | `let x;` outside describe | Scope analysis | ERROR |
| - Missing cleanup | No afterEach with setup | Lifecycle check | WARNING |
| - Order dependency | Tests fail in random order | Shuffle test | ERROR |
| **Incomplete Coverage** | | | |
| - Missing L1.2 | No negative path test | Pattern scan | ERROR |
| - Missing L1.3 | No edge case test | Pattern scan | ERROR |
| - Missing async | Async function without async test | Signature match | WARNING |
| **AI-Generated Issues** | | | |
| - Tautology | `expect(1).toBe(1)` | Literal detection | CRITICAL |
| - Testing mock | `expect(mockFn).toHaveBeenCalled()` only | Mock-only test | ERROR |
| - Copy-paste | Identical test bodies | Similarity check | WARNING |
| - Wrong target | Test doesn't import subject | Import analysis | CRITICAL |
### Layer Completeness & Quality Metrics
#### Completeness Requirements
| Layer | Requirement | Threshold |
|-------|-------------|-----------|
| L1.1 | Happy path for each exported function | 100% |
| L1.2 | Negative path for functions with validation | 80% |
| L1.3 | Edge cases (null, empty, boundary) | 60% |
| L1.4 | State change tests for stateful code | 80% |
| L1.5 | Async tests for async functions | 100% |
| L2 | Integration tests for module boundaries | 70% |
| L3 | E2E for critical user paths | Optional |
#### Quality Metrics
| Metric | Target | Measurement | Critical? |
|--------|--------|-------------|-----------|
| Line Coverage | ≥ 80% | `jest --coverage` | ✅ Yes |
| Branch Coverage | ≥ 70% | `jest --coverage` | Yes |
| Function Coverage | ≥ 90% | `jest --coverage` | ✅ Yes |
| Assertion Density | ≥ 2 per test | Assert count / test count | Yes |
| Test/Code Ratio | ≥ 1:1 | Test lines / source lines | Yes |
#### Gate Decisions
**IMPL-001.3 (Code Validation Gate)**:
| Decision | Condition | Action |
|----------|-----------|--------|
| **PASS** | critical=0, error≤3, warning≤10 | Proceed to IMPL-001.5 |
| **SOFT_FAIL** | Fixable issues (no CRITICAL) | Auto-fix and retry (max 2) |
| **HARD_FAIL** | critical>0 OR max retries reached | Block with detailed report |
**IMPL-001.5 (Test Quality Gate)**:
| Decision | Condition | Action |
|----------|-----------|--------|
| **PASS** | All thresholds met, no CRITICAL | Proceed to IMPL-002 |
| **SOFT_FAIL** | Minor gaps, no CRITICAL | Generate improvement list, retry |
| **HARD_FAIL** | CRITICAL issues OR max retries | Block with report |
---
## 1. Input & Execution
### 1.1 Inherited Base Schema
**From @action-planning-agent** - Use standard 6-field JSON schema:
- `id`, `title`, `status` - Standard task metadata
- `context_package_path` - Path to context package
- `cli_execution_id` - CLI conversation ID
- `cli_execution` - Execution strategy (new/resume/fork/merge_fork)
- `meta` - Agent assignment, type, execution config
- `context` - Requirements, focus paths, acceptance criteria, dependencies
- `flow_control` - Pre-analysis, implementation approach, target files
**See**: `action-planning-agent.md` sections 2.1-2.3 for complete base schema specifications.
### 1.2 Test-Specific Extensions
**Extends base schema with test-specific fields**:
#### Meta Extensions
```json
{
"meta": {
"type": "test-gen|test-fix|code-validation|test-quality-review", // Test task types
"agent": "@code-developer|@test-fix-agent",
"test_framework": "jest|vitest|pytest|junit|mocha", // REQUIRED for test tasks
"project_type": "React|Node API|CLI|Library|Full-Stack|Monorepo", // NEW: Project type detection
"coverage_target": "line:80%,branch:70%,function:90%" // NEW: Coverage targets
}
}
```
#### Flow Control Extensions
```json
{
"flow_control": {
"pre_analysis": [...], // From base schema
"implementation_approach": [...], // From base schema
"target_files": [...], // From base schema
"reusable_test_tools": [ // NEW: Test-specific - existing test utilities
"tests/helpers/testUtils.ts",
"tests/fixtures/mockData.ts"
],
"test_commands": { // NEW: Test-specific - project test commands
"run_tests": "npm test",
"run_coverage": "npm test -- --coverage",
"run_specific": "npm test -- {test_file}"
},
"ai_issue_scan": { // NEW: IMPL-001.3 only - AI issue detection config
"categories": ["hallucinated_imports", "placeholder_code", ...],
"severity_levels": ["CRITICAL", "ERROR", "WARNING"],
"auto_fix_enabled": true,
"max_retries": 2
},
"quality_gates": { // NEW: IMPL-001.5 only - Test quality thresholds
"layer_completeness": { "L1.1": "100%", "L1.2": "80%", ... },
"anti_patterns": ["empty_tests", "weak_assertions", ...],
"coverage_thresholds": { "line": "80%", "branch": "70%", ... }
}
}
}
```
### 1.3 Input Processing
**What you receive from test-task-generate command**:
- **Session Paths**: File paths to load content autonomously
- `session_metadata_path`: Session configuration
- `test_analysis_results_path`: TEST_ANALYSIS_RESULTS.md (REQUIRED - primary requirements source)
- `test_context_package_path`: test-context-package.json
- `context_package_path`: context-package.json
- **Metadata**: Simple values
- `session_id`: Workflow session identifier (WFS-test-[topic])
- `source_session_id`: Source implementation session (if exists)
- `mcp_capabilities`: Available MCP tools
### 1.2 Execution Flow
#### Phase 1: Context Loading & Assembly
```
1. Load TEST_ANALYSIS_RESULTS.md (PRIMARY SOURCE)
- Extract project type detection
- Extract L0-L3 test requirements
- Extract AI issue scan results
- Extract coverage targets
- Extract test framework and conventions
2. Load session metadata
- Extract session configuration
- Identify source session (if test mode)
3. Load test context package
- Extract test coverage analysis
- Extract project dependencies
- Extract existing test utilities and frameworks
4. Assess test generation complexity
- Simple: <5 files, L1-L2 only
- Medium: 5-15 files, L1-L3
- Complex: >15 files, all layers, cross-module dependencies
```
#### Phase 2: Task JSON Generation
Generate minimum 4 tasks using **base 6-field schema + test extensions**:
**Base Schema (inherited from @action-planning-agent)**:
```json
{
"id": "IMPL-N",
"title": "Task description",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-N",
"cli_execution": { "strategy": "new|resume|fork|merge_fork", ... },
"meta": { ... }, // See section 1.2 for test extensions
"context": { ... }, // See action-planning-agent.md section 2.2
"flow_control": { ... } // See section 1.2 for test extensions
}
```
**Task 1: IMPL-001.json (Test Generation)**
```json
{
"id": "IMPL-001",
"title": "Generate L1-L3 tests for {module}",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/test-context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-001",
"cli_execution": {
"strategy": "new"
},
"meta": {
"type": "test-gen",
"agent": "@code-developer",
"test_framework": "jest", // From TEST_ANALYSIS_RESULTS.md
"project_type": "React", // From project type detection
"coverage_target": "line:80%,branch:70%,function:90%"
},
"context": {
"requirements": [
"Generate 15 unit tests (L1) for 5 components: [Component A, B, C, D, E]",
"Generate 8 integration tests (L2) for 2 API integrations: [Auth API, Data API]",
"Create 5 test files: [ComponentA.test.tsx, ComponentB.test.tsx, ...]"
],
"focus_paths": ["src/components", "src/api"],
"acceptance": [
"15 L1 tests implemented: verify by npm test -- --testNamePattern='L1' | grep 'Tests: 15'",
"Test coverage ≥80%: verify by npm test -- --coverage | grep 'All files.*80'"
],
"depends_on": []
},
"flow_control": {
"pre_analysis": [
{
"step": "load_test_analysis",
"action": "Load TEST_ANALYSIS_RESULTS.md",
"commands": ["Read('.workflow/active/WFS-test-{session}/.process/TEST_ANALYSIS_RESULTS.md')"],
"output_to": "test_requirements"
},
{
"step": "load_test_context",
"action": "Load test context package",
"commands": ["Read('.workflow/active/WFS-test-{session}/.process/test-context-package.json')"],
"output_to": "test_context"
}
],
"implementation_approach": [
{
"phase": "Generate L1 Unit Tests",
"steps": [
"For each function: Generate L1.1 (happy path), L1.2 (negative), L1.3 (edge cases), L1.4 (state), L1.5 (async)"
],
"test_patterns": "render(), screen.getByRole(), userEvent.click(), waitFor()"
},
{
"phase": "Generate L2 Integration Tests",
"steps": [
"Generate L2.1 (module wiring), L2.2 (API contracts), L2.5 (failure modes)"
],
"test_patterns": "supertest(app), expect(res.status), expect(res.body)"
}
],
"target_files": [
"tests/components/ComponentA.test.tsx",
"tests/components/ComponentB.test.tsx",
"tests/api/auth.integration.test.ts"
],
"reusable_test_tools": [
"tests/helpers/renderWithProviders.tsx",
"tests/fixtures/mockData.ts"
],
"test_commands": {
"run_tests": "npm test",
"run_coverage": "npm test -- --coverage"
}
}
}
```
**Task 2: IMPL-001.3-validation.json (Code Validation Gate)**
```json
{
"id": "IMPL-001.3",
"title": "Code validation gate - AI issue detection",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/test-context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-001.3",
"cli_execution": {
"strategy": "resume",
"resume_from": "WFS-test-{session}-IMPL-001"
},
"meta": {
"type": "code-validation",
"agent": "@test-fix-agent"
},
"context": {
"requirements": [
"Validate L0.1-L0.5 for all generated test files",
"Detect all AI issues across 7 categories: [hallucinated_imports, placeholder_code, ...]",
"Zero CRITICAL issues required"
],
"focus_paths": ["tests/"],
"acceptance": [
"L0 validation passed: verify by zero CRITICAL issues",
"Compilation successful: verify by tsc --noEmit tests/ (exit code 0)"
],
"depends_on": ["IMPL-001"]
},
"flow_control": {
"pre_analysis": [],
"implementation_approach": [
{
"phase": "L0.1 Compilation Check",
"validation": "tsc --noEmit tests/"
},
{
"phase": "L0.2 Import Validity",
"validation": "Check all imports against package.json and node_modules"
},
{
"phase": "L0.5 AI Issue Detection",
"validation": "Scan for all 7 AI issue categories with severity levels"
}
],
"target_files": [],
"ai_issue_scan": {
"categories": [
"hallucinated_imports",
"placeholder_code",
"mock_leakage",
"type_abuse",
"naming_issues",
"control_flow",
"resource_leaks",
"security_issues"
],
"severity_levels": ["CRITICAL", "ERROR", "WARNING"],
"auto_fix_enabled": true,
"max_retries": 2,
"thresholds": {
"critical": 0,
"error": 3,
"warning": 10
}
}
}
}
```
**Task 3: IMPL-001.5-review.json (Test Quality Gate)**
```json
{
"id": "IMPL-001.5",
"title": "Test quality gate - anti-patterns and coverage",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/test-context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-001.5",
"cli_execution": {
"strategy": "resume",
"resume_from": "WFS-test-{session}-IMPL-001.3"
},
"meta": {
"type": "test-quality-review",
"agent": "@test-fix-agent"
},
"context": {
"requirements": [
"Validate layer completeness: L1.1 100%, L1.2 80%, L1.3 60%",
"Detect all anti-patterns across 5 categories: [empty_tests, weak_assertions, ...]",
"Verify coverage: line ≥80%, branch ≥70%, function ≥90%"
],
"focus_paths": ["tests/"],
"acceptance": [
"Coverage ≥80%: verify by npm test -- --coverage | grep 'All files.*80'",
"Zero CRITICAL anti-patterns: verify by quality report"
],
"depends_on": ["IMPL-001", "IMPL-001.3"]
},
"flow_control": {
"pre_analysis": [],
"implementation_approach": [
{
"phase": "Static Analysis",
"validation": "Lint test files, check anti-patterns"
},
{
"phase": "Coverage Analysis",
"validation": "Calculate coverage percentage, identify gaps"
},
{
"phase": "Quality Metrics",
"validation": "Verify thresholds, layer completeness"
}
],
"target_files": [],
"quality_gates": {
"layer_completeness": {
"L1.1": "100%",
"L1.2": "80%",
"L1.3": "60%",
"L1.4": "80%",
"L1.5": "100%",
"L2": "70%"
},
"anti_patterns": [
"empty_tests",
"weak_assertions",
"test_isolation",
"incomplete_coverage",
"ai_generated_issues"
],
"coverage_thresholds": {
"line": "80%",
"branch": "70%",
"function": "90%"
}
}
}
}
```
**Task 4: IMPL-002.json (Test Execution & Fix)**
```json
{
"id": "IMPL-002",
"title": "Test execution and fix cycle",
"status": "pending",
"context_package_path": ".workflow/active/WFS-test-{session}/.process/test-context-package.json",
"cli_execution_id": "WFS-test-{session}-IMPL-002",
"cli_execution": {
"strategy": "resume",
"resume_from": "WFS-test-{session}-IMPL-001.5"
},
"meta": {
"type": "test-fix",
"agent": "@test-fix-agent"
},
"context": {
"requirements": [
"Execute all tests and fix failures until pass rate ≥95%",
"Maximum 5 fix iterations",
"Use Gemini for diagnosis, agent for fixes"
],
"focus_paths": ["tests/", "src/"],
"acceptance": [
"All tests pass: verify by npm test (exit code 0)",
"Pass rate ≥95%: verify by test output"
],
"depends_on": ["IMPL-001", "IMPL-001.3", "IMPL-001.5"]
},
"flow_control": {
"pre_analysis": [],
"implementation_approach": [
{
"phase": "Initial Test Execution",
"command": "npm test"
},
{
"phase": "Iterative Fix Cycle",
"steps": [
"Diagnose failures with Gemini",
"Apply fixes via agent or CLI",
"Re-run tests",
"Repeat until pass rate ≥95% or max iterations"
],
"max_iterations": 5
}
],
"target_files": [],
"test_fix_cycle": {
"max_iterations": 5,
"diagnosis_tool": "gemini",
"fix_mode": "agent",
"exit_conditions": ["all_tests_pass", "max_iterations_reached"]
}
}
}
```
#### Phase 3: Document Generation
```
1. Create IMPL_PLAN.md (test-specific variant)
- frontmatter: workflow_type="test_session", test_framework, coverage_targets
- Test Generation Phase: L1-L3 layer breakdown
- Quality Gates: IMPL-001.3 and IMPL-001.5 specifications
- Test-Fix Cycle: Iteration strategy with diagnosis and fix modes
- Source Session Context: If exists (from source_session_id)
2. Create TODO_LIST.md
- Hierarchical structure with test phase containers
- Links to task JSONs with status markers
- Test layer indicators (L0, L1, L2, L3)
- Quality gate indicators (validation, review)
```
---
## 2. Output Validation
### Task JSON Validation
**IMPL-001 Requirements**:
- All L1.1-L1.5 tests explicitly defined for each target function
- Project type template correctly applied
- Reusable test tools and test commands included
- Implementation approach includes all 3 phases (L1, L2, L3)
**IMPL-001.3 Requirements**:
- All 7 AI issue categories included
- Severity levels properly assigned
- Auto-fix logic for ERROR and below
- Acceptance criteria references zero CRITICAL rule
**IMPL-001.5 Requirements**:
- Layer completeness thresholds: L1.1 100%, L1.2 80%, L1.3 60%
- All 5 anti-pattern categories included
- Coverage metrics: Line 80%, Branch 70%, Function 90%
- Acceptance criteria references all thresholds
**IMPL-002 Requirements**:
- Depends on: IMPL-001, IMPL-001.3, IMPL-001.5 (sequential)
- Max iterations: 5
- Diagnosis tool: Gemini
- Exit conditions: all_tests_pass OR max_iterations_reached
### Quality Standards
Hard Constraints:
- Task count: minimum 4, maximum 18
- All requirements quantified from TEST_ANALYSIS_RESULTS.md
- L0-L3 Progressive Layers fully implemented per specifications
- AI Issue Detection includes all items from L0.5 checklist
- Project Type Template correctly applied
- Test Anti-Patterns validation rules implemented
- Layer Completeness Thresholds met
- Quality Metrics targets: Line 80%, Branch 70%, Function 90%
---
## 3. Success Criteria
- All test planning documents generated successfully
- Task count reported: minimum 4
- Test framework correctly detected and reported
- Coverage targets clearly specified: L0 zero errors, L1 80%+, L2 70%+
- L0-L3 layers explicitly defined in IMPL-001 task
- AI issue detection configured in IMPL-001.3
- Quality gates with measurable thresholds in IMPL-001.5
- Source session status reported (if applicable)

View File

@@ -28,8 +28,6 @@ You are a test context discovery specialist focused on gathering test coverage i
## Tool Arsenal
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
### 1. Session & Implementation Context
**Tools**:
- `Read()` - Load session metadata and implementation summaries
@@ -38,10 +36,10 @@ You are a test context discovery specialist focused on gathering test coverage i
**Use**: Phase 1 source context loading
### 2. Test Coverage Discovery
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="search_files", query="*.test.*")` - Find test files
- `mcp__ccw-tools__codex_lens(action="search", query="pattern")` - Search test patterns
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Analyze test structure
**Primary (Code-Index MCP)**:
- `mcp__code-index__find_files(pattern)` - Find test files (*.test.*, *.spec.*)
- `mcp__code-index__search_code_advanced()` - Search test patterns
- `mcp__code-index__get_file_summary()` - Analyze test structure
**Fallback (CLI)**:
- `rg` (ripgrep) - Fast test pattern search
@@ -122,10 +120,9 @@ for (const summary_path of summaries) {
**2.1 Existing Test Discovery**:
```javascript
// Method 1: CodexLens MCP (preferred)
const test_files = mcp__ccw-tools__codex_lens({
action: "search_files",
query: "*.test.* OR *.spec.* OR test_*.py OR *_test.go"
// Method 1: Code-Index MCP (preferred)
const test_files = mcp__code-index__find_files({
patterns: ["*.test.*", "*.spec.*", "*test_*.py", "*_test.go"]
});
// Method 2: Fallback CLI

View File

@@ -51,11 +51,6 @@ You will execute tests across multiple layers, analyze failures with layer-speci
## Execution Process
### 0. Task Status: Mark In Progress
```bash
jq --arg ts "$(date -Iseconds)" '.status="in_progress" | .status_history += [{"from":.status,"to":"in_progress","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
```
### Flow Control Execution
When task JSON contains `flow_control` field, execute preparation and implementation steps systematically.
@@ -64,14 +59,6 @@ When task JSON contains `flow_control` field, execute preparation and implementa
2. **Variable Substitution**: Use `[variable_name]` to reference previous outputs
3. **Error Handling**: Follow step-specific strategies (`skip_optional`, `fail`, `retry_once`)
**Command-to-Tool Mapping** (for pre_analysis commands):
```
"Read(path)" → Read tool: Read(file_path=path)
"bash(command)" → Bash tool: Bash(command=command)
"Search(pattern,path)" → Grep tool: Grep(pattern=pattern, path=path)
"Glob(pattern)" → Glob tool: Glob(pattern=pattern)
```
**Implementation Approach** (`flow_control.implementation_approach`):
When task JSON contains implementation_approach array:
1. **Sequential Execution**: Process steps in order, respecting `depends_on` dependencies
@@ -83,15 +70,9 @@ When task JSON contains implementation_approach array:
- `description`: Detailed description with variable references
- `modification_points`: Test and code modification targets
- `logic_flow`: Test-fix iteration sequence
- `command`: Optional CLI command (only when explicitly specified)
- `depends_on`: Array of step numbers that must complete first
- `output`: Variable name for this step's output
5. **Execution Mode Selection**:
- Based on `meta.execution_config.method`:
- `"cli"` → Build CLI command via buildCliHandoffPrompt() and execute via Bash tool
- `"agent"` (default) → Agent direct execution:
- Parse `modification_points` as files to modify
- Follow `logic_flow` for test-fix iteration
- Use test_commands from flow_control for test execution
### 1. Context Assessment & Test Discovery
@@ -102,18 +83,17 @@ When task JSON contains implementation_approach array:
- L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
- L2 (Integration): `tests/integration/`, `*.integration.test.*`
- L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
- **context-package.json** : Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- Identify test commands from project configuration
```bash
# Detect test framework and multi-layered commands
if [ -f "package.json" ]; then
# Extract layer-specific test commands using Read tool or jq
PKG_JSON=$(cat package.json)
LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
# Extract layer-specific test commands
LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
LINT_CMD="ruff check . || flake8 ."
UNIT_CMD="pytest tests/unit/"
@@ -334,17 +314,9 @@ When generating test results for orchestrator (saved to `.process/test-results.j
- Pass rate >= 95% + any "high" or "medium" criticality failures → ⚠️ NEEDS FIX (continue iteration)
- Pass rate < 95% → ❌ FAILED (continue iteration or abort)
## Task Status Update
**Upon task completion**, update task JSON status:
```bash
jq --arg ts "$(date -Iseconds)" '.status="completed" | .status_history += [{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
```
## Important Reminders
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- **Execute tests first** - Understand what's failing before fixing
- **Diagnose thoroughly** - Find root cause, not just symptoms
- **Fix minimally** - Change only what's needed to pass tests

View File

@@ -284,8 +284,6 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes
### ALWAYS
**Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
**W3C Format Compliance**: ✅ Include $schema in all token files | ✅ Use $type metadata for all tokens | ✅ Use $value wrapper for color (light/dark), duration, easing | ✅ Validate token structure against W3C spec
**Pattern Recognition**: ✅ Identify pattern from [TASK_TYPE_IDENTIFIER] first | ✅ Apply pattern-specific execution rules | ✅ Follow autonomy level
@@ -320,7 +318,7 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes
### design-tokens.json
**Template Reference**: `~/.ccw/workflows/cli-templates/ui-design/systems/design-tokens.json`
**Template Reference**: `~/.claude/workflows/cli-templates/ui-design/systems/design-tokens.json`
**Format**: W3C Design Tokens Community Group Specification
@@ -385,7 +383,7 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes
### layout-templates.json
**Template Reference**: `~/.ccw/workflows/cli-templates/ui-design/systems/layout-templates.json`
**Template Reference**: `~/.claude/workflows/cli-templates/ui-design/systems/layout-templates.json`
**Optimization**: Unified structure combining DOM and styling into single hierarchy
@@ -426,7 +424,7 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes
### animation-tokens.json
**Template Reference**: `~/.ccw/workflows/cli-templates/ui-design/systems/animation-tokens.json`
**Template Reference**: `~/.claude/workflows/cli-templates/ui-design/systems/animation-tokens.json`
**Structure Overview**:
- **duration**: instant (0ms), fast (150ms), normal (300ms), slow (500ms), slower (1000ms)

View File

@@ -120,11 +120,7 @@ Before completing any task, verify:
- Make assumptions - verify with existing materials
- Skip quality verification steps
**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
**ALWAYS:**
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Verify resource/dependency existence before referencing
- Execute tasks systematically and incrementally
- Test and validate work thoroughly

View File

@@ -1,18 +0,0 @@
{
"version": "1.0.0",
"defaultTool": "gemini",
"promptFormat": "plain",
"smartContext": {
"enabled": false,
"maxFiles": 10
},
"nativeResume": true,
"recursiveQuery": true,
"cache": {
"injectionMode": "auto",
"defaultPrefix": "",
"defaultSuffix": ""
},
"codeIndexMcp": "ace",
"$schema": "./cli-settings.schema.json"
}

File diff suppressed because it is too large Load Diff

View File

@@ -1,364 +0,0 @@
---
name: ccw-debug
description: Debug coordinator - analyze issue, select debug strategy, execute debug workflow in main process
argument-hint: "[--mode cli|debug|test|bidirectional] [--yes|-y] \"bug description\""
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Bash(*)
---
# CCW-Debug Command - Debug Coordinator
Debug orchestrator: issue analysis → strategy selection → debug execution.
## Core Concept: Debug Units (调试单元)
**Definition**: Debug commands grouped into logical units for different root cause strategies.
**Debug Units**:
| Unit Type | Pattern | Example |
|-----------|---------|---------|
| **Quick Diagnosis** | CLI analysis only | cli → recommendation |
| **Hypothesis-Driven** | Debug exploration | debug-with-file → apply fix |
| **Test-Driven** | Test generation/iteration | test-fix-gen → test-cycle-execute |
| **Convergence** | Parallel debug + test | debug + test (parallel) |
**Atomic Rules**:
1. CLI mode: Analysis only, recommendation for user action
2. Debug/Test modes: Full cycle (analysis → fix → validate)
3. Bidirectional mode: Parallel execution, merge findings
## Execution Model
**Synchronous (Main Process)**: Debug commands execute via Skill, blocking until complete.
```
User Input → Analyze Issue → Select Strategy → [Confirm] → Execute Debug
Skill (blocking)
Update TodoWrite
Generate Fix/Report
```
## 5-Phase Workflow
### Phase 1: Analyze Issue
**Input** → Extract (description, symptoms) → Assess (error_type, clarity, complexity, scope) → **Analysis**
| Field | Values |
|-------|--------|
| error_type | syntax \| logic \| async \| integration \| unknown |
| clarity | 0-3 (≥2 = clear) |
| complexity | low \| medium \| high |
| scope | single-module \| cross-module \| system |
#### Mode Detection (Priority Order)
```
Input Keywords → Mode
─────────────────────────────────────────────────────────
quick|fast|immediate|recommendation|suggest → cli
test|fail|coverage|pass → test
multiple|system|distributed|concurrent → bidirectional
(default) → debug
```
**Output**: `IssueType: [type] | Clarity: [clarity]/3 | Complexity: [complexity] | RecommendedMode: [mode]`
---
### Phase 1.5: Issue Clarification (if clarity < 2)
```
Analysis → Check clarity ≥ 2?
YES → Continue to Phase 2
NO → Ask Questions → Update Analysis
```
**Questions Asked**: Error Symptoms, When It Occurs, Affected Components, Reproducibility
---
### Phase 2: Select Debug Strategy & Build Command Chain
```
Analysis → Detect Mode (keywords) → Build Command Chain → Debug Workflow
```
#### Command Chain Mapping
| Mode | Command Chain | Execution |
|------|---------------|-----------|
| **cli** | ccw cli --mode analysis --rule analysis-diagnose-bug-root-cause | Analysis only |
| **debug** | debug-with-file → test-fix-gen → test-cycle-execute | Sequential |
| **test** | test-fix-gen → test-cycle-execute | Sequential |
| **bidirectional** | (debug-with-file ∥ test-fix-gen ∥ test-cycle-execute) → merge-findings | Parallel → Merge |
**Note**: `∥` = parallel execution
**Output**: `Mode: [mode] | Strategy: [strategy] | Commands: [1. /cmd1 2. /cmd2]`
---
### Phase 3: User Confirmation
```
Debug Chain → Show Strategy → Ask User → User Decision:
- ✓ Confirm → Continue to Phase 4
- ⚙ Change Mode → Select Different Mode (back to Phase 2)
- ✗ Cancel → Abort
```
---
### Phase 4: Setup TODO Tracking & Status File
```
Debug Chain → Create Session Dir → Initialize Tracking → Tracking State
```
**Session Structure**:
```
Session ID: CCWD-{issue-slug}-{date}
Session Dir: .workflow/.ccw-debug/{session_id}/
TodoWrite:
CCWD:{mode}: [1/n] /command1 [in_progress]
CCWD:{mode}: [2/n] /command2 [pending]
...
status.json:
{
"session_id": "CCWD-...",
"mode": "debug|cli|test|bidirectional",
"status": "running",
"parallel_execution": false|true,
"issue": { description, error_type, clarity, complexity },
"command_chain": [...],
"findings": { debug, test, merged }
}
```
**Output**:
- TODO: `-> CCWD:debug: [1/3] /workflow:debug-with-file | ...`
- Status File: `.workflow/.ccw-debug/{session_id}/status.json`
---
### Phase 5: Execute Debug Chain
#### For Bidirectional Mode (Parallel Execution)
```
Start Commands (parallel) → Execute debug-with-file ∥ test-fix-gen ∥ test-cycle-execute
Collect Results → Merge Findings
Update status.json (findings.merged)
Mark completed
```
#### For Sequential Modes (cli, debug, test)
```
Start Command → Update status (running) → Execute via Skill → Result
CLI Mode? → YES → Ask Escalation → Escalate or Done
→ NO → Continue
Update status (completed) → Next Command
Error? → YES → Ask Action (Retry/Skip/Abort)
→ NO → Continue
```
#### Error Handling Pattern
```
Command Error → Update status (failed) → Ask User:
- Retry → Re-execute (same index)
- Skip → Continue next command
- Abort → Stop execution
```
#### CLI Mode Escalation
```
CLI Result → Findings.confidence?
High → Present findings → User decides:
• Done (end here)
• Escalate to debug mode
• Escalate to test mode
Low → Recommend escalation
```
---
## Execution Flow Summary
```
User Input
|
Phase 1: Analyze Issue
|-- Extract: description, error_type, clarity, complexity, scope
+-- If clarity < 2 -> Phase 1.5: Clarify Issue
|
Phase 2: Select Debug Strategy & Build Chain
|-- Detect mode: cli | debug | test | bidirectional
|-- Build command chain based on mode
|-- Parallel execution for bidirectional
+-- Consider escalation points (cli → debug/test)
|
Phase 3: User Confirmation (optional)
|-- Show debug strategy
+-- Allow mode change
|
Phase 4: Setup TODO Tracking & Status File
|-- Create todos with CCWD prefix
+-- Initialize .workflow/.ccw-debug/{session_id}/status.json
|
Phase 5: Execute Debug Chain
|-- For sequential modes: execute commands in order
|-- For bidirectional: execute debug + test in parallel
|-- CLI mode: present findings, ask for escalation
|-- Merge findings (bidirectional mode)
+-- Update status and TODO
```
---
## Debug Pipeline Examples
| Issue | Mode | Pipeline |
|-------|------|----------|
| "Login timeout error (quick)" | cli | ccw cli → analysis → (escalate or done) |
| "User login fails intermittently" | debug | debug-with-file → test-gen → test-cycle |
| "Authentication tests failing" | test | test-fix-gen → test-cycle-execute |
| "Multi-module auth + db sync issue" | bidirectional | (debug ∥ test) → merge findings |
**Legend**: `∥` = parallel execution
---
## State Management
### Dual Tracking System
**1. TodoWrite-Based Tracking** (UI Display):
```
// Initial state (debug mode)
CCWD:debug: [1/3] /workflow:debug-with-file [in_progress]
CCWD:debug: [2/3] /workflow:test-fix-gen [pending]
CCWD:debug: [3/3] /workflow:test-cycle-execute [pending]
// CLI mode: only 1 command
CCWD:cli: [1/1] ccw cli --mode analysis [in_progress]
// Bidirectional mode
CCWD:bidirectional: [1/3] /workflow:debug-with-file [in_progress] ∥
CCWD:bidirectional: [2/3] /workflow:test-fix-gen [in_progress] ∥
CCWD:bidirectional: [3/3] /workflow:test-cycle-execute [in_progress]
CCWD:bidirectional: [4/4] merge-findings [pending]
```
**2. Status.json Tracking**: Persistent state for debug monitoring.
**Location**: `.workflow/.ccw-debug/{session_id}/status.json`
**Structure**:
```json
{
"session_id": "CCWD-auth-timeout-2025-02-02",
"mode": "debug",
"status": "running|completed|failed",
"parallel_execution": false,
"created_at": "2025-02-02T10:00:00Z",
"updated_at": "2025-02-02T10:05:00Z",
"issue": {
"description": "User login timeout after 30 seconds",
"error_type": "async",
"clarity": 3,
"complexity": "medium"
},
"command_chain": [
{ "index": 0, "command": "/workflow:debug-with-file", "unit": "sequential", "status": "completed" },
{ "index": 1, "command": "/workflow:test-fix-gen", "unit": "sequential", "status": "in_progress" },
{ "index": 2, "command": "/workflow:test-cycle-execute", "unit": "sequential", "status": "pending" }
],
"current_index": 1,
"findings": {
"debug": { "root_cause": "...", "confidence": "high" },
"test": { "failure_pattern": "..." },
"merged": null
}
}
```
**Status Values**:
- `running`: Debug workflow in progress
- `completed`: Debug finished, fix applied
- `failed`: Debug aborted or unfixable
**Mode-Specific Fields**:
- `cli` mode: No findings field (recommendation-only)
- `debug`/`test`: Single finding source
- `bidirectional`: All three findings + merged result
---
## Key Design Principles
1. **Issue-Focused** - Diagnose root cause, not symptoms
2. **Mode-Driven** - 4 debug strategies for different issues
3. **Parallel Capability** - Bidirectional mode for complex systems
4. **Escalation Support** - CLI → debug/test mode progression
5. **Quick Diagnosis** - CLI mode for immediate recommendations
6. **TODO Tracking** - Use CCWD prefix to isolate debug todos
7. **Finding Convergence** - Merge parallel results for consensus
---
## Usage
```bash
# Auto-select mode
/ccw-debug "Login failed: token validation error"
# Explicit mode selection
/ccw-debug --mode cli "Quick diagnosis: API 500 error"
/ccw-debug --mode debug "User profile sync intermittent failure"
/ccw-debug --mode test "Permission check failing"
/ccw-debug --mode bidirectional "Multi-module auth + cache sync issue"
# Auto mode (skip confirmations)
/ccw-debug --yes "Production hotfix: database connection timeout"
# Resume or escalate from previous session
/ccw-debug --mode debug --source-session CCWD-login-timeout-2025-01-27
```
---
## Mode Selection Decision Tree
```
User calls: /ccw-debug "issue description"
├─ Keywords: "quick", "fast", "recommendation"
│ └─ Mode: CLI (2-5 min analysis, optional escalation)
├─ Keywords: "test", "fail", "coverage"
│ └─ Mode: Test (automated iteration, ≥95% pass)
├─ Keywords: "multiple", "system", "distributed"
│ └─ Mode: Bidirectional (parallel debug + test)
└─ Default → Debug (full hypothesis-driven workflow)
```

View File

@@ -1,456 +0,0 @@
---
name: ccw-plan
description: Planning coordinator - analyze requirements, select planning strategy, execute planning workflow in main process
argument-hint: "[--mode lite|multi-cli|full|plan-verify|replan|cli|issue|rapid-to-issue|brainstorm-with-file|analyze-with-file] [--yes|-y] \"task description\""
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*)
---
# CCW-Plan Command - Planning Coordinator
Planning orchestrator: requirement analysis → strategy selection → planning execution.
## Core Concept: Planning Units (规划单元)
**Definition**: Planning commands are grouped into logical units based on verification requirements and collaboration strategies.
**Planning Units**:
| Unit Type | Pattern | Example |
|-----------|---------|---------|
| **Quick Planning** | plan-cmd (no verify) | lite-plan |
| **Verified Planning** | plan-cmd → verify-cmd | plan → plan-verify |
| **Collaborative Planning** | multi-cli-plan (implicit verify) | multi-cli-plan |
| **With-File Planning** | brainstorm-with-file or analyze-with-file | brainstorm + plan options |
| **CLI-Assisted Planning** | ccw cli (analysis) → recommendations | quick analysis + decision |
| **Issue Workflow Planning** | plan → issue workflow (discover/queue/execute) | rapid-to-issue bridge |
**Atomic Rules**:
1. Lite mode: No verification (fast iteration)
2. Plan-verify mode: Mandatory quality gate
3. Multi-cli/Full mode: Optional verification (via --skip-verify flag)
4. With-File modes: Self-contained iteration with built-in post-completion options
5. CLI mode: Quick analysis, user-driven decisions
6. Issue modes: Planning integrated into issue workflow lifecycle
## Execution Model
**Synchronous (Main Process)**: Planning commands execute via Skill, blocking until complete.
```
User Input → Analyze Requirements → Select Strategy → [Confirm] → Execute Planning
Skill (blocking)
Update TodoWrite
Generate Artifacts
```
## 5-Phase Workflow
### Phase 1: Analyze Requirements
**Input** → Extract (goal, scope, constraints) → Assess (complexity, clarity, criticality) → **Analysis**
| Field | Values |
|-------|--------|
| complexity | low \| medium \| high |
| clarity | 0-3 (≥2 = clear) |
| criticality | normal \| high \| critical |
| scope | single-module \| cross-module \| system \| batch-issues |
**Output**: `Type: [task_type] | Goal: [goal] | Complexity: [complexity] | Clarity: [clarity]/3 | Criticality: [criticality]`
---
### Phase 1.5: Requirement Clarification (if clarity < 2)
```
Analysis → Check clarity ≥ 2?
YES → Continue to Phase 2
NO → Ask Questions → Update Analysis
```
**Questions Asked**: Goal (Create/Fix/Optimize/Analyze), Scope (Single file/Module/Cross-module/System), Constraints (Backward compat/Skip tests/Urgent hotfix)
---
### Phase 2: Select Planning Strategy & Build Command Chain
```
Analysis → Detect Mode (keywords) → Build Command Chain → Planning Workflow
```
#### Mode Detection (Priority Order)
```
Input Keywords → Mode
───────────────────────────────────────────────────────────────────────────────
quick|fast|immediate|recommendation|suggest → cli
issues?|batch|issue workflow|structured workflow|queue → issue
issue transition|rapid.*issue|plan.*issue|convert.*issue → rapid-to-issue
brainstorm|ideation|头脑风暴|创意|发散思维|multi-perspective → brainstorm-with-file
analyze.*document|explore.*concept|collaborative analysis → analyze-with-file
production|critical|payment|auth → plan-verify
adjust|modify|change plan → replan
uncertain|explore → full
complex|multiple module|integrate → multi-cli
(default) → lite
```
#### Command Chain Mapping
| Mode | Command Chain | Verification | Use Case |
|------|---------------|--------------|----------|
| **cli** | ccw cli --mode analysis --rule planning-* | None | Quick planning recommendation |
| **issue** | /issue:discover → /issue:plan → /issue:queue → /issue:execute | Optional | Batch issue planning & execution |
| **rapid-to-issue** | lite-plan → /issue:convert-to-plan → queue → execute | Optional | Quick planning → Issue workflow bridge |
| **brainstorm-with-file** | /workflow:brainstorm-with-file → (plan/issue options) | Self-contained | Multi-perspective ideation |
| **analyze-with-file** | /workflow:analyze-with-file → (plan/issue options) | Self-contained | Collaborative architecture analysis |
| **lite** | lite-plan | None | Fast simple planning |
| **multi-cli** | multi-cli-plan → [plan-verify] | Optional | Multi-model collaborative planning |
| **full** | brainstorm → plan → [plan-verify] | Optional | Comprehensive brainstorm + planning |
| **plan-verify** | plan → **plan-verify** | **Mandatory** | Production/critical features |
| **replan** | replan | None | Plan refinement/adjustment |
**Note**:
- `[ ]` = optional verification
- **bold** = mandatory quality gate
- With-File modes include built-in post-completion options to create plans/issues
**Output**: `Mode: [mode] | Strategy: [strategy] | Commands: [1. /cmd1 2. /cmd2]`
---
### Phase 3: User Confirmation
```
Planning Chain → Show Strategy → Ask User → User Decision:
- ✓ Confirm → Continue to Phase 4
- ⚙ Adjust → Change Mode (back to Phase 2)
- ✗ Cancel → Abort
```
---
### Phase 4: Setup TODO Tracking & Status File
```
Planning Chain → Create Session Dir → Initialize Tracking → Tracking State
```
**Session Structure**:
```
Session ID: CCWP-{goal-slug}-{date}
Session Dir: .workflow/.ccw-plan/{session_id}/
TodoWrite:
CCWP:{mode}: [1/n] /command1 [in_progress]
CCWP:{mode}: [2/n] /command2 [pending]
...
status.json:
{
"session_id": "CCWP-...",
"mode": "plan-verify",
"status": "running",
"command_chain": [...],
"quality_gate": "pending" // plan-verify mode only
}
```
**Output**:
- TODO: `-> CCWP:plan-verify: [1/2] /workflow:plan | ...`
- Status File: `.workflow/.ccw-plan/{session_id}/status.json`
---
### Phase 5: Execute Planning Chain
```
Start Command → Update status (running) → Execute via Skill → Result
```
#### For Plan-Verify Mode (Quality Gate)
```
Quality Gate → PASS → Mark completed → Next command
↓ FAIL (plan-verify mode)
Ask User → Refine: replan + re-verify
→ Override: continue anyway
→ Abort: stop planning
```
#### Error Handling Pattern
```
Command Error → Update status (failed) → Ask User:
- Retry → Re-execute (same index)
- Skip → Continue next command
- Abort → Stop execution
```
---
## Planning Pipeline Examples
| Input | Mode | Pipeline | Use Case |
|-------|------|----------|----------|
| "Quick: should we use OAuth2?" | cli | ccw cli --mode analysis → recommendation | Immediate planning advice |
| "Plan user login system" | lite | lite-plan | Fast simple planning |
| "Implement OAuth2 auth" | multi-cli | multi-cli-plan → [plan-verify] | Multi-model collaborative planning |
| "Design notification system" | full | brainstorm → plan → [plan-verify] | Comprehensive brainstorm + planning |
| "Payment processing (prod)" | plan-verify | plan → **plan-verify** | Production critical (mandatory gate) |
| "头脑风暴: 用户通知系统重新设计" | brainstorm-with-file | brainstorm-with-file → (plan/issue options) | Multi-perspective ideation |
| "协作分析: 认证架构设计决策" | analyze-with-file | analyze-with-file → (plan/issue options) | Collaborative analysis |
| "Batch plan: handle 10 pending issues" | issue | /issue:discover → plan → queue → execute | Batch issue planning |
| "Plan and create issues" | rapid-to-issue | lite-plan → convert-to-plan → queue → execute | Quick plan → Issue workflow |
| "Update existing plan" | replan | replan | Plan refinement/adjustment |
**Legend**:
- `[ ]` = optional verification
- **bold** = mandatory quality gate
- **With-File modes** include built-in post-completion options to create plans/issues
---
## State Management
### Dual Tracking System
**1. TodoWrite-Based Tracking** (UI Display):
```
// Plan-verify mode (mandatory quality gate)
CCWP:plan-verify: [1/2] /workflow:plan [in_progress]
CCWP:plan-verify: [2/2] /workflow:plan-verify [pending]
// CLI mode (quick recommendations)
CCWP:cli: [1/1] ccw cli --mode analysis [in_progress]
// Issue mode (batch planning)
CCWP:issue: [1/4] /issue:discover [in_progress]
CCWP:issue: [2/4] /issue:plan [pending]
CCWP:issue: [3/4] /issue:queue [pending]
CCWP:issue: [4/4] /issue:execute [pending]
// Rapid-to-issue mode (planning → issue bridge)
CCWP:rapid-to-issue: [1/4] /workflow:lite-plan [in_progress]
CCWP:rapid-to-issue: [2/4] /issue:convert-to-plan [pending]
CCWP:rapid-to-issue: [3/4] /issue:queue [pending]
CCWP:rapid-to-issue: [4/4] /issue:execute [pending]
// Brainstorm-with-file mode (self-contained)
CCWP:brainstorm-with-file: [1/1] /workflow:brainstorm-with-file [in_progress]
// Analyze-with-file mode (self-contained)
CCWP:analyze-with-file: [1/1] /workflow:analyze-with-file [in_progress]
// Lite mode (fast simple planning)
CCWP:lite: [1/1] /workflow:lite-plan [in_progress]
// Multi-CLI mode (collaborative planning)
CCWP:multi-cli: [1/1] /workflow:multi-cli-plan [in_progress]
// Full mode (brainstorm + planning with optional verification)
CCWP:full: [1/2] /workflow:brainstorm [in_progress]
CCWP:full: [2/2] /workflow:plan [pending]
```
**2. Status.json Tracking**: Persistent state for planning monitoring.
**Location**: `.workflow/.ccw-plan/{session_id}/status.json`
**Structure**:
```json
{
"session_id": "CCWP-oauth-auth-2025-02-02",
"mode": "plan-verify",
"status": "running|completed|failed",
"created_at": "2025-02-02T10:00:00Z",
"updated_at": "2025-02-02T10:05:00Z",
"analysis": {
"goal": "Implement OAuth2 authentication",
"complexity": "high",
"clarity_score": 2,
"criticality": "high"
},
"command_chain": [
{ "index": 0, "command": "/workflow:plan", "mandatory": false, "status": "completed" },
{ "index": 1, "command": "/workflow:plan-verify", "mandatory": true, "status": "running" }
],
"current_index": 1,
"quality_gate": "pending|PASS|FAIL"
}
```
**Status Values**:
- `running`: Planning in progress
- `completed`: Planning finished successfully
- `failed`: Planning aborted or quality gate failed
**Quality Gate Values** (plan-verify mode only):
- `pending`: Verification not started
- `PASS`: Plan meets quality standards
- `FAIL`: Plan needs refinement
**Mode-Specific Fields**:
- **plan-verify**: `quality_gate` field (pending|PASS|FAIL)
- **cli**: No command_chain, stores CLI recommendations and user decision
- **issue**: includes issue discovery results and queue configuration
- **rapid-to-issue**: includes plan output and conversion to issue
- **with-file modes**: stores session artifacts and post-completion options
- **other modes**: basic command_chain tracking
---
## Extended Planning Modes
### CLI-Assisted Planning (cli mode)
```
Quick Input → ccw cli --mode analysis --rule planning-* → Recommendations → User Decision:
- ✓ Accept → Create lite-plan from recommendations
- ↗ Escalate → Switch to multi-cli or full mode
- ✗ Done → Stop (recommendation only)
```
**Use Cases**:
- Quick architecture decision questions
- Planning approach recommendations
- Pattern/library selection advice
**CLI Rules** (auto-selected based on context):
- `planning-plan-architecture-design` - Architecture decisions
- `planning-breakdown-task-steps` - Task decomposition
- `planning-design-component-spec` - Component specifications
---
### With-File Planning Workflows
**With-File workflows** provide documented exploration with multi-CLI collaboration, generating comprehensive session artifacts.
| Mode | Purpose | Key Features | Output Folder |
|------|---------|--------------|---------------|
| **brainstorm-with-file** | Multi-perspective ideation | Gemini/Codex/Claude perspectives, diverge-converge | `.workflow/.brainstorm/` |
| **analyze-with-file** | Collaborative architecture analysis | Multi-round Q&A, CLI exploration, documented discussions | `.workflow/.analysis/` |
**Detection Keywords**:
- **brainstorm-with-file**: 头脑风暴, 创意, 发散思维, multi-perspective, ideation
- **analyze-with-file**: 协作分析, 深度理解, collaborative analysis, explore concept
**Characteristics**:
1. **Self-Contained**: Each workflow handles its own iteration loop
2. **Documented Process**: Creates evolving documents (brainstorm.md, discussion.md)
3. **Multi-CLI**: Uses Gemini/Codex/Claude for different perspectives
4. **Built-in Post-Completion**: Offers follow-up options (create plan, create issue, deep dive)
---
### Issue Workflow Integration
| Mode | Purpose | Command Chain | Typical Use |
|------|---------|---------------|-------------|
| **issue** | Batch issue planning | discover → plan → queue → execute | Multiple issues in codebase |
| **rapid-to-issue** | Quick plan → Issue workflow | lite-plan → convert-to-plan → queue → execute | Fast iteration → structured execution |
**Issue Workflow Bridge**:
```
lite-plan (in-memory) → /issue:convert-to-plan → Creates issue JSON
/issue:queue → Form execution queue
/issue:execute → DAG-based parallel execution
```
**When to use Issue Workflow**:
- Need structured multi-stage execution (queue-based)
- Want parallel DAG execution
- Multiple related changes as individual commits
- Converting brainstorm/plan output to executable tasks
---
## Key Design Principles
1. **Planning-Focused** - Pure planning coordination, no execution
2. **Mode-Driven** - 10 planning modes for different needs (lite/multi-cli/full/plan-verify/replan + cli/issue/rapid-to-issue/brainstorm-with-file/analyze-with-file)
3. **CLI Integration** - Quick analysis for immediate recommendations
4. **With-File Support** - Multi-CLI collaboration with documented artifacts
5. **Issue Workflow Bridge** - Seamless transition from planning to structured execution
6. **Quality Gates** - Mandatory verification for production features
7. **Flexible Verification** - Optional for exploration, mandatory for critical features
8. **Progressive Clarification** - Low clarity triggers requirement questions
9. **TODO Tracking** - Use CCWP prefix to isolate planning todos
10. **Handoff Ready** - Generates artifacts ready for execution phase
---
## Usage
```bash
# Auto-select mode (keyword-based detection)
/ccw-plan "Add user authentication"
# Standard planning modes
/ccw-plan --mode lite "Add logout endpoint"
/ccw-plan --mode multi-cli "Implement OAuth2"
/ccw-plan --mode full "Design notification system"
/ccw-plan --mode plan-verify "Payment processing (production)"
/ccw-plan --mode replan --session WFS-auth-2025-01-28
# CLI-assisted planning (quick recommendations)
/ccw-plan --mode cli "Quick: should we use OAuth2 or JWT?"
/ccw-plan --mode cli "Which state management pattern for React app?"
# With-File workflows (multi-CLI collaboration)
/ccw-plan --mode brainstorm-with-file "头脑风暴: 用户通知系统重新设计"
/ccw-plan --mode analyze-with-file "协作分析: 认证架构的设计决策"
# Issue workflow integration
/ccw-plan --mode issue "Batch plan: handle all pending security issues"
/ccw-plan --mode rapid-to-issue "Plan user profile feature and create issue"
# Auto mode (skip confirmations)
/ccw-plan --yes "Quick feature: user profile endpoint"
```
---
## Mode Selection Decision Tree
```
User calls: /ccw-plan "task description"
├─ Keywords: "quick", "fast", "recommendation"
│ └─ Mode: CLI (quick analysis → recommendations)
├─ Keywords: "issue", "batch", "queue"
│ └─ Mode: Issue (batch planning → execution queue)
├─ Keywords: "plan.*issue", "rapid.*issue"
│ └─ Mode: Rapid-to-Issue (lite-plan → issue bridge)
├─ Keywords: "头脑风暴", "brainstorm", "ideation"
│ └─ Mode: Brainstorm-with-file (multi-CLI ideation)
├─ Keywords: "协作分析", "analyze.*document"
│ └─ Mode: Analyze-with-file (collaborative analysis)
├─ Keywords: "production", "critical", "payment"
│ └─ Mode: Plan-Verify (mandatory quality gate)
├─ Keywords: "adjust", "modify", "change plan"
│ └─ Mode: Replan (refine existing plan)
├─ Keywords: "uncertain", "explore"
│ └─ Mode: Full (brainstorm → plan → [verify])
├─ Keywords: "complex", "multiple module"
│ └─ Mode: Multi-CLI (collaborative planning)
└─ Default → Lite (fast simple planning)
```

View File

@@ -1,387 +0,0 @@
---
name: ccw-test
description: Test coordinator - analyze testing needs, select test strategy, execute test workflow in main process
argument-hint: "[--mode gen|fix|verify|tdd] [--yes|-y] \"test description\""
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Bash(*)
---
# CCW-Test Command - Test Coordinator
Test orchestrator: testing needs analysis → strategy selection → test execution.
## Core Concept: Test Units (测试单元)
**Definition**: Test commands grouped into logical units based on testing objectives.
**Test Units**:
| Unit Type | Pattern | Example |
|-----------|---------|---------|
| **Generation Only** | test-gen (no execution) | test-fix-gen |
| **Test + Fix Cycle** | test-gen → test-execute-fix | test-fix-gen → test-cycle-execute |
| **Verification Only** | existing-tests → execute | execute-tests |
| **TDD Cycle** | tdd-plan → tdd-execute → verify | Red-Green-Refactor |
**Atomic Rules**:
1. Gen mode: Generate tests only (no execution)
2. Fix mode: Generate + auto-iteration until ≥95% pass
3. Verify mode: Execute existing tests + report
4. TDD mode: Full Red-Green-Refactor cycle compliance
## Execution Model
**Synchronous (Main Process)**: Test commands execute via Skill, blocking until complete.
```
User Input → Analyze Testing Needs → Select Strategy → [Confirm] → Execute Tests
Skill (blocking)
Update TodoWrite
Generate Tests/Results
```
## 5-Phase Workflow
### Phase 1: Analyze Testing Needs
**Input** → Extract (description, target_module, existing_tests) → Assess (testing_goal, framework, coverage_target) → **Analysis**
| Field | Values |
|-------|--------|
| testing_goal | generate \| fix \| verify \| tdd |
| framework | jest \| vitest \| pytest \| ... |
| coverage_target | 0-100 (default: 80) |
| existing_tests | true \| false |
#### Mode Detection (Priority Order)
```
Input Keywords → Mode
─────────────────────────────────────────────────────────
generate|create|write test|need test → gen
fix|repair|failing|broken → fix
verify|validate|check|run test → verify
tdd|test-driven|test first → tdd
(default) → fix
```
**Output**: `TestingGoal: [goal] | Mode: [mode] | Target: [module] | Framework: [framework]`
---
### Phase 1.5: Testing Clarification (if needed)
```
Analysis → Check testing_goal known?
YES → Check target_module set?
YES → Continue to Phase 2
NO → Ask Questions → Update Analysis
```
**Questions Asked**: Testing Goal, Target Module/Files, Coverage Requirements, Test Framework
---
### Phase 2: Select Test Strategy & Build Command Chain
```
Analysis → Detect Mode (keywords) → Build Command Chain → Test Workflow
```
#### Command Chain Mapping
| Mode | Command Chain | Behavior |
|------|---------------|----------|
| **gen** | test-fix-gen | Generate only, no execution |
| **fix** | test-fix-gen → test-cycle-execute (iterate) | Auto-iteration until ≥95% pass or max iterations |
| **verify** | execute-existing-tests → coverage-report | Execute + report only |
| **tdd** | tdd-plan → execute → tdd-verify | Red-Green-Refactor cycle compliance |
**Note**: `(iterate)` = auto-iteration until pass_rate ≥ 95% or max_iterations reached
**Output**: `Mode: [mode] | Strategy: [strategy] | Commands: [1. /cmd1 2. /cmd2]`
---
### Phase 3: User Confirmation
```
Test Chain → Show Strategy → Ask User → User Decision:
- ✓ Confirm → Continue to Phase 4
- ⚙ Change Mode → Select Different Mode (back to Phase 2)
- ✗ Cancel → Abort
```
---
### Phase 4: Setup TODO Tracking & Status File
```
Test Chain → Create Session Dir → Initialize Tracking → Tracking State
```
**Session Structure**:
```
Session ID: CCWT-{target-module-slug}-{date}
Session Dir: .workflow/.ccw-test/{session_id}/
TodoWrite:
CCWT:{mode}: [1/n] /command1 [in_progress]
CCWT:{mode}: [2/n] /command2 [pending]
...
status.json:
{
"session_id": "CCWT-...",
"mode": "gen|fix|verify|tdd",
"status": "running",
"testing": { description, target_module, framework, coverage_target },
"command_chain": [...],
"test_metrics": { total_tests, passed, failed, pass_rate, iteration_count, coverage }
}
```
**Output**:
- TODO: `-> CCWT:fix: [1/2] /workflow:test-fix-gen | CCWT:fix: [2/2] /workflow:test-cycle-execute`
- Status File: `.workflow/.ccw-test/{session_id}/status.json`
---
### Phase 5: Execute Test Chain
#### For All Modes (Sequential Execution)
```
Start Command → Update status (running) → Execute via Skill → Result
Update test_metrics → Next Command
Error? → YES → Ask Action (Retry/Skip/Abort)
→ NO → Continue
```
#### For Fix Mode (Auto-Iteration)
```
test-fix-gen completes → test-cycle-execute begins
Check pass_rate ≥ 95%?
↓ ↓
YES → Complete NO → Check iteration < max?
↓ ↓
YES → Iteration NO → Complete
| (analyze failures
| generate fix
| re-execute tests)
|
└→ Loop back to pass_rate check
```
#### Error Handling Pattern
```
Command Error → Update status (failed) → Ask User:
- Retry → Re-execute (same index)
- Skip → Continue next command
- Abort → Stop execution
```
#### Test Metrics Update
```
After Each Execution → Collect test_metrics:
- total_tests: number
- passed/failed: count
- pass_rate: percentage
- iteration_count: increment (fix mode)
- coverage: line/branch/function
Update status.json → Update TODO with iteration info (if fix mode)
```
---
## Execution Flow Summary
```
User Input
|
Phase 1: Analyze Testing Needs
|-- Extract: description, testing_goal, target_module, existing_tests
+-- If unclear -> Phase 1.5: Clarify Testing Needs
|
Phase 2: Select Test Strategy & Build Chain
|-- Detect mode: gen | fix | verify | tdd
|-- Build command chain based on mode
+-- Configure iteration limits (fix mode)
|
Phase 3: User Confirmation (optional)
|-- Show test strategy
+-- Allow mode change
|
Phase 4: Setup TODO Tracking & Status File
|-- Create todos with CCWT prefix
+-- Initialize .workflow/.ccw-test/{session_id}/status.json
|
Phase 5: Execute Test Chain
|-- For each command:
| |-- Update status.json (current=running)
| |-- Execute via Skill
| |-- Test-fix cycle: iterate until ≥95% pass or max iterations
| |-- Update test_metrics in status.json
| +-- Update TODO status
+-- Mark status.json as completed
```
---
## Test Pipeline Examples
| Input | Mode | Pipeline | Iteration |
|-------|------|----------|-----------|
| "Generate tests for auth module" | gen | test-fix-gen | No execution |
| "Fix failing authentication tests" | fix | test-fix-gen → test-cycle-execute (iterate) | Max 3 iterations |
| "Run existing test suite" | verify | execute-tests → coverage-report | One-time |
| "Implement user profile with TDD" | tdd | tdd-plan → execute → tdd-verify | Red-Green-Refactor |
**Legend**: `(iterate)` = auto-iteration until ≥95% pass rate
---
## State Management
### Dual Tracking System
**1. TodoWrite-Based Tracking** (UI Display):
```
// Initial state (fix mode)
CCWT:fix: [1/2] /workflow:test-fix-gen [in_progress]
CCWT:fix: [2/2] /workflow:test-cycle-execute [pending]
// During iteration (fix mode, iteration 2/3)
CCWT:fix: [1/2] /workflow:test-fix-gen [completed]
CCWT:fix: [2/2] /workflow:test-cycle-execute [in_progress] (iteration 2/3, pass rate: 78%)
// Gen mode (no execution)
CCWT:gen: [1/1] /workflow:test-fix-gen [in_progress]
// Verify mode (one-time)
CCWT:verify: [1/2] execute-existing-tests [in_progress]
CCWT:verify: [2/2] generate-coverage-report [pending]
// TDD mode (Red-Green-Refactor)
CCWT:tdd: [1/3] /workflow:tdd-plan [in_progress]
CCWT:tdd: [2/3] /workflow:execute [pending]
CCWT:tdd: [3/3] /workflow:tdd-verify [pending]
```
**2. Status.json Tracking**: Persistent state for test monitoring.
**Location**: `.workflow/.ccw-test/{session_id}/status.json`
**Structure**:
```json
{
"session_id": "CCWT-auth-module-2025-02-02",
"mode": "fix",
"status": "running|completed|failed",
"created_at": "2025-02-02T10:00:00Z",
"updated_at": "2025-02-02T10:05:00Z",
"testing": {
"description": "Fix failing authentication tests",
"target_module": "src/auth/**/*.ts",
"framework": "jest",
"coverage_target": 80
},
"command_chain": [
{ "index": 0, "command": "/workflow:test-fix-gen", "unit": "sequential", "status": "completed" },
{ "index": 1, "command": "/workflow:test-cycle-execute", "unit": "test-fix-cycle", "max_iterations": 3, "status": "in_progress" }
],
"current_index": 1,
"test_metrics": {
"total_tests": 42,
"passed": 38,
"failed": 4,
"pass_rate": 90.5,
"iteration_count": 2,
"coverage": {
"line": 82.3,
"branch": 75.6,
"function": 88.1
}
}
}
```
**Status Values**:
- `running`: Test workflow in progress
- `completed`: Tests passing (≥95%) or generation complete
- `failed`: Test workflow aborted
**Test Metrics** (updated during execution):
- `total_tests`: Number of tests executed
- `pass_rate`: Percentage of passing tests (target: ≥95%)
- `iteration_count`: Number of test-fix iterations (fix mode)
- `coverage`: Line/branch/function coverage percentages
---
## Key Design Principles
1. **Testing-Focused** - Pure test coordination, no implementation
2. **Mode-Driven** - 4 test strategies for different needs
3. **Auto-Iteration** - Fix mode iterates until ≥95% pass rate
4. **Metrics Tracking** - Real-time test metrics in status.json
5. **Coverage-Driven** - Coverage targets guide test generation
6. **TODO Tracking** - Use CCWT prefix to isolate test todos
7. **TDD Compliance** - TDD mode enforces Red-Green-Refactor cycle
---
## Usage
```bash
# Auto-select mode
/ccw-test "Test user authentication module"
# Explicit mode selection
/ccw-test --mode gen "Generate tests for payment module"
/ccw-test --mode fix "Fix failing authentication tests"
/ccw-test --mode verify "Validate current test suite"
/ccw-test --mode tdd "Implement user profile with TDD"
# Custom configuration
/ccw-test --mode fix --max-iterations 5 --pass-threshold 98 "Fix all tests"
/ccw-test --target "src/auth/**/*.ts" "Test authentication module"
# Auto mode (skip confirmations)
/ccw-test --yes "Quick test validation"
```
---
## Mode Selection Decision Tree
```
User calls: /ccw-test "test description"
├─ Keywords: "generate", "create", "write test"
│ └─ Mode: Gen (generate only, no execution)
├─ Keywords: "fix", "repair", "failing"
│ └─ Mode: Fix (auto-iterate until ≥95% pass)
├─ Keywords: "verify", "validate", "run test"
│ └─ Mode: Verify (execute existing tests)
├─ Keywords: "tdd", "test-driven", "test first"
│ └─ Mode: TDD (Red-Green-Refactor cycle)
└─ Default → Fix (most common: fix failing tests)
```

View File

@@ -1,666 +0,0 @@
---
name: ccw
description: Main workflow orchestrator - analyze intent, select workflow, execute command chain in main process
argument-hint: "\"task description\""
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*)
---
# CCW Command - Main Workflow Orchestrator
Main process orchestrator: intent analysis → workflow selection → command chain execution.
## Core Concept: Minimum Execution Units (最小执行单元)
**Definition**: A set of commands that must execute together as an atomic group to achieve a meaningful workflow milestone.
**Why This Matters**:
- **Prevents Incomplete States**: Avoid stopping after task generation without execution
- **User Experience**: User gets complete results, not intermediate artifacts requiring manual follow-up
- **Workflow Integrity**: Maintains logical coherence of multi-step operations
**Key Units in CCW**:
| Unit Type | Pattern | Example |
|-----------|---------|---------|
| **Planning + Execution** | plan-cmd → execute-cmd | lite-plan → lite-execute |
| **Testing** | test-gen-cmd → test-exec-cmd | test-fix-gen → test-cycle-execute |
| **Review** | review-cmd → fix-cmd | review-session-cycle → review-cycle-fix |
**Atomic Rules**:
1. CCW automatically groups commands into minimum units - never splits them
2. Pipeline visualization shows units with `【 】` markers
3. Error handling preserves unit boundaries (retry/skip affects whole unit)
## Execution Model
**Synchronous (Main Process)**: Commands execute via Skill in main process, blocking until complete.
```
User Input → Analyze Intent → Select Workflow → [Confirm] → Execute Chain
Skill (blocking)
Update TodoWrite
Next Command...
```
**vs ccw-coordinator**: External CLI execution with background tasks and hook callbacks.
## 5-Phase Workflow
### Phase 1: Analyze Intent
```javascript
function analyzeIntent(input) {
return {
goal: extractGoal(input),
scope: extractScope(input),
constraints: extractConstraints(input),
task_type: detectTaskType(input), // bugfix|feature|tdd|review|exploration|...
complexity: assessComplexity(input), // low|medium|high
clarity_score: calculateClarity(input) // 0-3 (>=2 = clear)
};
}
// Task type detection (priority order)
function detectTaskType(text) {
const patterns = {
'bugfix-hotfix': /urgent|production|critical/ && /fix|bug/,
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm': /brainstorm|ideation|头脑风暴|创意|发散思维|creative thinking|multi-perspective.*think|compare perspectives|探索.*可能/,
'brainstorm-to-issue': /brainstorm.*issue|头脑风暴.*issue|idea.*issue|想法.*issue|从.*头脑风暴|convert.*brainstorm/,
'debug-file': /debug.*document|hypothesis.*debug|troubleshoot.*track|investigate.*log|调试.*记录|假设.*验证|systematic debug|深度调试/,
'analyze-file': /analyze.*document|explore.*concept|understand.*architecture|investigate.*discuss|collaborative analysis|分析.*讨论|深度.*理解|协作.*分析/,
// Standard workflows
'bugfix': /fix|bug|error|crash|fail|debug/,
'issue-batch': /issues?|batch/ && /fix|resolve/,
'issue-transition': /issue workflow|structured workflow|queue|multi-stage/,
'exploration': /uncertain|explore|research|what if/,
'quick-task': /quick|simple|small/ && /feature|function/,
'ui-design': /ui|design|component|style/,
'tdd': /tdd|test-driven|test first/,
'test-fix': /test fail|fix test|failing test/,
'review': /review|code review/,
'documentation': /docs|documentation|readme/
};
for (const [type, pattern] of Object.entries(patterns)) {
if (pattern.test(text)) return type;
}
return 'feature';
}
```
**Output**: `Type: [task_type] | Goal: [goal] | Complexity: [complexity] | Clarity: [clarity_score]/3`
---
### Phase 1.5: Requirement Clarification (if clarity_score < 2)
```javascript
async function clarifyRequirements(analysis) {
if (analysis.clarity_score >= 2) return analysis;
const questions = generateClarificationQuestions(analysis); // Goal, Scope, Constraints
const answers = await AskUserQuestion({ questions });
return updateAnalysis(analysis, answers);
}
```
**Questions**: Goal (Create/Fix/Optimize/Analyze), Scope (Single file/Module/Cross-module/System), Constraints (Backward compat/Skip tests/Urgent hotfix)
---
### Phase 2: Select Workflow & Build Command Chain
```javascript
function selectWorkflow(analysis) {
const levelMap = {
'bugfix-hotfix': { level: 2, flow: 'bugfix.hotfix' },
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm': { level: 4, flow: 'brainstorm-with-file' }, // Multi-perspective ideation
'brainstorm-to-issue': { level: 4, flow: 'brainstorm-to-issue' }, // Brainstorm → Issue workflow
'debug-file': { level: 3, flow: 'debug-with-file' }, // Hypothesis-driven debugging
'analyze-file': { level: 3, flow: 'analyze-with-file' }, // Collaborative analysis
// Standard workflows
'bugfix': { level: 2, flow: 'bugfix.standard' },
'issue-batch': { level: 'Issue', flow: 'issue' },
'issue-transition': { level: 2.5, flow: 'rapid-to-issue' }, // Bridge workflow
'exploration': { level: 4, flow: 'full' },
'quick-task': { level: 1, flow: 'lite-lite-lite' },
'ui-design': { level: analysis.complexity === 'high' ? 4 : 3, flow: 'ui' },
'tdd': { level: 3, flow: 'tdd' },
'test-fix': { level: 3, flow: 'test-fix-gen' },
'review': { level: 3, flow: 'review-cycle-fix' },
'documentation': { level: 2, flow: 'docs' },
'feature': { level: analysis.complexity === 'high' ? 3 : 2, flow: analysis.complexity === 'high' ? 'coupled' : 'rapid' }
};
const selected = levelMap[analysis.task_type] || levelMap['feature'];
return buildCommandChain(selected, analysis);
}
// Build command chain (port-based matching with Minimum Execution Units)
function buildCommandChain(workflow, analysis) {
const chains = {
// Level 1 - Rapid
'lite-lite-lite': [
{ cmd: '/workflow:lite-lite-lite', args: `"${analysis.goal}"` }
],
// Level 2 - Lightweight
'rapid': [
// Unit: Quick Implementation【lite-plan → lite-execute】
{ cmd: '/workflow:lite-plan', args: `"${analysis.goal}"`, unit: 'quick-impl' },
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'quick-impl' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
])
],
// Level 2 Bridge - Lightweight to Issue Workflow
'rapid-to-issue': [
// Unit: Quick Implementation【lite-plan → convert-to-plan】
{ cmd: '/workflow:lite-plan', args: `"${analysis.goal}"`, unit: 'quick-impl-to-issue' },
{ cmd: '/issue:convert-to-plan', args: '--latest-lite-plan -y', unit: 'quick-impl-to-issue' },
// Auto-continue to issue workflow
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '--queue auto' }
],
'bugfix.standard': [
// Unit: Bug Fix【lite-fix → lite-execute】
{ cmd: '/workflow:lite-fix', args: `"${analysis.goal}"`, unit: 'bug-fix' },
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'bug-fix' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
])
],
'bugfix.hotfix': [
{ cmd: '/workflow:lite-fix', args: `--hotfix "${analysis.goal}"` }
],
'multi-cli-plan': [
// Unit: Multi-CLI Planning【multi-cli-plan → lite-execute】
{ cmd: '/workflow:multi-cli-plan', args: `"${analysis.goal}"`, unit: 'multi-cli' },
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'multi-cli' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
])
],
'docs': [
// Unit: Quick Implementation【lite-plan → lite-execute】
{ cmd: '/workflow:lite-plan', args: `"${analysis.goal}"`, unit: 'quick-impl' },
{ cmd: '/workflow:lite-execute', args: '--in-memory', unit: 'quick-impl' }
],
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm-with-file': [
{ cmd: '/workflow:brainstorm-with-file', args: `"${analysis.goal}"` }
// Note: Has built-in post-completion options (create plan, create issue, deep analysis)
],
// Brainstorm-to-Issue workflow (bridge from brainstorm to issue execution)
'brainstorm-to-issue': [
// Note: Assumes brainstorm session already exists, or run brainstorm first
{ cmd: '/issue:from-brainstorm', args: `SESSION="${extractBrainstormSession(analysis)}" --auto` },
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '--queue auto' }
],
'debug-with-file': [
{ cmd: '/workflow:debug-with-file', args: `"${analysis.goal}"` }
// Note: Self-contained with hypothesis-driven iteration and Gemini validation
],
'analyze-with-file': [
{ cmd: '/workflow:analyze-with-file', args: `"${analysis.goal}"` }
// Note: Self-contained with multi-round discussion and CLI exploration
],
// Level 3 - Standard
'coupled': [
// Unit: Verified Planning【plan → plan-verify】
{ cmd: '/workflow:plan', args: `"${analysis.goal}"`, unit: 'verified-planning' },
{ cmd: '/workflow:plan-verify', args: '', unit: 'verified-planning' },
// Execution
{ cmd: '/workflow:execute', args: '' },
// Unit: Code Review【review-session-cycle → review-cycle-fix】
{ cmd: '/workflow:review-session-cycle', args: '', unit: 'code-review' },
{ cmd: '/workflow:review-cycle-fix', args: '', unit: 'code-review' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
])
],
'tdd': [
// Unit: TDD Planning + Execution【tdd-plan → execute】
{ cmd: '/workflow:tdd-plan', args: `"${analysis.goal}"`, unit: 'tdd-planning' },
{ cmd: '/workflow:execute', args: '', unit: 'tdd-planning' },
// TDD Verification
{ cmd: '/workflow:tdd-verify', args: '' }
],
'test-fix-gen': [
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
{ cmd: '/workflow:test-fix-gen', args: `"${analysis.goal}"`, unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
],
'review-cycle-fix': [
// Unit: Code Review【review-session-cycle → review-cycle-fix】
{ cmd: '/workflow:review-session-cycle', args: '', unit: 'code-review' },
{ cmd: '/workflow:review-cycle-fix', args: '', unit: 'code-review' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
],
'ui': [
{ cmd: '/workflow:ui-design:explore-auto', args: `"${analysis.goal}"` },
// Unit: Planning + Execution【plan → execute】
{ cmd: '/workflow:plan', args: '', unit: 'plan-execute' },
{ cmd: '/workflow:execute', args: '', unit: 'plan-execute' }
],
// Level 4 - Brainstorm
'full': [
{ cmd: '/workflow:brainstorm:auto-parallel', args: `"${analysis.goal}"` },
// Unit: Verified Planning【plan → plan-verify】
{ cmd: '/workflow:plan', args: '', unit: 'verified-planning' },
{ cmd: '/workflow:plan-verify', args: '', unit: 'verified-planning' },
// Execution
{ cmd: '/workflow:execute', args: '' },
// Unit: Test Validation【test-fix-gen → test-cycle-execute】
{ cmd: '/workflow:test-fix-gen', args: '', unit: 'test-validation' },
{ cmd: '/workflow:test-cycle-execute', args: '', unit: 'test-validation' }
],
// Issue Workflow
'issue': [
{ cmd: '/issue:discover', args: '' },
{ cmd: '/issue:plan', args: '--all-pending' },
{ cmd: '/issue:queue', args: '' },
{ cmd: '/issue:execute', args: '' }
]
};
return chains[workflow.flow] || chains['rapid'];
}
```
**Output**: `Level [X] - [flow] | Pipeline: [...] | Commands: [1. /cmd1 2. /cmd2 ...]`
---
### Phase 3: User Confirmation
```javascript
async function getUserConfirmation(chain) {
const response = await AskUserQuestion({
questions: [{
question: "Execute this command chain?",
header: "Confirm",
options: [
{ label: "Confirm", description: "Start" },
{ label: "Adjust", description: "Modify" },
{ label: "Cancel", description: "Abort" }
]
}]
});
if (response.error === "Cancel") throw new Error("Cancelled");
if (response.error === "Adjust") return await adjustChain(chain);
return chain;
}
```
---
### Phase 4: Setup TODO Tracking & Status File
```javascript
function setupTodoTracking(chain, workflow, analysis) {
const sessionId = `ccw-${Date.now()}`;
const stateDir = `.workflow/.ccw/${sessionId}`;
Bash(`mkdir -p "${stateDir}"`);
const todos = chain.map((step, i) => ({
content: `CCW:${workflow}: [${i + 1}/${chain.length}] ${step.cmd}`,
status: i === 0 ? 'in_progress' : 'pending',
activeForm: `Executing ${step.cmd}`
}));
TodoWrite({ todos });
// Initialize status.json for hook tracking
const state = {
session_id: sessionId,
workflow: workflow,
status: 'running',
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
analysis: analysis,
command_chain: chain.map((step, idx) => ({
index: idx,
command: step.cmd,
status: idx === 0 ? 'running' : 'pending'
})),
current_index: 0
};
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
return { sessionId, stateDir, state };
}
```
**Output**:
- TODO: `-> CCW:rapid: [1/3] /workflow:lite-plan | CCW:rapid: [2/3] /workflow:lite-execute | ...`
- Status File: `.workflow/.ccw/{session_id}/status.json`
---
### Phase 5: Execute Command Chain
```javascript
async function executeCommandChain(chain, workflow, trackingState) {
let previousResult = null;
const { sessionId, stateDir, state } = trackingState;
for (let i = 0; i < chain.length; i++) {
try {
// Update status: mark current as running
state.command_chain[i].status = 'running';
state.current_index = i;
state.updated_at = new Date().toISOString();
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
const fullCommand = assembleCommand(chain[i], previousResult);
const result = await Skill({ skill: fullCommand });
previousResult = { ...result, success: true };
// Update status: mark current as completed, next as running
state.command_chain[i].status = 'completed';
if (i + 1 < chain.length) {
state.command_chain[i + 1].status = 'running';
}
state.updated_at = new Date().toISOString();
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
updateTodoStatus(i, chain.length, workflow, 'completed');
} catch (error) {
// Update status on error
state.command_chain[i].status = 'failed';
state.status = 'error';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
const action = await handleError(chain[i], error, i);
if (action === 'retry') {
state.command_chain[i].status = 'pending';
state.status = 'running';
i--; // Retry
} else if (action === 'abort') {
state.status = 'failed';
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
return { success: false, error: error.message };
}
// 'skip' - continue
state.status = 'running';
}
}
// Mark workflow as completed
state.status = 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/status.json`, JSON.stringify(state, null, 2));
return { success: true, completed: chain.length, sessionId };
}
// Assemble full command with session/plan parameters
function assembleCommand(step, previousResult) {
let command = step.cmd;
if (step.args) {
command += ` ${step.args}`;
} else if (previousResult?.session_id) {
command += ` --session="${previousResult.session_id}"`;
}
return command;
}
// Update TODO: mark current as complete, next as in-progress
function updateTodoStatus(index, total, workflow, status) {
const todos = getAllCurrentTodos();
const updated = todos.map(todo => {
if (todo.content.startsWith(`CCW:${workflow}:`)) {
const stepNum = extractStepIndex(todo.content);
if (stepNum === index + 1) return { ...todo, status };
if (stepNum === index + 2 && status === 'completed') return { ...todo, status: 'in_progress' };
}
return todo;
});
TodoWrite({ todos: updated });
}
// Error handling: Retry/Skip/Abort
async function handleError(step, error, index) {
const response = await AskUserQuestion({
questions: [{
question: `${step.cmd} failed: ${error.message}`,
header: "Error",
options: [
{ label: "Retry", description: "Re-execute" },
{ label: "Skip", description: "Continue next" },
{ label: "Abort", description: "Stop" }
]
}]
});
return { Retry: 'retry', Skip: 'skip', Abort: 'abort' }[response.Error] || 'abort';
}
```
---
## Execution Flow Summary
```
User Input
|
Phase 1: Analyze Intent
|-- Extract: goal, scope, constraints, task_type, complexity, clarity
+-- If clarity < 2 -> Phase 1.5: Clarify Requirements
|
Phase 2: Select Workflow & Build Chain
|-- Map task_type -> Level (1/2/3/4/Issue)
|-- Select flow based on complexity
+-- Build command chain (port-based)
|
Phase 3: User Confirmation (optional)
|-- Show pipeline visualization
+-- Allow adjustment
|
Phase 4: Setup TODO Tracking & Status File
|-- Create todos with CCW prefix
+-- Initialize .workflow/.ccw/{session_id}/status.json
|
Phase 5: Execute Command Chain
|-- For each command:
| |-- Update status.json (current=running)
| |-- Assemble full command
| |-- Execute via Skill
| |-- Update status.json (current=completed, next=running)
| |-- Update TODO status
| +-- Handle errors (retry/skip/abort)
+-- Mark status.json as completed
```
---
## Pipeline Examples (with Minimum Execution Units)
**Note**: `【 】` marks Minimum Execution Units - commands execute together as atomic groups.
| Input | Type | Level | Pipeline (with Units) |
|-------|------|-------|-----------------------|
| "Add API endpoint" | feature (low) | 2 |【lite-plan → lite-execute】→【test-fix-gen → test-cycle-execute】|
| "Fix login timeout" | bugfix | 2 |【lite-fix → lite-execute】→【test-fix-gen → test-cycle-execute】|
| "Use issue workflow" | issue-transition | 2.5 |【lite-plan → convert-to-plan】→ queue → execute |
| "头脑风暴: 通知系统重构" | brainstorm | 4 | brainstorm-with-file → (built-in post-completion) |
| "从头脑风暴创建 issue" | brainstorm-to-issue | 4 | from-brainstorm → queue → execute |
| "深度调试 WebSocket 连接断开" | debug-file | 3 | debug-with-file → (hypothesis iteration) |
| "协作分析: 认证架构优化" | analyze-file | 3 | analyze-with-file → (multi-round discussion) |
| "OAuth2 system" | feature (high) | 3 |【plan → plan-verify】→ execute →【review-session-cycle → review-cycle-fix】→【test-fix-gen → test-cycle-execute】|
| "Implement with TDD" | tdd | 3 |【tdd-plan → execute】→ tdd-verify |
| "Uncertain: real-time arch" | exploration | 4 | brainstorm:auto-parallel →【plan → plan-verify】→ execute →【test-fix-gen → test-cycle-execute】|
---
## Key Design Principles
1. **Main Process Execution** - Use Skill in main process, no external CLI
2. **Intent-Driven** - Auto-select workflow based on task intent
3. **Port-Based Chaining** - Build command chain using port matching
4. **Minimum Execution Units** - Commands grouped into atomic units, never split (e.g., lite-plan → lite-execute)
5. **Progressive Clarification** - Low clarity triggers clarification phase
6. **TODO Tracking** - Use CCW prefix to isolate workflow todos
7. **Unit-Aware Error Handling** - Retry/skip/abort affects whole unit, not individual commands
8. **User Control** - Optional user confirmation at each phase
---
## State Management
### Dual Tracking System
**1. TodoWrite-Based Tracking** (UI Display): All execution state tracked via TodoWrite with `CCW:` prefix.
```javascript
// Initial state
todos = [
{ content: "CCW:rapid: [1/3] /workflow:lite-plan", status: "in_progress" },
{ content: "CCW:rapid: [2/3] /workflow:lite-execute", status: "pending" },
{ content: "CCW:rapid: [3/3] /workflow:test-cycle-execute", status: "pending" }
];
// After command 1 completes
todos = [
{ content: "CCW:rapid: [1/3] /workflow:lite-plan", status: "completed" },
{ content: "CCW:rapid: [2/3] /workflow:lite-execute", status: "in_progress" },
{ content: "CCW:rapid: [3/3] /workflow:test-cycle-execute", status: "pending" }
];
```
**2. Status.json Tracking**: Persistent state file for workflow monitoring.
**Location**: `.workflow/.ccw/{session_id}/status.json`
**Structure**:
```json
{
"session_id": "ccw-1706123456789",
"workflow": "rapid",
"status": "running|completed|failed|error",
"created_at": "2025-02-01T10:30:00Z",
"updated_at": "2025-02-01T10:35:00Z",
"analysis": {
"goal": "Add user authentication",
"scope": ["auth"],
"constraints": [],
"task_type": "feature",
"complexity": "medium"
},
"command_chain": [
{
"index": 0,
"command": "/workflow:lite-plan",
"status": "completed"
},
{
"index": 1,
"command": "/workflow:lite-execute",
"status": "running"
},
{
"index": 2,
"command": "/workflow:test-cycle-execute",
"status": "pending"
}
],
"current_index": 1
}
```
**Status Values**:
- `running`: Workflow executing commands
- `completed`: All commands finished
- `failed`: User aborted or unrecoverable error
- `error`: Command execution failed (during error handling)
**Command Status Values**:
- `pending`: Not started
- `running`: Currently executing
- `completed`: Successfully finished
- `failed`: Execution failed
---
## With-File Workflows
**With-File workflows** provide documented exploration with multi-CLI collaboration. They are self-contained and generate comprehensive session artifacts.
| Workflow | Purpose | Key Features | Output Folder |
|----------|---------|--------------|---------------|
| **brainstorm-with-file** | Multi-perspective ideation | Gemini/Codex/Claude perspectives, diverge-converge cycles | `.workflow/.brainstorm/` |
| **debug-with-file** | Hypothesis-driven debugging | Gemini validation, understanding evolution, NDJSON logging | `.workflow/.debug/` |
| **analyze-with-file** | Collaborative analysis | Multi-round Q&A, CLI exploration, documented discussions | `.workflow/.analysis/` |
**Detection Keywords**:
- **brainstorm**: 头脑风暴, 创意, 发散思维, multi-perspective, compare perspectives
- **debug-file**: 深度调试, 假设验证, systematic debug, hypothesis debug
- **analyze-file**: 协作分析, 深度理解, collaborative analysis, explore concept
**Characteristics**:
1. **Self-Contained**: Each workflow handles its own iteration loop
2. **Documented Process**: Creates evolving documents (brainstorm.md, understanding.md, discussion.md)
3. **Multi-CLI**: Uses Gemini/Codex/Claude for different perspectives
4. **Built-in Post-Completion**: Offers follow-up options (create plan, issue, etc.)
---
## Usage
```bash
# Auto-select workflow
/ccw "Add user authentication"
# Complex requirement (triggers clarification)
/ccw "Optimize system performance"
# Bug fix
/ccw "Fix memory leak in WebSocket handler"
# TDD development
/ccw "Implement user registration with TDD"
# Exploratory task
/ccw "Uncertain about architecture for real-time notifications"
# With-File workflows (documented exploration with multi-CLI collaboration)
/ccw "头脑风暴: 用户通知系统重新设计" # → brainstorm-with-file
/ccw "从头脑风暴 BS-通知系统-2025-01-28 创建 issue" # → brainstorm-to-issue (bridge)
/ccw "深度调试: 系统随机崩溃问题" # → debug-with-file
/ccw "协作分析: 理解现有认证架构的设计决策" # → analyze-with-file
```

View File

@@ -3,7 +3,6 @@ name: cli-init
description: Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection
argument-hint: "[--tool gemini|qwen|all] [--output path] [--preview]"
allowed-tools: Bash(*), Read(*), Write(*), Glob(*)
group: cli
---
# CLI Initialization Command (/cli:cli-init)
@@ -192,7 +191,7 @@ target/
### Step 2: Workspace Analysis (MANDATORY FIRST)
```bash
# Analyze workspace structure
bash(ccw tool exec get_modules_by_depth '{"format":"json"}')
bash(~/.claude/scripts/get_modules_by_depth.sh json)
```
### Step 3: Technology Detection

View File

@@ -1,361 +0,0 @@
---
name: codex-review
description: Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions
argument-hint: "[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]"
allowed-tools: Bash(*), AskUserQuestion(*), Read(*)
---
# Codex Review Command (/cli:codex-review)
## Overview
Interactive code review command that invokes `codex review` via ccw cli endpoint with guided parameter selection.
**Codex Review Parameters** (from `codex review --help`):
| Parameter | Description |
|-----------|-------------|
| `[PROMPT]` | Custom review instructions (positional) |
| `-c model=<model>` | Override model via config |
| `--uncommitted` | Review staged, unstaged, and untracked changes |
| `--base <BRANCH>` | Review changes against base branch |
| `--commit <SHA>` | Review changes introduced by a commit |
| `--title <TITLE>` | Optional commit title for review summary |
## Prompt Template Format
Follow the standard ccw cli prompt template:
```
PURPOSE: [what] + [why] + [success criteria] + [constraints/scope]
TASK: • [step 1] • [step 2] • [step 3]
MODE: review
CONTEXT: [review target description] | Memory: [relevant context]
EXPECTED: [deliverable format] + [quality criteria]
CONSTRAINTS: [focus constraints]
```
## EXECUTION INSTRUCTIONS - START HERE
**When this command is triggered, follow these exact steps:**
### Step 1: Parse Arguments
Check if user provided arguments directly:
- `--uncommitted` → Record target = uncommitted
- `--base <branch>` → Record target = base, branch name
- `--commit <sha>` → Record target = commit, sha value
- `--model <model>` → Record model selection
- `--title <title>` → Record title
- Remaining text → Use as custom focus/prompt
If no target specified → Continue to Step 2 for interactive selection.
### Step 2: Interactive Parameter Selection
**2.1 Review Target Selection**
```javascript
AskUserQuestion({
questions: [{
question: "What do you want to review?",
header: "Review Target",
options: [
{ label: "Uncommitted changes (Recommended)", description: "Review staged, unstaged, and untracked changes" },
{ label: "Compare to branch", description: "Review changes against a base branch (e.g., main)" },
{ label: "Specific commit", description: "Review changes introduced by a specific commit" }
],
multiSelect: false
}]
})
```
**2.2 Branch/Commit Input (if needed)**
If "Compare to branch" selected:
```javascript
AskUserQuestion({
questions: [{
question: "Which base branch to compare against?",
header: "Base Branch",
options: [
{ label: "main", description: "Compare against main branch" },
{ label: "master", description: "Compare against master branch" },
{ label: "develop", description: "Compare against develop branch" }
],
multiSelect: false
}]
})
```
If "Specific commit" selected:
- Run `git log --oneline -10` to show recent commits
- Ask user to provide commit SHA or select from list
**2.3 Model Selection (Optional)**
```javascript
AskUserQuestion({
questions: [{
question: "Which model to use for review?",
header: "Model",
options: [
{ label: "Default", description: "Use codex default model (gpt-5.2)" },
{ label: "o3", description: "OpenAI o3 reasoning model" },
{ label: "gpt-4.1", description: "GPT-4.1 model" },
{ label: "o4-mini", description: "OpenAI o4-mini (faster)" }
],
multiSelect: false
}]
})
```
**2.4 Review Focus Selection**
```javascript
AskUserQuestion({
questions: [{
question: "What should the review focus on?",
header: "Focus Area",
options: [
{ label: "General review (Recommended)", description: "Comprehensive review: correctness, style, bugs, docs" },
{ label: "Security focus", description: "Security vulnerabilities, input validation, auth issues" },
{ label: "Performance focus", description: "Performance bottlenecks, complexity, resource usage" },
{ label: "Code quality", description: "Readability, maintainability, SOLID principles" }
],
multiSelect: false
}]
})
```
### Step 3: Build Prompt and Command
**3.1 Construct Prompt Based on Focus**
**General Review Prompt:**
```
PURPOSE: Comprehensive code review to identify issues, improve quality, and ensure best practices; success = actionable feedback with clear priorities
TASK: • Review code correctness and logic errors • Check coding standards and consistency • Identify potential bugs and edge cases • Evaluate documentation completeness
MODE: review
CONTEXT: {target_description} | Memory: Project conventions from CLAUDE.md
EXPECTED: Structured review report with: severity levels (Critical/High/Medium/Low), file:line references, specific improvement suggestions, priority ranking
CONSTRAINTS: Focus on actionable feedback
```
**Security Focus Prompt:**
```
PURPOSE: Security-focused code review to identify vulnerabilities and security risks; success = all security issues documented with remediation
TASK: • Scan for injection vulnerabilities (SQL, XSS, command) • Check authentication and authorization logic • Evaluate input validation and sanitization • Identify sensitive data exposure risks
MODE: review
CONTEXT: {target_description} | Memory: Security best practices, OWASP Top 10
EXPECTED: Security report with: vulnerability classification, CVE references where applicable, remediation code snippets, risk severity matrix
CONSTRAINTS: Security-first analysis | Flag all potential vulnerabilities
```
**Performance Focus Prompt:**
```
PURPOSE: Performance-focused code review to identify bottlenecks and optimization opportunities; success = measurable improvement recommendations
TASK: • Analyze algorithmic complexity (Big-O) • Identify memory allocation issues • Check for N+1 queries and blocking operations • Evaluate caching opportunities
MODE: review
CONTEXT: {target_description} | Memory: Performance patterns and anti-patterns
EXPECTED: Performance report with: complexity analysis, bottleneck identification, optimization suggestions with expected impact, benchmark recommendations
CONSTRAINTS: Performance optimization focus
```
**Code Quality Focus Prompt:**
```
PURPOSE: Code quality review to improve maintainability and readability; success = cleaner, more maintainable code
TASK: • Assess SOLID principles adherence • Identify code duplication and abstraction opportunities • Review naming conventions and clarity • Evaluate test coverage implications
MODE: review
CONTEXT: {target_description} | Memory: Project coding standards
EXPECTED: Quality report with: principle violations, refactoring suggestions, naming improvements, maintainability score
CONSTRAINTS: Code quality and maintainability focus
```
**3.2 Build Target Description**
Based on selection, set `{target_description}`:
- Uncommitted: `Reviewing uncommitted changes (staged + unstaged + untracked)`
- Base branch: `Reviewing changes against {branch} branch`
- Commit: `Reviewing changes introduced by commit {sha}`
### Step 4: Execute via CCW CLI
Build and execute the ccw cli command:
```bash
# Base structure
ccw cli -p "<PROMPT>" --tool codex --mode review [OPTIONS]
```
**Command Construction:**
```bash
# Variables from user selection
TARGET_FLAG="" # --uncommitted | --base <branch> | --commit <sha>
MODEL_FLAG="" # --model <model> (if not default)
TITLE_FLAG="" # --title "<title>" (if provided)
# Build target flag
if [ "$target" = "uncommitted" ]; then
TARGET_FLAG="--uncommitted"
elif [ "$target" = "base" ]; then
TARGET_FLAG="--base $branch"
elif [ "$target" = "commit" ]; then
TARGET_FLAG="--commit $sha"
fi
# Build model flag (only if not default)
if [ "$model" != "default" ] && [ -n "$model" ]; then
MODEL_FLAG="--model $model"
fi
# Build title flag (if provided)
if [ -n "$title" ]; then
TITLE_FLAG="--title \"$title\""
fi
# Execute
ccw cli -p "$PROMPT" --tool codex --mode review $TARGET_FLAG $MODEL_FLAG $TITLE_FLAG
```
**Full Example Commands:**
**Option 1: With custom prompt (reviews uncommitted by default):**
```bash
ccw cli -p "
PURPOSE: Comprehensive code review to identify issues and improve quality; success = actionable feedback with priorities
TASK: • Review correctness and logic • Check standards compliance • Identify bugs and edge cases • Evaluate documentation
MODE: review
CONTEXT: Reviewing uncommitted changes | Memory: Project conventions
EXPECTED: Structured report with severity levels, file:line refs, improvement suggestions
CONSTRAINTS: Actionable feedback
" --tool codex --mode review --rule analysis-review-code-quality
```
**Option 2: Target flag only (no prompt allowed):**
```bash
ccw cli --tool codex --mode review --uncommitted
```
### Step 5: Execute and Display Results
```bash
Bash({
command: "ccw cli -p \"$PROMPT\" --tool codex --mode review $FLAGS",
run_in_background: true
})
```
Wait for completion and display formatted results.
## Quick Usage Examples
### Direct Execution (No Interaction)
```bash
# Review uncommitted changes with default settings
/cli:codex-review --uncommitted
# Review against main branch
/cli:codex-review --base main
# Review specific commit
/cli:codex-review --commit abc123
# Review with custom model
/cli:codex-review --uncommitted --model o3
# Review with security focus
/cli:codex-review --uncommitted security
# Full options
/cli:codex-review --base main --model o3 --title "Auth Feature" security
```
### Interactive Mode
```bash
# Start interactive selection (guided flow)
/cli:codex-review
```
## Focus Area Mapping
| User Selection | Prompt Focus | Key Checks |
|----------------|--------------|------------|
| General review | Comprehensive | Correctness, style, bugs, docs |
| Security focus | Security-first | Injection, auth, validation, exposure |
| Performance focus | Optimization | Complexity, memory, queries, caching |
| Code quality | Maintainability | SOLID, duplication, naming, tests |
## Error Handling
### No Changes to Review
```
No changes found for review target. Suggestions:
- For --uncommitted: Make some code changes first
- For --base: Ensure branch exists and has diverged
- For --commit: Verify commit SHA exists
```
### Invalid Branch
```bash
# Show available branches
git branch -a --list | head -20
```
### Invalid Commit
```bash
# Show recent commits
git log --oneline -10
```
## Integration Notes
- Uses `ccw cli --tool codex --mode review` endpoint
- Model passed via prompt (codex uses `-c model=` internally)
- Target flags (`--uncommitted`, `--base`, `--commit`) passed through to codex
- Prompt follows standard ccw cli template format for consistency
## Validation Constraints
**IMPORTANT: Target flags and prompt are mutually exclusive**
The codex CLI has a constraint where target flags (`--uncommitted`, `--base`, `--commit`) cannot be used with a positional `[PROMPT]` argument:
```
error: the argument '--uncommitted' cannot be used with '[PROMPT]'
error: the argument '--base <BRANCH>' cannot be used with '[PROMPT]'
error: the argument '--commit <SHA>' cannot be used with '[PROMPT]'
```
**Behavior:**
- When ANY target flag is specified, ccw cli automatically skips template concatenation (systemRules/roles)
- The review uses codex's default review behavior for the specified target
- Custom prompts are only supported WITHOUT target flags (reviews uncommitted changes by default)
**Valid combinations:**
| Command | Result |
|---------|--------|
| `codex review "Focus on security"` | ✓ Custom prompt, reviews uncommitted (default) |
| `codex review --uncommitted` | ✓ No prompt, uses default review |
| `codex review --base main` | ✓ No prompt, uses default review |
| `codex review --commit abc123` | ✓ No prompt, uses default review |
| `codex review --uncommitted "prompt"` | ✗ Invalid - mutually exclusive |
| `codex review --base main "prompt"` | ✗ Invalid - mutually exclusive |
| `codex review --commit abc123 "prompt"` | ✗ Invalid - mutually exclusive |
**Examples:**
```bash
# ✓ Valid: prompt only (reviews uncommitted by default)
ccw cli -p "Focus on security" --tool codex --mode review
# ✓ Valid: target flag only (no prompt)
ccw cli --tool codex --mode review --uncommitted
ccw cli --tool codex --mode review --base main
ccw cli --tool codex --mode review --commit abc123
# ✗ Invalid: target flag with prompt (will fail)
ccw cli -p "Review this" --tool codex --mode review --uncommitted
ccw cli -p "Review this" --tool codex --mode review --base main
ccw cli -p "Review this" --tool codex --mode review --commit abc123
```

View File

@@ -1,513 +0,0 @@
---
name: codex-coordinator
description: Command orchestration tool for Codex - analyze requirements, recommend command chain, execute sequentially with state persistence
argument-hint: "TASK=\"<task description>\" [--depth=standard|deep] [--auto-confirm] [--verbose]"
---
# Codex Coordinator Command
Interactive orchestration tool for Codex commands: analyze task → discover commands → recommend chain → execute sequentially → track state.
**Execution Model**: Intelligent agent-driven workflow. Claude analyzes each phase and orchestrates command execution.
## Core Concept: Minimum Execution Units (最小执行单元)
### What is a Minimum Execution Unit?
**Definition**: A set of commands that must execute together as an atomic group to achieve a meaningful workflow milestone. Splitting these commands breaks the logical flow and creates incomplete states.
**Why This Matters**:
- **Prevents Incomplete States**: Avoid stopping after task generation without execution
- **User Experience**: User gets complete results, not intermediate artifacts requiring manual follow-up
- **Workflow Integrity**: Maintains logical coherence of multi-step operations
### Codex Minimum Execution Units
**Planning + Execution Units** (规划+执行单元):
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Quick Implementation** | lite-plan-a → execute | Lightweight plan and immediate execution | Working code |
| **Bug Fix** | lite-fix → execute | Quick bug diagnosis and fix execution | Fixed code |
| **Issue Workflow** | issue-discover → issue-plan → issue-queue → issue-execute | Complete issue lifecycle | Completed issues |
| **Discovery & Analysis** | issue-discover → issue-discover-by-prompt | Issue discovery with multiple perspectives | Generated issues |
| **Brainstorm to Execution** | brainstorm-with-file → execute | Brainstorm ideas then implement | Working code |
**With-File Workflows** (文档化单元):
| Unit Name | Commands | Purpose | Output |
|-----------|----------|---------|--------|
| **Brainstorm With File** | brainstorm-with-file | Multi-perspective ideation with documentation | brainstorm.md |
| **Debug With File** | debug-with-file | Hypothesis-driven debugging with documentation | understanding.md |
| **Analyze With File** | analyze-with-file | Collaborative analysis with documentation | discussion.md |
| **Clean & Analyze** | clean → analyze-with-file | Cleanup then analyze | Cleaned code + analysis |
### Command-to-Unit Mapping (命令与最小单元的映射)
| Command | Precedes | Atomic Units |
|---------|----------|--------------|
| lite-plan-a | execute, brainstorm-with-file | Quick Implementation |
| lite-fix | execute | Bug Fix |
| issue-discover | issue-plan | Issue Workflow |
| issue-plan | issue-queue | Issue Workflow |
| issue-queue | issue-execute | Issue Workflow |
| brainstorm-with-file | execute, issue-execute | Brainstorm to Execution |
| debug-with-file | execute | Debug With File |
| analyze-with-file | (standalone) | Analyze With File |
| clean | analyze-with-file, execute | Clean & Analyze |
| quick-plan-with-file | execute | Quick Planning with File |
| merge-plans-with-file | execute | Merge Multiple Plans |
| unified-execute-with-file | (terminal) | Execute with File Tracking |
### Atomic Group Rules
1. **Never Split Units**: Coordinator must recommend complete units, not partial chains
2. **Multi-Unit Participation**: Some commands can participate in multiple units
3. **User Override**: User can explicitly request partial execution (advanced mode)
4. **Visualization**: Pipeline view shows unit boundaries with 【 】markers
5. **Validation**: Before execution, verify all unit commands are included
**Example Pipeline with Units**:
```
需求 → 【lite-plan-a → execute】→ 代码 → 【issue-discover → issue-plan → issue-queue → issue-execute】→ 完成
└──── Quick Implementation ────┘ └────────── Issue Workflow ─────────┘
```
## 3-Phase Workflow
### Phase 1: Analyze Requirements
Parse task to extract: goal, scope, complexity, and task type.
```javascript
function analyzeRequirements(taskDescription) {
return {
goal: extractMainGoal(taskDescription), // e.g., "Fix login bug"
scope: extractScope(taskDescription), // e.g., ["auth", "login"]
complexity: determineComplexity(taskDescription), // 'simple' | 'medium' | 'complex'
task_type: detectTaskType(taskDescription) // See task type patterns below
};
}
// Task Type Detection Patterns
function detectTaskType(text) {
// Priority order (first match wins)
if (/fix|bug|error|crash|fail|debug|diagnose/.test(text)) return 'bugfix';
if (/生成|generate|discover|找出|issue|问题/.test(text)) return 'discovery';
if (/plan|规划|设计|design|analyze|分析/.test(text)) return 'analysis';
if (/清理|cleanup|clean|refactor|重构/.test(text)) return 'cleanup';
if (/头脑|brainstorm|创意|ideation/.test(text)) return 'brainstorm';
if (/合并|merge|combine|batch/.test(text)) return 'batch-planning';
return 'feature'; // Default
}
// Complexity Assessment
function determineComplexity(text) {
let score = 0;
if (/refactor|重构|migrate|迁移|architect|架构|system|系统/.test(text)) score += 2;
if (/multiple|多个|across|跨|all|所有|entire|整个/.test(text)) score += 2;
if (/integrate|集成|api|database|数据库/.test(text)) score += 1;
if (/security|安全|performance|性能|scale|扩展/.test(text)) score += 1;
return score >= 4 ? 'complex' : score >= 2 ? 'medium' : 'simple';
}
```
**Display to user**:
```
Analysis Complete:
Goal: [extracted goal]
Scope: [identified areas]
Complexity: [level]
Task Type: [detected type]
```
### Phase 2: Discover Commands & Recommend Chain
Dynamic command chain assembly using task type and complexity matching.
#### Available Codex Commands (Discovery)
All commands from `~/.codex/prompts/`:
- **Planning**: @~/.codex/prompts/lite-plan-a.md, @~/.codex/prompts/lite-plan-b.md, @~/.codex/prompts/lite-plan-c.md, @~/.codex/prompts/quick-plan-with-file.md, @~/.codex/prompts/merge-plans-with-file.md
- **Execution**: @~/.codex/prompts/execute.md, @~/.codex/prompts/unified-execute-with-file.md
- **Bug Fixes**: @~/.codex/prompts/lite-fix.md, @~/.codex/prompts/debug-with-file.md
- **Discovery**: @~/.codex/prompts/issue-discover.md, @~/.codex/prompts/issue-discover-by-prompt.md, @~/.codex/prompts/issue-plan.md, @~/.codex/prompts/issue-queue.md, @~/.codex/prompts/issue-execute.md
- **Analysis**: @~/.codex/prompts/analyze-with-file.md
- **Brainstorming**: @~/.codex/prompts/brainstorm-with-file.md, @~/.codex/prompts/brainstorm-to-cycle.md
- **Cleanup**: @~/.codex/prompts/clean.md, @~/.codex/prompts/compact.md
#### Recommendation Algorithm
```javascript
async function recommendCommandChain(analysis) {
// Step 1: 根据任务类型确定流程
const { inputPort, outputPort } = determinePortFlow(analysis.task_type, analysis.complexity);
// Step 2: Claude 根据命令特性和任务特征,智能选择命令序列
const chain = selectChainByTaskType(analysis);
return chain;
}
// 任务类型对应的端口流
function determinePortFlow(taskType, complexity) {
const flows = {
'bugfix': { flow: ['lite-fix', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'discovery': { flow: ['issue-discover', 'issue-plan', 'issue-queue', 'issue-execute'], depth: 'standard' },
'analysis': { flow: ['analyze-with-file'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'cleanup': { flow: ['clean'], depth: 'standard' },
'brainstorm': { flow: ['brainstorm-with-file', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' },
'batch-planning': { flow: ['merge-plans-with-file', 'execute'], depth: 'standard' },
'feature': { flow: complexity === 'complex' ? ['lite-plan-b'] : ['lite-plan-a', 'execute'], depth: complexity === 'complex' ? 'deep' : 'standard' }
};
return flows[taskType] || flows['feature'];
}
```
#### Display to User
```
Recommended Command Chain:
Pipeline (管道视图):
需求 → @~/.codex/prompts/lite-plan-a.md → 计划 → @~/.codex/prompts/execute.md → 代码完成
Commands (命令列表):
1. @~/.codex/prompts/lite-plan-a.md
2. @~/.codex/prompts/execute.md
Proceed? [Confirm / Show Details / Adjust / Cancel]
```
### Phase 2b: Get User Confirmation
Ask user for confirmation before proceeding with execution.
```javascript
async function getUserConfirmation(chain) {
const response = await AskUserQuestion({
questions: [{
question: 'Proceed with this command chain?',
header: 'Confirm Chain',
multiSelect: false,
options: [
{ label: 'Confirm and execute', description: 'Proceed with commands' },
{ label: 'Show details', description: 'View each command' },
{ label: 'Adjust chain', description: 'Remove or reorder' },
{ label: 'Cancel', description: 'Abort' }
]
}]
});
return response;
}
```
### Phase 3: Execute Sequential Command Chain
```javascript
async function executeCommandChain(chain, analysis) {
const sessionId = `codex-coord-${Date.now()}`;
const stateDir = `.workflow/.codex-coordinator/${sessionId}`;
// Create state directory
const state = {
session_id: sessionId,
status: 'running',
created_at: new Date().toISOString(),
analysis: analysis,
command_chain: chain.map((cmd, idx) => ({ ...cmd, index: idx, status: 'pending' })),
execution_results: [],
};
// Save initial state
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
for (let i = 0; i < chain.length; i++) {
const cmd = chain[i];
console.log(`[${i+1}/${chain.length}] Executing: @~/.codex/prompts/${cmd.name}.md`);
// Update status to running
state.command_chain[i].status = 'running';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
try {
// Build command with parameters using full path
let commandStr = `@~/.codex/prompts/${cmd.name}.md`;
// Add parameters based on previous results and task context
if (i > 0 && state.execution_results.length > 0) {
const lastResult = state.execution_results[state.execution_results.length - 1];
commandStr += ` --resume="${lastResult.session_id || lastResult.artifact}"`;
}
// For analysis-based commands, add depth parameter
if (analysis.complexity === 'complex' && (cmd.name.includes('analyze') || cmd.name.includes('plan'))) {
commandStr += ` --depth=deep`;
}
// Add task description for planning commands
if (cmd.type === 'planning' && i === 0) {
commandStr += ` TASK="${analysis.goal}"`;
}
// Execute command via Bash (spawning as background task)
// Format: @~/.codex/prompts/command-name.md [] parameters
// Note: This simulates the execution; actual implementation uses hook callbacks
console.log(`Executing: ${commandStr}`);
// Save execution record
state.execution_results.push({
index: i,
command: cmd.name,
status: 'in-progress',
started_at: new Date().toISOString(),
session_id: null,
artifact: null
});
state.command_chain[i].status = 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`[${i+1}/${chain.length}] ✓ Completed: @~/.codex/prompts/${cmd.name}.md`);
} catch (error) {
state.command_chain[i].status = 'failed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`❌ Command failed: ${error.message}`);
break;
}
}
state.status = 'completed';
state.updated_at = new Date().toISOString();
Write(`${stateDir}/state.json`, JSON.stringify(state, null, 2));
console.log(`\n✅ Orchestration Complete: ${state.session_id}`);
return state;
}
```
## State File Structure
**Location**: `.workflow/.codex-coordinator/{session_id}/state.json`
```json
{
"session_id": "codex-coord-20250129-143025",
"status": "running|waiting|completed|failed",
"created_at": "2025-01-29T14:30:25Z",
"updated_at": "2025-01-29T14:35:45Z",
"analysis": {
"goal": "Fix login authentication bug",
"scope": ["auth", "login"],
"complexity": "medium",
"task_type": "bugfix"
},
"command_chain": [
{
"index": 0,
"name": "lite-fix",
"type": "bugfix",
"status": "completed"
},
{
"index": 1,
"name": "execute",
"type": "execution",
"status": "pending"
}
],
"execution_results": [
{
"index": 0,
"command": "lite-fix",
"status": "completed",
"started_at": "2025-01-29T14:30:25Z",
"session_id": "fix-login-2025-01-29",
"artifact": ".workflow/.lite-fix/fix-login-2025-01-29/fix-plan.json"
}
]
}
```
### Status Values
- `running`: Orchestrator actively executing
- `waiting`: Paused, waiting for external events
- `completed`: All commands finished successfully
- `failed`: Error occurred or user aborted
## Task Type Routing (Pipeline Summary)
**Note**: 【 】marks Minimum Execution Units (最小执行单元) - these commands must execute together.
| Task Type | Pipeline | Minimum Units |
|-----------|----------|---|
| **bugfix** | Bug报告 →【@~/.codex/prompts/lite-fix.md → @~/.codex/prompts/execute.md】→ 修复代码 | Bug Fix |
| **discovery** | 需求 →【@~/.codex/prompts/issue-discover.md → @~/.codex/prompts/issue-plan.md → @~/.codex/prompts/issue-queue.md → @~/.codex/prompts/issue-execute.md】→ 完成 issues | Issue Workflow |
| **analysis** | 需求 → @~/.codex/prompts/analyze-with-file.md → 分析报告 | Analyze With File |
| **cleanup** | 代码库 → @~/.codex/prompts/clean.md → 清理完成 | Cleanup |
| **brainstorm** | 主题 →【@~/.codex/prompts/brainstorm-with-file.md → @~/.codex/prompts/execute.md】→ 实现代码 | Brainstorm to Execution |
| **batch-planning** | 需求集合 →【@~/.codex/prompts/merge-plans-with-file.md → @~/.codex/prompts/execute.md】→ 代码完成 | Merge Multiple Plans |
| **feature** (simple) | 需求 →【@~/.codex/prompts/lite-plan-a.md → @~/.codex/prompts/execute.md】→ 代码 | Quick Implementation |
| **feature** (complex) | 需求 → @~/.codex/prompts/lite-plan-b.md → 详细计划 → @~/.codex/prompts/execute.md → 代码 | Complex Planning |
## Available Commands Reference
### Planning Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **lite-plan-a** | Lightweight merged-mode planning | `@~/.codex/prompts/lite-plan-a.md TASK="..."` | plan.json |
| **lite-plan-b** | Multi-angle exploration planning | `@~/.codex/prompts/lite-plan-b.md TASK="..."` | plan.json |
| **lite-plan-c** | Parallel angle planning | `@~/.codex/prompts/lite-plan-c.md TASK="..."` | plan.json |
| **quick-plan-with-file** | Quick planning with file tracking | `@~/.codex/prompts/quick-plan-with-file.md TASK="..."` | plan + docs |
| **merge-plans-with-file** | Merge multiple plans | `@~/.codex/prompts/merge-plans-with-file.md PLANS="..."` | merged-plan.json |
### Execution Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **execute** | Execute tasks from plan | `@~/.codex/prompts/execute.md SESSION=".../plan/"` | Working code |
| **unified-execute-with-file** | Execute with file tracking | `@~/.codex/prompts/unified-execute-with-file.md SESSION="..."` | Code + tracking |
### Bug Fix Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **lite-fix** | Quick bug diagnosis and planning | `@~/.codex/prompts/lite-fix.md BUG="..."` | fix-plan.json |
| **debug-with-file** | Hypothesis-driven debugging | `@~/.codex/prompts/debug-with-file.md BUG="..."` | understanding.md |
### Discovery Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **issue-discover** | Multi-perspective issue discovery | `@~/.codex/prompts/issue-discover.md PATTERN="src/**"` | issues.jsonl |
| **issue-discover-by-prompt** | Prompt-based discovery | `@~/.codex/prompts/issue-discover-by-prompt.md PROMPT="..."` | issues |
| **issue-plan** | Plan issue solutions | `@~/.codex/prompts/issue-plan.md --all-pending` | issue-plans.json |
| **issue-queue** | Form execution queue | `@~/.codex/prompts/issue-queue.md --from-plan` | queue.json |
| **issue-execute** | Execute issue queue | `@~/.codex/prompts/issue-execute.md QUEUE="..."` | Completed |
### Analysis Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **analyze-with-file** | Collaborative analysis | `@~/.codex/prompts/analyze-with-file.md TOPIC="..."` | discussion.md |
### Brainstorm Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **brainstorm-with-file** | Multi-perspective brainstorming | `@~/.codex/prompts/brainstorm-with-file.md TOPIC="..."` | brainstorm.md |
| **brainstorm-to-cycle** | Bridge brainstorm to execution | `@~/.codex/prompts/brainstorm-to-cycle.md` | Executable plan |
### Utility Commands
| Command | Purpose | Usage | Output |
|---------|---------|-------|--------|
| **clean** | Intelligent code cleanup | `@~/.codex/prompts/clean.md` | Cleaned code |
| **compact** | Compact session memory | `@~/.codex/prompts/compact.md SESSION="..."` | Compressed state |
## Execution Flow
```
User Input: TASK="..."
Phase 1: analyzeRequirements(task)
Phase 2: recommendCommandChain(analysis)
Display pipeline and commands
User Confirmation
Phase 3: executeCommandChain(chain, analysis)
├─ For each command:
│ ├─ Update state to "running"
│ ├─ Build command string with parameters
│ ├─ Execute @command [] with parameters
│ ├─ Save execution results
│ └─ Update state to "completed"
Output completion summary
```
## Key Design Principles
1. **Atomic Execution** - Never split minimum execution units
2. **State Persistence** - All state saved to JSON
3. **User Control** - Confirmation before execution
4. **Context Passing** - Parameters chain across commands
5. **Resume Support** - Can resume from state.json
6. **Intelligent Routing** - Task type determines command chain
7. **Complexity Awareness** - Different paths for simple vs complex tasks
## Command Invocation Format
**Format**: `@~/.codex/prompts/<command-name>.md <parameters>`
**Examples**:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="Implement user authentication"
@~/.codex/prompts/execute.md SESSION=".workflow/.lite-plan/..."
@~/.codex/prompts/lite-fix.md BUG="Login fails with 404 error"
@~/.codex/prompts/issue-discover.md PATTERN="src/auth/**"
@~/.codex/prompts/brainstorm-with-file.md TOPIC="Improve user onboarding"
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Unknown task type | Default to feature implementation |
| Command not found | Error: command not available |
| Execution fails | Report error, offer retry or skip |
| Invalid parameters | Validate and ask for correction |
| Circular dependency | Detect and report |
| All commands fail | Report and suggest manual intervention |
## Session Management
**Resume Previous Session**:
```
1. Find session in .workflow/.codex-coordinator/
2. Load state.json
3. Identify last completed command
4. Restart from next pending command
```
**View Session Progress**:
```
cat .workflow/.codex-coordinator/{session-id}/state.json
```
---
## Execution Instructions
The coordinator workflow follows these steps:
1. **Parse Input**: Extract task description from TASK parameter
2. **Analyze**: Determine goal, scope, complexity, and task type
3. **Recommend**: Build optimal command chain based on analysis
4. **Confirm**: Display pipeline and request user approval
5. **Execute**: Run commands sequentially with state tracking
6. **Report**: Display final results and artifacts
To use this coordinator, invoke it as a Claude Code command (not a Codex command):
From the Claude Code CLI, you would call Codex commands like:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="Your task description"
```
Or with options:
```bash
@~/.codex/prompts/lite-plan-a.md TASK="..." --depth=deep
```
This coordinator orchestrates such Codex commands based on your task requirements.

View File

@@ -0,0 +1,93 @@
---
name: enhance-prompt
description: Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection
argument-hint: "user input to enhance"
---
## Overview
Systematically enhances user prompts by leveraging session memory context and intent analysis, translating ambiguous requests into actionable specifications.
## Core Protocol
**Enhancement Pipeline:**
`Intent Translation``Context Integration``Structured Output`
**Context Sources:**
- Session memory (conversation history, previous analysis)
- Implicit technical requirements
- User intent patterns
## Enhancement Rules
### Intent Translation
| User Says | Translate To | Focus |
|-----------|--------------|-------|
| "fix" | Debug and resolve | Root cause → preserve behavior |
| "improve" | Enhance/optimize | Performance/readability |
| "add" | Implement feature | Integration + edge cases |
| "refactor" | Restructure quality | Maintain behavior |
| "update" | Modernize | Version compatibility |
### Context Integration Strategy
**Session Memory:**
- Reference recent conversation context
- Reuse previously identified patterns
- Build on established understanding
- Infer technical requirements from discussion
**Example:**
```bash
# User: "add login"
# Session Memory: Previous auth discussion, JWT mentioned
# Inferred: JWT-based auth, integrate with existing session management
# Action: Implement JWT authentication with session persistence
```
## Output Structure
```bash
INTENT: [Clear technical goal]
CONTEXT: [Session memory + codebase patterns]
ACTION: [Specific implementation steps]
ATTENTION: [Critical constraints]
```
### Output Examples
**Example 1:**
```bash
# Input: "fix login button"
INTENT: Debug non-functional login button
CONTEXT: From session - OAuth flow discussed, known state issue
ACTION: Check event binding → verify state updates → test auth flow
ATTENTION: Preserve existing OAuth integration
```
**Example 2:**
```bash
# Input: "refactor payment code"
INTENT: Restructure payment module for maintainability
CONTEXT: Session memory - PCI compliance requirements, Stripe integration patterns
ACTION: Extract reusable validators → isolate payment gateway logic → maintain adapter pattern
ATTENTION: Zero behavior change, maintain PCI compliance, full test coverage
```
## Enhancement Triggers
- Ambiguous language: "fix", "improve", "clean up"
- Vague requests requiring clarification
- Complex technical requirements
- Architecture changes
- Critical systems: auth, payment, security
- Multi-step refactoring
## Key Principles
1. **Session Memory First**: Leverage conversation context and established understanding
2. **Context Reuse**: Build on previous discussions and decisions
3. **Clear Output**: Structured, actionable specifications
4. **Intent Clarification**: Transform vague requests into specific technical goals
5. **Avoid Duplication**: Reference existing context, don't repeat

View File

@@ -1,675 +0,0 @@
# Flow Template Generator
Generate workflow templates for meta-skill/flow-coordinator.
## Usage
```
/meta-skill:flow-create [template-name] [--output <path>]
```
**Examples**:
```bash
/meta-skill:flow-create bugfix-v2
/meta-skill:flow-create my-workflow --output ~/.claude/skills/my-skill/templates/
```
## Execution Flow
```
User Input → Phase 1: Template Design → Phase 2: Step Definition → Phase 3: Generate JSON
↓ ↓ ↓
Name + Description Define workflow steps Write template file
```
---
## Phase 1: Template Design
Gather basic template information:
```javascript
async function designTemplate(input) {
const templateName = parseTemplateName(input) || await askTemplateName();
const metadata = await AskUserQuestion({
questions: [
{
question: "What is the purpose of this workflow template?",
header: "Purpose",
options: [
{ label: "Feature Development", description: "Implement new features with planning and testing" },
{ label: "Bug Fix", description: "Diagnose and fix bugs with verification" },
{ label: "TDD Development", description: "Test-driven development workflow" },
{ label: "Code Review", description: "Review cycle with findings and fixes" },
{ label: "Testing", description: "Test generation and validation" },
{ label: "Issue Workflow", description: "Complete issue lifecycle (discover → plan → queue → execute)" },
{ label: "With-File Workflow", description: "Documented exploration (brainstorm/debug/analyze)" },
{ label: "Custom", description: "Define custom workflow purpose" }
],
multiSelect: false
},
{
question: "What complexity level?",
header: "Level",
options: [
{ label: "Level 1 (Rapid)", description: "1-2 steps, ultra-lightweight (lite-lite-lite)" },
{ label: "Level 2 (Lightweight)", description: "2-4 steps, quick implementation" },
{ label: "Level 3 (Standard)", description: "4-6 steps, with verification and testing" },
{ label: "Level 4 (Full)", description: "6+ steps, brainstorm + full workflow" }
],
multiSelect: false
}
]
});
return {
name: templateName,
description: generateDescription(templateName, metadata.Purpose),
level: parseLevel(metadata.Level),
purpose: metadata.Purpose
};
}
```
---
## Phase 2: Step Definition
### Step 2.1: Select Command Category
```javascript
async function selectCommandCategory() {
return await AskUserQuestion({
questions: [{
question: "Select command category",
header: "Category",
options: [
{ label: "Planning", description: "lite-plan, plan, multi-cli-plan, tdd-plan, quick-plan-with-file" },
{ label: "Execution", description: "lite-execute, execute, unified-execute-with-file" },
{ label: "Testing", description: "test-fix-gen, test-cycle-execute, test-gen, tdd-verify" },
{ label: "Review", description: "review-session-cycle, review-module-cycle, review-cycle-fix" },
{ label: "Bug Fix", description: "lite-fix, debug-with-file" },
{ label: "Brainstorm", description: "brainstorm-with-file, brainstorm:auto-parallel" },
{ label: "Analysis", description: "analyze-with-file" },
{ label: "Issue", description: "discover, plan, queue, execute, from-brainstorm, convert-to-plan" },
{ label: "Utility", description: "clean, init, replan, status" }
],
multiSelect: false
}]
});
}
```
### Step 2.2: Select Specific Command
```javascript
async function selectCommand(category) {
const commandOptions = {
'Planning': [
{ label: "/workflow:lite-plan", description: "Lightweight merged-mode planning" },
{ label: "/workflow:plan", description: "Full planning with architecture design" },
{ label: "/workflow:multi-cli-plan", description: "Multi-CLI collaborative planning (Gemini+Codex+Claude)" },
{ label: "/workflow:tdd-plan", description: "TDD workflow planning with Red-Green-Refactor" },
{ label: "/workflow:quick-plan-with-file", description: "Rapid planning with minimal docs" },
{ label: "/workflow:plan-verify", description: "Verify plan against requirements" },
{ label: "/workflow:replan", description: "Update plan and execute changes" }
],
'Execution': [
{ label: "/workflow:lite-execute", description: "Execute from in-memory plan" },
{ label: "/workflow:execute", description: "Execute from planning session" },
{ label: "/workflow:unified-execute-with-file", description: "Universal execution engine" },
{ label: "/workflow:lite-lite-lite", description: "Ultra-lightweight multi-tool execution" }
],
'Testing': [
{ label: "/workflow:test-fix-gen", description: "Generate test tasks for specific issues" },
{ label: "/workflow:test-cycle-execute", description: "Execute iterative test-fix cycle (>=95% pass)" },
{ label: "/workflow:test-gen", description: "Generate comprehensive test suite" },
{ label: "/workflow:tdd-verify", description: "Verify TDD workflow compliance" }
],
'Review': [
{ label: "/workflow:review-session-cycle", description: "Session-based multi-dimensional code review" },
{ label: "/workflow:review-module-cycle", description: "Module-focused code review" },
{ label: "/workflow:review-cycle-fix", description: "Fix review findings with prioritization" },
{ label: "/workflow:review", description: "Post-implementation review" }
],
'Bug Fix': [
{ label: "/workflow:lite-fix", description: "Lightweight bug diagnosis and fix" },
{ label: "/workflow:debug-with-file", description: "Hypothesis-driven debugging with documentation" }
],
'Brainstorm': [
{ label: "/workflow:brainstorm-with-file", description: "Multi-perspective ideation with documentation" },
{ label: "/workflow:brainstorm:auto-parallel", description: "Parallel multi-role brainstorming" }
],
'Analysis': [
{ label: "/workflow:analyze-with-file", description: "Collaborative analysis with documentation" }
],
'Issue': [
{ label: "/issue:discover", description: "Multi-perspective issue discovery" },
{ label: "/issue:discover-by-prompt", description: "Prompt-based issue discovery with Gemini" },
{ label: "/issue:plan", description: "Plan issue solutions" },
{ label: "/issue:queue", description: "Form execution queue with conflict analysis" },
{ label: "/issue:execute", description: "Execute issue queue with DAG orchestration" },
{ label: "/issue:from-brainstorm", description: "Convert brainstorm to issue" },
{ label: "/issue:convert-to-plan", description: "Convert planning artifacts to issue solutions" }
],
'Utility': [
{ label: "/workflow:clean", description: "Intelligent code cleanup" },
{ label: "/workflow:init", description: "Initialize project-level state" },
{ label: "/workflow:replan", description: "Interactive workflow replanning" },
{ label: "/workflow:status", description: "Generate workflow status views" }
]
};
return await AskUserQuestion({
questions: [{
question: `Select ${category} command`,
header: "Command",
options: commandOptions[category] || commandOptions['Planning'],
multiSelect: false
}]
});
}
```
### Step 2.3: Select Execution Unit
```javascript
async function selectExecutionUnit() {
return await AskUserQuestion({
questions: [{
question: "Select execution unit (atomic command group)",
header: "Unit",
options: [
// Planning + Execution Units
{ label: "quick-implementation", description: "【lite-plan → lite-execute】" },
{ label: "multi-cli-planning", description: "【multi-cli-plan → lite-execute】" },
{ label: "full-planning-execution", description: "【plan → execute】" },
{ label: "verified-planning-execution", description: "【plan → plan-verify → execute】" },
{ label: "replanning-execution", description: "【replan → execute】" },
{ label: "tdd-planning-execution", description: "【tdd-plan → execute】" },
// Testing Units
{ label: "test-validation", description: "【test-fix-gen → test-cycle-execute】" },
{ label: "test-generation-execution", description: "【test-gen → execute】" },
// Review Units
{ label: "code-review", description: "【review-*-cycle → review-cycle-fix】" },
// Bug Fix Units
{ label: "bug-fix", description: "【lite-fix → lite-execute】" },
// Issue Units
{ label: "issue-workflow", description: "【discover → plan → queue → execute】" },
{ label: "rapid-to-issue", description: "【lite-plan → convert-to-plan → queue → execute】" },
{ label: "brainstorm-to-issue", description: "【from-brainstorm → queue → execute】" },
// With-File Units (self-contained)
{ label: "brainstorm-with-file", description: "Self-contained brainstorming workflow" },
{ label: "debug-with-file", description: "Self-contained debugging workflow" },
{ label: "analyze-with-file", description: "Self-contained analysis workflow" },
// Standalone
{ label: "standalone", description: "Single command, no atomic grouping" }
],
multiSelect: false
}]
});
}
```
### Step 2.4: Select Execution Mode
```javascript
async function selectExecutionMode() {
return await AskUserQuestion({
questions: [{
question: "Execution mode for this step?",
header: "Mode",
options: [
{ label: "mainprocess", description: "Run in main process (blocking, synchronous)" },
{ label: "async", description: "Run asynchronously (background, hook callbacks)" }
],
multiSelect: false
}]
});
}
```
### Complete Step Definition Flow
```javascript
async function defineSteps(templateDesign) {
// Suggest steps based on purpose
const suggestedSteps = getSuggestedSteps(templateDesign.purpose);
const customize = await AskUserQuestion({
questions: [{
question: "Use suggested steps or customize?",
header: "Steps",
options: [
{ label: "Use Suggested", description: `Suggested: ${suggestedSteps.map(s => s.cmd).join(' → ')}` },
{ label: "Customize", description: "Modify or add custom steps" },
{ label: "Start Empty", description: "Define all steps from scratch" }
],
multiSelect: false
}]
});
if (customize.Steps === "Use Suggested") {
return suggestedSteps;
}
// Interactive step definition
const steps = [];
let addMore = true;
while (addMore) {
const category = await selectCommandCategory();
const command = await selectCommand(category.Category);
const unit = await selectExecutionUnit();
const execMode = await selectExecutionMode();
const contextHint = await askContextHint(command.Command);
steps.push({
cmd: command.Command,
args: command.Command.includes('plan') || command.Command.includes('fix') ? '"{{goal}}"' : undefined,
unit: unit.Unit,
execution: {
type: "slash-command",
mode: execMode.Mode
},
contextHint: contextHint
});
const continueAdding = await AskUserQuestion({
questions: [{
question: `Added step ${steps.length}: ${command.Command}. Add another?`,
header: "Continue",
options: [
{ label: "Add More", description: "Define another step" },
{ label: "Done", description: "Finish step definition" }
],
multiSelect: false
}]
});
addMore = continueAdding.Continue === "Add More";
}
return steps;
}
```
---
## Suggested Step Templates
### Feature Development (Level 2 - Rapid)
```json
{
"name": "rapid",
"description": "Quick implementation with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-plan", "args": "\"{{goal}}\"", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight implementation plan" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute implementation based on plan" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle until pass rate >= 95%" }
]
}
```
### Feature Development (Level 3 - Coupled)
```json
{
"name": "coupled",
"description": "Full workflow with verification, review, and testing",
"level": 3,
"steps": [
{ "cmd": "/workflow:plan", "args": "\"{{goal}}\"", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed implementation plan" },
{ "cmd": "/workflow:plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan against requirements" },
{ "cmd": "/workflow:execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
### Bug Fix (Level 2)
```json
{
"name": "bugfix",
"description": "Bug diagnosis and fix with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-fix", "args": "\"{{goal}}\"", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Diagnose and plan bug fix" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute bug fix" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate regression tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fix with tests" }
]
}
```
### Bug Fix Hotfix (Level 2)
```json
{
"name": "bugfix-hotfix",
"description": "Urgent production bug fix (no tests)",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-fix", "args": "--hotfix \"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Emergency hotfix mode" }
]
}
```
### TDD Development (Level 3)
```json
{
"name": "tdd",
"description": "Test-driven development with Red-Green-Refactor",
"level": 3,
"steps": [
{ "cmd": "/workflow:tdd-plan", "args": "\"{{goal}}\"", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create TDD task chain" },
{ "cmd": "/workflow:execute", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute TDD cycle" },
{ "cmd": "/workflow:tdd-verify", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify TDD compliance" }
]
}
```
### Code Review (Level 3)
```json
{
"name": "review",
"description": "Code review cycle with fixes and testing",
"level": 3,
"steps": [
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests for fixes" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fixes pass tests" }
]
}
```
### Test Fix (Level 3)
```json
{
"name": "test-fix",
"description": "Fix failing tests",
"level": 3,
"steps": [
{ "cmd": "/workflow:test-fix-gen", "args": "\"{{goal}}\"", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test fix tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
### Issue Workflow (Level Issue)
```json
{
"name": "issue",
"description": "Complete issue lifecycle",
"level": "Issue",
"steps": [
{ "cmd": "/issue:discover", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Discover issues from codebase" },
{ "cmd": "/issue:plan", "args": "--all-pending", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Plan issue solutions" },
{ "cmd": "/issue:queue", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "unit": "issue-workflow", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### Rapid to Issue (Level 2.5)
```json
{
"name": "rapid-to-issue",
"description": "Bridge lightweight planning to issue workflow",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-plan", "args": "\"{{goal}}\"", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight plan" },
{ "cmd": "/issue:convert-to-plan", "args": "--latest-lite-plan -y", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Convert to issue plan" },
{ "cmd": "/issue:queue", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "args": "--queue auto", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### Brainstorm to Issue (Level 4)
```json
{
"name": "brainstorm-to-issue",
"description": "Bridge brainstorm session to issue workflow",
"level": 4,
"steps": [
{ "cmd": "/issue:from-brainstorm", "args": "SESSION=\"{{session}}\" --auto", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Convert brainstorm to issue" },
{ "cmd": "/issue:queue", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "args": "--queue auto", "unit": "brainstorm-to-issue", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
]
}
```
### With-File: Brainstorm (Level 4)
```json
{
"name": "brainstorm",
"description": "Multi-perspective ideation with documentation",
"level": 4,
"steps": [
{ "cmd": "/workflow:brainstorm-with-file", "args": "\"{{goal}}\"", "unit": "brainstorm-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-CLI brainstorming with documented diverge-converge cycles" }
]
}
```
### With-File: Debug (Level 3)
```json
{
"name": "debug",
"description": "Hypothesis-driven debugging with documentation",
"level": 3,
"steps": [
{ "cmd": "/workflow:debug-with-file", "args": "\"{{goal}}\"", "unit": "debug-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Hypothesis-driven debugging with Gemini validation" }
]
}
```
### With-File: Analyze (Level 3)
```json
{
"name": "analyze",
"description": "Collaborative analysis with documentation",
"level": 3,
"steps": [
{ "cmd": "/workflow:analyze-with-file", "args": "\"{{goal}}\"", "unit": "analyze-with-file", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-round collaborative analysis with CLI exploration" }
]
}
```
### Full Workflow (Level 4)
```json
{
"name": "full",
"description": "Complete workflow: brainstorm → plan → execute → test",
"level": 4,
"steps": [
{ "cmd": "/workflow:brainstorm:auto-parallel", "args": "\"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Parallel multi-perspective brainstorming" },
{ "cmd": "/workflow:plan", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed plan from brainstorm" },
{ "cmd": "/workflow:plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan quality" },
{ "cmd": "/workflow:execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate comprehensive tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
]
}
```
### Multi-CLI Planning (Level 3)
```json
{
"name": "multi-cli-plan",
"description": "Multi-CLI collaborative planning with cross-verification",
"level": 3,
"steps": [
{ "cmd": "/workflow:multi-cli-plan", "args": "\"{{goal}}\"", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Gemini+Codex+Claude collaborative planning" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute converged plan" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
]
}
```
### Ultra-Lightweight (Level 1)
```json
{
"name": "lite-lite-lite",
"description": "Ultra-lightweight multi-tool execution",
"level": 1,
"steps": [
{ "cmd": "/workflow:lite-lite-lite", "args": "\"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Direct execution with minimal overhead" }
]
}
```
---
## Command Port Reference
Each command has input/output ports for pipeline composition:
| Command | Input Port | Output Port | Atomic Unit |
|---------|------------|-------------|-------------|
| **Planning** |
| lite-plan | requirement | plan | quick-implementation |
| plan | requirement | detailed-plan | full-planning-execution |
| plan-verify | detailed-plan | verified-plan | verified-planning-execution |
| multi-cli-plan | requirement | multi-cli-plan | multi-cli-planning |
| tdd-plan | requirement | tdd-tasks | tdd-planning-execution |
| replan | session, feedback | replan | replanning-execution |
| **Execution** |
| lite-execute | plan, multi-cli-plan, lite-fix | code | (multiple) |
| execute | detailed-plan, verified-plan, replan, tdd-tasks | code | (multiple) |
| **Testing** |
| test-fix-gen | failing-tests, session | test-tasks | test-validation |
| test-cycle-execute | test-tasks | test-passed | test-validation |
| test-gen | code, session | test-tasks | test-generation-execution |
| tdd-verify | code | tdd-verified | standalone |
| **Review** |
| review-session-cycle | code, session | review-verified | code-review |
| review-module-cycle | module-pattern | review-verified | code-review |
| review-cycle-fix | review-findings | fixed-code | code-review |
| **Bug Fix** |
| lite-fix | bug-report | lite-fix | bug-fix |
| debug-with-file | bug-report | understanding-document | debug-with-file |
| **With-File** |
| brainstorm-with-file | exploration-topic | brainstorm-document | brainstorm-with-file |
| analyze-with-file | analysis-topic | discussion-document | analyze-with-file |
| **Issue** |
| issue:discover | codebase | pending-issues | issue-workflow |
| issue:plan | pending-issues | issue-plans | issue-workflow |
| issue:queue | issue-plans, converted-plan | execution-queue | issue-workflow |
| issue:execute | execution-queue | completed-issues | issue-workflow |
| issue:convert-to-plan | plan | converted-plan | rapid-to-issue |
| issue:from-brainstorm | brainstorm-document | converted-plan | brainstorm-to-issue |
---
## Minimum Execution Units (最小执行单元)
**Definition**: Commands that must execute together as an atomic group.
| Unit Name | Commands | Purpose |
|-----------|----------|---------|
| **quick-implementation** | lite-plan → lite-execute | Lightweight plan and execution |
| **multi-cli-planning** | multi-cli-plan → lite-execute | Multi-perspective planning and execution |
| **bug-fix** | lite-fix → lite-execute | Bug diagnosis and fix |
| **full-planning-execution** | plan → execute | Detailed planning and execution |
| **verified-planning-execution** | plan → plan-verify → execute | Planning with verification |
| **replanning-execution** | replan → execute | Update plan and execute |
| **tdd-planning-execution** | tdd-plan → execute | TDD planning and execution |
| **test-validation** | test-fix-gen → test-cycle-execute | Test generation and fix cycle |
| **test-generation-execution** | test-gen → execute | Generate and execute tests |
| **code-review** | review-*-cycle → review-cycle-fix | Review and fix findings |
| **issue-workflow** | discover → plan → queue → execute | Complete issue lifecycle |
| **rapid-to-issue** | lite-plan → convert-to-plan → queue → execute | Bridge to issue workflow |
| **brainstorm-to-issue** | from-brainstorm → queue → execute | Brainstorm to issue bridge |
| **brainstorm-with-file** | (self-contained) | Multi-perspective ideation |
| **debug-with-file** | (self-contained) | Hypothesis-driven debugging |
| **analyze-with-file** | (self-contained) | Collaborative analysis |
---
## Phase 3: Generate JSON
```javascript
async function generateTemplate(design, steps, outputPath) {
const template = {
name: design.name,
description: design.description,
level: design.level,
steps: steps
};
const finalPath = outputPath || `~/.claude/skills/flow-coordinator/templates/${design.name}.json`;
// Write template
Write(finalPath, JSON.stringify(template, null, 2));
// Validate
const validation = validateTemplate(template);
console.log(`✅ Template created: ${finalPath}`);
console.log(` Steps: ${template.steps.length}`);
console.log(` Level: ${template.level}`);
console.log(` Units: ${[...new Set(template.steps.map(s => s.unit))].join(', ')}`);
return { path: finalPath, template, validation };
}
```
---
## Output Format
```json
{
"name": "template-name",
"description": "Template description",
"level": 2,
"steps": [
{
"cmd": "/workflow:command",
"args": "\"{{goal}}\"",
"unit": "unit-name",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Description of what this step does"
}
]
}
```
---
## Examples
**Create a quick bugfix template**:
```
/meta-skill:flow-create hotfix-simple
→ Purpose: Bug Fix
→ Level: 2 (Lightweight)
→ Steps: Use Suggested
→ Output: ~/.claude/skills/flow-coordinator/templates/hotfix-simple.json
```
**Create a custom multi-stage workflow**:
```
/meta-skill:flow-create complex-feature --output ~/.claude/skills/my-project/templates/
→ Purpose: Feature Development
→ Level: 3 (Standard)
→ Steps: Customize
→ Step 1: /workflow:brainstorm:auto-parallel (standalone, mainprocess)
→ Step 2: /workflow:plan (verified-planning-execution, mainprocess)
→ Step 3: /workflow:plan-verify (verified-planning-execution, mainprocess)
→ Step 4: /workflow:execute (verified-planning-execution, async)
→ Step 5: /workflow:review-session-cycle (code-review, mainprocess)
→ Step 6: /workflow:review-cycle-fix (code-review, mainprocess)
→ Done
→ Output: ~/.claude/skills/my-project/templates/complex-feature.json
```

View File

@@ -1,718 +0,0 @@
---
name: convert-to-plan
description: Convert planning artifacts (lite-plan, workflow session, markdown) to issue solutions
argument-hint: "[-y|--yes] [--issue <id>] [--supplement] <SOURCE>"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Skip confirmation, auto-create issue and bind solution.
# Issue Convert-to-Plan Command (/issue:convert-to-plan)
## Overview
Converts various planning artifact formats into issue workflow solutions with intelligent detection and automatic binding.
**Supported Sources** (auto-detected):
- **lite-plan**: `.workflow/.lite-plan/{slug}/plan.json`
- **workflow-session**: `WFS-xxx` ID or `.workflow/active/{session}/` folder
- **markdown**: Any `.md` file with implementation/task content
- **json**: Direct JSON files matching plan-json-schema
## Quick Reference
```bash
# Convert lite-plan to new issue (auto-creates issue)
/issue:convert-to-plan ".workflow/.lite-plan/implement-auth-2026-01-25"
# Convert workflow session to existing issue
/issue:convert-to-plan WFS-auth-impl --issue GH-123
# Supplement existing solution with additional tasks
/issue:convert-to-plan "./docs/additional-tasks.md" --issue ISS-001 --supplement
# Auto mode - skip confirmations
/issue:convert-to-plan ".workflow/.lite-plan/my-plan" -y
```
## Command Options
| Option | Description | Default |
|--------|-------------|---------|
| `<SOURCE>` | Planning artifact path or WFS-xxx ID | Required |
| `--issue <id>` | Bind to existing issue instead of creating new | Auto-create |
| `--supplement` | Add tasks to existing solution (requires --issue) | false |
| `-y, --yes` | Skip all confirmations | false |
## Core Data Access Principle
**⚠️ Important**: Use CLI commands for all issue/solution operations.
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| Get issue | `ccw issue status <id> --json` | Read issues.jsonl directly |
| Create issue | `ccw issue init <id> --title "..."` | Write to issues.jsonl |
| Bind solution | `ccw issue bind <id> <sol-id>` | Edit issues.jsonl |
| List solutions | `ccw issue solutions --issue <id> --brief` | Read solutions/*.jsonl |
## Solution Schema Reference
Target format for all extracted data (from solution-schema.json):
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description?: string; // High-level summary
approach?: string; // Technical strategy
tasks: Task[]; // Required: at least 1 task
exploration_context?: object; // Optional: source context
analysis?: { risk, impact, complexity };
score?: number; // 0.0-1.0
is_bound: boolean;
created_at: string;
bound_at?: string;
}
interface Task {
id: string; // T1, T2, T3... (pattern: ^T[0-9]+$)
title: string; // Required: action verb + target
scope: string; // Required: module path or feature area
action: Action; // Required: Create|Update|Implement|...
description?: string;
modification_points?: Array<{file, target, change}>;
implementation: string[]; // Required: step-by-step guide
test?: { unit?, integration?, commands?, coverage_target? };
acceptance: { criteria: string[], verification: string[] }; // Required
commit?: { type, scope, message_template, breaking? };
depends_on?: string[];
priority?: number; // 1-5 (default: 3)
}
type Action = 'Create' | 'Update' | 'Implement' | 'Refactor' | 'Add' | 'Delete' | 'Configure' | 'Test' | 'Fix';
```
## Implementation
### Phase 1: Parse Arguments & Detect Source Type
```javascript
const input = userInput.trim();
const flags = parseFlags(userInput); // --issue, --supplement, -y/--yes
// Extract source path (first non-flag argument)
const source = extractSourceArg(input);
// Detect source type
function detectSourceType(source) {
// Check for WFS-xxx pattern (workflow session ID)
if (source.match(/^WFS-[\w-]+$/)) {
return { type: 'workflow-session-id', path: `.workflow/active/${source}` };
}
// Check if directory
const isDir = Bash(`test -d "${source}" && echo "dir" || echo "file"`).trim() === 'dir';
if (isDir) {
// Check for lite-plan indicator
const hasPlanJson = Bash(`test -f "${source}/plan.json" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasPlanJson) {
return { type: 'lite-plan', path: source };
}
// Check for workflow session indicator
const hasSession = Bash(`test -f "${source}/workflow-session.json" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasSession) {
return { type: 'workflow-session', path: source };
}
}
// Check file extensions
if (source.endsWith('.json')) {
return { type: 'json-file', path: source };
}
if (source.endsWith('.md')) {
return { type: 'markdown-file', path: source };
}
// Check if path exists at all
const exists = Bash(`test -e "${source}" && echo "yes" || echo "no"`).trim() === 'yes';
if (!exists) {
throw new Error(`E001: Source not found: ${source}`);
}
return { type: 'unknown', path: source };
}
const sourceInfo = detectSourceType(source);
if (sourceInfo.type === 'unknown') {
throw new Error(`E002: Unable to detect source format for: ${source}`);
}
console.log(`Detected source type: ${sourceInfo.type}`);
```
### Phase 2: Extract Data Using Format-Specific Extractor
```javascript
let extracted = { title: '', approach: '', tasks: [], metadata: {} };
switch (sourceInfo.type) {
case 'lite-plan':
extracted = extractFromLitePlan(sourceInfo.path);
break;
case 'workflow-session':
case 'workflow-session-id':
extracted = extractFromWorkflowSession(sourceInfo.path);
break;
case 'markdown-file':
extracted = await extractFromMarkdownAI(sourceInfo.path);
break;
case 'json-file':
extracted = extractFromJsonFile(sourceInfo.path);
break;
}
// Validate extraction
if (!extracted.tasks || extracted.tasks.length === 0) {
throw new Error('E006: No tasks extracted from source');
}
// Ensure task IDs are normalized to T1, T2, T3...
extracted.tasks = normalizeTaskIds(extracted.tasks);
console.log(`Extracted: ${extracted.tasks.length} tasks`);
```
#### Extractor: Lite-Plan
```javascript
function extractFromLitePlan(folderPath) {
const planJson = Read(`${folderPath}/plan.json`);
const plan = JSON.parse(planJson);
return {
title: plan.summary?.split('.')[0]?.trim() || 'Untitled Plan',
description: plan.summary,
approach: plan.approach,
tasks: plan.tasks.map(t => ({
id: t.id,
title: t.title,
scope: t.scope || '',
action: t.action || 'Implement',
description: t.description || t.title,
modification_points: t.modification_points || [],
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.verification ? {
unit: t.verification.unit_tests,
integration: t.verification.integration_tests,
commands: t.verification.manual_checks
} : {},
acceptance: {
criteria: Array.isArray(t.acceptance) ? t.acceptance : [t.acceptance || ''],
verification: t.verification?.manual_checks || []
},
depends_on: t.depends_on || [],
priority: 3
})),
metadata: {
source_type: 'lite-plan',
source_path: folderPath,
complexity: plan.complexity,
estimated_time: plan.estimated_time,
exploration_angles: plan._metadata?.exploration_angles || [],
original_timestamp: plan._metadata?.timestamp
}
};
}
```
#### Extractor: Workflow Session
```javascript
function extractFromWorkflowSession(sessionPath) {
// Load session metadata
const sessionJson = Read(`${sessionPath}/workflow-session.json`);
const session = JSON.parse(sessionJson);
// Load IMPL_PLAN.md for approach (if exists)
let approach = '';
const implPlanPath = `${sessionPath}/IMPL_PLAN.md`;
const hasImplPlan = Bash(`test -f "${implPlanPath}" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasImplPlan) {
const implPlan = Read(implPlanPath);
// Extract overview/approach section
const overviewMatch = implPlan.match(/##\s*(?:Overview|Approach|Strategy)\s*\n([\s\S]*?)(?=\n##|$)/i);
approach = overviewMatch?.[1]?.trim() || implPlan.split('\n').slice(0, 10).join('\n');
}
// Load all task JSONs from .task folder
const taskFiles = Glob({ pattern: `${sessionPath}/.task/IMPL-*.json` });
const tasks = taskFiles.map(f => {
const taskJson = Read(f);
const task = JSON.parse(taskJson);
return {
id: task.id?.replace(/^IMPL-0*/, 'T') || 'T1', // IMPL-001 → T1
title: task.title,
scope: task.scope || inferScopeFromTask(task),
action: capitalizeAction(task.type) || 'Implement',
description: task.description,
modification_points: task.implementation?.modification_points || [],
implementation: task.implementation?.steps || [],
test: task.implementation?.test || {},
acceptance: {
criteria: task.acceptance_criteria || [],
verification: task.verification_steps || []
},
commit: task.commit,
depends_on: (task.depends_on || []).map(d => d.replace(/^IMPL-0*/, 'T')),
priority: task.priority || 3
};
});
return {
title: session.name || session.description?.split('.')[0] || 'Workflow Session',
description: session.description || session.name,
approach: approach || session.description,
tasks: tasks,
metadata: {
source_type: 'workflow-session',
source_path: sessionPath,
session_id: session.id,
created_at: session.created_at
}
};
}
function inferScopeFromTask(task) {
if (task.implementation?.modification_points?.length) {
const files = task.implementation.modification_points.map(m => m.file);
// Find common directory prefix
const dirs = files.map(f => f.split('/').slice(0, -1).join('/'));
return [...new Set(dirs)][0] || '';
}
return '';
}
function capitalizeAction(type) {
if (!type) return 'Implement';
const map = { feature: 'Implement', bugfix: 'Fix', refactor: 'Refactor', test: 'Test', docs: 'Update' };
return map[type.toLowerCase()] || type.charAt(0).toUpperCase() + type.slice(1);
}
```
#### Extractor: Markdown (AI-Assisted via Gemini)
```javascript
async function extractFromMarkdownAI(filePath) {
const fileContent = Read(filePath);
// Use Gemini CLI for intelligent extraction
const cliPrompt = `PURPOSE: Extract implementation plan from markdown document for issue solution conversion. Must output ONLY valid JSON.
TASK: • Analyze document structure • Identify title/summary • Extract approach/strategy section • Parse tasks from any format (lists, tables, sections, code blocks) • Normalize each task to solution schema
MODE: analysis
CONTEXT: Document content provided below
EXPECTED: Valid JSON object with format:
{
"title": "extracted title",
"approach": "extracted approach/strategy",
"tasks": [
{
"id": "T1",
"title": "task title",
"scope": "module or feature area",
"action": "Implement|Update|Create|Fix|Refactor|Add|Delete|Configure|Test",
"description": "what to do",
"implementation": ["step 1", "step 2"],
"acceptance": ["criteria 1", "criteria 2"]
}
]
}
CONSTRAINTS: Output ONLY valid JSON - no markdown, no explanation | Action must be one of: Create, Update, Implement, Refactor, Add, Delete, Configure, Test, Fix | Tasks must have id, title, scope, action, implementation (array), acceptance (array)
DOCUMENT CONTENT:
${fileContent}`;
// Execute Gemini CLI
const result = Bash(`ccw cli -p '${cliPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis`, { timeout: 120000 });
// Parse JSON from result (may be wrapped in markdown code block)
let jsonText = result.trim();
const jsonMatch = jsonText.match(/```(?:json)?\s*([\s\S]*?)```/);
if (jsonMatch) {
jsonText = jsonMatch[1].trim();
}
try {
const extracted = JSON.parse(jsonText);
// Normalize tasks
const tasks = (extracted.tasks || []).map((t, i) => ({
id: t.id || `T${i + 1}`,
title: t.title || 'Untitled task',
scope: t.scope || '',
action: validateAction(t.action) || 'Implement',
description: t.description || t.title,
modification_points: t.modification_points || [],
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.test || {},
acceptance: {
criteria: Array.isArray(t.acceptance) ? t.acceptance : [t.acceptance || ''],
verification: t.verification || []
},
depends_on: t.depends_on || [],
priority: t.priority || 3
}));
return {
title: extracted.title || 'Extracted Plan',
description: extracted.summary || extracted.title,
approach: extracted.approach || '',
tasks: tasks,
metadata: {
source_type: 'markdown',
source_path: filePath,
extraction_method: 'gemini-ai'
}
};
} catch (e) {
// Provide more context for debugging
throw new Error(`E005: Failed to extract tasks from markdown. Gemini response was not valid JSON. Error: ${e.message}. Response preview: ${jsonText.substring(0, 200)}...`);
}
}
function validateAction(action) {
const validActions = ['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete', 'Configure', 'Test', 'Fix'];
if (!action) return null;
const normalized = action.charAt(0).toUpperCase() + action.slice(1).toLowerCase();
return validActions.includes(normalized) ? normalized : null;
}
```
#### Extractor: JSON File
```javascript
function extractFromJsonFile(filePath) {
const content = Read(filePath);
const plan = JSON.parse(content);
// Detect if it's already solution format or plan format
if (plan.tasks && Array.isArray(plan.tasks)) {
// Map tasks to normalized format
const tasks = plan.tasks.map((t, i) => ({
id: t.id || `T${i + 1}`,
title: t.title,
scope: t.scope || '',
action: t.action || 'Implement',
description: t.description || t.title,
modification_points: t.modification_points || [],
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.test || t.verification || {},
acceptance: normalizeAcceptance(t.acceptance),
depends_on: t.depends_on || [],
priority: t.priority || 3
}));
return {
title: plan.summary?.split('.')[0] || plan.title || 'JSON Plan',
description: plan.summary || plan.description,
approach: plan.approach,
tasks: tasks,
metadata: {
source_type: 'json',
source_path: filePath,
complexity: plan.complexity,
original_metadata: plan._metadata
}
};
}
throw new Error('E002: JSON file does not contain valid plan structure (missing tasks array)');
}
function normalizeAcceptance(acceptance) {
if (!acceptance) return { criteria: [], verification: [] };
if (typeof acceptance === 'object' && acceptance.criteria) return acceptance;
if (Array.isArray(acceptance)) return { criteria: acceptance, verification: [] };
return { criteria: [String(acceptance)], verification: [] };
}
```
### Phase 3: Normalize Task IDs
```javascript
function normalizeTaskIds(tasks) {
return tasks.map((t, i) => ({
...t,
id: `T${i + 1}`,
// Also normalize depends_on references
depends_on: (t.depends_on || []).map(d => {
// Handle various ID formats: IMPL-001, T1, 1, etc.
const num = d.match(/\d+/)?.[0];
return num ? `T${parseInt(num)}` : d;
})
}));
}
```
### Phase 4: Resolve Issue (Create or Find)
```javascript
let issueId = flags.issue;
let existingSolution = null;
if (issueId) {
// Validate issue exists
let issueCheck;
try {
issueCheck = Bash(`ccw issue status ${issueId} --json 2>/dev/null`).trim();
if (!issueCheck || issueCheck === '') {
throw new Error('empty response');
}
} catch (e) {
throw new Error(`E003: Issue not found: ${issueId}`);
}
const issue = JSON.parse(issueCheck);
// Check if issue already has bound solution
if (issue.bound_solution_id && !flags.supplement) {
throw new Error(`E004: Issue ${issueId} already has bound solution (${issue.bound_solution_id}). Use --supplement to add tasks.`);
}
// Load existing solution for supplement mode
if (flags.supplement && issue.bound_solution_id) {
try {
const solResult = Bash(`ccw issue solution ${issue.bound_solution_id} --json`).trim();
existingSolution = JSON.parse(solResult);
console.log(`Loaded existing solution with ${existingSolution.tasks.length} tasks`);
} catch (e) {
throw new Error(`Failed to load existing solution: ${e.message}`);
}
}
} else {
// Create new issue via ccw issue create (auto-generates correct ID)
// Smart extraction: title from content, priority from complexity
const title = extracted.title || 'Converted Plan';
const context = extracted.description || extracted.approach || title;
// Auto-determine priority based on complexity
const complexityMap = { high: 2, medium: 3, low: 4 };
const priority = complexityMap[extracted.metadata.complexity?.toLowerCase()] || 3;
try {
// Use heredoc to avoid shell escaping issues
const createResult = Bash(`ccw issue create << 'EOF'
{
"title": ${JSON.stringify(title)},
"context": ${JSON.stringify(context)},
"priority": ${priority},
"source": "converted"
}
EOF`).trim();
// Parse result to get created issue ID
const created = JSON.parse(createResult);
issueId = created.id;
console.log(`Created issue: ${issueId} (priority: ${priority})`);
} catch (e) {
throw new Error(`Failed to create issue: ${e.message}`);
}
}
```
### Phase 5: Generate Solution
```javascript
// Generate solution ID
function generateSolutionId(issueId) {
const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
let uid = '';
for (let i = 0; i < 4; i++) {
uid += chars[Math.floor(Math.random() * chars.length)];
}
return `SOL-${issueId}-${uid}`;
}
let solution;
const solutionId = generateSolutionId(issueId);
if (flags.supplement && existingSolution) {
// Supplement mode: merge with existing solution
const maxTaskId = Math.max(...existingSolution.tasks.map(t => parseInt(t.id.slice(1))));
const newTasks = extracted.tasks.map((t, i) => ({
...t,
id: `T${maxTaskId + i + 1}`
}));
solution = {
...existingSolution,
tasks: [...existingSolution.tasks, ...newTasks],
approach: existingSolution.approach + '\n\n[Supplementary] ' + (extracted.approach || ''),
updated_at: new Date().toISOString()
};
console.log(`Supplementing: ${existingSolution.tasks.length} existing + ${newTasks.length} new = ${solution.tasks.length} total tasks`);
} else {
// New solution
solution = {
id: solutionId,
description: extracted.description || extracted.title,
approach: extracted.approach,
tasks: extracted.tasks,
exploration_context: extracted.metadata.exploration_angles ? {
exploration_angles: extracted.metadata.exploration_angles
} : undefined,
analysis: {
risk: 'medium',
impact: 'medium',
complexity: extracted.metadata.complexity?.toLowerCase() || 'medium'
},
is_bound: false,
created_at: new Date().toISOString(),
_conversion_metadata: {
source_type: extracted.metadata.source_type,
source_path: extracted.metadata.source_path,
converted_at: new Date().toISOString()
}
};
}
```
### Phase 6: Confirm & Persist
```javascript
// Display preview
console.log(`
## Conversion Summary
**Issue**: ${issueId}
**Solution**: ${flags.supplement ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Mode**: ${flags.supplement ? 'Supplement' : 'New'}
### Tasks:
${solution.tasks.map(t => `- ${t.id}: ${t.title} [${t.action}]`).join('\n')}
`);
// Confirm if not auto mode
if (!flags.yes && !flags.y) {
const confirm = AskUserQuestion({
questions: [{
question: `Create solution for issue ${issueId} with ${solution.tasks.length} tasks?`,
header: 'Confirm',
multiSelect: false,
options: [
{ label: 'Yes, create solution', description: 'Create and bind solution' },
{ label: 'Cancel', description: 'Abort without changes' }
]
}]
});
if (!confirm.answers?.['Confirm']?.includes('Yes')) {
console.log('Cancelled.');
return;
}
}
// Persist solution (following issue-plan-agent pattern)
Bash(`mkdir -p .workflow/issues/solutions`);
const solutionFile = `.workflow/issues/solutions/${issueId}.jsonl`;
if (flags.supplement) {
// Supplement mode: update existing solution line atomically
try {
const existingContent = Read(solutionFile);
const lines = existingContent.trim().split('\n').filter(l => l);
const updatedLines = lines.map(line => {
const sol = JSON.parse(line);
if (sol.id === existingSolution.id) {
return JSON.stringify(solution);
}
return line;
});
// Atomic write: write entire content at once
Write({ file_path: solutionFile, content: updatedLines.join('\n') + '\n' });
console.log(`✓ Updated solution: ${existingSolution.id}`);
} catch (e) {
throw new Error(`Failed to update solution: ${e.message}`);
}
// Note: No need to rebind - solution is already bound to issue
} else {
// New solution: append to JSONL file (following issue-plan-agent pattern)
try {
const solutionLine = JSON.stringify(solution);
// Read existing content, append new line, write atomically
const existing = Bash(`test -f "${solutionFile}" && cat "${solutionFile}" || echo ""`).trim();
const newContent = existing ? existing + '\n' + solutionLine + '\n' : solutionLine + '\n';
Write({ file_path: solutionFile, content: newContent });
console.log(`✓ Created solution: ${solutionId}`);
} catch (e) {
throw new Error(`Failed to write solution: ${e.message}`);
}
// Bind solution to issue
try {
Bash(`ccw issue bind ${issueId} ${solutionId}`);
console.log(`✓ Bound solution to issue`);
} catch (e) {
// Cleanup: remove solution file on bind failure
try {
Bash(`rm -f "${solutionFile}"`);
} catch (cleanupError) {
// Ignore cleanup errors
}
throw new Error(`Failed to bind solution: ${e.message}`);
}
// Update issue status to planned
try {
Bash(`ccw issue update ${issueId} --status planned`);
} catch (e) {
throw new Error(`Failed to update issue status: ${e.message}`);
}
}
```
### Phase 7: Summary
```javascript
console.log(`
## Done
**Issue**: ${issueId}
**Solution**: ${flags.supplement ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Status**: planned
### Next Steps:
- \`/issue:queue\` → Form execution queue
- \`ccw issue status ${issueId}\` → View issue details
- \`ccw issue solution ${flags.supplement ? existingSolution.id : solutionId}\` → View solution
`);
```
## Error Handling
| Error | Code | Resolution |
|-------|------|------------|
| Source not found | E001 | Check path exists |
| Invalid source format | E002 | Verify file contains valid plan structure |
| Issue not found | E003 | Check issue ID or omit --issue to create new |
| Solution already bound | E004 | Use --supplement to add tasks |
| AI extraction failed | E005 | Check markdown structure, try simpler format |
| No tasks extracted | E006 | Source must contain at least 1 task |
## Related Commands
- `/issue:plan` - Generate solutions from issue exploration
- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with DAG parallelism
- `ccw issue status <id>` - View issue details
- `ccw issue solution <id>` - View solution details

View File

@@ -1,768 +0,0 @@
---
name: issue:discover-by-prompt
description: Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).
argument-hint: "[-y|--yes] <prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]"
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*), mcp__exa__search(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-continue all iterations, skip confirmations.
# Issue Discovery by Prompt
## Quick Start
```bash
# Discover issues based on user description
/issue:discover-by-prompt "Check if frontend API calls match backend implementations"
# Compare specific modules
/issue:discover-by-prompt "Verify auth flow consistency between mobile and web clients" --scope=src/auth/**,src/mobile/**
# Deep exploration with more iterations
/issue:discover-by-prompt "Find all places where error handling is inconsistent" --depth=deep --max-iterations=8
# Focused backend-frontend contract check
/issue:discover-by-prompt "Compare REST API definitions with frontend fetch calls"
```
**Core Difference from `/issue:discover`**:
- `discover`: Pre-defined perspectives (bug, security, etc.), parallel execution
- `discover-by-prompt`: User-driven prompt, Gemini-planned strategy, iterative exploration
## What & Why
### Core Concept
Prompt-driven issue discovery with intelligent planning. Instead of fixed perspectives, this command:
1. **Analyzes user intent** via Gemini to understand what to find
2. **Plans exploration strategy** dynamically based on codebase structure
3. **Executes iterative multi-agent exploration** with feedback loops
4. **Performs cross-module comparison** when detecting comparison intent
### Value Proposition
1. **Natural Language Input**: Describe what you want to find, not how to find it
2. **Intelligent Planning**: Gemini designs optimal exploration strategy
3. **Iterative Refinement**: Each round builds on previous discoveries
4. **Cross-Module Analysis**: Compare frontend/backend, mobile/web, old/new implementations
5. **Adaptive Exploration**: Adjusts direction based on findings
### Use Cases
| Scenario | Example Prompt |
|----------|----------------|
| API Contract | "Check if frontend calls match backend endpoints" |
| Error Handling | "Find inconsistent error handling patterns" |
| Migration Gap | "Compare old auth with new auth implementation" |
| Feature Parity | "Verify mobile has all web features" |
| Schema Drift | "Check if TypeScript types match API responses" |
| Integration | "Find mismatches between service A and service B" |
## How It Works
### Execution Flow
```
Phase 1: Prompt Analysis & Initialization
├─ Parse user prompt and flags
├─ Detect exploration intent (comparison/search/verification)
└─ Initialize discovery session
Phase 1.5: ACE Context Gathering
├─ Use ACE semantic search to understand codebase structure
├─ Identify relevant modules based on prompt keywords
├─ Collect architecture context for Gemini planning
└─ Build initial context package
Phase 2: Gemini Strategy Planning
├─ Feed ACE context + prompt to Gemini CLI
├─ Gemini analyzes and generates exploration strategy
├─ Create exploration dimensions with search targets
├─ Define comparison matrix (if comparison intent)
└─ Set success criteria and iteration limits
Phase 3: Iterative Agent Exploration (with ACE)
├─ Iteration 1: Initial exploration by assigned agents
│ ├─ Agent A: ACE search + explore dimension 1
│ ├─ Agent B: ACE search + explore dimension 2
│ └─ Collect findings, update shared context
├─ Iteration 2-N: Refined exploration
│ ├─ Analyze previous findings
│ ├─ ACE search for related code paths
│ ├─ Execute targeted exploration
│ └─ Update cumulative findings
└─ Termination: Max iterations or convergence
Phase 4: Cross-Analysis & Synthesis
├─ Compare findings across dimensions
├─ Identify discrepancies and issues
├─ Calculate confidence scores
└─ Generate issue candidates
Phase 5: Issue Generation & Summary
├─ Convert findings to issue format
├─ Write discovery outputs
└─ Prompt user for next action
```
### Exploration Dimensions
Dimensions are **dynamically generated by Gemini** based on the user prompt. Not limited to predefined categories.
**Examples**:
| Prompt | Generated Dimensions |
|--------|---------------------|
| "Check API contracts" | frontend-calls, backend-handlers |
| "Find auth issues" | auth-module (single dimension) |
| "Compare old/new implementations" | legacy-code, new-code |
| "Audit payment flow" | payment-service, validation, logging |
| "Find error handling gaps" | error-handlers, error-types, recovery-logic |
Gemini analyzes the prompt + ACE context to determine:
- How many dimensions are needed (1 to N)
- What each dimension should focus on
- Whether comparison is needed between dimensions
### Iteration Strategy
```
┌─────────────────────────────────────────────────────────────┐
│ Iteration Loop │
├─────────────────────────────────────────────────────────────┤
│ 1. Plan: What to explore this iteration │
│ └─ Based on: previous findings + unexplored areas │
│ │
│ 2. Execute: Launch agents for this iteration │
│ └─ Each agent: explore → collect → return summary │
│ │
│ 3. Analyze: Process iteration results │
│ └─ New findings? Gaps? Contradictions? │
│ │
│ 4. Decide: Continue or terminate │
│ └─ Terminate if: max iterations OR convergence OR │
│ high confidence on all questions │
└─────────────────────────────────────────────────────────────┘
```
## Core Responsibilities
### Phase 1: Prompt Analysis & Initialization
```javascript
// Step 1: Parse arguments
const { prompt, scope, depth, maxIterations } = parseArgs(args);
// Step 2: Generate discovery ID
const discoveryId = `DBP-${formatDate(new Date(), 'YYYYMMDD-HHmmss')}`;
// Step 3: Create output directory
const outputDir = `.workflow/issues/discoveries/${discoveryId}`;
await mkdir(outputDir, { recursive: true });
await mkdir(`${outputDir}/iterations`, { recursive: true });
// Step 4: Detect intent type from prompt
const intentType = detectIntent(prompt);
// Returns: 'comparison' | 'search' | 'verification' | 'audit'
// Step 5: Initialize discovery state
await writeJson(`${outputDir}/discovery-state.json`, {
discovery_id: discoveryId,
type: 'prompt-driven',
prompt: prompt,
intent_type: intentType,
scope: scope || '**/*',
depth: depth || 'standard',
max_iterations: maxIterations || 5,
phase: 'initialization',
created_at: new Date().toISOString(),
iterations: [],
cumulative_findings: [],
comparison_matrix: null // filled for comparison intent
});
```
### Phase 1.5: ACE Context Gathering
**Purpose**: Use ACE semantic search to gather codebase context before Gemini planning.
```javascript
// Step 1: Extract keywords from prompt for semantic search
const keywords = extractKeywords(prompt);
// e.g., "frontend API calls match backend" → ["frontend", "API", "backend", "endpoints"]
// Step 2: Use ACE to understand codebase structure
const aceQueries = [
`Project architecture and module structure for ${keywords.join(', ')}`,
`Where are ${keywords[0]} implementations located?`,
`How does ${keywords.slice(0, 2).join(' ')} work in this codebase?`
];
const aceResults = [];
for (const query of aceQueries) {
const result = await mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: query
});
aceResults.push({ query, result });
}
// Step 3: Build context package for Gemini (kept in memory)
const aceContext = {
prompt_keywords: keywords,
codebase_structure: aceResults[0].result,
relevant_modules: aceResults.slice(1).map(r => r.result),
detected_patterns: extractPatterns(aceResults)
};
// Step 4: Update state (no separate file)
await updateDiscoveryState(outputDir, {
phase: 'context-gathered',
ace_context: {
queries_executed: aceQueries.length,
modules_identified: aceContext.relevant_modules.length
}
});
// aceContext passed to Phase 2 in memory
```
**ACE Query Strategy by Intent Type**:
| Intent | ACE Queries |
|--------|-------------|
| **comparison** | "frontend API calls", "backend API handlers", "API contract definitions" |
| **search** | "{keyword} implementations", "{keyword} usage patterns" |
| **verification** | "expected behavior for {feature}", "test coverage for {feature}" |
| **audit** | "all {category} patterns", "{category} security concerns" |
### Phase 2: Gemini Strategy Planning
**Purpose**: Gemini analyzes user prompt + ACE context to design optimal exploration strategy.
```javascript
// Step 1: Load ACE context gathered in Phase 1.5
const aceContext = await readJson(`${outputDir}/ace-context.json`);
// Step 2: Build Gemini planning prompt with ACE context
const planningPrompt = `
PURPOSE: Analyze discovery prompt and create exploration strategy based on codebase context
TASK:
• Parse user intent from prompt: "${prompt}"
• Use codebase context to identify specific modules and files to explore
• Create exploration dimensions with precise search targets
• Define comparison matrix structure (if comparison intent)
• Set success criteria and iteration strategy
MODE: analysis
CONTEXT: @${scope || '**/*'} | Discovery type: ${intentType}
## Codebase Context (from ACE semantic search)
${JSON.stringify(aceContext, null, 2)}
EXPECTED: JSON exploration plan following exploration-plan-schema.json:
{
"intent_analysis": { "type": "${intentType}", "primary_question": "...", "sub_questions": [...] },
"dimensions": [{ "name": "...", "description": "...", "search_targets": [...], "focus_areas": [...], "agent_prompt": "..." }],
"comparison_matrix": { "dimension_a": "...", "dimension_b": "...", "comparison_points": [...] },
"success_criteria": [...],
"estimated_iterations": N,
"termination_conditions": [...]
}
CONSTRAINTS: Use ACE context to inform targets | Focus on actionable plan
`;
// Step 3: Execute Gemini planning
Bash({
command: `ccw cli -p "${planningPrompt}" --tool gemini --mode analysis`,
run_in_background: true,
timeout: 300000
});
// Step 4: Parse Gemini output and validate against schema
const explorationPlan = await parseGeminiPlanOutput(geminiResult);
validateAgainstSchema(explorationPlan, 'exploration-plan-schema.json');
// Step 5: Enhance plan with ACE-discovered file paths
explorationPlan.dimensions = explorationPlan.dimensions.map(dim => ({
...dim,
ace_suggested_files: aceContext.relevant_modules
.filter(m => m.relevance_to === dim.name)
.map(m => m.file_path)
}));
// Step 6: Update state (plan kept in memory, not persisted)
await updateDiscoveryState(outputDir, {
phase: 'planned',
exploration_plan: {
dimensions_count: explorationPlan.dimensions.length,
has_comparison_matrix: !!explorationPlan.comparison_matrix,
estimated_iterations: explorationPlan.estimated_iterations
}
});
// explorationPlan passed to Phase 3 in memory
```
**Gemini Planning Responsibilities**:
| Responsibility | Input | Output |
|----------------|-------|--------|
| Intent Analysis | User prompt | type, primary_question, sub_questions |
| Dimension Design | ACE context + prompt | dimensions with search_targets |
| Comparison Matrix | Intent type + modules | comparison_points (if applicable) |
| Iteration Strategy | Depth setting | estimated_iterations, termination_conditions |
**Gemini Planning Output Schema**:
```json
{
"intent_analysis": {
"type": "comparison|search|verification|audit",
"primary_question": "string",
"sub_questions": ["string"]
},
"dimensions": [
{
"name": "frontend",
"description": "Client-side API calls and error handling",
"search_targets": ["src/api/**", "src/hooks/**"],
"focus_areas": ["fetch calls", "error boundaries", "response parsing"],
"agent_prompt": "Explore frontend API consumption patterns..."
},
{
"name": "backend",
"description": "Server-side API implementations",
"search_targets": ["src/server/**", "src/routes/**"],
"focus_areas": ["endpoint handlers", "response schemas", "error responses"],
"agent_prompt": "Explore backend API implementations..."
}
],
"comparison_matrix": {
"dimension_a": "frontend",
"dimension_b": "backend",
"comparison_points": [
{"aspect": "endpoints", "frontend_check": "fetch URLs", "backend_check": "route paths"},
{"aspect": "methods", "frontend_check": "HTTP methods used", "backend_check": "methods accepted"},
{"aspect": "payloads", "frontend_check": "request body structure", "backend_check": "expected schema"},
{"aspect": "responses", "frontend_check": "response parsing", "backend_check": "response format"},
{"aspect": "errors", "frontend_check": "error handling", "backend_check": "error responses"}
]
},
"success_criteria": [
"All API endpoints mapped between frontend and backend",
"Discrepancies identified with file:line references",
"Each finding includes remediation suggestion"
],
"estimated_iterations": 3,
"termination_conditions": [
"All comparison points verified",
"No new findings in last iteration",
"Confidence > 0.8 on primary question"
]
}
```
### Phase 3: Iterative Agent Exploration (with ACE)
**Purpose**: Multi-agent iterative exploration using ACE for semantic search within each iteration.
```javascript
let iteration = 0;
let cumulativeFindings = [];
let sharedContext = { aceDiscoveries: [], crossReferences: [] };
let shouldContinue = true;
while (shouldContinue && iteration < maxIterations) {
iteration++;
const iterationDir = `${outputDir}/iterations/${iteration}`;
await mkdir(iterationDir, { recursive: true });
// Step 1: ACE-assisted iteration planning
// Use previous findings to guide ACE queries for this iteration
const iterationAceQueries = iteration === 1
? explorationPlan.dimensions.map(d => d.focus_areas[0]) // Initial queries from plan
: deriveQueriesFromFindings(cumulativeFindings); // Follow-up queries from findings
// Execute ACE searches to find related code
const iterationAceResults = [];
for (const query of iterationAceQueries) {
const result = await mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: `${query} in ${explorationPlan.scope}`
});
iterationAceResults.push({ query, result });
}
// Update shared context with ACE discoveries
sharedContext.aceDiscoveries.push(...iterationAceResults);
// Step 2: Plan this iteration based on ACE results
const iterationPlan = planIteration(iteration, explorationPlan, cumulativeFindings, iterationAceResults);
// Step 3: Launch dimension agents with ACE context
const agentPromises = iterationPlan.dimensions.map(dimension =>
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Explore ${dimension.name} (iteration ${iteration})`,
prompt: buildDimensionPromptWithACE(dimension, iteration, cumulativeFindings, iterationAceResults, iterationDir)
})
);
// Wait for iteration agents
const iterationResults = await Promise.all(agentPromises);
// Step 4: Collect and analyze iteration findings
const iterationFindings = await collectIterationFindings(iterationDir, iterationPlan.dimensions);
// Step 5: Cross-reference findings between dimensions
if (iterationPlan.dimensions.length > 1) {
const crossRefs = findCrossReferences(iterationFindings, iterationPlan.dimensions);
sharedContext.crossReferences.push(...crossRefs);
}
cumulativeFindings.push(...iterationFindings);
// Step 6: Decide whether to continue
const convergenceCheck = checkConvergence(iterationFindings, cumulativeFindings, explorationPlan);
shouldContinue = !convergenceCheck.converged;
// Step 7: Update state (iteration summary embedded in state)
await updateDiscoveryState(outputDir, {
iterations: [...state.iterations, {
number: iteration,
findings_count: iterationFindings.length,
ace_queries: iterationAceQueries.length,
cross_references: sharedContext.crossReferences.length,
new_discoveries: convergenceCheck.newDiscoveries,
confidence: convergenceCheck.confidence,
continued: shouldContinue
}],
cumulative_findings: cumulativeFindings
});
}
```
**ACE in Iteration Loop**:
```
Iteration N
├─→ ACE Search (based on previous findings)
│ └─ Query: "related code paths for {finding.category}"
│ └─ Result: Additional files to explore
├─→ Agent Exploration (with ACE context)
│ └─ Agent receives: dimension targets + ACE suggestions
│ └─ Agent can call ACE for deeper search
├─→ Cross-Reference Analysis
│ └─ Compare findings between dimensions
│ └─ Identify discrepancies
└─→ Convergence Check
└─ New findings? Continue
└─ No new findings? Terminate
```
**Dimension Agent Prompt Template (with ACE)**:
```javascript
function buildDimensionPromptWithACE(dimension, iteration, previousFindings, aceResults, outputDir) {
// Filter ACE results relevant to this dimension
const relevantAceResults = aceResults.filter(r =>
r.query.includes(dimension.name) || dimension.focus_areas.some(fa => r.query.includes(fa))
);
return `
## Task Objective
Explore ${dimension.name} dimension for issue discovery (Iteration ${iteration})
## Context
- Dimension: ${dimension.name}
- Description: ${dimension.description}
- Search Targets: ${dimension.search_targets.join(', ')}
- Focus Areas: ${dimension.focus_areas.join(', ')}
## ACE Semantic Search Results (Pre-gathered)
The following files/code sections were identified by ACE as relevant to this dimension:
${JSON.stringify(relevantAceResults.map(r => ({ query: r.query, files: r.result.slice(0, 5) })), null, 2)}
**Use ACE for deeper exploration**: You have access to mcp__ace-tool__search_context.
When you find something interesting, use ACE to find related code:
- mcp__ace-tool__search_context({ project_root_path: ".", query: "related to {finding}" })
${iteration > 1 ? `
## Previous Findings to Build Upon
${summarizePreviousFindings(previousFindings, dimension.name)}
## This Iteration Focus
- Explore areas not yet covered (check ACE results for new files)
- Verify/deepen previous findings
- Follow leads from previous discoveries
- Use ACE to find cross-references between dimensions
` : ''}
## MANDATORY FIRST STEPS
1. Read exploration plan: ${outputDir}/../exploration-plan.json
2. Read schema: ~/.ccw/workflows/cli-templates/schemas/discovery-finding-schema.json
3. Review ACE results above for starting points
4. Explore files identified by ACE
## Exploration Instructions
${dimension.agent_prompt}
## ACE Usage Guidelines
- Use ACE when you need to find:
- Where a function/class is used
- Related implementations in other modules
- Cross-module dependencies
- Similar patterns elsewhere in codebase
- Query format: Natural language, be specific
- Example: "Where is UserService.authenticate called from?"
## Output Requirements
**1. Write JSON file**: ${outputDir}/${dimension.name}.json
Follow discovery-finding-schema.json:
- findings: [{id, title, category, description, file, line, snippet, confidence, related_dimension}]
- coverage: {files_explored, areas_covered, areas_remaining}
- leads: [{description, suggested_search}] // for next iteration
- ace_queries_used: [{query, result_count}] // track ACE usage
**2. Return summary**:
- Total findings this iteration
- Key discoveries
- ACE queries that revealed important code
- Recommended next exploration areas
## Success Criteria
- [ ] JSON written to ${outputDir}/${dimension.name}.json
- [ ] Each finding has file:line reference
- [ ] ACE used for cross-references where applicable
- [ ] Coverage report included
- [ ] Leads for next iteration identified
`;
}
```
### Phase 4: Cross-Analysis & Synthesis
```javascript
// For comparison intent, perform cross-analysis
if (intentType === 'comparison' && explorationPlan.comparison_matrix) {
const comparisonResults = [];
for (const point of explorationPlan.comparison_matrix.comparison_points) {
const dimensionAFindings = cumulativeFindings.filter(f =>
f.related_dimension === explorationPlan.comparison_matrix.dimension_a &&
f.category.includes(point.aspect)
);
const dimensionBFindings = cumulativeFindings.filter(f =>
f.related_dimension === explorationPlan.comparison_matrix.dimension_b &&
f.category.includes(point.aspect)
);
// Compare and find discrepancies
const discrepancies = findDiscrepancies(dimensionAFindings, dimensionBFindings, point);
comparisonResults.push({
aspect: point.aspect,
dimension_a_count: dimensionAFindings.length,
dimension_b_count: dimensionBFindings.length,
discrepancies: discrepancies,
match_rate: calculateMatchRate(dimensionAFindings, dimensionBFindings)
});
}
// Write comparison analysis
await writeJson(`${outputDir}/comparison-analysis.json`, {
matrix: explorationPlan.comparison_matrix,
results: comparisonResults,
summary: {
total_discrepancies: comparisonResults.reduce((sum, r) => sum + r.discrepancies.length, 0),
overall_match_rate: average(comparisonResults.map(r => r.match_rate)),
critical_mismatches: comparisonResults.filter(r => r.match_rate < 0.5)
}
});
}
// Prioritize all findings
const prioritizedFindings = prioritizeFindings(cumulativeFindings, explorationPlan);
```
### Phase 5: Issue Generation & Summary
```javascript
// Convert high-confidence findings to issues
const issueWorthy = prioritizedFindings.filter(f =>
f.confidence >= 0.7 || f.priority === 'critical' || f.priority === 'high'
);
const issues = issueWorthy.map(finding => ({
id: `ISS-${discoveryId}-${finding.id}`,
title: finding.title,
description: finding.description,
source: {
discovery_id: discoveryId,
finding_id: finding.id,
dimension: finding.related_dimension
},
file: finding.file,
line: finding.line,
priority: finding.priority,
category: finding.category,
suggested_fix: finding.suggested_fix,
confidence: finding.confidence,
status: 'discovered',
created_at: new Date().toISOString()
}));
// Write issues
await writeJsonl(`${outputDir}/discovery-issues.jsonl`, issues);
// Update final state (summary embedded in state, no separate file)
await updateDiscoveryState(outputDir, {
phase: 'complete',
updated_at: new Date().toISOString(),
results: {
total_iterations: iteration,
total_findings: cumulativeFindings.length,
issues_generated: issues.length,
comparison_match_rate: comparisonResults
? average(comparisonResults.map(r => r.match_rate))
: null
}
});
// Prompt user for next action
await AskUserQuestion({
questions: [{
question: `Discovery complete: ${issues.length} issues from ${cumulativeFindings.length} findings across ${iteration} iterations. What next?`,
header: "Next Step",
multiSelect: false,
options: [
{ label: "Export to Issues (Recommended)", description: `Export ${issues.length} issues for planning` },
{ label: "Review Details", description: "View comparison analysis and iteration details" },
{ label: "Run Deeper", description: "Continue with more iterations" },
{ label: "Skip", description: "Complete without exporting" }
]
}]
});
```
## Output File Structure
```
.workflow/issues/discoveries/
└── {DBP-YYYYMMDD-HHmmss}/
├── discovery-state.json # Session state with iteration tracking
├── iterations/
│ ├── 1/
│ │ └── {dimension}.json # Dimension findings
│ ├── 2/
│ │ └── {dimension}.json
│ └── ...
├── comparison-analysis.json # Cross-dimension comparison (if applicable)
└── discovery-issues.jsonl # Generated issue candidates
```
**Simplified Design**:
- ACE context and Gemini plan kept in memory, not persisted
- Iteration summaries embedded in state
- No separate summary.md (state.json contains all needed info)
## Schema References
| Schema | Path | Used By |
|--------|------|---------|
| **Discovery State** | `discovery-state-schema.json` | Orchestrator (state tracking) |
| **Discovery Finding** | `discovery-finding-schema.json` | Dimension agents (output) |
| **Exploration Plan** | `exploration-plan-schema.json` | Gemini output validation (memory only) |
## Configuration Options
| Flag | Default | Description |
|------|---------|-------------|
| `--scope` | `**/*` | File pattern to explore |
| `--depth` | `standard` | `standard` (3 iterations) or `deep` (5+ iterations) |
| `--max-iterations` | 5 | Maximum exploration iterations |
| `--tool` | `gemini` | Planning tool (gemini/qwen) |
| `--plan-only` | `false` | Stop after Phase 2 (Gemini planning), show plan for user review |
## Examples
### Example 1: Single Module Deep Dive
```bash
/issue:discover-by-prompt "Find all potential issues in the auth module" --scope=src/auth/**
```
**Gemini plans** (single dimension):
- Dimension: auth-module
- Focus: security vulnerabilities, edge cases, error handling, test gaps
**Iterations**: 2-3 (until no new findings)
### Example 2: API Contract Comparison
```bash
/issue:discover-by-prompt "Check if API calls match implementations" --scope=src/**
```
**Gemini plans** (comparison):
- Dimension 1: api-consumers (fetch calls, hooks, services)
- Dimension 2: api-providers (handlers, routes, controllers)
- Comparison matrix: endpoints, methods, payloads, responses
### Example 3: Multi-Module Audit
```bash
/issue:discover-by-prompt "Audit the payment flow for issues" --scope=src/payment/**
```
**Gemini plans** (multi-dimension):
- Dimension 1: payment-logic (calculations, state transitions)
- Dimension 2: validation (input checks, business rules)
- Dimension 3: error-handling (failure modes, recovery)
### Example 4: Plan Only Mode
```bash
/issue:discover-by-prompt "Find inconsistent patterns" --plan-only
```
Stops after Gemini planning, outputs:
```
Gemini Plan:
- Intent: search
- Dimensions: 2 (pattern-definitions, pattern-usages)
- Estimated iterations: 3
Continue with exploration? [Y/n]
```
## Related Commands
```bash
# After discovery, plan solutions
/issue:plan DBP-001-01,DBP-001-02
# View all discoveries
/issue:manage
# Standard perspective-based discovery
/issue:discover src/auth/** --perspectives=security,bug
```
## Best Practices
1. **Be Specific in Prompts**: More specific prompts lead to better Gemini planning
2. **Scope Appropriately**: Narrow scope for focused comparison, wider for audits
3. **Review Exploration Plan**: Check `exploration-plan.json` before long explorations
4. **Use Standard Depth First**: Start with standard, go deep only if needed
5. **Combine with `/issue:discover`**: Use prompt-based for comparisons, perspective-based for audits

View File

@@ -1,472 +0,0 @@
---
name: issue:discover
description: Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.
argument-hint: "[-y|--yes] <path-pattern> [--perspectives=bug,ux,...] [--external]"
allowed-tools: Skill(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select all perspectives, skip confirmations.
# Issue Discovery Command
## Quick Start
```bash
# Discover issues in specific module (interactive perspective selection)
/issue:discover src/auth/**
# Discover with specific perspectives
/issue:discover src/payment/** --perspectives=bug,security,test
# Discover with external research for all perspectives
/issue:discover src/api/** --external
# Discover in multiple modules
/issue:discover src/auth/**,src/payment/**
```
**Discovery Scope**: Specified modules/files only
**Output Directory**: `.workflow/issues/discoveries/{discovery-id}/`
**Available Perspectives**: bug, ux, test, quality, security, performance, maintainability, best-practices
**Exa Integration**: Auto-enabled for security and best-practices perspectives
**CLI Tools**: Gemini → Qwen → Codex (fallback chain)
## What & Why
### Core Concept
Multi-perspective issue discovery orchestrator that explores code from different angles to identify potential bugs, UX improvements, test gaps, and other actionable items. Unlike code review (which assesses existing code quality), discovery focuses on **finding opportunities for improvement and potential problems**.
**vs Code Review**:
- **Code Review** (`review-module-cycle`): Evaluates code quality against standards
- **Issue Discovery** (`issue:discover`): Finds actionable issues, bugs, and improvement opportunities
### Value Proposition
1. **Proactive Issue Detection**: Find problems before they become bugs
2. **Multi-Perspective Analysis**: Each perspective surfaces different types of issues
3. **External Benchmarking**: Compare against industry best practices via Exa
4. **Direct Issue Integration**: Discoveries can be exported to issue tracker
5. **Dashboard Management**: View, filter, and export discoveries via CCW dashboard
## How It Works
### Execution Flow
```
Phase 1: Discovery & Initialization
└─ Parse target pattern, create session, initialize output structure
Phase 2: Interactive Perspective Selection
└─ AskUserQuestion for perspective selection (or use --perspectives)
Phase 3: Parallel Perspective Analysis
├─ Launch N @cli-explore-agent instances (one per perspective)
├─ Security & Best-Practices auto-trigger Exa research
├─ Agent writes perspective JSON, returns summary
└─ Update discovery-progress.json
Phase 4: Aggregation & Prioritization
├─ Collect agent return summaries
├─ Load perspective JSON files
├─ Merge findings, deduplicate by file+line
└─ Calculate priority scores
Phase 5: Issue Generation & Summary
├─ Convert high-priority discoveries to issue format
├─ Write to discovery-issues.jsonl
├─ Generate single summary.md from agent returns
└─ Update discovery-state.json to complete
Phase 6: User Action Prompt
└─ AskUserQuestion for next step (export/dashboard/skip)
```
## Perspectives
### Available Perspectives
| Perspective | Focus | Categories | Exa |
|-------------|-------|------------|-----|
| **bug** | Potential Bugs | edge-case, null-check, resource-leak, race-condition, boundary, exception-handling | - |
| **ux** | User Experience | error-message, loading-state, feedback, accessibility, interaction, consistency | - |
| **test** | Test Coverage | missing-test, edge-case-test, integration-gap, coverage-hole, assertion-quality | - |
| **quality** | Code Quality | complexity, duplication, naming, documentation, code-smell, readability | - |
| **security** | Security Issues | injection, auth, encryption, input-validation, data-exposure, access-control | ✓ |
| **performance** | Performance | n-plus-one, memory-usage, caching, algorithm, blocking-operation, resource | - |
| **maintainability** | Maintainability | coupling, cohesion, tech-debt, extensibility, module-boundary, interface-design | - |
| **best-practices** | Best Practices | convention, pattern, framework-usage, anti-pattern, industry-standard | ✓ |
### Interactive Perspective Selection
When no `--perspectives` flag is provided, the command uses AskUserQuestion:
```javascript
AskUserQuestion({
questions: [{
question: "Select primary discovery focus:",
header: "Focus",
multiSelect: false,
options: [
{ label: "Bug + Test + Quality", description: "Quick scan: potential bugs, test gaps, code quality (Recommended)" },
{ label: "Security + Performance", description: "System audit: security issues, performance bottlenecks" },
{ label: "Maintainability + Best-practices", description: "Long-term health: coupling, tech debt, conventions" },
{ label: "Full analysis", description: "All 7 perspectives (comprehensive, takes longer)" }
]
}]
})
```
**Recommended Combinations**:
- Quick scan: bug, test, quality
- Full analysis: all perspectives
- Security audit: security, bug, quality
## Core Responsibilities
### Orchestrator
**Phase 1: Discovery & Initialization**
```javascript
// Step 1: Parse target pattern and resolve files
const resolvedFiles = await expandGlobPattern(targetPattern);
if (resolvedFiles.length === 0) {
throw new Error(`No files matched pattern: ${targetPattern}`);
}
// Step 2: Generate discovery ID
const discoveryId = `DSC-${formatDate(new Date(), 'YYYYMMDD-HHmmss')}`;
// Step 3: Create output directory
const outputDir = `.workflow/issues/discoveries/${discoveryId}`;
await mkdir(outputDir, { recursive: true });
await mkdir(`${outputDir}/perspectives`, { recursive: true });
// Step 4: Initialize unified discovery state (merged state+progress)
await writeJson(`${outputDir}/discovery-state.json`, {
discovery_id: discoveryId,
target_pattern: targetPattern,
phase: "initialization",
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
target: { files_count: { total: resolvedFiles.length }, project: {} },
perspectives: [], // filled after selection: [{name, status, findings}]
external_research: { enabled: false, completed: false },
results: { total_findings: 0, issues_generated: 0, priority_distribution: {} }
});
```
**Phase 2: Perspective Selection**
```javascript
// Check for --perspectives flag
let selectedPerspectives = [];
if (args.perspectives) {
selectedPerspectives = args.perspectives.split(',').map(p => p.trim());
} else {
// Interactive selection via AskUserQuestion
const response = await AskUserQuestion({...});
selectedPerspectives = parseSelectedPerspectives(response);
}
// Validate and update state
await updateDiscoveryState(outputDir, {
'metadata.perspectives': selectedPerspectives,
phase: 'parallel'
});
```
**Phase 3: Parallel Perspective Analysis**
Launch N agents in parallel (one per selected perspective):
```javascript
// Launch agents in parallel - agents write JSON and return summary
const agentPromises = selectedPerspectives.map(perspective =>
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Discover ${perspective} issues`,
prompt: buildPerspectivePrompt(perspective, discoveryId, resolvedFiles, outputDir)
})
);
// Wait for all agents - collect their return summaries
const results = await Promise.all(agentPromises);
// results contain agent summaries for final report
```
**Phase 4: Aggregation & Prioritization**
```javascript
// Load all perspective JSON files written by agents
const allFindings = [];
for (const perspective of selectedPerspectives) {
const jsonPath = `${outputDir}/perspectives/${perspective}.json`;
if (await fileExists(jsonPath)) {
const data = await readJson(jsonPath);
allFindings.push(...data.findings.map(f => ({ ...f, perspective })));
}
}
// Deduplicate and prioritize
const prioritizedFindings = deduplicateAndPrioritize(allFindings);
// Update unified state
await updateDiscoveryState(outputDir, {
phase: 'aggregation',
'results.total_findings': prioritizedFindings.length,
'results.priority_distribution': countByPriority(prioritizedFindings)
});
```
**Phase 5: Issue Generation & Summary**
```javascript
// Convert high-priority findings to issues
const issueWorthy = prioritizedFindings.filter(f =>
f.priority === 'critical' || f.priority === 'high' || f.priority_score >= 0.7
);
// Write discovery-issues.jsonl
await writeJsonl(`${outputDir}/discovery-issues.jsonl`, issues);
// Generate single summary.md from agent return summaries
// Orchestrator briefly summarizes what agents returned (NO detailed reports)
await writeSummaryFromAgentReturns(outputDir, results, prioritizedFindings, issues);
// Update final state
await updateDiscoveryState(outputDir, {
phase: 'complete',
updated_at: new Date().toISOString(),
'results.issues_generated': issues.length
});
```
**Phase 6: User Action Prompt**
```javascript
// Prompt user for next action based on discovery results
const hasHighPriority = issues.some(i => i.priority === 'critical' || i.priority === 'high');
const hasMediumFindings = prioritizedFindings.some(f => f.priority === 'medium');
await AskUserQuestion({
questions: [{
question: `Discovery complete: ${issues.length} issues generated, ${prioritizedFindings.length} total findings. What would you like to do next?`,
header: "Next Step",
multiSelect: false,
options: hasHighPriority ? [
{ label: "Export to Issues (Recommended)", description: `${issues.length} high-priority issues found - export to issue tracker for planning` },
{ label: "Open Dashboard", description: "Review findings in ccw view before exporting" },
{ label: "Skip", description: "Complete discovery without exporting" }
] : hasMediumFindings ? [
{ label: "Open Dashboard (Recommended)", description: "Review medium-priority findings in ccw view to decide which to export" },
{ label: "Export to Issues", description: `Export ${issues.length} issues to tracker` },
{ label: "Skip", description: "Complete discovery without exporting" }
] : [
{ label: "Skip (Recommended)", description: "No significant issues found - complete discovery" },
{ label: "Open Dashboard", description: "Review all findings in ccw view" },
{ label: "Export to Issues", description: `Export ${issues.length} issues anyway` }
]
}]
});
// Handle response
if (response === "Export to Issues") {
// Append to issues.jsonl
await appendJsonl('.workflow/issues/issues.jsonl', issues);
console.log(`Exported ${issues.length} issues. Run /issue:plan to continue.`);
} else if (response === "Open Dashboard") {
console.log('Run `ccw view` and navigate to Issues > Discovery to manage findings.');
}
```
### Output File Structure
```
.workflow/issues/discoveries/
├── index.json # Discovery session index
└── {discovery-id}/
├── discovery-state.json # Unified state (merged state+progress)
├── perspectives/
│ └── {perspective}.json # Per-perspective findings
├── external-research.json # Exa research results (if enabled)
├── discovery-issues.jsonl # Generated candidate issues
└── summary.md # Single summary (from agent returns)
```
### Schema References
**External Schema Files** (agent MUST read and follow exactly):
| Schema | Path | Purpose |
|--------|------|---------|
| **Discovery State** | `~/.ccw/workflows/cli-templates/schemas/discovery-state-schema.json` | Session state machine |
| **Discovery Finding** | `~/.ccw/workflows/cli-templates/schemas/discovery-finding-schema.json` | Perspective analysis results |
### Agent Invocation Template
**Perspective Analysis Agent**:
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Discover ${perspective} issues`,
prompt: `
## Task Objective
Discover potential ${perspective} issues in specified module files.
## Discovery Context
- Discovery ID: ${discoveryId}
- Perspective: ${perspective}
- Target Pattern: ${targetPattern}
- Resolved Files: ${resolvedFiles.length} files
- Output Directory: ${outputDir}
## MANDATORY FIRST STEPS
1. Read discovery state: ${outputDir}/discovery-state.json
2. Read schema: ~/.ccw/workflows/cli-templates/schemas/discovery-finding-schema.json
3. Analyze target files for ${perspective} concerns
## Output Requirements
**1. Write JSON file**: ${outputDir}/perspectives/${perspective}.json
- Follow discovery-finding-schema.json exactly
- Each finding: id, title, priority, category, description, file, line, snippet, suggested_issue, confidence
**2. Return summary** (DO NOT write report file):
- Return a brief text summary of findings
- Include: total findings, priority breakdown, key issues
- This summary will be used by orchestrator for final report
## Perspective-Specific Guidance
${getPerspectiveGuidance(perspective)}
## Success Criteria
- [ ] JSON written to ${outputDir}/perspectives/${perspective}.json
- [ ] Summary returned with findings count and key issues
- [ ] Each finding includes actionable suggested_issue
- [ ] Priority uses lowercase enum: critical/high/medium/low
`
})
```
**Exa Research Agent** (for security and best-practices):
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `External research for ${perspective} via Exa`,
prompt: `
## Task Objective
Research industry best practices for ${perspective} using Exa search
## Research Steps
1. Read project tech stack: .workflow/project-tech.json
2. Use Exa to search for best practices
3. Synthesize findings relevant to this project
## Output Requirements
**1. Write JSON file**: ${outputDir}/external-research.json
- Include sources, key_findings, gap_analysis, recommendations
**2. Return summary** (DO NOT write report file):
- Brief summary of external research findings
- Key recommendations for the project
## Success Criteria
- [ ] JSON written to ${outputDir}/external-research.json
- [ ] Summary returned with key recommendations
- [ ] Findings are relevant to project's tech stack
`
})
```
### Perspective Guidance Reference
```javascript
function getPerspectiveGuidance(perspective) {
const guidance = {
bug: `
Focus: Null checks, edge cases, resource leaks, race conditions, boundary conditions, exception handling
Priority: Critical=data corruption/crash, High=malfunction, Medium=edge case issues, Low=minor
`,
ux: `
Focus: Error messages, loading states, feedback, accessibility, interaction patterns, form validation
Priority: Critical=inaccessible, High=confusing, Medium=inconsistent, Low=cosmetic
`,
test: `
Focus: Missing unit tests, edge case coverage, integration gaps, assertion quality, test isolation
Priority: Critical=no security tests, High=no core logic tests, Medium=weak coverage, Low=minor gaps
`,
quality: `
Focus: Complexity, duplication, naming, documentation, code smells, readability
Priority: Critical=unmaintainable, High=significant issues, Medium=naming/docs, Low=minor refactoring
`,
security: `
Focus: Input validation, auth/authz, injection, XSS/CSRF, data exposure, access control
Priority: Critical=auth bypass/injection, High=missing authz, Medium=weak validation, Low=headers
`,
performance: `
Focus: N+1 queries, memory leaks, caching, algorithm efficiency, blocking operations
Priority: Critical=memory leaks, High=N+1/inefficient, Medium=missing cache, Low=minor optimization
`,
maintainability: `
Focus: Coupling, interface design, tech debt, extensibility, module boundaries, configuration
Priority: Critical=unrelated code changes, High=unclear boundaries, Medium=coupling, Low=refactoring
`,
'best-practices': `
Focus: Framework conventions, language patterns, anti-patterns, deprecated APIs, coding standards
Priority: Critical=anti-patterns causing bugs, High=convention violations, Medium=style, Low=cosmetic
`
};
return guidance[perspective] || 'General code discovery analysis';
}
```
## Dashboard Integration
### Viewing Discoveries
Open CCW dashboard to manage discoveries:
```bash
ccw view
```
Navigate to **Issues > Discovery** to:
- View all discovery sessions
- Filter findings by perspective and priority
- Preview finding details
- Select and export findings as issues
### Exporting to Issues
From the dashboard, select findings and click "Export as Issues" to:
1. Convert discoveries to standard issue format
2. Append to `.workflow/issues/issues.jsonl`
3. Set status to `registered`
4. Continue with `/issue:plan` workflow
## Related Commands
```bash
# After discovery, plan solutions for exported issues
/issue:plan DSC-001,DSC-002,DSC-003
# Or use interactive management
/issue:manage
```
## Best Practices
1. **Start Focused**: Begin with specific modules rather than entire codebase
2. **Use Quick Scan First**: Start with bug, test, quality for fast results
3. **Review Before Export**: Not all discoveries warrant issues - use dashboard to filter
4. **Combine Perspectives**: Run related perspectives together (e.g., security + bug)
5. **Enable Exa for New Tech**: When using unfamiliar frameworks, enable external research

View File

@@ -1,608 +0,0 @@
---
name: execute
description: Execute queue with DAG-based parallel orchestration (one commit per solution)
argument-hint: "[-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm execution, use recommended settings.
# Issue Execute Command (/issue:execute)
## Overview
Minimal orchestrator that dispatches **solution IDs** to executors. Each executor receives a complete solution with all its tasks.
**Design Principles:**
- `queue dag` → returns parallel batches with solution IDs (S-1, S-2, ...)
- `detail <id>` → READ-ONLY solution fetch (returns full solution with all tasks)
- `done <id>` → update solution completion status
- No race conditions: status changes only via `done`
- **Executor handles all tasks within a solution sequentially**
- **Single worktree for entire queue**: One worktree isolates ALL queue execution from main workspace
## Queue ID Requirement (MANDATORY)
**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.
### If Queue ID Not Provided
When `--queue` parameter is missing, you MUST:
1. **List available queues** by running:
```javascript
const result = Bash('ccw issue queue list --brief --json');
const index = JSON.parse(result);
```
2. **Display available queues** to user:
```
Available Queues:
ID Status Progress Issues
-----------------------------------------------------------
→ QUE-20251215-001 active 3/10 ISS-001, ISS-002
QUE-20251210-002 active 0/5 ISS-003
QUE-20251205-003 completed 8/8 ISS-004
```
3. **Stop and ask user** to specify which queue to execute:
```javascript
AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: index.queues
.filter(q => q.status === 'active')
.map(q => ({
label: q.id,
description: `${q.status}, ${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
}))
}]
})
```
4. **After user selection**, continue execution with the selected queue ID.
**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidental execution of wrong queue.
## Usage
```bash
/issue:execute --queue QUE-xxx # Execute specific queue (REQUIRED)
/issue:execute --queue QUE-xxx --worktree # Execute in isolated worktree
/issue:execute --queue QUE-xxx --worktree /path/to/existing/worktree # Resume
```
**Parallelism**: Determined automatically by task dependency DAG (no manual control)
**Executor & Dry-run**: Selected via interactive prompt (AskUserQuestion)
**Worktree**: Creates ONE worktree for the entire queue execution (not per-solution)
**⭐ Recommended Executor**: **Codex** - Best for long-running autonomous work (2hr timeout), supports background execution and full write access
**Worktree Options**:
- `--worktree` - Create a new worktree with timestamp-based name
- `--worktree <existing-path>` - Resume in an existing worktree (for recovery/continuation)
**Resume**: Use `git worktree list` to find existing worktrees from interrupted executions
## Execution Flow
```
Phase 0: Validate Queue ID (REQUIRED)
├─ If --queue provided → use specified queue
├─ If --queue missing → list queues, prompt user to select
└─ Store QUEUE_ID for all subsequent commands
Phase 0.5 (if --worktree): Setup Queue Worktree
├─ Create ONE worktree for entire queue: .ccw/worktrees/queue-<timestamp>
├─ All subsequent execution happens in this worktree
└─ Main workspace remains clean and untouched
Phase 1: Get DAG & User Selection
├─ ccw issue queue dag --queue ${QUEUE_ID} → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
└─ AskUserQuestion → executor type (codex|gemini|agent), dry-run mode, worktree mode
Phase 2: Dispatch Parallel Batch (DAG-driven)
├─ Parallelism determined by DAG (no manual limit)
├─ All executors work in the SAME worktree (or main if no worktree)
├─ For each solution ID in batch (parallel - all at once):
│ ├─ Executor calls: ccw issue detail <id> (READ-ONLY)
│ ├─ Executor gets FULL SOLUTION with all tasks
│ ├─ Executor implements all tasks sequentially (T1 → T2 → T3)
│ ├─ Executor tests + verifies each task
│ ├─ Executor commits ONCE per solution (with formatted summary)
│ └─ Executor calls: ccw issue done <id>
└─ Wait for batch completion
Phase 3: Next Batch (repeat Phase 2)
└─ ccw issue queue dag → check for newly-ready solutions
Phase 4 (if --worktree): Worktree Completion
├─ All batches complete → prompt for merge strategy
└─ Options: Create PR / Merge to main / Keep branch
```
## Implementation
### Phase 0: Validate Queue ID
```javascript
// Check if --queue was provided
let QUEUE_ID = args.queue;
if (!QUEUE_ID) {
// List available queues
const listResult = Bash('ccw issue queue list --brief --json').trim();
const index = JSON.parse(listResult);
if (index.queues.length === 0) {
console.log('No queues found. Use /issue:queue to create one first.');
return;
}
// Filter active queues only
const activeQueues = index.queues.filter(q => q.status === 'active');
if (activeQueues.length === 0) {
console.log('No active queues found.');
console.log('Available queues:', index.queues.map(q => `${q.id} (${q.status})`).join(', '));
return;
}
// Display and prompt user
console.log('\nAvailable Queues:');
console.log('ID'.padEnd(22) + 'Status'.padEnd(12) + 'Progress'.padEnd(12) + 'Issues');
console.log('-'.repeat(70));
for (const q of index.queues) {
const marker = q.id === index.active_queue_id ? '→ ' : ' ';
console.log(marker + q.id.padEnd(20) + q.status.padEnd(12) +
`${q.completed_solutions || 0}/${q.total_solutions || 0}`.padEnd(12) +
q.issue_ids.join(', '));
}
const answer = AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: activeQueues.map(q => ({
label: q.id,
description: `${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
}))
}]
});
QUEUE_ID = answer['Queue'];
}
console.log(`\n## Executing Queue: ${QUEUE_ID}\n`);
```
### Phase 1: Get DAG & User Selection
```javascript
// Get dependency graph and parallel batches (QUEUE_ID required)
const dagJson = Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim();
const dag = JSON.parse(dagJson);
if (dag.error || dag.ready_count === 0) {
console.log(dag.error || 'No solutions ready for execution');
console.log('Use /issue:queue to form a queue first');
return;
}
console.log(`
## Queue DAG (Solution-Level)
- Total Solutions: ${dag.total}
- Ready: ${dag.ready_count}
- Completed: ${dag.completed_count}
- Parallel in batch 1: ${dag.parallel_batches[0]?.length || 0}
`);
// Interactive selection via AskUserQuestion
const answer = AskUserQuestion({
questions: [
{
question: 'Select executor type:',
header: 'Executor',
multiSelect: false,
options: [
{ label: 'Codex (Recommended)', description: 'Autonomous coding with full write access' },
{ label: 'Gemini', description: 'Large context analysis and implementation' },
{ label: 'Agent', description: 'Claude Code sub-agent for complex tasks' }
]
},
{
question: 'Execution mode:',
header: 'Mode',
multiSelect: false,
options: [
{ label: 'Execute (Recommended)', description: 'Run all ready solutions' },
{ label: 'Dry-run', description: 'Show DAG and batches without executing' }
]
},
{
question: 'Use git worktree for queue isolation?',
header: 'Worktree',
multiSelect: false,
options: [
{ label: 'Yes (Recommended)', description: 'Create ONE worktree for entire queue - main stays clean' },
{ label: 'No', description: 'Work directly in current directory' }
]
}
]
});
const executor = answer['Executor'].toLowerCase().split(' ')[0]; // codex|gemini|agent
const isDryRun = answer['Mode'].includes('Dry-run');
const useWorktree = answer['Worktree'].includes('Yes');
// Dry run mode
if (isDryRun) {
console.log('### Parallel Batches (Dry-run):\n');
dag.parallel_batches.forEach((batch, i) => {
console.log(`Batch ${i + 1}: ${batch.join(', ')}`);
});
return;
}
```
### Phase 0 & 2: Setup Queue Worktree & Dispatch
```javascript
// Parallelism determined by DAG - no manual limit
// All solutions in same batch have NO file conflicts and can run in parallel
const batch = dag.parallel_batches[0] || [];
// Initialize TodoWrite
TodoWrite({
todos: batch.map(id => ({
content: `Execute solution ${id}`,
status: 'pending',
activeForm: `Executing solution ${id}`
}))
});
console.log(`\n### Executing Solutions (DAG batch 1): ${batch.join(', ')}`);
// Parse existing worktree path from args if provided
// Example: --worktree /path/to/existing/worktree
const existingWorktree = args.worktree && typeof args.worktree === 'string' ? args.worktree : null;
// Setup ONE worktree for entire queue (not per-solution)
let worktreePath = null;
let worktreeBranch = null;
if (useWorktree) {
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
const worktreeBase = `${repoRoot}/.ccw/worktrees`;
Bash(`mkdir -p "${worktreeBase}"`);
Bash('git worktree prune'); // Cleanup stale worktrees
if (existingWorktree) {
// Resume mode: Use existing worktree
worktreePath = existingWorktree;
worktreeBranch = Bash(`git -C "${worktreePath}" branch --show-current`).trim();
console.log(`Resuming in existing worktree: ${worktreePath} (branch: ${worktreeBranch})`);
} else {
// Create mode: ONE worktree for the entire queue
const timestamp = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
worktreeBranch = `queue-exec-${dag.queue_id || timestamp}`;
worktreePath = `${worktreeBase}/${worktreeBranch}`;
Bash(`git worktree add "${worktreePath}" -b "${worktreeBranch}"`);
console.log(`Created queue worktree: ${worktreePath}`);
}
}
// Launch ALL solutions in batch in parallel (DAG guarantees no conflicts)
// All executors work in the SAME worktree (or main if no worktree)
const executions = batch.map(solutionId => {
updateTodo(solutionId, 'in_progress');
return dispatchExecutor(solutionId, executor, worktreePath);
});
await Promise.all(executions);
batch.forEach(id => updateTodo(id, 'completed'));
```
### Executor Dispatch
```javascript
// worktreePath: path to shared worktree (null if not using worktree)
function dispatchExecutor(solutionId, executorType, worktreePath = null) {
// If worktree is provided, executor works in that directory
// No per-solution worktree creation - ONE worktree for entire queue
// Pre-defined values (replaced at dispatch time, NOT by executor)
const SOLUTION_ID = solutionId;
const WORK_DIR = worktreePath || null;
// Build prompt without markdown code blocks to avoid escaping issues
const prompt = `
## Execute Solution: ${SOLUTION_ID}
${WORK_DIR ? `Working Directory: ${WORK_DIR}` : ''}
### Step 1: Get Solution Details
Run this command to get the full solution with all tasks:
ccw issue detail ${SOLUTION_ID}
### Step 2: Execute All Tasks Sequentially
The detail command returns a FULL SOLUTION with all tasks.
Execute each task in order (T1 → T2 → T3 → ...):
For each task:
- Follow task.implementation steps
- Run task.test commands
- Verify task.acceptance criteria
- Do NOT commit after each task
### Step 3: Commit Solution (Once)
After ALL tasks pass, commit once with clean conventional format.
Command:
git add -A
git commit -m "<type>(<scope>): <brief description>"
Examples:
git commit -m "feat(auth): add token refresh mechanism"
git commit -m "fix(payment): resolve timeout in checkout flow"
git commit -m "refactor(api): simplify error handling"
Replace <type> with: feat|fix|refactor|docs|test|chore
Replace <scope> with: affected module name
Replace <description> with: brief summary (NO solution/issue IDs)
### Step 4: Report Completion
On success, run:
ccw issue done ${SOLUTION_ID} --result '{
"solution_id": "<solution-id>",
"issue_id": "<issue-id>",
"commit": {
"hash": "<commit-hash>",
"type": "<commit-type>",
"scope": "<commit-scope>",
"message": "<commit-message>"
},
"analysis": {
"risk": "<low|medium|high>",
"impact": "<low|medium|high>",
"complexity": "<low|medium|high>"
},
"tasks_completed": [
{"id": "T1", "title": "...", "action": "...", "scope": "..."},
{"id": "T2", "title": "...", "action": "...", "scope": "..."}
],
"files_modified": ["<file1>", "<file2>"],
"tests_passed": true,
"verification": {
"all_tests_passed": true,
"acceptance_criteria_met": true,
"regression_checked": true
},
"summary": "<brief description of accomplishment>"
}'
On failure, run:
ccw issue done ${SOLUTION_ID} --fail --reason '{
"task_id": "<TX>",
"error_type": "<test_failure|build_error|other>",
"message": "<error details>",
"files_attempted": ["<file1>", "<file2>"],
"commit": null
}'
### Important Notes
- Do NOT cleanup worktree - it is shared by all solutions in the queue
- Replace all <placeholder> values with actual values from your execution
`;
// For CLI tools, pass --cd to set working directory
const cdOption = worktreePath ? ` --cd "${worktreePath}"` : '';
if (executorType === 'codex') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}${cdOption}`,
{ timeout: 7200000, run_in_background: true } // 2hr for full solution
);
} else if (executorType === 'gemini') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}${cdOption}`,
{ timeout: 3600000, run_in_background: true }
);
} else {
return Task({
subagent_type: 'code-developer',
run_in_background: false,
description: `Execute solution ${solutionId}`,
prompt: worktreePath ? `Working directory: ${worktreePath}\n\n${prompt}` : prompt
});
}
}
```
### Phase 3: Check Next Batch
```javascript
// Refresh DAG after batch completes (use same QUEUE_ID)
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim());
console.log(`
## Batch Complete
- Solutions Completed: ${refreshedDag.completed_count}/${refreshedDag.total}
- Next ready: ${refreshedDag.ready_count}
`);
if (refreshedDag.ready_count > 0) {
console.log(`Run \`/issue:execute --queue ${QUEUE_ID}\` again for next batch.`);
// Note: If resuming, pass existing worktree path:
// /issue:execute --queue ${QUEUE_ID} --worktree <worktreePath>
}
```
### Phase 4: Worktree Completion (after ALL batches)
```javascript
// Only run when ALL solutions completed AND using worktree
if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_count === refreshedDag.total) {
console.log('\n## All Solutions Completed - Worktree Cleanup');
const answer = AskUserQuestion({
questions: [{
question: `Queue complete. What to do with worktree branch "${worktreeBranch}"?`,
header: 'Merge',
multiSelect: false,
options: [
{ label: 'Create PR (Recommended)', description: 'Push branch and create pull request' },
{ label: 'Merge to main', description: 'Merge all commits and cleanup worktree' },
{ label: 'Keep branch', description: 'Cleanup worktree, keep branch for manual handling' }
]
}]
});
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
if (answer['Merge'].includes('Create PR')) {
Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution - all solutions completed" --head "${worktreeBranch}"`);
Bash(`git worktree remove "${worktreePath}"`);
console.log(`PR created for branch: ${worktreeBranch}`);
} else if (answer['Merge'].includes('Merge to main')) {
// Check main is clean
const mainDirty = Bash('git status --porcelain').trim();
if (mainDirty) {
console.log('Warning: Main has uncommitted changes. Falling back to PR.');
Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution (main had uncommitted changes)" --head "${worktreeBranch}"`);
} else {
Bash(`git merge --no-ff "${worktreeBranch}" -m "Merge queue ${dag.queue_id}"`);
Bash(`git branch -d "${worktreeBranch}"`);
}
Bash(`git worktree remove "${worktreePath}"`);
} else {
Bash(`git worktree remove "${worktreePath}"`);
console.log(`Branch ${worktreeBranch} kept for manual handling`);
}
}
```
## Parallel Execution Model
```
┌─────────────────────────────────────────────────────────────────┐
│ Orchestrator │
├─────────────────────────────────────────────────────────────────┤
│ 0. Validate QUEUE_ID (required, or prompt user to select) │
│ │
│ 0.5 (if --worktree) Create ONE worktree for entire queue │
│ → .ccw/worktrees/queue-exec-<queue-id> │
│ │
│ 1. ccw issue queue dag --queue ${QUEUE_ID} │
│ → { parallel_batches: [["S-1","S-2"], ["S-3"]] } │
│ │
│ 2. Dispatch batch 1 (parallel, SAME worktree): │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Shared Queue Worktree (or main) │ │
│ │ ┌──────────────────┐ ┌──────────────────┐ │ │
│ │ │ Executor 1 │ │ Executor 2 │ │ │
│ │ │ detail S-1 │ │ detail S-2 │ │ │
│ │ │ [T1→T2→T3] │ │ [T1→T2] │ │ │
│ │ │ commit S-1 │ │ commit S-2 │ │ │
│ │ │ done S-1 │ │ done S-2 │ │ │
│ │ └──────────────────┘ └──────────────────┘ │ │
│ └──────────────────────────────────────────────────────┘ │
│ │
│ 3. ccw issue queue dag (refresh) │
│ → S-3 now ready → dispatch batch 2 (same worktree) │
│ │
│ 4. (if --worktree) ALL batches complete → cleanup worktree │
│ → Prompt: Create PR / Merge to main / Keep branch │
└─────────────────────────────────────────────────────────────────┘
```
**Why this works for parallel:**
- **ONE worktree for entire queue** → all solutions share same isolated workspace
- `detail <id>` is READ-ONLY → no race conditions
- Each executor handles **all tasks within a solution** sequentially
- **One commit per solution** with formatted summary (not per-task)
- `done <id>` updates only its own solution status
- `queue dag` recalculates ready solutions after each batch
- Solutions in same batch have NO file conflicts (DAG guarantees)
- **Main workspace stays clean** until merge/PR decision
## CLI Endpoint Contract
### `ccw issue queue list --brief --json`
Returns queue index for selection (used when --queue not provided):
```json
{
"active_queue_id": "QUE-20251215-001",
"queues": [
{ "id": "QUE-20251215-001", "status": "active", "issue_ids": ["ISS-001"], "total_solutions": 5, "completed_solutions": 2 }
]
}
```
### `ccw issue queue dag --queue <queue-id>`
Returns dependency graph with parallel batches (solution-level, **--queue required**):
```json
{
"queue_id": "QUE-...",
"total": 3,
"ready_count": 2,
"completed_count": 0,
"nodes": [
{ "id": "S-1", "issue_id": "ISS-xxx", "status": "pending", "ready": true, "task_count": 3 },
{ "id": "S-2", "issue_id": "ISS-yyy", "status": "pending", "ready": true, "task_count": 2 },
{ "id": "S-3", "issue_id": "ISS-zzz", "status": "pending", "ready": false, "depends_on": ["S-1"] }
],
"parallel_batches": [["S-1", "S-2"], ["S-3"]]
}
```
### `ccw issue detail <item_id>`
Returns FULL SOLUTION with all tasks (READ-ONLY):
```json
{
"item_id": "S-1",
"issue_id": "ISS-xxx",
"solution_id": "SOL-xxx",
"status": "pending",
"solution": {
"id": "SOL-xxx",
"approach": "...",
"tasks": [
{ "id": "T1", "title": "...", "implementation": [...], "test": {...} },
{ "id": "T2", "title": "...", "implementation": [...], "test": {...} },
{ "id": "T3", "title": "...", "implementation": [...], "test": {...} }
],
"exploration_context": { "relevant_files": [...] }
},
"execution_hints": { "executor": "codex", "estimated_minutes": 180 }
}
```
### `ccw issue done <item_id>`
Marks solution completed/failed, updates queue state, checks for queue completion.
## Error Handling
| Error | Resolution |
|-------|------------|
| No queue | Run /issue:queue first |
| No ready solutions | Dependencies blocked, check DAG |
| Executor timeout | Solution not marked done, can retry |
| Solution failure | Use `ccw issue retry` to reset |
| Partial task failure | Executor reports which task failed via `done --fail` |
## Related Commands
- `/issue:plan` - Plan issues with solutions
- `/issue:queue` - Form execution queue
- `ccw issue queue dag` - View dependency graph
- `ccw issue detail <id>` - View task details
- `ccw issue retry` - Reset failed tasks

View File

@@ -1,382 +0,0 @@
---
name: from-brainstorm
description: Convert brainstorm session ideas into issue with executable solution for parallel-dev-cycle
argument-hint: "SESSION=\"<session-id>\" [--idea=<index>] [--auto] [-y|--yes]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-select highest-scored idea, skip confirmations, create issue directly.
# Issue From-Brainstorm Command (/issue:from-brainstorm)
## Overview
Bridge command that converts **brainstorm-with-file** session output into executable **issue + solution** for parallel-dev-cycle consumption.
**Core workflow**: Load Session → Select Idea → Convert to Issue → Generate Solution → Bind & Ready
**Input sources**:
- **synthesis.json** - Main brainstorm results with top_ideas
- **perspectives.json** - Multi-CLI perspectives (creative/pragmatic/systematic)
- **.brainstorming/** - Synthesis artifacts (clarifications, enhancements from role analyses)
**Output**:
- **Issue** (ISS-YYYYMMDD-NNN) - Full context with clarifications
- **Solution** (SOL-{issue-id}-{uid}) - Structured tasks for parallel-dev-cycle
## Quick Reference
```bash
# Interactive mode - select idea, confirm before creation
/issue:from-brainstorm SESSION="BS-rate-limiting-2025-01-28"
# Pre-select idea by index
/issue:from-brainstorm SESSION="BS-auth-system-2025-01-28" --idea=0
# Auto mode - select highest scored, no confirmations
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto -y
```
## Arguments
| Argument | Required | Type | Default | Description |
|----------|----------|------|---------|-------------|
| SESSION | Yes | String | - | Session ID or path to `.workflow/.brainstorm/BS-xxx` |
| --idea | No | Integer | - | Pre-select idea by index (0-based) |
| --auto | No | Flag | false | Auto-select highest-scored idea |
| -y, --yes | No | Flag | false | Skip all confirmations |
## Data Structures
### Issue Schema (Output)
```typescript
interface Issue {
id: string; // ISS-YYYYMMDD-NNN
title: string; // From idea.title
status: 'planned'; // Auto-set after solution binding
priority: number; // 1-5 (derived from idea.score)
context: string; // Full description with clarifications
source: 'brainstorm';
labels: string[]; // ['brainstorm', perspective, feasibility]
// Structured fields
expected_behavior: string; // From key_strengths
actual_behavior: string; // From main_challenges
affected_components: string[]; // Extracted from description
_brainstorm_metadata: {
session_id: string;
idea_score: number;
novelty: number;
feasibility: string;
clarifications_count: number;
};
}
```
### Solution Schema (Output)
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description: string; // idea.title
approach: string; // idea.description
tasks: Task[]; // Generated from idea.next_steps
analysis: {
risk: 'low' | 'medium' | 'high';
impact: 'low' | 'medium' | 'high';
complexity: 'low' | 'medium' | 'high';
};
is_bound: boolean; // true
created_at: string;
bound_at: string;
}
interface Task {
id: string; // T1, T2, T3...
title: string; // Actionable task name
scope: string; // design|implementation|testing|documentation
action: string; // Implement|Design|Research|Test|Document
description: string;
implementation: string[]; // Step-by-step guide
acceptance: {
criteria: string[]; // What defines success
verification: string[]; // How to verify
};
priority: number; // 1-5
depends_on: string[]; // Task dependencies
}
```
## Execution Flow
```
Phase 1: Session Loading
├─ Validate session path
├─ Load synthesis.json (required)
├─ Load perspectives.json (optional - multi-CLI insights)
├─ Load .brainstorming/** (optional - synthesis artifacts)
└─ Validate top_ideas array exists
Phase 2: Idea Selection
├─ Auto mode: Select highest scored idea
├─ Pre-selected: Use --idea=N index
└─ Interactive: Display table, ask user to select
Phase 3: Enrich Issue Context
├─ Base: idea.description + key_strengths + main_challenges
├─ Add: Relevant clarifications (Requirements/Architecture/Feasibility)
├─ Add: Multi-perspective insights (creative/pragmatic/systematic)
└─ Add: Session metadata (session_id, completion date, clarification count)
Phase 4: Create Issue
├─ Generate issue data with enriched context
├─ Calculate priority from idea.score (0-10 → 1-5)
├─ Create via: ccw issue create (heredoc for JSON)
└─ Returns: ISS-YYYYMMDD-NNN
Phase 5: Generate Solution Tasks
├─ T1: Research & Validate (if main_challenges exist)
├─ T2: Design & Specification (if key_strengths exist)
├─ T3+: Implementation tasks (from idea.next_steps)
└─ Each task includes: implementation steps + acceptance criteria
Phase 6: Bind Solution
├─ Write solution to .workflow/issues/solutions/{issue-id}.jsonl
├─ Bind via: ccw issue bind {issue-id} {solution-id}
├─ Update issue status to 'planned'
└─ Returns: SOL-{issue-id}-{uid}
Phase 7: Next Steps
└─ Offer: Form queue | Convert another idea | View details | Done
```
## Context Enrichment Logic
### Base Context (Always Included)
- **Description**: `idea.description`
- **Why This Idea**: `idea.key_strengths[]`
- **Challenges to Address**: `idea.main_challenges[]`
- **Implementation Steps**: `idea.next_steps[]`
### Enhanced Context (If Available)
**From Synthesis Artifacts** (`.brainstorming/*/analysis*.md`):
- Extract clarifications matching categories: Requirements, Architecture, Feasibility
- Format: `**{Category}** ({role}): {question} → {answer}`
- Limit: Top 3 most relevant
**From Perspectives** (`perspectives.json`):
- **Creative**: First insight from `perspectives.creative.insights[0]`
- **Pragmatic**: First blocker from `perspectives.pragmatic.blockers[0]`
- **Systematic**: First pattern from `perspectives.systematic.patterns[0]`
**Session Metadata**:
- Session ID, Topic, Completion Date
- Clarifications count (if synthesis artifacts loaded)
## Task Generation Strategy
### Task 1: Research & Validation
**Trigger**: `idea.main_challenges.length > 0`
- **Title**: "Research & Validate Approach"
- **Scope**: design
- **Action**: Research
- **Implementation**: Investigate blockers, review similar implementations, validate with team
- **Acceptance**: Blockers documented, feasibility assessed, approach validated
### Task 2: Design & Specification
**Trigger**: `idea.key_strengths.length > 0`
- **Title**: "Design & Create Specification"
- **Scope**: design
- **Action**: Design
- **Implementation**: Create design doc, define success criteria, plan phases
- **Acceptance**: Design complete, metrics defined, plan outlined
### Task 3+: Implementation Tasks
**Trigger**: `idea.next_steps[]`
- **Title**: From `next_steps[i]` (max 60 chars)
- **Scope**: Inferred from keywords (test→testing, api→backend, ui→frontend)
- **Action**: Detected from verbs (implement, create, update, fix, test, document)
- **Implementation**: Execute step + follow design + write tests
- **Acceptance**: Step implemented + tests passing + code reviewed
### Fallback Task
**Trigger**: No tasks generated from above
- **Title**: `idea.title`
- **Scope**: implementation
- **Action**: Implement
- **Generic implementation + acceptance criteria**
## Priority Calculation
### Issue Priority (1-5)
```
idea.score: 0-10
priority = max(1, min(5, ceil((10 - score) / 2)))
Examples:
score 9-10 → priority 1 (critical)
score 7-8 → priority 2 (high)
score 5-6 → priority 3 (medium)
score 3-4 → priority 4 (low)
score 0-2 → priority 5 (lowest)
```
### Task Priority (1-5)
- Research task: 1 (highest)
- Design task: 2
- Implementation tasks: 3 by default, decrement for later tasks
- Testing/documentation: 4-5
### Complexity Analysis
```
risk: main_challenges.length > 2 ? 'high' : 'medium'
impact: score >= 8 ? 'high' : score >= 6 ? 'medium' : 'low'
complexity: main_challenges > 3 OR tasks > 5 ? 'high'
tasks > 3 ? 'medium' : 'low'
```
## CLI Integration
### Issue Creation
```bash
# Uses heredoc to avoid shell escaping
ccw issue create << 'EOF'
{
"title": "...",
"context": "...",
"priority": 3,
"source": "brainstorm",
"labels": ["brainstorm", "creative", "feasibility-high"],
...
}
EOF
```
### Solution Binding
```bash
# Append solution to JSONL file
echo '{"id":"SOL-xxx","tasks":[...]}' >> .workflow/issues/solutions/{issue-id}.jsonl
# Bind to issue
ccw issue bind {issue-id} {solution-id}
# Update status
ccw issue update {issue-id} --status planned
```
## Error Handling
| Error | Message | Resolution |
|-------|---------|------------|
| Session not found | synthesis.json missing | Check session ID, list available sessions |
| No ideas | top_ideas array empty | Complete brainstorm workflow first |
| Invalid idea index | Index out of range | Check valid range 0 to N-1 |
| Issue creation failed | ccw issue create error | Verify CLI endpoint working |
| Solution binding failed | Bind error | Check issue exists, retry |
## Examples
### Interactive Mode
```bash
/issue:from-brainstorm SESSION="BS-rate-limiting-2025-01-28"
# Output:
# | # | Title | Score | Feasibility |
# |---|-------|-------|-------------|
# | 0 | Token Bucket Algorithm | 8.5 | High |
# | 1 | Sliding Window Counter | 7.2 | Medium |
# | 2 | Fixed Window | 6.1 | High |
# User selects: #0
# Result:
# ✓ Created issue: ISS-20250128-001
# ✓ Created solution: SOL-ISS-20250128-001-ab3d
# ✓ Bound solution to issue
# → Next: /issue:queue
```
### Auto Mode
```bash
/issue:from-brainstorm SESSION="BS-caching-2025-01-28" --auto
# Result:
# Auto-selected: Redis Cache Layer (Score: 9.2/10)
# ✓ Created issue: ISS-20250128-002
# ✓ Solution with 4 tasks
# → Status: planned
```
## Integration Flow
```
brainstorm-with-file
├─ synthesis.json
├─ perspectives.json
└─ .brainstorming/** (optional)
/issue:from-brainstorm ◄─── This command
├─ ISS-YYYYMMDD-NNN (enriched issue)
└─ SOL-{issue-id}-{uid} (structured solution)
/issue:queue
/parallel-dev-cycle
RA → EP → CD → VAS
```
## Session Files Reference
### Input Files
```
.workflow/.brainstorm/BS-{slug}-{date}/
├── synthesis.json # REQUIRED - Top ideas with scores
├── perspectives.json # OPTIONAL - Multi-CLI insights
├── brainstorm.md # Reference only
└── .brainstorming/ # OPTIONAL - Synthesis artifacts
├── system-architect/
│ └── analysis.md # Contains clarifications + enhancements
├── api-designer/
│ └── analysis.md
└── ...
```
### Output Files
```
.workflow/issues/
├── solutions/
│ └── ISS-YYYYMMDD-001.jsonl # Created solution (JSONL)
└── (managed by ccw issue CLI)
```
## Related Commands
- `/workflow:brainstorm-with-file` - Generate brainstorm sessions
- `/workflow:brainstorm:synthesis` - Add clarifications to brainstorm
- `/issue:new` - Create issues from GitHub or text
- `/issue:plan` - Generate solutions via exploration
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute with parallel-dev-cycle
- `ccw issue status <id>` - View issue
- `ccw issue solution <id>` - View solution

View File

@@ -1,416 +0,0 @@
---
name: new
description: Create structured issue from GitHub URL or text description
argument-hint: "[-y|--yes] <github-url | text-description> [--priority 1-5]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*), mcp__ace-tool__search_context(*)
---
## Auto Mode
When `--yes` or `-y`: Skip clarification questions, create issue with inferred details.
# Issue New Command (/issue:new)
## Core Principle
**Requirement Clarity Detection** → Ask only when needed
```
Clear Input (GitHub URL, structured text) → Direct creation
Unclear Input (vague description) → Minimal clarifying questions
```
## Issue Structure
```typescript
interface Issue {
id: string; // GH-123 or ISS-YYYYMMDD-HHMMSS
title: string;
status: 'registered' | 'planned' | 'queued' | 'in_progress' | 'completed' | 'failed';
priority: number; // 1 (critical) to 5 (low)
context: string; // Problem description (single source of truth)
source: 'github' | 'text' | 'discovery';
source_url?: string;
labels?: string[];
// GitHub binding (for non-GitHub sources that publish to GitHub)
github_url?: string; // https://github.com/owner/repo/issues/123
github_number?: number; // 123
// Optional structured fields
expected_behavior?: string;
actual_behavior?: string;
affected_components?: string[];
// Feedback history (failures + human clarifications)
feedback?: {
type: 'failure' | 'clarification' | 'rejection';
stage: string; // new/plan/execute
content: string;
created_at: string;
}[];
// Solution binding
bound_solution_id: string | null;
// Timestamps
created_at: string;
updated_at: string;
}
```
## Quick Reference
```bash
# Clear inputs - direct creation
/issue:new https://github.com/owner/repo/issues/123
/issue:new "Login fails with special chars. Expected: success. Actual: 500 error"
# Vague input - will ask clarifying questions
/issue:new "something wrong with auth"
```
## Implementation
### Phase 1: Input Analysis & Clarity Detection
```javascript
const input = userInput.trim();
const flags = parseFlags(userInput); // --priority
// Detect input type and clarity
const isGitHubUrl = input.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
const isGitHubShort = input.match(/^#(\d+)$/);
const hasStructure = input.match(/(expected|actual|affects|steps):/i);
// Clarity score: 0-3
let clarityScore = 0;
if (isGitHubUrl || isGitHubShort) clarityScore = 3; // GitHub = fully clear
else if (hasStructure) clarityScore = 2; // Structured text = clear
else if (input.length > 50) clarityScore = 1; // Long text = somewhat clear
else clarityScore = 0; // Vague
let issueData = {};
```
### Phase 2: Data Extraction (GitHub or Text)
```javascript
if (isGitHubUrl || isGitHubShort) {
// GitHub - fetch via gh CLI
const result = Bash(`gh issue view ${extractIssueRef(input)} --json number,title,body,labels,url`);
const gh = JSON.parse(result);
issueData = {
id: `GH-${gh.number}`,
title: gh.title,
source: 'github',
source_url: gh.url,
labels: gh.labels.map(l => l.name),
context: gh.body?.substring(0, 500) || gh.title,
...parseMarkdownBody(gh.body)
};
} else {
// Text description
issueData = {
id: `ISS-${new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14)}`,
source: 'text',
...parseTextDescription(input)
};
}
```
### Phase 3: Lightweight Context Hint (Conditional)
```javascript
// ACE search ONLY for medium clarity (1-2) AND missing components
// Skip for: GitHub (has context), vague (needs clarification first)
// Note: Deep exploration happens in /issue:plan, this is just a quick hint
if (clarityScore >= 1 && clarityScore <= 2 && !issueData.affected_components?.length) {
const keywords = extractKeywords(issueData.context);
if (keywords.length >= 2) {
try {
const aceResult = mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: keywords.slice(0, 3).join(' ')
});
issueData.affected_components = aceResult.files?.slice(0, 3) || [];
} catch {
// ACE failure is non-blocking
}
}
}
```
### Phase 4: Conditional Clarification (Only if Unclear)
```javascript
// ONLY ask questions if clarity is low - simple open-ended prompt
if (clarityScore < 2 && (!issueData.context || issueData.context.length < 20)) {
const answer = AskUserQuestion({
questions: [{
question: 'Please describe the issue in more detail:',
header: 'Clarify',
multiSelect: false,
options: [
{ label: 'Provide details', description: 'Describe what, where, and expected behavior' }
]
}]
});
// Use custom text input (via "Other")
if (answer.customText) {
issueData.context = answer.customText;
issueData.title = answer.customText.split(/[.\n]/)[0].substring(0, 60);
issueData.feedback = [{
type: 'clarification',
stage: 'new',
content: answer.customText,
created_at: new Date().toISOString()
}];
}
}
```
### Phase 5: GitHub Publishing Decision (Non-GitHub Sources)
```javascript
// For non-GitHub sources, ask if user wants to publish to GitHub
let publishToGitHub = false;
if (issueData.source !== 'github') {
const publishAnswer = AskUserQuestion({
questions: [{
question: 'Would you like to publish this issue to GitHub?',
header: 'Publish',
multiSelect: false,
options: [
{ label: 'Yes, publish to GitHub', description: 'Create issue on GitHub and link it' },
{ label: 'No, keep local only', description: 'Store as local issue without GitHub sync' }
]
}]
});
publishToGitHub = publishAnswer.answers?.['Publish']?.includes('Yes');
}
```
### Phase 6: Create Issue
**Summary Display:**
- Show ID, title, source, affected files (if any)
**Confirmation** (only for vague inputs, clarityScore < 2):
- Use `AskUserQuestion` to confirm before creation
**Issue Creation** (via CLI endpoint):
```bash
# Option 1: Pipe input (recommended for complex JSON - avoids shell escaping)
echo '{"title":"...", "context":"...", "priority":3}' | ccw issue create
# Option 2: Heredoc (for multi-line JSON)
ccw issue create << 'EOF'
{"title":"...", "context":"含\"引号\"的内容", "priority":3}
EOF
# Option 3: --data parameter (simple cases only)
ccw issue create --data '{"title":"...", "priority":3}'
```
**CLI Endpoint Features:**
| Feature | Description |
|---------|-------------|
| Auto-increment ID | `ISS-YYYYMMDD-NNN` (e.g., `ISS-20251229-001`) |
| Trailing newline | Proper JSONL format, no corruption |
| JSON output | Returns created issue with all fields |
**Example:**
```bash
# Create issue via pipe (recommended)
echo '{"title": "Login fails with special chars", "context": "500 error when password contains quotes", "priority": 2}' | ccw issue create
# Or with heredoc for complex JSON
ccw issue create << 'EOF'
{
"title": "Login fails with special chars",
"context": "500 error when password contains \"quotes\"",
"priority": 2,
"source": "text",
"expected_behavior": "Login succeeds",
"actual_behavior": "500 Internal Server Error"
}
EOF
# Output (JSON)
{
"id": "ISS-20251229-001",
"title": "Login fails with special chars",
"status": "registered",
...
}
```
**GitHub Publishing** (if user opted in):
```javascript
// Step 1: Create local issue FIRST
const localIssue = createLocalIssue(issueData); // ccw issue create
// Step 2: Publish to GitHub if requested
if (publishToGitHub) {
const ghResult = Bash(`gh issue create --title "${issueData.title}" --body "${issueData.context}"`);
// Parse GitHub URL from output
const ghUrl = ghResult.match(/https:\/\/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/)?.[0];
const ghNumber = parseInt(ghUrl?.match(/\/issues\/(\d+)/)?.[1]);
if (ghNumber) {
// Step 3: Update local issue with GitHub binding
Bash(`ccw issue update ${localIssue.id} --github-url "${ghUrl}" --github-number ${ghNumber}`);
// Or via pipe:
// echo '{"github_url":"${ghUrl}","github_number":${ghNumber}}' | ccw issue update ${localIssue.id}
}
}
```
**Workflow:**
```
1. Create local issue (ISS-YYYYMMDD-NNN) → stored in .workflow/issues.jsonl
2. If publishToGitHub:
a. gh issue create → returns GitHub URL
b. Update local issue with github_url + github_number binding
3. Both local and GitHub issues exist, linked together
```
**Example with GitHub Publishing:**
```bash
# User creates text issue
/issue:new "Login fails with special chars. Expected: success. Actual: 500"
# System asks: "Would you like to publish this issue to GitHub?"
# User selects: "Yes, publish to GitHub"
# Output:
# ✓ Local issue created: ISS-20251229-001
# ✓ Published to GitHub: https://github.com/org/repo/issues/123
# ✓ GitHub binding saved to local issue
# → Next step: /issue:plan ISS-20251229-001
# Resulting issue JSON:
{
"id": "ISS-20251229-001",
"title": "Login fails with special chars",
"source": "text",
"github_url": "https://github.com/org/repo/issues/123",
"github_number": 123,
...
}
```
**Completion:**
- Display created issue ID
- Show GitHub URL (if published)
- Show next step: `/issue:plan <id>`
## Execution Flow
```
Phase 1: Input Analysis
└─ Detect clarity score (GitHub URL? Structured text? Keywords?)
Phase 2: Data Extraction (branched by clarity)
┌────────────┬─────────────────┬──────────────┐
│ Score 3 │ Score 1-2 │ Score 0 │
│ GitHub │ Text + ACE │ Vague │
├────────────┼─────────────────┼──────────────┤
│ gh CLI │ Parse struct │ AskQuestion │
│ → parse │ + quick hint │ (1 question) │
│ │ (3 files max) │ → feedback │
└────────────┴─────────────────┴──────────────┘
Phase 3: GitHub Publishing Decision (non-GitHub only)
├─ Source = github: Skip (already from GitHub)
└─ Source ≠ github: AskUserQuestion
├─ Yes → publishToGitHub = true
└─ No → publishToGitHub = false
Phase 4: Create Issue
├─ Score ≥ 2: Direct creation
└─ Score < 2: Confirm first → Create
└─ If publishToGitHub: gh issue create → link URL
Note: Deep exploration & lifecycle deferred to /issue:plan
```
## Helper Functions
```javascript
function extractKeywords(text) {
const stopWords = new Set(['the', 'a', 'an', 'is', 'are', 'was', 'were', 'not', 'with']);
return text
.toLowerCase()
.split(/\W+/)
.filter(w => w.length > 3 && !stopWords.has(w))
.slice(0, 5);
}
function parseTextDescription(text) {
const result = { title: '', context: '' };
const sentences = text.split(/\.(?=\s|$)/);
result.title = sentences[0]?.trim().substring(0, 60) || 'Untitled';
result.context = text.substring(0, 500);
// Extract structured fields if present
const expected = text.match(/expected:?\s*([^.]+)/i);
const actual = text.match(/actual:?\s*([^.]+)/i);
const affects = text.match(/affects?:?\s*([^.]+)/i);
if (expected) result.expected_behavior = expected[1].trim();
if (actual) result.actual_behavior = actual[1].trim();
if (affects) {
result.affected_components = affects[1].split(/[,\s]+/).filter(c => c.includes('/') || c.includes('.'));
}
return result;
}
function parseMarkdownBody(body) {
if (!body) return {};
const result = {};
const problem = body.match(/##?\s*(problem|description)[:\s]*([\s\S]*?)(?=##|$)/i);
const expected = body.match(/##?\s*expected[:\s]*([\s\S]*?)(?=##|$)/i);
const actual = body.match(/##?\s*actual[:\s]*([\s\S]*?)(?=##|$)/i);
if (problem) result.context = problem[2].trim().substring(0, 500);
if (expected) result.expected_behavior = expected[2].trim();
if (actual) result.actual_behavior = actual[2].trim();
return result;
}
```
## Examples
### Clear Input (No Questions)
```bash
/issue:new https://github.com/org/repo/issues/42
# → Fetches, parses, creates immediately
/issue:new "Login fails with special chars. Expected: success. Actual: 500"
# → Parses structure, creates immediately
```
### Vague Input (1 Question)
```bash
/issue:new "auth broken"
# → Asks: "Input unclear. What is the issue about?"
# → User provides details → saved to feedback[]
# → Creates issue
```
## Related Commands
- `/issue:plan` - Plan solution for issue

View File

@@ -1,335 +0,0 @@
---
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "[-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]"
allowed-tools: TodoWrite(*), Task(*), Skill(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-bind solutions without confirmation, use recommended settings.
# Issue Plan Command (/issue:plan)
## Overview
Unified planning command using **issue-plan-agent** that combines exploration and planning into a single closed-loop workflow.
**Behavior:**
- Single solution per issue → auto-bind
- Multiple solutions → return for user selection
- Agent handles file generation
## Core Guidelines
**⚠️ Data Access Principle**: Issues and solutions files can grow very large. To avoid context overflow:
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status pending --brief` | `Read('issues.jsonl')` |
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
| Bind solution | `ccw issue bind <id> <sol-id>` | Direct file edit |
**Output Options**:
- `--brief`: JSON with minimal fields (id, title, status, priority, tags)
- `--json`: Full JSON (agent use only)
**Orchestration vs Execution**:
- **Command (orchestrator)**: Use `--brief` for minimal context
- **Agent (executor)**: Fetch full details → `ccw issue status <id> --json`
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `solutions/*.jsonl` directly.
## Usage
```bash
/issue:plan [<issue-id>[,<issue-id>,...]] [FLAGS]
# Examples
/issue:plan # Default: --all-pending
/issue:plan GH-123 # Single issue
/issue:plan GH-123,GH-124,GH-125 # Batch (up to 3)
/issue:plan --all-pending # All pending issues (explicit)
# Flags
--batch-size <n> Max issues per agent batch (default: 3)
```
## Execution Process
```
Phase 1: Issue Loading & Intelligent Grouping
├─ Parse input (single, comma-separated, or --all-pending)
├─ Fetch issue metadata (ID, title, tags)
├─ Validate issues exist (create if needed)
└─ Intelligent grouping via Gemini (semantic similarity, max 3 per batch)
Phase 2: Unified Explore + Plan (issue-plan-agent)
├─ Launch issue-plan-agent per batch
├─ Agent performs:
│ ├─ ACE semantic search for each issue
│ ├─ Codebase exploration (files, patterns, dependencies)
│ ├─ Solution generation with task breakdown
│ └─ Conflict detection across issues
└─ Output: solution JSON per issue
Phase 3: Solution Registration & Binding
├─ Append solutions to solutions/{issue-id}.jsonl
├─ Single solution per issue → auto-bind
├─ Multiple candidates → AskUserQuestion to select
└─ Update issues.jsonl with bound_solution_id
Phase 4: Summary
├─ Display bound solutions
├─ Show task counts per issue
└─ Display next steps (/issue:queue)
```
## Implementation
### Phase 1: Issue Loading (Brief Info Only)
```javascript
const batchSize = flags.batchSize || 3;
let issues = []; // {id, title, tags} - brief info for grouping only
// Default to --all-pending if no input provided
const useAllPending = flags.allPending || !userInput || userInput.trim() === '';
if (useAllPending) {
// Get pending issues with brief metadata via CLI
const result = Bash(`ccw issue list --status pending,registered --json`).trim();
const parsed = result ? JSON.parse(result) : [];
issues = parsed.map(i => ({ id: i.id, title: i.title || '', tags: i.tags || [] }));
if (issues.length === 0) {
console.log('No pending issues found.');
return;
}
console.log(`Found ${issues.length} pending issues`);
} else {
// Parse comma-separated issue IDs, fetch brief metadata
const ids = userInput.includes(',')
? userInput.split(',').map(s => s.trim())
: [userInput.trim()];
for (const id of ids) {
Bash(`ccw issue init ${id} --title "Issue ${id}" 2>/dev/null || true`);
const info = Bash(`ccw issue status ${id} --json`).trim();
const parsed = info ? JSON.parse(info) : {};
issues.push({ id, title: parsed.title || '', tags: parsed.tags || [] });
}
}
// Note: Agent fetches full issue content via `ccw issue status <id> --json`
// Intelligent grouping: Analyze issues by title/tags, group semantically similar ones
// Strategy: Same module/component, related bugs, feature clusters
// Constraint: Max ${batchSize} issues per batch
console.log(`Processing ${issues.length} issues in ${batches.length} batch(es)`);
TodoWrite({
todos: batches.map((_, i) => ({
content: `Plan batch ${i+1}`,
status: 'pending',
activeForm: `Planning batch ${i+1}`
}))
});
```
### Phase 2: Unified Explore + Plan (issue-plan-agent) - PARALLEL
```javascript
Bash(`mkdir -p .workflow/issues/solutions`);
const pendingSelections = []; // Collect multi-solution issues for user selection
const agentResults = []; // Collect all agent results for conflict aggregation
// Build prompts for all batches
const agentTasks = batches.map((batch, batchIndex) => {
const issueList = batch.map(i => `- ${i.id}: ${i.title}${i.tags.length ? ` [${i.tags.join(', ')}]` : ''}`).join('\n');
const batchIds = batch.map(i => i.id);
const issuePrompt = `
## Plan Issues
**Issues** (grouped by similarity):
${issueList}
**Project Root**: ${process.cwd()}
### Project Context (MANDATORY)
1. Read: .workflow/project-tech.json (technology stack, architecture)
2. Read: .workflow/project-guidelines.json (constraints and conventions)
### Workflow
1. Fetch issue details: ccw issue status <id> --json
2. **Analyze failure history** (if issue.feedback exists):
- Extract failure details from issue.feedback (type='failure', stage='execute')
- Parse error_type, message, task_id, solution_id from content JSON
- Identify failure patterns: repeated errors, root causes, blockers
- **Constraint**: Avoid repeating failed approaches
3. Load project context files
4. Explore codebase (ACE semantic search)
5. Plan solution with tasks (schema: solution-schema.json)
- **If previous solution failed**: Reference failure analysis in solution.approach
- Add explicit verification steps to prevent same failure mode
6. **If github_url exists**: Add final task to comment on GitHub issue
7. Write solution to: .workflow/issues/solutions/{issue-id}.jsonl
8. **CRITICAL - Binding Decision**:
- Single solution → **MUST execute**: ccw issue bind <issue-id> <solution-id>
- Multiple solutions → Return pending_selection only (no bind)
### Failure-Aware Planning Rules
- **Extract failure patterns**: Parse issue.feedback where type='failure' and stage='execute'
- **Identify root causes**: Analyze error_type (test_failure, compilation, timeout, etc.)
- **Design alternative approach**: Create solution that addresses root cause
- **Add prevention steps**: Include explicit verification to catch same error earlier
- **Document lessons**: Reference previous failures in solution.approach
### Rules
- Solution ID format: SOL-{issue-id}-{uid} (uid: 4 random alphanumeric chars, e.g., a7x9)
- Single solution per issue → auto-bind via ccw issue bind
- Multiple solutions → register only, return pending_selection
- Tasks must have quantified acceptance.criteria
### Return Summary
{"bound":[{"issue_id":"...","solution_id":"...","task_count":N}],"pending_selection":[{"issue_id":"...","solutions":[{"id":"...","description":"...","task_count":N}]}]}
`;
return { batchIndex, batchIds, issuePrompt, batch };
});
// Launch agents in parallel (max 10 concurrent)
const MAX_PARALLEL = 10;
for (let i = 0; i < agentTasks.length; i += MAX_PARALLEL) {
const chunk = agentTasks.slice(i, i + MAX_PARALLEL);
const taskIds = [];
// Launch chunk in parallel
for (const { batchIndex, batchIds, issuePrompt, batch } of chunk) {
updateTodo(`Plan batch ${batchIndex + 1}`, 'in_progress');
const taskId = Task(
subagent_type="issue-plan-agent",
run_in_background=true,
description=`Explore & plan ${batch.length} issues: ${batchIds.join(', ')}`,
prompt=issuePrompt
);
taskIds.push({ taskId, batchIndex });
}
console.log(`Launched ${taskIds.length} agents (batch ${i/MAX_PARALLEL + 1}/${Math.ceil(agentTasks.length/MAX_PARALLEL)})...`);
// Collect results from this chunk
for (const { taskId, batchIndex } of taskIds) {
const result = TaskOutput(task_id=taskId, block=true);
// Extract JSON from potential markdown code blocks (agent may wrap in ```json...```)
const jsonText = extractJsonFromMarkdown(result);
let summary;
try {
summary = JSON.parse(jsonText);
} catch (e) {
console.log(`⚠ Batch ${batchIndex + 1}: Failed to parse agent result, skipping`);
updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
continue;
}
agentResults.push(summary); // Store for Phase 3 conflict aggregation
// Verify binding for bound issues (agent should have executed bind)
for (const item of summary.bound || []) {
const status = JSON.parse(Bash(`ccw issue status ${item.issue_id} --json`).trim());
if (status.bound_solution_id === item.solution_id) {
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
} else {
// Fallback: agent failed to bind, execute here
Bash(`ccw issue bind ${item.issue_id} ${item.solution_id}`);
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks) [recovered]`);
}
}
// Collect pending selections for Phase 3
for (const pending of summary.pending_selection || []) {
pendingSelections.push(pending);
}
updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
}
}
```
### Phase 3: Solution Selection (if pending)
```javascript
// Handle multi-solution issues
for (const pending of pendingSelections) {
if (pending.solutions.length === 0) continue;
const options = pending.solutions.slice(0, 4).map(sol => ({
label: `${sol.id} (${sol.task_count} tasks)`,
description: sol.description || sol.approach || 'No description'
}));
const answer = AskUserQuestion({
questions: [{
question: `Issue ${pending.issue_id}: which solution to bind?`,
header: pending.issue_id,
options: options,
multiSelect: false
}]
});
const selected = answer[Object.keys(answer)[0]];
if (!selected || selected === 'Other') continue;
const solId = selected.split(' ')[0];
Bash(`ccw issue bind ${pending.issue_id} ${solId}`);
console.log(`${pending.issue_id}: ${solId} bound`);
}
```
### Phase 4: Summary
```javascript
// Count planned issues via CLI
const planned = JSON.parse(Bash(`ccw issue list --status planned --brief`) || '[]');
const plannedCount = planned.length;
console.log(`
## Done: ${issues.length} issues → ${plannedCount} planned
Next: \`/issue:queue\`\`/issue:execute\`
`);
```
## Error Handling
| Error | Resolution |
|-------|------------|
| Issue not found | Auto-create in issues.jsonl |
| ACE search fails | Agent falls back to ripgrep |
| No solutions generated | Display error, suggest manual planning |
| User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order |
## Bash Compatibility
**Avoid**: `$(cmd)`, `$var`, `for` loops — will be escaped incorrectly
**Use**: Simple commands + `&&` chains, quote comma params `"pending,registered"`
## Quality Checklist
Before completing, verify:
- [ ] All input issues have solutions in `solutions/{issue-id}.jsonl`
- [ ] Single solution issues are auto-bound (`bound_solution_id` set)
- [ ] Multi-solution issues returned in `pending_selection` for user choice
- [ ] Each solution has executable tasks with `modification_points`
- [ ] Task acceptance criteria are quantified (not vague)
- [ ] Conflicts detected and reported (if multiple issues touch same files)
- [ ] Issue status updated to `planned` after binding
## Related Commands
- `/issue:queue` - Form execution queue from bound solutions
- `ccw issue list` - List all issues
- `ccw issue status` - View issue and solution details

View File

@@ -1,445 +0,0 @@
---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent (solution-level)
argument-hint: "[-y|--yes] [--queues <n>] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm queue formation, use recommended conflict resolutions.
# Issue Queue Command (/issue:queue)
## Overview
Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves **inter-solution** conflicts, and creates an ordered execution queue at **solution level**.
**Design Principle**: Queue items are **solutions**, not individual tasks. Each executor receives a complete solution with all its tasks.
## Core Capabilities
- **Agent-driven**: issue-queue-agent handles all ordering logic
- **Solution-level granularity**: Queue items are solutions, not tasks
- **Conflict clarification**: High-severity conflicts prompt user decision
- Semantic priority calculation per solution (0.0-1.0)
- Parallel/Sequential group assignment for solutions
## Core Guidelines
**⚠️ Data Access Principle**: Issues and queue files can grow very large. To avoid context overflow:
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status planned --brief` | `Read('issues.jsonl')` |
| **Batch solutions (NEW)** | `ccw issue solutions --status planned --brief` | Loop `ccw issue solution <id>` |
| List queue (brief) | `ccw issue queue --brief` | `Read('queues/*.json')` |
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
| Get next item | `ccw issue next --json` | `Read('queues/*.json')` |
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
| Sync from queue | `ccw issue update --from-queue` | Direct file edit |
| Read solution (single) | `ccw issue solution <id> --brief` | `Read('solutions/*.jsonl')` |
**Output Options**:
- `--brief`: JSON with minimal fields (id, status, counts)
- `--json`: Full JSON (agent use only)
**Orchestration vs Execution**:
- **Command (orchestrator)**: Use `--brief` for minimal context
- **Agent (executor)**: Fetch full details → `ccw issue status <id> --json`
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `queues/*.json` directly.
## Usage
```bash
/issue:queue [FLAGS]
# Examples
/issue:queue # Form NEW queue from all bound solutions
/issue:queue --queues 3 # Form 3 parallel queues (solutions distributed)
/issue:queue --issue GH-123 # Form queue for specific issue only
/issue:queue --append GH-124 # Append to active queue
/issue:queue --list # List all queues (history)
/issue:queue --switch QUE-xxx # Switch active queue
/issue:queue --archive # Archive completed active queue
# Flags
--queues <n> Number of parallel queues (default: 1)
--issue <id> Form queue for specific issue only
--append <id> Append issue to active queue (don't create new)
--force Skip active queue check, always create new queue
# CLI subcommands (ccw issue queue ...)
ccw issue queue list List all queues with status
ccw issue queue add <issue-id> Add issue to queue (interactive if active queue exists)
ccw issue queue add <issue-id> -f Add to new queue without prompt (force)
ccw issue queue merge <src> --queue <target> Merge source queue into target queue
ccw issue queue switch <queue-id> Switch active queue
ccw issue queue archive Archive current queue
ccw issue queue delete <queue-id> Delete queue from history
```
## Execution Process
```
Phase 1: Solution Loading & Distribution
├─ Load issues.jsonl, filter by status='planned' + bound_solution_id
├─ Read solutions/{issue-id}.jsonl, find bound solution
├─ Extract files_touched from task modification_points
├─ Build solution objects array
└─ If --queues > 1: Partition solutions into N groups (minimize cross-group file conflicts)
Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
├─ Generate N queue IDs (QUE-xxx-1, QUE-xxx-2, ...)
├─ If --queues == 1: Launch single issue-queue-agent
├─ If --queues > 1: Launch N issue-queue-agents IN PARALLEL
├─ Each agent performs:
│ ├─ Conflict analysis (5 types via Gemini CLI)
│ ├─ Build dependency DAG from conflicts
│ ├─ Calculate semantic priority per solution
│ └─ Assign execution groups (parallel/sequential)
└─ Each agent writes: queue JSON + index update (NOT active yet)
Phase 5: Conflict Clarification (if needed)
├─ Collect `clarifications` arrays from all agents
├─ If clarifications exist → AskUserQuestion (batched)
├─ Pass user decisions back to respective agents (resume)
└─ Agents update queues with resolved conflicts
Phase 6: Status Update & Summary
├─ Update issue statuses to 'queued'
└─ Display new queue summary (N queues)
Phase 7: Active Queue Check & Decision (REQUIRED)
├─ Read queue index: ccw issue queue list --brief
├─ Get generated queue ID from agent output
├─ If NO active queue exists:
│ ├─ Set generated queue as active_queue_id
│ ├─ Update index.json
│ └─ Display: "Queue created and activated"
└─ If active queue exists with items:
├─ Display both queues to user
├─ Use AskUserQuestion to prompt:
│ ├─ "Use new queue (keep existing)" → Set new as active, keep old inactive
│ ├─ "Merge: add new items to existing" → Merge new → existing, delete new
│ ├─ "Merge: add existing items to new" → Merge existing → new, archive old
│ └─ "Cancel" → Delete new queue, keep existing active
└─ Execute chosen action
```
## Implementation
### Phase 1: Solution Loading & Distribution
**Data Loading:**
- Use `ccw issue solutions --status planned --brief` to get all planned issues with solutions in **one call**
- Returns: Array of `{ issue_id, solution_id, is_bound, task_count, files_touched[], priority }`
- If no bound solutions found → display message, suggest `/issue:plan`
**Build Solution Objects:**
```javascript
// Single CLI call replaces N individual queries
const result = Bash(`ccw issue solutions --status planned --brief`).trim();
const solutions = result ? JSON.parse(result) : [];
if (solutions.length === 0) {
console.log('No bound solutions found. Run /issue:plan first.');
return;
}
// solutions already in correct format:
// { issue_id, solution_id, is_bound, task_count, files_touched[], priority }
```
**Multi-Queue Distribution** (if `--queues > 1`):
- Use `files_touched` from brief output for partitioning
- Group solutions with overlapping files into same queue
**Output:** Array of solution objects (or N arrays if multi-queue)
### Phase 2-4: Agent-Driven Queue Formation
**Generate Queue IDs** (command layer, pass to agent):
```javascript
const timestamp = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
const numQueues = args.queues || 1;
const queueIds = numQueues === 1
? [`QUE-${timestamp}`]
: Array.from({length: numQueues}, (_, i) => `QUE-${timestamp}-${i + 1}`);
```
**Agent Prompt** (same for each queue, with assigned solutions):
```
## Order Solutions into Execution Queue
**Queue ID**: ${queueId}
**Solutions**: ${solutions.length} from ${issues.length} issues
**Project Root**: ${cwd}
**Queue Index**: ${queueIndex} of ${numQueues}
### Input
${JSON.stringify(solutions)}
// Each object: { issue_id, solution_id, task_count, files_touched[], priority }
### Workflow
Step 1: Build dependency graph from solutions (nodes=solutions, edges=file conflicts via files_touched)
Step 2: Use Gemini CLI for conflict analysis (5 types: file, API, data, dependency, architecture)
Step 3: For high-severity conflicts without clear resolution → add to `clarifications`
Step 4: Calculate semantic priority (base from issue priority + task_count boost)
Step 5: Assign execution groups: P* (parallel, no overlaps) / S* (sequential, shared files)
Step 6: Write queue JSON + update index
### Output Requirements
**Write files** (exactly 2):
- `.workflow/issues/queues/${queueId}.json` - Full queue with solutions, conflicts, groups
- `.workflow/issues/queues/index.json` - Update with new queue entry
**Return JSON**:
\`\`\`json
{
"queue_id": "${queueId}",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{"id": "P1", "type": "parallel", "count": N}],
"issues_queued": ["ISS-xxx"],
"clarifications": [{"conflict_id": "CFT-1", "question": "...", "options": [...]}]
}
\`\`\`
### Rules
- Solution granularity (NOT individual tasks)
- Queue Item ID format: S-1, S-2, S-3, ...
- Use provided Queue ID (do NOT generate new)
- `clarifications` only present if high-severity unresolved conflicts exist
- Use `files_touched` from input (already extracted by orchestrator)
### Done Criteria
- [ ] Queue JSON written with all solutions ordered
- [ ] Index updated with active_queue_id
- [ ] No circular dependencies
- [ ] Parallel groups have no file overlaps
- [ ] Return JSON matches required shape
```
**Launch Agents** (parallel if multi-queue):
```javascript
const numQueues = args.queues || 1;
if (numQueues === 1) {
// Single queue: single agent call
const result = Task(
subagent_type="issue-queue-agent",
prompt=buildPrompt(queueIds[0], solutions),
description=`Order ${solutions.length} solutions`
);
} else {
// Multi-queue: parallel agent calls (single message with N Task calls)
const agentPromises = solutionGroups.map((group, i) =>
Task(
subagent_type="issue-queue-agent",
prompt=buildPrompt(queueIds[i], group, i + 1, numQueues),
description=`Queue ${i + 1}/${numQueues}: ${group.length} solutions`
)
);
// All agents launched in parallel via single message with multiple Task tool calls
}
```
**Multi-Queue Index Update:**
- First queue sets `active_queue_id`
- All queues added to `queues` array with `queue_group` field linking them
### Phase 5: Conflict Clarification
**Collect Agent Results** (multi-queue):
```javascript
// Collect clarifications from all agents
const allClarifications = results.flatMap((r, i) =>
(r.clarifications || []).map(c => ({ ...c, queue_id: queueIds[i], agent_id: agentIds[i] }))
);
```
**Check Agent Return:**
- Parse agent result JSON (or all results if multi-queue)
- If any `clarifications` array exists and non-empty → user decision required
**Clarification Flow:**
```javascript
if (allClarifications.length > 0) {
for (const clarification of allClarifications) {
// Present to user via AskUserQuestion
const answer = AskUserQuestion({
questions: [{
question: `[${clarification.queue_id}] ${clarification.question}`,
header: clarification.conflict_id,
options: clarification.options,
multiSelect: false
}]
});
// Resume respective agent with user decision
Task(
subagent_type="issue-queue-agent",
resume=clarification.agent_id,
prompt=`Conflict ${clarification.conflict_id} resolved: ${answer.selected}`
);
}
}
```
### Phase 6: Status Update & Summary
**Status Update** (MUST use CLI command, NOT direct file operations):
```bash
# Option 1: Batch update from queue (recommended)
ccw issue update --from-queue [queue-id] --json
ccw issue update --from-queue --json # Use active queue
ccw issue update --from-queue QUE-xxx --json # Use specific queue
# Option 2: Individual issue update
ccw issue update <issue-id> --status queued
```
**⚠️ IMPORTANT**: Do NOT directly modify `issues.jsonl`. Always use CLI command to ensure proper validation and history tracking.
**Output** (JSON):
```json
{
"success": true,
"queue_id": "QUE-xxx",
"queued": ["ISS-001", "ISS-002"],
"queued_count": 2,
"unplanned": ["ISS-003"],
"unplanned_count": 1
}
```
**Behavior:**
- Updates issues in queue to `status: 'queued'` (skips already queued/executing/completed)
- Identifies planned issues with `bound_solution_id` NOT in queue → `unplanned` array
- Optional `queue-id`: defaults to active queue if omitted
**Summary Output:**
- Display queue ID, solution count, task count
- Show unplanned issues (planned but NOT in queue)
- Show next step: `/issue:execute`
### Phase 7: Active Queue Check & Decision
**After agent completes Phase 1-6, check for active queue:**
```bash
ccw issue queue list --brief
```
**Decision:**
- If `active_queue_id` is null → `ccw issue queue switch <new-queue-id>` (activate new queue)
- If active queue exists → Use **AskUserQuestion** to prompt user
**AskUserQuestion:**
```javascript
AskUserQuestion({
questions: [{
question: "Active queue exists. How would you like to proceed?",
header: "Queue Action",
options: [
{ label: "Merge into existing queue", description: "Add new items to active queue, delete new queue" },
{ label: "Use new queue", description: "Switch to new queue, keep existing in history" },
{ label: "Cancel", description: "Delete new queue, keep existing active" }
],
multiSelect: false
}]
})
```
**Action Commands:**
| User Choice | Commands |
|-------------|----------|
| **Merge into existing** | `ccw issue queue merge <new-queue-id> --queue <active-queue-id>` then `ccw issue queue delete <new-queue-id>` |
| **Use new queue** | `ccw issue queue switch <new-queue-id>` |
| **Cancel** | `ccw issue queue delete <new-queue-id>` |
## Storage Structure (Queue History)
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── queues/ # Queue history directory
│ ├── index.json # Queue index (active + history)
│ ├── {queue-id}.json # Individual queue files
│ └── ...
└── solutions/
├── {issue-id}.jsonl # Solutions for issue
└── ...
```
### Queue Index Schema
```json
{
"active_queue_id": "QUE-20251227-143000",
"active_queue_group": "QGR-20251227-143000",
"queues": [
{
"id": "QUE-20251227-143000-1",
"queue_group": "QGR-20251227-143000",
"queue_index": 1,
"total_queues": 3,
"status": "active",
"issue_ids": ["ISS-xxx", "ISS-yyy"],
"total_solutions": 3,
"completed_solutions": 1,
"created_at": "2025-12-27T14:30:00Z"
}
]
}
```
**Multi-Queue Fields:**
- `queue_group`: Links queues created in same batch (format: `QGR-{timestamp}`)
- `queue_index`: Position in group (1-based)
- `total_queues`: Total queues in group
- `active_queue_group`: Current active group (for multi-queue execution)
**Note**: Queue file schema is produced by `issue-queue-agent`. See agent documentation for details.
## Error Handling
| Error | Resolution |
|-------|------------|
| No bound solutions | Display message, suggest /issue:plan |
| Circular dependency | List cycles, abort queue formation |
| High-severity conflict | Return `clarifications`, prompt user decision |
| User cancels clarification | Abort queue formation |
| **index.json not updated** | Auto-fix: Set active_queue_id to new queue |
| **Queue file missing solutions** | Abort with error, agent must regenerate |
| **User cancels queue add** | Display message, return without changes |
| **Merge with empty source** | Skip merge, display warning |
| **All items duplicate** | Skip merge, display "All items already exist" |
## Quality Checklist
Before completing, verify:
- [ ] All planned issues with `bound_solution_id` are included
- [ ] Queue JSON written to `queues/{queue-id}.json` (N files if multi-queue)
- [ ] Index updated in `queues/index.json` with `active_queue_id`
- [ ] Multi-queue: All queues share same `queue_group`
- [ ] No circular dependencies in solution DAG
- [ ] All conflicts resolved (auto or via user clarification)
- [ ] Parallel groups have no file overlaps
- [ ] Cross-queue: No file overlaps between queues
- [ ] Issue statuses updated to `queued`
## Related Commands
- `/issue:execute` - Execute queue with codex
- `ccw issue queue list` - View current queue
- `ccw issue update --from-queue [queue-id]` - Sync issue statuses from queue

View File

@@ -0,0 +1,687 @@
---
name: code-map-memory
description: 3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)
argument-hint: "\"feature-keyword\" [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Code Flow Mapping Generator
## Overview
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates code flow analysis to specialized cli-explore-agent. Orchestrator transforms agent's JSON analysis into Mermaid documentation.
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
**Execution Paths**:
- **Full Path**: All 3 phases (no existing codemap OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing codemap found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated
**Agent Responsibility** (cli-explore-agent):
- Deep code flow analysis using dual-source strategy (Bash + Gemini CLI)
- Returns structured JSON with architecture, functions, data flow, conditionals, patterns
- NO file writing - analysis only
**Orchestrator Responsibility**:
- Provides feature keyword and analysis scope to agent
- Transforms agent's JSON into Mermaid-enriched markdown documentation
- Writes all files (5 docs + metadata.json + SKILL.md)
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Feature-Specific SKILL**: Each feature creates independent `.claude/skills/codemap-{feature}/` package
3. **Specialized Agent**: Phase 2a uses cli-explore-agent for professional code analysis (Deep Scan mode)
4. **Orchestrator Documentation**: Phase 2b transforms agent JSON into Mermaid markdown files
5. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
6. **No User Prompts**: Never ask user questions or wait for input between phases
7. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
8. **Multi-Level Detail**: Generate 4 levels: architecture → function → data → conditional
---
## 3-Phase Execution
### Phase 1: Parse Feature Keyword & Check Existing
**Goal**: Normalize feature keyword, check existing codemap, prepare for analysis
**Step 1: Parse Feature Keyword**
```bash
# Get feature keyword from argument
FEATURE_KEYWORD="$1"
# Normalize: lowercase, spaces to hyphens
normalized_feature=$(echo "$FEATURE_KEYWORD" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr '_' '-')
# Example: "User Authentication" → "user-authentication"
# Example: "支付处理" → "支付处理" (keep non-ASCII)
```
**Step 2: Set Tool Preference**
```bash
# Default to gemini unless --tool specified
TOOL="${tool_flag:-gemini}"
```
**Step 3: Check Existing Codemap**
```bash
# Define codemap directory
CODEMAP_DIR=".claude/skills/codemap-${normalized_feature}"
# Check if codemap exists
bash(test -d "$CODEMAP_DIR" && echo "exists" || echo "not_exists")
# Count existing files
bash(find "$CODEMAP_DIR" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Step 4: Skip Decision**
```javascript
if (existing_files > 0 && !regenerate_flag) {
SKIP_GENERATION = true
message = "Codemap already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
bash(rm -rf "$CODEMAP_DIR")
SKIP_GENERATION = false
message = "Regenerating codemap from scratch."
} else {
SKIP_GENERATION = false
message = "No existing codemap found, generating new code flow analysis."
}
```
**Output Variables**:
- `FEATURE_KEYWORD`: Original feature keyword
- `normalized_feature`: Normalized feature name for directory
- `CODEMAP_DIR`: `.claude/skills/codemap-{feature}`
- `TOOL`: CLI tool to use (gemini or qwen)
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress
---
### Phase 2: Code Flow Analysis & Documentation Generation
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Use cli-explore-agent for professional code analysis, then orchestrator generates Mermaid documentation
**Architecture**: Phase 2a (Agent Analysis) → Phase 2b (Orchestrator Documentation)
---
#### Phase 2a: cli-explore-agent Analysis
**Purpose**: Leverage specialized cli-explore-agent for deep code flow analysis
**Agent Task Specification**:
```
Task(
subagent_type: "cli-explore-agent",
description: "Analyze code flow: {FEATURE_KEYWORD}",
prompt: "
Perform Deep Scan analysis for feature: {FEATURE_KEYWORD}
**Analysis Mode**: deep-scan (Dual-source: Bash structural scan + Gemini semantic analysis)
**Analysis Objectives**:
1. **Module Architecture**: Identify high-level module organization, interactions, and entry points
2. **Function Call Chains**: Trace execution paths, call sequences, and parameter flows
3. **Data Transformations**: Map data structure changes and transformation stages
4. **Conditional Paths**: Document decision trees, branches, and error handling strategies
5. **Design Patterns**: Discover architectural patterns and extract design intent
**Scope**:
- Feature: {FEATURE_KEYWORD}
- CLI Tool: {TOOL} (gemini-2.5-pro or qwen coder-model)
- File Discovery: MCP Code Index (preferred) + rg fallback
- Target: 5-15 most relevant files
**MANDATORY FIRST STEP**:
Read: ~/.claude/workflows/cli-templates/schemas/codemap-json-schema.json
**Output**: Return JSON following schema exactly. NO FILE WRITING - return JSON analysis only.
**Critical Requirements**:
- Use Deep Scan mode: Bash (Phase 1 - precise locations) + Gemini CLI (Phase 2 - semantic understanding) + Synthesis (Phase 3 - merge with attribution)
- Focus exclusively on {FEATURE_KEYWORD} feature flow
- Include file:line references for ALL findings
- Extract design intent from code structure and comments
- NO FILE WRITING - return JSON analysis only
- Handle tool failures gracefully (Gemini → Qwen fallback, MCP → rg fallback)
"
)
```
**Agent Output**: JSON analysis result with architecture, functions, data flow, conditionals, and patterns
---
#### Phase 2b: Orchestrator Documentation Generation
**Purpose**: Transform cli-explore-agent JSON into Mermaid-enriched documentation
**Input**: Agent's JSON analysis result
**Process**:
1. **Parse Agent Analysis**:
```javascript
const analysis = JSON.parse(agentResult)
const { feature, files_analyzed, architecture, function_calls, data_flow, conditional_logic, design_patterns } = analysis
```
2. **Generate Mermaid Diagrams from Structured Data**:
**a) architecture-flow.md** (~3K tokens):
```javascript
// Convert architecture.modules + architecture.interactions → Mermaid graph TD
const architectureMermaid = `
graph TD
${architecture.modules.map(m => ` ${m.name}[${m.name}]`).join('\n')}
${architecture.interactions.map(i => ` ${i.from} -->|${i.type}| ${i.to}`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/architecture-flow.md`,
content: `---
feature: ${feature}
level: architecture
detail: high-level module interactions
---
# Architecture Flow: ${feature}
## Overview
${architecture.overview}
## Module Architecture
${architecture.modules.map(m => `### ${m.name}\n- **File**: ${m.file}\n- **Role**: ${m.responsibility}\n- **Dependencies**: ${m.dependencies.join(', ')}`).join('\n\n')}
## Flow Diagram
\`\`\`mermaid
${architectureMermaid}
\`\`\`
## Key Interactions
${architecture.interactions.map(i => `- **${i.from} → ${i.to}**: ${i.description}`).join('\n')}
## Entry Points
${architecture.entry_points.map(e => `- **${e.function}** (${e.file}): ${e.description}`).join('\n')}
`
})
```
**b) function-calls.md** (~5K tokens):
```javascript
// Convert function_calls.sequences → Mermaid sequenceDiagram
const sequenceMermaid = `
sequenceDiagram
${function_calls.sequences.map(s => ` ${s.from}->>${s.to}: ${s.method}`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/function-calls.md`,
content: `---
feature: ${feature}
level: function
detail: function-level call sequences
---
# Function Call Chains: ${feature}
## Call Sequence Diagram
\`\`\`mermaid
${sequenceMermaid}
\`\`\`
## Detailed Call Chains
${function_calls.call_chains.map(chain => `
### Chain ${chain.chain_id}: ${chain.description}
${chain.sequence.map(fn => `- **${fn.function}** (${fn.file})\n - Calls: ${fn.calls.join(', ')}`).join('\n')}
`).join('\n')}
## Parameters & Returns
${function_calls.sequences.map(s => `- **${s.method}** → Returns: ${s.returns || 'void'}`).join('\n')}
`
})
```
**c) data-flow.md** (~4K tokens):
```javascript
// Convert data_flow.transformations → Mermaid flowchart LR
const dataFlowMermaid = `
flowchart LR
${data_flow.transformations.map((t, i) => ` Stage${i}[${t.from}] -->|${t.transformer}| Stage${i+1}[${t.to}]`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/data-flow.md`,
content: `---
feature: ${feature}
level: data
detail: data structure transformations
---
# Data Flow: ${feature}
## Data Transformation Diagram
\`\`\`mermaid
${dataFlowMermaid}
\`\`\`
## Data Structures
${data_flow.structures.map(s => `### ${s.name} (${s.stage})\n\`\`\`json\n${JSON.stringify(s.shape, null, 2)}\n\`\`\``).join('\n\n')}
## Transformations
${data_flow.transformations.map(t => `- **${t.from} → ${t.to}** via \`${t.transformer}\` (${t.file})`).join('\n')}
`
})
```
**d) conditional-paths.md** (~4K tokens):
```javascript
// Convert conditional_logic.branches → Mermaid flowchart TD
const conditionalMermaid = `
flowchart TD
Start[Entry Point]
${conditional_logic.branches.map((b, i) => `
Start --> Check${i}{${b.condition}}
Check${i} -->|Yes| Path${i}A[${b.true_path}]
Check${i} -->|No| Path${i}B[${b.false_path}]
`).join('\n')}
`
Write({
file_path: `${CODEMAP_DIR}/conditional-paths.md`,
content: `---
feature: ${feature}
level: conditional
detail: decision trees and error paths
---
# Conditional Paths: ${feature}
## Decision Tree
\`\`\`mermaid
${conditionalMermaid}
\`\`\`
## Branch Conditions
${conditional_logic.branches.map(b => `- **${b.condition}** (${b.file})\n - True: ${b.true_path}\n - False: ${b.false_path}`).join('\n')}
## Error Handling
${conditional_logic.error_handling.map(e => `- **${e.error_type}**: Handler \`${e.handler}\` (${e.file}) - Recovery: ${e.recovery}`).join('\n')}
`
})
```
**e) complete-flow.md** (~8K tokens):
```javascript
// Integrate all Mermaid diagrams
Write({
file_path: `${CODEMAP_DIR}/complete-flow.md`,
content: `---
feature: ${feature}
level: complete
detail: integrated multi-level view
---
# Complete Flow: ${feature}
## Integrated Flow Diagram
\`\`\`mermaid
graph TB
subgraph Architecture
${architecture.modules.map(m => ` ${m.name}[${m.name}]`).join('\n')}
end
subgraph "Function Calls"
${function_calls.call_chains[0]?.sequence.map(fn => ` ${fn.function}`).join('\n') || ''}
end
subgraph "Data Flow"
${data_flow.structures.map(s => ` ${s.name}[${s.name}]`).join('\n')}
end
\`\`\`
## Complete Trace
[Comprehensive end-to-end documentation combining all analysis layers]
## Design Patterns Identified
${design_patterns.map(p => `- **${p.pattern}** in ${p.location}: ${p.description}`).join('\n')}
## Recommendations
${analysis.recommendations.map(r => `- ${r}`).join('\n')}
## Cross-References
- [Architecture Flow](./architecture-flow.md) - High-level module structure
- [Function Calls](./function-calls.md) - Detailed call chains
- [Data Flow](./data-flow.md) - Data transformation stages
- [Conditional Paths](./conditional-paths.md) - Decision trees and error handling
`
})
```
3. **Write metadata.json**:
```javascript
Write({
file_path: `${CODEMAP_DIR}/metadata.json`,
content: JSON.stringify({
feature: feature,
normalized_name: normalized_feature,
generated_at: new Date().toISOString(),
tool_used: analysis.analysis_metadata.tool_used,
files_analyzed: files_analyzed.map(f => f.file),
analysis_summary: {
total_files: files_analyzed.length,
modules_traced: architecture.modules.length,
functions_traced: function_calls.call_chains.reduce((sum, c) => sum + c.sequence.length, 0),
patterns_discovered: design_patterns.length
}
}, null, 2)
})
```
4. **Report Phase 2 Completion**:
```
Phase 2 Complete: Code flow analysis and documentation generated
- Agent Analysis: cli-explore-agent with {TOOL}
- Files Analyzed: {count}
- Documentation Generated: 5 markdown files + metadata.json
- Location: {CODEMAP_DIR}
```
**Completion Criteria**:
- cli-explore-agent task completed successfully with JSON result
- 5 documentation files written with valid Mermaid diagrams
- metadata.json written with analysis summary
- All files properly formatted and cross-referenced
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
---
### Phase 3: Generate SKILL.md Index
**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
**Goal**: Read generated flow documentation and create SKILL.md index with progressive loading
**Steps**:
1. **Verify Generated Files**:
```bash
bash(find "{CODEMAP_DIR}" -name "*.md" -type f | sort)
```
2. **Read metadata.json**:
```javascript
Read({CODEMAP_DIR}/metadata.json)
// Extract: feature, normalized_name, files_analyzed, analysis_summary
```
3. **Read File Headers** (optional, first 30 lines):
```javascript
Read({CODEMAP_DIR}/architecture-flow.md, limit: 30)
Read({CODEMAP_DIR}/function-calls.md, limit: 30)
// Extract overview and diagram counts
```
4. **Generate SKILL.md Index**:
Template structure:
```yaml
---
name: codemap-{normalized_feature}
description: Code flow mapping for {FEATURE_KEYWORD} feature (located at {project_path}). Load this SKILL when analyzing, tracing, or understanding {FEATURE_KEYWORD} execution flow, especially when no relevant context exists in memory.
version: 1.0.0
generated_at: {ISO_TIMESTAMP}
---
# Code Flow Map: {FEATURE_KEYWORD}
## Feature: `{FEATURE_KEYWORD}`
**Analysis Date**: {DATE}
**Tool Used**: {TOOL}
**Files Analyzed**: {COUNT}
## Progressive Loading
### Level 0: Quick Overview (~2K tokens)
- [Architecture Flow](./architecture-flow.md) - High-level module interactions
### Level 1: Core Flows (~10K tokens)
- [Architecture Flow](./architecture-flow.md) - Module architecture
- [Function Calls](./function-calls.md) - Function call chains
### Level 2: Complete Analysis (~20K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md) - Data transformations
### Level 3: Deep Dive (~30K tokens)
- [Architecture Flow](./architecture-flow.md)
- [Function Calls](./function-calls.md)
- [Data Flow](./data-flow.md)
- [Conditional Paths](./conditional-paths.md) - Branches and error handling
- [Complete Flow](./complete-flow.md) - Integrated comprehensive view
## Usage
Load this SKILL package when:
- Analyzing {FEATURE_KEYWORD} implementation
- Tracing execution flow for debugging
- Understanding code dependencies
- Planning refactoring or enhancements
## Analysis Summary
- **Modules Traced**: {modules_traced}
- **Functions Traced**: {functions_traced}
- **Files Analyzed**: {total_files}
## Mermaid Diagrams Included
- Architecture flow diagram (graph TD)
- Function call sequence diagram (sequenceDiagram)
- Data transformation flowchart (flowchart LR)
- Conditional decision tree (flowchart TD)
- Complete integrated diagram (graph TB)
```
5. **Write SKILL.md**:
```javascript
Write({
file_path: `{CODEMAP_DIR}/SKILL.md`,
content: generatedIndexMarkdown
})
```
**Completion Criteria**:
- SKILL.md index written
- All documentation files verified
- Progressive loading levels (0-3) properly structured
- Mermaid diagram references included
**TodoWrite**: Mark phase 3 completed
**Final Report**:
```
Code Flow Mapping Complete
Feature: {FEATURE_KEYWORD}
Location: .claude/skills/codemap-{normalized_feature}/
Files Generated:
- SKILL.md (index)
- architecture-flow.md (with Mermaid diagram)
- function-calls.md (with Mermaid sequence diagram)
- data-flow.md (with Mermaid flowchart)
- conditional-paths.md (with Mermaid decision tree)
- complete-flow.md (with integrated Mermaid diagram)
- metadata.json
Analysis:
- Files analyzed: {count}
- Modules traced: {count}
- Functions traced: {count}
Usage: Skill(command: "codemap-{normalized_feature}")
```
---
## Implementation Details
### TodoWrite Patterns
**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "in_progress", "activeForm": "Parsing feature keyword"},
{"content": "Agent analyzes code flow and generates files", "status": "pending", "activeForm": "Analyzing code flow"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```
**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "in_progress", ...},
{"content": "Generate SKILL.md index", "status": "pending", ...}
]})
// After Phase 2
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
// After Phase 3
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```
**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
{"content": "Parse feature keyword and check existing", "status": "completed", ...},
{"content": "Agent analyzes code flow and generates files", "status": "completed", ...}, // Skipped
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```
### Execution Flow
**Full Path**:
```
User → TodoWrite Init → Phase 1 (parse) → Phase 2 (agent analyzes) → Phase 3 (write index) → Report
```
**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```
### Error Handling
**Phase 1 Errors**:
- Empty feature keyword: Report error, ask user to provide feature description
- Invalid characters: Normalize and continue
**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if fails again
- No files discovered: Warn user, ask for more specific feature keyword
- CLI failures: Agent handles internally with retries
- Invalid Mermaid syntax: Agent validates before writing
**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration
---
## Parameters
```bash
/memory:code-map-memory "feature-keyword" [--regenerate] [--tool <gemini|qwen>]
```
**Arguments**:
- **"feature-keyword"**: Feature or flow to analyze (required)
- Examples: `"user authentication"`, `"payment processing"`, `"数据导入流程"`
- Can be English, Chinese, or mixed
- Spaces and underscores normalized to hyphens
- **--regenerate**: Force regenerate existing codemap (deletes and recreates)
- **--tool**: CLI tool for analysis (default: gemini)
- `gemini`: Comprehensive flow analysis with gemini-2.5-pro
- `qwen`: Alternative with coder-model
---
## Examples
**Generated File Structure** (for all examples):
```
.claude/skills/codemap-{feature}/
├── SKILL.md # Index (Phase 3)
├── architecture-flow.md # Agent (Phase 2) - High-level flow
├── function-calls.md # Agent (Phase 2) - Function chains
├── data-flow.md # Agent (Phase 2) - Data transformations
├── conditional-paths.md # Agent (Phase 2) - Branches & errors
├── complete-flow.md # Agent (Phase 2) - Integrated view
└── metadata.json # Agent (Phase 2)
```
### Example 1: User Authentication Flow
```bash
/memory:code-map-memory "user authentication"
```
**Workflow**:
1. Phase 1: Normalizes to "user-authentication", checks existing codemap
2. Phase 2: Agent discovers auth-related files, executes CLI analysis, generates 5 flow docs with Mermaid
3. Phase 3: Generates SKILL.md index with progressive loading
**Output**: `.claude/skills/codemap-user-authentication/` with 6 files + metadata
### Example 3: Regenerate with Qwen
```bash
/memory:code-map-memory "payment processing" --regenerate --tool qwen
```
**Workflow**:
1. Phase 1: Deletes existing codemap due to --regenerate
2. Phase 2: Agent uses qwen with coder-model for fresh analysis
3. Phase 3: Generates updated SKILL.md
---
## Architecture
```
code-map-memory (orchestrator)
├─ Phase 1: Parse & Check (bash commands, skip decision)
├─ Phase 2: Code Analysis & Documentation (skippable)
│ ├─ Phase 2a: cli-explore-agent Analysis
│ │ └─ Deep Scan: Bash structural + Gemini semantic → JSON
│ └─ Phase 2b: Orchestrator Documentation
│ └─ Transform JSON → 5 Mermaid markdown files + metadata.json
└─ Phase 3: Write SKILL.md (index generation, always runs)
Output: .claude/skills/codemap-{feature}/
```

View File

@@ -1,383 +0,0 @@
---
name: compact
description: Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool
argument-hint: "[optional: session description]"
allowed-tools: mcp__ccw-tools__core_memory(*), Read(*)
examples:
- /memory:compact
- /memory:compact "completed core-memory module"
---
# Memory Compact Command (/memory:compact)
## 1. Overview
The `memory:compact` command **compresses current session working memory** into structured text optimized for **session recovery**, extracts critical information, and saves it to persistent storage via MCP `core_memory` tool.
**Core Philosophy**:
- **Session Recovery First**: Capture everything needed to resume work seamlessly
- **Minimize Re-exploration**: Include file paths, decisions, and state to avoid redundant analysis
- **Preserve Train of Thought**: Keep notes and hypotheses for complex debugging
- **Actionable State**: Record last action result and known issues
## 2. Parameters
- `"session description"` (Optional): Session description to supplement objective
- Example: "completed core-memory module"
- Example: "debugging JWT refresh - suspected memory leak"
## 3. Structured Output Format
```markdown
## Session ID
[WFS-ID if workflow session active, otherwise (none)]
## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]
## Objective
[High-level goal - the "North Star" of this session]
## Execution Plan
[CRITICAL: Embed the LATEST plan in its COMPLETE and DETAILED form]
### Source: [workflow | todo | user-stated | inferred]
<details>
<summary>Full Execution Plan (Click to expand)</summary>
[PRESERVE COMPLETE PLAN VERBATIM - DO NOT SUMMARIZE]
- ALL phases, tasks, subtasks
- ALL file paths (absolute)
- ALL dependencies and prerequisites
- ALL acceptance criteria
- ALL status markers ([x] done, [ ] pending)
- ALL notes and context
Example:
## Phase 1: Setup
- [x] Initialize project structure
- Created D:\Claude_dms3\src\core\index.ts
- Added dependencies: lodash, zod
- [ ] Configure TypeScript
- Update tsconfig.json for strict mode
## Phase 2: Implementation
- [ ] Implement core API
- Target: D:\Claude_dms3\src\api\handler.ts
- Dependencies: Phase 1 complete
- Acceptance: All tests pass
</details>
## Working Files (Modified)
[Absolute paths to actively modified files]
- D:\Claude_dms3\src\file1.ts (role: main implementation)
- D:\Claude_dms3\tests\file1.test.ts (role: unit tests)
## Reference Files (Read-Only)
[Absolute paths to context files - NOT modified but essential for understanding]
- D:\Claude_dms3\.claude\CLAUDE.md (role: project instructions)
- D:\Claude_dms3\src\types\index.ts (role: type definitions)
- D:\Claude_dms3\package.json (role: dependencies)
## Last Action
[Last significant action and its result/status]
## Decisions
- [Decision]: [Reasoning]
- [Decision]: [Reasoning]
## Constraints
- [User-specified limitation or preference]
## Dependencies
- [Added/changed packages or environment requirements]
## Known Issues
- [Deferred bug or edge case]
## Changes Made
- [Completed modification]
## Pending
- [Next step] or (none)
## Notes
[Unstructured thoughts, hypotheses, debugging trails]
```
## 4. Field Definitions
| Field | Purpose | Recovery Value |
|-------|---------|----------------|
| **Session ID** | Workflow session identifier (WFS-*) | Links memory to specific stateful task execution |
| **Project Root** | Absolute path to project directory | Enables correct path resolution in new sessions |
| **Objective** | Ultimate goal of the session | Prevents losing track of broader feature |
| **Execution Plan** | Complete plan from any source (verbatim) | Preserves full planning context, avoids re-planning |
| **Working Files** | Actively modified files (absolute paths) | Immediately identifies where work was happening |
| **Reference Files** | Read-only context files (absolute paths) | Eliminates re-exploration for critical context |
| **Last Action** | Final tool output/status | Immediate state awareness (success/failure) |
| **Decisions** | Architectural choices + reasoning | Prevents re-litigating settled decisions |
| **Constraints** | User-imposed limitations | Maintains personalized coding style |
| **Dependencies** | Package/environment changes | Prevents missing dependency errors |
| **Known Issues** | Deferred bugs/edge cases | Ensures issues aren't forgotten |
| **Changes Made** | Completed modifications | Clear record of what was done |
| **Pending** | Next steps | Immediate action items |
| **Notes** | Hypotheses, debugging trails | Preserves "train of thought" |
## 5. Execution Flow
### Step 1: Analyze Current Session
Extract the following from conversation history:
```javascript
const sessionAnalysis = {
sessionId: "", // WFS-* if workflow session active, null otherwise
projectRoot: "", // Absolute path: D:\Claude_dms3
objective: "", // High-level goal (1-2 sentences)
executionPlan: {
source: "workflow" | "todo" | "user-stated" | "inferred",
content: "" // Full plan content - ALWAYS preserve COMPLETE and DETAILED form
},
workingFiles: [], // {absolutePath, role} - modified files
referenceFiles: [], // {absolutePath, role} - read-only context files
lastAction: "", // Last significant action + result
decisions: [], // {decision, reasoning}
constraints: [], // User-specified limitations
dependencies: [], // Added/changed packages
knownIssues: [], // Deferred bugs
changesMade: [], // Completed modifications
pending: [], // Next steps
notes: "" // Unstructured thoughts
}
```
### Step 2: Generate Structured Text
```javascript
// Helper: Generate execution plan section
const generateExecutionPlan = (plan) => {
const sourceLabels = {
'workflow': 'workflow (IMPL_PLAN.md)',
'todo': 'todo (TodoWrite)',
'user-stated': 'user-stated',
'inferred': 'inferred'
};
// CRITICAL: Preserve complete plan content verbatim - DO NOT summarize
return `### Source: ${sourceLabels[plan.source] || plan.source}
<details>
<summary>Full Execution Plan (Click to expand)</summary>
${plan.content}
</details>`;
};
const structuredText = `## Session ID
${sessionAnalysis.sessionId || '(none)'}
## Project Root
${sessionAnalysis.projectRoot}
## Objective
${sessionAnalysis.objective}
## Execution Plan
${generateExecutionPlan(sessionAnalysis.executionPlan)}
## Working Files (Modified)
${sessionAnalysis.workingFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}
## Reference Files (Read-Only)
${sessionAnalysis.referenceFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}
## Last Action
${sessionAnalysis.lastAction}
## Decisions
${sessionAnalysis.decisions.map(d => `- ${d.decision}: ${d.reasoning}`).join('\n') || '(none)'}
## Constraints
${sessionAnalysis.constraints.map(c => `- ${c}`).join('\n') || '(none)'}
## Dependencies
${sessionAnalysis.dependencies.map(d => `- ${d}`).join('\n') || '(none)'}
## Known Issues
${sessionAnalysis.knownIssues.map(i => `- ${i}`).join('\n') || '(none)'}
## Changes Made
${sessionAnalysis.changesMade.map(c => `- ${c}`).join('\n') || '(none)'}
## Pending
${sessionAnalysis.pending.length > 0
? sessionAnalysis.pending.map(p => `- ${p}`).join('\n')
: '(none)'}
## Notes
${sessionAnalysis.notes || '(none)'}`
```
### Step 3: Import to Core Memory via MCP
Use the MCP `core_memory` tool to save the structured text:
```javascript
mcp__ccw-tools__core_memory({
operation: "import",
text: structuredText
})
```
Or via CLI (pipe structured text to import):
```bash
# Write structured text to temp file, then import
echo "$structuredText" | ccw core-memory import
# Or from a file
ccw core-memory import --file /path/to/session-memory.md
```
**Response Format**:
```json
{
"operation": "import",
"id": "CMEM-YYYYMMDD-HHMMSS",
"message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```
### Step 4: Report Recovery ID
After successful import, **clearly display the Recovery ID** to the user:
```
╔════════════════════════════════════════════════════════════════════════════╗
║ ✓ Session Memory Saved ║
║ ║
║ Recovery ID: CMEM-YYYYMMDD-HHMMSS ║
║ ║
║ To restore: "Please import memory <ID>" ║
║ (MCP: core_memory export | CLI: ccw core-memory export --id <ID>) ║
╚════════════════════════════════════════════════════════════════════════════╝
```
## 6. Quality Checklist
Before generating:
- [ ] Session ID captured if workflow session active (WFS-*)
- [ ] Project Root is absolute path (e.g., D:\Claude_dms3)
- [ ] Objective clearly states the "North Star" goal
- [ ] Execution Plan: COMPLETE plan preserved VERBATIM (no summarization)
- [ ] Plan Source: Clearly identified (workflow | todo | user-stated | inferred)
- [ ] Plan Details: ALL phases, tasks, file paths, dependencies, status markers included
- [ ] All file paths are ABSOLUTE (not relative)
- [ ] Working Files: 3-8 modified files with roles
- [ ] Reference Files: Key context files (CLAUDE.md, types, configs)
- [ ] Last Action captures final state (success/failure)
- [ ] Decisions include reasoning, not just choices
- [ ] Known Issues separates deferred from forgotten bugs
- [ ] Notes preserve debugging hypotheses if any
## 7. Path Resolution Rules
### Project Root Detection
1. Check current working directory from environment
2. Look for project markers: `.git/`, `package.json`, `.claude/`
3. Use the topmost directory containing these markers
### Absolute Path Conversion
```javascript
// Convert relative to absolute
const toAbsolutePath = (relativePath, projectRoot) => {
if (path.isAbsolute(relativePath)) return relativePath;
return path.join(projectRoot, relativePath);
};
// Example: "src/api/auth.ts" → "D:\Claude_dms3\src\api\auth.ts"
```
### Reference File Categories
| Category | Examples | Priority |
|----------|----------|----------|
| Project Config | `.claude/CLAUDE.md`, `package.json`, `tsconfig.json` | High |
| Type Definitions | `src/types/*.ts`, `*.d.ts` | High |
| Related Modules | Parent/sibling modules with shared interfaces | Medium |
| Test Files | Corresponding test files for modified code | Medium |
| Documentation | `README.md`, `ARCHITECTURE.md` | Low |
## 8. Plan Detection (Priority Order)
### Priority 1: Workflow Session (IMPL_PLAN.md)
```javascript
// Check for active workflow session
const manifest = await mcp__ccw-tools__session_manager({
operation: "list",
location: "active"
});
if (manifest.sessions?.length > 0) {
const session = manifest.sessions[0];
const plan = await mcp__ccw-tools__session_manager({
operation: "read",
session_id: session.id,
content_type: "plan"
});
sessionAnalysis.sessionId = session.id;
sessionAnalysis.executionPlan.source = "workflow";
sessionAnalysis.executionPlan.content = plan.content;
}
```
### Priority 2: TodoWrite (Current Session Todos)
```javascript
// Extract from conversation - look for TodoWrite tool calls
// Preserve COMPLETE todo list with all details
const todos = extractTodosFromConversation();
if (todos.length > 0) {
sessionAnalysis.executionPlan.source = "todo";
// Format todos with full context - preserve status markers
sessionAnalysis.executionPlan.content = todos.map(t =>
`- [${t.status === 'completed' ? 'x' : t.status === 'in_progress' ? '>' : ' '}] ${t.content}`
).join('\n');
}
```
### Priority 3: User-Stated Plan
```javascript
// Look for explicit plan statements in user messages:
// - "Here's my plan: 1. ... 2. ... 3. ..."
// - "I want to: first..., then..., finally..."
// - Numbered or bulleted lists describing steps
const userPlan = extractUserStatedPlan();
if (userPlan) {
sessionAnalysis.executionPlan.source = "user-stated";
sessionAnalysis.executionPlan.content = userPlan;
}
```
### Priority 4: Inferred Plan
```javascript
// If no explicit plan, infer from:
// - Task description and breakdown discussion
// - Sequence of actions taken
// - Outstanding work mentioned
const inferredPlan = inferPlanFromDiscussion();
if (inferredPlan) {
sessionAnalysis.executionPlan.source = "inferred";
sessionAnalysis.executionPlan.content = inferredPlan;
}
```
## 9. Notes
- **Timing**: Execute at task completion or before context switch
- **Frequency**: Once per independent task or milestone
- **Recovery**: New session can immediately continue with full context
- **Knowledge Graph**: Entity relationships auto-extracted for visualization
- **Absolute Paths**: Critical for cross-session recovery on different machines

View File

@@ -101,10 +101,10 @@ src/ (depth 1) → SINGLE STRATEGY
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Get module structure with classification
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
// OR with path parameter
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
@@ -200,7 +200,7 @@ for (let layer of [3, 2, 1]) {
let strategy = module.depth >= 3 ? "full" : "single";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"${strategy}","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -263,7 +263,7 @@ MODULES:
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ccw tool exec generate_module_docs
EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
- Accepts strategy parameter: full | single
- Accepts folder type detection: code | navigation
- Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -273,7 +273,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"{{strategy}}","sourcePath":".","projectName":"{{project_name}}","tool":"${tool}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
run_in_background: false
})
exit_code=$?
@@ -322,7 +322,7 @@ let project_root = get_project_root();
report("Generating project README.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-readme","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -335,7 +335,7 @@ for (let tool of tool_order) {
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-architecture","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -350,7 +350,7 @@ if (bash_result.stdout.includes("API_FOUND")) {
report("Generating HTTP API documentation...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"http-api","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -459,7 +459,7 @@ Documentation Generation Summary:
## Template Reference
Templates used from `~/.ccw/workflows/cli-templates/prompts/documentation/`:
Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
- `api.txt`: Code API documentation (Part A: Code API, Part B: HTTP API)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories

View File

@@ -51,7 +51,7 @@ Orchestrates context-aware documentation generation/update for changed modules u
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Detect changed modules
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -123,7 +123,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
return async () => {
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -207,21 +207,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_1}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_2}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_3}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module
@@ -374,7 +374,7 @@ Documentation Generation Summary:
## Template Reference
Templates used from `~/.ccw/workflows/cli-templates/prompts/documentation/`:
Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
- `api.txt`: Code API documentation
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders

View File

@@ -0,0 +1,615 @@
---
name: docs
description: Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]"
---
# Documentation Workflow (/memory:docs)
## Overview
Lightweight planner that analyzes project structure, decomposes documentation work into tasks, and generates execution plans. Does NOT generate documentation content itself - delegates to doc-generator agent.
**Execution Strategy**:
- **Dynamic Task Grouping**: Level 1 tasks grouped by top-level directories with document count limit
- **Primary constraint**: Each task generates ≤10 documents (API.md + README.md count)
- **Optimization goal**: Prefer grouping 2 top-level directories per task for context sharing
- **Conflict resolution**: If 2 dirs exceed 10 docs, reduce to 1 dir/task; if 1 dir exceeds 10 docs, split by subdirectories
- **Context benefit**: Same-task directories analyzed together via single Gemini call
- **Parallel Execution**: Multiple Level 1 tasks execute concurrently for faster completion
- **Pre-computed Analysis**: Phase 2 performs unified analysis once, stored in `.process/` for reuse
- **Efficient Data Loading**: All existing docs loaded once in Phase 2, shared across tasks
**Path Mirroring**: Documentation structure mirrors source code under `.workflow/docs/{project_name}/`
- Example: `my_app/src/core/``.workflow/docs/my_app/src/core/API.md`
**Two Execution Modes**:
- **Default (Agent Mode)**: CLI analyzes in `pre_analysis` (MODE=analysis), agent writes docs
- **--cli-execute (CLI Mode)**: CLI generates docs in `implementation_approach` (MODE=write), agent executes CLI commands
## Path Mirroring Strategy
**Principle**: Documentation structure **mirrors** source code structure under project-specific directory.
| Source Path | Project Name | Documentation Path |
|------------|--------------|-------------------|
| `my_app/src/core/` | `my_app` | `.workflow/docs/my_app/src/core/API.md` |
| `my_app/src/modules/auth/` | `my_app` | `.workflow/docs/my_app/src/modules/auth/API.md` |
| `another_project/lib/utils/` | `another_project` | `.workflow/docs/another_project/lib/utils/API.md` |
## Parameters
```bash
/memory:docs [path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]
```
- **path**: Source directory to analyze (default: current directory)
- Specifies the source code directory to be documented
- Documentation is generated in a separate `.workflow/docs/{project_name}/` directory at the workspace root, **not** within the source `path` itself
- The source path's structure is mirrored within the project-specific documentation folder
- Example: analyzing `src/modules` produces documentation at `.workflow/docs/{project_name}/src/modules/`
- **--mode**: Documentation generation mode (default: full)
- `full`: Complete documentation (modules + README + ARCHITECTURE + EXAMPLES + HTTP API)
- `partial`: Module documentation only (API.md + README.md)
- **--tool**: CLI tool selection (default: gemini)
- `gemini`: Comprehensive documentation, pattern recognition
- `qwen`: Architecture analysis, system design focus
- `codex`: Implementation validation, code quality
- **--cli-execute**: Enable CLI-based documentation generation (optional)
## Planning Workflow
### Phase 1: Initialize Session
```bash
# Get target path, project name, and root
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
```
```javascript
// Create docs session (type: docs)
SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}-docs-{timestamp}\"")
// Parse output to get sessionId
```
```bash
# Update workflow-session.json with docs-specific fields
bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
```
### Phase 2: Analyze Structure
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack.
**Commands** (collect data with simple bash):
```bash
# 1. Run folder analysis
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh)
# 2. Get top-level directories (first 2 path levels)
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
# 3. Find existing docs (if directory exists)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
# 4. Read existing docs content (if files exist)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
```json
{
"metadata": {
"generated_at": "2025-11-03T16:57:30.469669",
"project_name": "project_name",
"project_root": "/path/to/project"
},
"folder_analysis": [
{"path": "./src/core", "type": "code", "code_count": 5, "dirs_count": 2}
],
"top_level_dirs": ["src/modules", "lib/core"],
"existing_docs": {
"file_list": [".workflow/docs/project/src/core/API.md"],
"content": "... existing docs content ..."
},
"unified_analysis": [],
"statistics": {
"total": 15,
"code": 8,
"navigation": 7,
"top_level": 3
}
}
```
**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
### Phase 3: Detect Update Mode
**Commands**:
```bash
# Count existing docs from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
- Add `"update_mode": "update"` if count > 0, else `"create"`
- Add `"existing_docs": <count>`
### Phase 4: Decompose Tasks
**Task Hierarchy** (Dynamic based on document count):
```
Small Projects (total ≤10 docs):
Level 1: IMPL-001 (all directories in single task, shared context)
Level 2: IMPL-002 (README, full mode only)
Level 3: IMPL-003 (ARCHITECTURE+EXAMPLES), IMPL-004 (HTTP API, optional)
Medium Projects (Example: 7 top-level dirs, 18 total docs):
Step 1: Count docs per top-level dir
├─ dir1: 3 docs, dir2: 4 docs → Group 1 (7 docs)
├─ dir3: 5 docs, dir4: 3 docs → Group 2 (8 docs)
├─ dir5: 2 docs → Group 3 (2 docs, can add more)
Step 2: Create tasks with ≤10 docs constraint
Level 1: IMPL-001 to IMPL-003 (parallel groups)
├─ IMPL-001: Group 1 (dir1 + dir2, 7 docs, shared context)
├─ IMPL-002: Group 2 (dir3 + dir4, 8 docs, shared context)
└─ IMPL-003: Group 3 (remaining dirs, ≤10 docs)
Level 2: IMPL-004 (README, depends on Level 1, full mode only)
Level 3: IMPL-005 (ARCHITECTURE+EXAMPLES), IMPL-006 (HTTP API, optional)
Large Projects (single dir >10 docs):
Step 1: Detect oversized directory
└─ src/modules/: 15 subdirs → 30 docs (exceeds limit)
Step 2: Split by subdirectories
Level 1: IMPL-001 to IMPL-003 (split oversized dir)
├─ IMPL-001: src/modules/ subdirs 1-5 (10 docs)
├─ IMPL-002: src/modules/ subdirs 6-10 (10 docs)
└─ IMPL-003: src/modules/ subdirs 11-15 (10 docs)
```
**Grouping Algorithm**:
1. Count total docs for each top-level directory
2. Try grouping 2 directories (optimization for context sharing)
3. If group exceeds 10 docs, split to 1 dir/task
4. If single dir exceeds 10 docs, split by subdirectories
5. Create parallel Level 1 tasks with ≤10 docs each
**Commands**:
```bash
# 1. Get top-level directories from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
# 2. Get mode from workflow-session.json
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
# 3. Check for HTTP API
bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")
```
**Data Processing**:
1. Count documents for each top-level directory (from folder_analysis):
- Code folders: 2 docs each (API.md + README.md)
- Navigation folders: 1 doc each (README.md only)
2. Apply grouping algorithm with ≤10 docs constraint:
- Try grouping 2 directories, calculate total docs
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
```json
"groups": {
"count": 3,
"assignments": [
{"group_id": "001", "directories": ["src/modules", "src/utils"], "doc_count": 5},
{"group_id": "002", "directories": ["lib/core"], "doc_count": 6},
{"group_id": "003", "directories": ["lib/helpers"], "doc_count": 3}
]
}
```
**Task ID Calculation**:
```bash
group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
```
### Phase 5: Generate Task JSONs
**CLI Strategy**:
| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|------|-------------|-----------|----------|---------------|------------|
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
| **CLI** | true | implementation_approach | write | --approval-mode yolo | Execute CLI commands, validate output |
**Command Patterns**:
- Gemini/Qwen: `cd dir && gemini -p "..."`
- CLI Mode: `cd dir && gemini --approval-mode yolo -p "..."`
- Codex: `codex -C dir --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
2. Read group assignments from doc-planning-data.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
## Task Templates
### Level 1: Module Trees Group Task (Unified)
**Execution Model**: Each task processes assigned directory group (max 2 directories) using pre-analyzed data from Phase 2.
```json
{
"id": "IMPL-${group_number}",
"title": "Document Module Trees Group ${group_number}",
"status": "pending",
"meta": {
"type": "docs-tree-group",
"agent": "@doc-generator",
"tool": "gemini",
"cli_execute": false,
"group_number": "${group_number}",
"total_groups": "${total_groups}"
},
"context": {
"requirements": [
"Process directories from group ${group_number} in doc-planning-data.json",
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
"Code folders: API.md + README.md; Navigation folders: README.md only",
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
],
"focus_paths": ["${group_dirs_from_json}"],
"precomputed_data": {
"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
}
},
"flow_control": {
"pre_analysis": [
{
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate documentation for assigned directory group",
"description": "Process directories in Group ${group_number} using pre-analyzed data",
"modification_points": [
"Read group directories from [phase2_context].groups.assignments[${group_number}].directories",
"For each directory: parse folder types from folder_analysis, parse structure from unified_analysis",
"Map source_path to .workflow/docs/${project_name}/{path}",
"Generate API.md for code folders, README.md for all folders",
"Preserve user modifications from [phase2_context].existing_docs.content"
],
"logic_flow": [
"phase2 = parse([phase2_context])",
"dirs = phase2.groups.assignments[${group_number}].directories",
"for dir in dirs:",
" folder_info = find(dir, phase2.folder_analysis)",
" outline = find(dir, phase2.unified_analysis)",
" if folder_info.type == 'code': generate API.md + README.md",
" elif folder_info.type == 'navigation': generate README.md only",
" write to .workflow/docs/${project_name}/{dir}/"
],
"depends_on": [],
"output": "group_module_docs"
}
],
"target_files": [
".workflow/docs/${project_name}/*/API.md",
".workflow/docs/${project_name}/*/README.md"
]
}
}
```
**CLI Execute Mode Note**: When `cli_execute=true`, add Step 2 in `implementation_approach`:
```json
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"depends_on": [1],
"output": "generated_docs"
}
```
### Level 2: Project README Task
**Task ID**: `IMPL-${readme_id}` (where `readme_id = group_count + 1`)
**Dependencies**: Depends on all Level 1 tasks completing.
```json
{
"id": "IMPL-${readme_id}",
"title": "Generate Project README",
"status": "pending",
"depends_on": ["IMPL-001", "...", "IMPL-${group_count}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{
"step": "load_existing_readme",
"command": "bash(cat .workflow/docs/${project_name}/README.md 2>/dev/null || echo 'No existing README')",
"output_to": "existing_readme"
},
{
"step": "load_module_docs",
"command": "bash(find .workflow/docs/${project_name} -type f -name '*.md' ! -path '.workflow/docs/${project_name}/README.md' ! -path '.workflow/docs/${project_name}/ARCHITECTURE.md' ! -path '.workflow/docs/${project_name}/EXAMPLES.md' ! -path '.workflow/docs/${project_name}/api/*' | xargs cat)",
"output_to": "all_module_docs"
},
{
"step": "analyze_project",
"command": "bash(gemini \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
"output_to": "project_outline"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate project README",
"description": "Generate project README with navigation links while preserving user modifications",
"modification_points": [
"Parse [project_outline] and [all_module_docs]",
"Generate README structure with navigation links",
"Preserve [existing_readme] user modifications"
],
"logic_flow": ["Parse data", "Generate README with navigation", "Preserve modifications"],
"depends_on": [],
"output": "project_readme"
}
],
"target_files": [".workflow/docs/${project_name}/README.md"]
}
}
```
### Level 3: Architecture & Examples Documentation Task
**Task ID**: `IMPL-${arch_id}` (where `arch_id = group_count + 2`)
**Dependencies**: Depends on Level 2 (Project README).
```json
{
"id": "IMPL-${arch_id}",
"title": "Generate Architecture & Examples Documentation",
"status": "pending",
"depends_on": ["IMPL-${readme_id}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
{"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
{"step": "analyze_architecture", "command": "bash(gemini \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\")", "output_to": "arch_examples_outline"}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate architecture and examples documentation",
"modification_points": [
"Parse [arch_examples_outline] and [all_docs]",
"Generate ARCHITECTURE.md (system design, patterns)",
"Generate EXAMPLES.md (code snippets, usage)",
"Preserve [existing_arch_examples] modifications"
],
"depends_on": [],
"output": "arch_examples_docs"
}
],
"target_files": [".workflow/docs/${project_name}/ARCHITECTURE.md", ".workflow/docs/${project_name}/EXAMPLES.md"]
}
}
```
### Level 4: HTTP API Documentation Task (Optional)
**Task ID**: `IMPL-${api_id}` (where `api_id = group_count + 3`)
**Dependencies**: Depends on Level 3.
```json
{
"id": "IMPL-${api_id}",
"title": "Generate HTTP API Documentation",
"status": "pending",
"depends_on": ["IMPL-${arch_id}"],
"meta": {"type": "docs", "agent": "@doc-generator", "tool": "gemini", "cli_execute": false},
"flow_control": {
"pre_analysis": [
{"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
{"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
{"step": "analyze_api", "command": "bash(gemini \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\")", "output_to": "api_outline"}
],
"implementation_approach": [
{
"step": 1,
"title": "Generate HTTP API documentation",
"modification_points": [
"Parse [api_outline] and [endpoint_discovery]",
"Document endpoints, request/response formats",
"Preserve [existing_api_docs] modifications"
],
"depends_on": [],
"output": "api_docs"
}
],
"target_files": [".workflow/docs/${project_name}/api/README.md"]
}
}
```
## Session Structure
**Unified Structure** (single JSON replaces multiple text files):
```
.workflow/active/
└── WFS-docs-{timestamp}/
├── workflow-session.json # Session metadata
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
└── .task/
├── IMPL-001.json # Small: all modules | Large: group 1
├── IMPL-00N.json # (Large only: groups 2-N)
├── IMPL-{N+1}.json # README (full mode)
├── IMPL-{N+2}.json # ARCHITECTURE+EXAMPLES (full mode)
└── IMPL-{N+3}.json # HTTP API (optional)
```
**doc-planning-data.json Structure**:
```json
{
"metadata": {
"generated_at": "2025-11-03T16:41:06+08:00",
"project_name": "Claude_dms3",
"project_root": "/d/Claude_dms3"
},
"folder_analysis": [
{"path": "./src/core", "type": "code", "code_count": 5, "dirs_count": 2},
{"path": "./src/utils", "type": "navigation", "code_count": 0, "dirs_count": 4}
],
"top_level_dirs": ["src/modules", "src/utils", "lib/core"],
"existing_docs": {
"file_list": [".workflow/docs/project/src/core/API.md"],
"content": "... concatenated existing docs ..."
},
"unified_analysis": [
{"module_path": "./src/core", "outline_summary": "Core functionality"}
],
"groups": {
"count": 4,
"assignments": [
{"group_id": "001", "directories": ["src/modules", "src/utils"], "doc_count": 6},
{"group_id": "002", "directories": ["lib/core", "lib/helpers"], "doc_count": 7}
]
},
"statistics": {
"total": 15,
"code": 8,
"navigation": 7,
"top_level": 3
}
}
```
**Workflow Session Structure** (workflow-session.json):
```json
{
"session_id": "WFS-docs-{timestamp}",
"project": "{project_name} documentation",
"status": "planning",
"timestamp": "2024-01-20T14:30:22+08:00",
"path": ".",
"target_path": "/path/to/project",
"project_root": "/path/to/project",
"project_name": "{project_name}",
"mode": "full",
"tool": "gemini",
"cli_execute": false,
"update_mode": "update",
"existing_docs": 5,
"analysis": {
"total": "15",
"code": "8",
"navigation": "7",
"top_level": "3"
}
}
```
## Generated Documentation
**Structure mirrors project source directories under project-specific folder**:
```
.workflow/docs/
└── {project_name}/ # Project-specific root
├── src/ # Mirrors src/ directory
│ ├── modules/
│ │ ├── README.md # Navigation
│ │ ├── auth/
│ │ │ ├── API.md # API signatures
│ │ │ ├── README.md # Module docs
│ │ │ └── middleware/
│ │ │ ├── API.md
│ │ │ └── README.md
│ │ └── api/
│ │ ├── API.md
│ │ └── README.md
│ └── utils/
│ └── README.md
├── lib/ # Mirrors lib/ directory
│ └── core/
│ ├── API.md
│ └── README.md
├── README.md # Project root
├── ARCHITECTURE.md # System design
├── EXAMPLES.md # Usage examples
└── api/ # Optional
└── README.md # HTTP API reference
```
## Execution Commands
```bash
# Execute entire workflow (auto-discovers active session)
/workflow:execute
# Or specify session
/workflow:execute --resume-session="WFS-docs-yyyymmdd-hhmmss"
# Individual task execution
/task:execute IMPL-001
```
## Template Reference
**Available Templates** (`~/.claude/workflows/cli-templates/prompts/documentation/`):
- `api.txt`: Code API (Part A) + HTTP API (Part B)
- `module-readme.txt`: Module purpose, usage, dependencies
- `folder-navigation.txt`: Navigation README for folders with subdirectories
- `project-readme.txt`: Project overview, getting started, navigation
- `project-architecture.txt`: System structure, module map, design patterns
- `project-examples.txt`: End-to-end usage examples
## Execution Mode Summary
| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|------|---------------|----------|---------------|------------|
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
| **CLI (--cli-execute)** | implementation_approach | write | --approval-mode yolo | Executes CLI commands, validates output |
**Execution Flow**:
- **Phase 2**: Unified analysis once, results in `.process/`
- **Phase 4**: Dynamic grouping (max 2 dirs per group)
- **Level 1**: Parallel processing for module tree groups
- **Level 2+**: Sequential execution for project-level docs
## Related Commands
- `/workflow:execute` - Execute documentation tasks
- `/workflow:status` - View task progress
- `/workflow:session:complete` - Mark session complete

View File

@@ -0,0 +1,182 @@
---
name: load-skill-memory
description: Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords
argument-hint: "[skill_name] \"task intent description\""
allowed-tools: Bash(*), Read(*), Skill(*)
---
# Memory Load SKILL Command (/memory:load-skill-memory)
## 1. Overview
The `memory:load-skill-memory` command **activates SKILL package** (auto-detect from task or manual specification) and intelligently loads documentation based on user's task intent. The system automatically determines which documentation files to read based on the intent description.
**Core Philosophy**:
- **Flexible Activation**: Auto-detect skill from task description/paths, or user explicitly specifies
- **Intent-Driven Loading**: System analyzes task intent to determine documentation scope
- **Intelligent Selection**: Automatically chooses appropriate documentation level and modules
- **Direct Context Loading**: Loads selected documentation into conversation memory
**When to Use**:
- Manually activate a known SKILL package for a specific task
- Load SKILL context when system hasn't auto-triggered it
- Force reload SKILL documentation with specific intent focus
**Note**: Normal SKILL activation happens automatically via description triggers or path mentions (system extracts skill name from file paths for intelligent triggering). Use this command only when manual activation is needed.
## 2. Parameters
- `[skill_name]` (Optional): Name of SKILL package to activate
- If omitted: System auto-detects from task description or file paths
- If specified: Direct activation of named SKILL package
- Example: `my_project`, `api_service`
- Must match directory name under `.claude/skills/`
- `"task intent description"` (Required): Description of what you want to do
- Used for both: auto-detection (if skill_name omitted) and documentation scope selection
- **Analysis tasks**: "分析builder pattern实现", "理解参数系统架构"
- **Modification tasks**: "修改workflow逻辑", "增强thermal template功能"
- **Learning tasks**: "学习接口设计模式", "了解测试框架使用"
- **With paths**: "修改D:\projects\my_project\src\auth.py的认证逻辑" (auto-extracts `my_project`)
## 3. Execution Flow
### Step 1: Determine SKILL Name (if not provided)
**Auto-Detection Strategy** (when skill_name parameter is omitted):
1. **Path Extraction**: Scan task description for file paths
- Extract potential project names from path segments
- Example: `"修改D:\projects\my_project\src\auth.py"` → extracts `my_project`
2. **Keyword Matching**: Match task keywords against SKILL descriptions
- Search for project-specific terms, domain keywords
3. **Validation**: Check if extracted name matches `.claude/skills/{skill_name}/`
**Result**: Either uses provided skill_name or auto-detected name for activation
### Step 2: Activate SKILL and Analyze Intent
**Activate SKILL Package**:
```javascript
Skill(command: "${skill_name}") // Uses provided or auto-detected name
```
**What Happens After Activation**:
1. If SKILL exists in memory: System reads `.claude/skills/${skill_name}/SKILL.md`
2. If SKILL not found in memory: Error - SKILL package doesn't exist
3. SKILL description triggers are loaded into memory
4. Progressive loading mechanism becomes available
5. Documentation structure is now accessible
**Intent Analysis**:
Based on task intent description, system determines:
- **Action type**: analyzing, modifying, learning
- **Scope**: specific module, architecture overview, complete system
- **Depth**: quick reference, detailed API, full documentation
### Step 3: Intelligent Documentation Loading
**Loading Strategy**:
The system automatically selects documentation based on intent keywords:
1. **Quick Understanding** ("了解", "快速理解", "什么是"):
- Load: Level 0 (README.md only, ~2K tokens)
- Use case: Quick overview of capabilities
2. **Specific Module Analysis** ("分析XXX模块", "理解XXX实现"):
- Load: Module-specific README.md + API.md (~5K tokens)
- Use case: Deep dive into specific component
3. **Architecture Review** ("架构", "设计模式", "整体结构"):
- Load: README.md + ARCHITECTURE.md (~10K tokens)
- Use case: System design understanding
4. **Implementation/Modification** ("修改", "增强", "实现"):
- Load: Relevant module docs + EXAMPLES.md (~15K tokens)
- Use case: Code modification with examples
5. **Comprehensive Learning** ("学习", "完整了解", "深入理解"):
- Load: Level 3 (All documentation, ~40K tokens)
- Use case: Complete system mastery
**Documentation Loaded into Memory**:
After loading, the selected documentation content is available in conversation memory for subsequent operations.
## 4. Usage Examples
### Example 1: Manual Specification
**User Command**:
```bash
/memory:load-skill-memory my_project "修改认证模块增加OAuth支持"
```
**Execution**:
```javascript
// Step 1: Use provided skill_name
skill_name = "my_project" // Directly from parameter
// Step 2: Activate SKILL
Skill(command: "my_project")
// Step 3: Intent Analysis
Keywords: ["修改", "认证模块", "增加", "OAuth"]
Action: modifying (implementation)
Scope: auth module + examples
// Load documentation based on intent
Read(.workflow/docs/my_project/auth/README.md)
Read(.workflow/docs/my_project/auth/API.md)
Read(.workflow/docs/my_project/EXAMPLES.md)
```
### Example 2: Auto-Detection from Path
**User Command**:
```bash
/memory:load-skill-memory "修改D:\projects\my_project\src\services\api.py的接口逻辑"
```
**Execution**:
```javascript
// Step 1: Auto-detect skill_name from path
Path detected: "D:\projects\my_project\src\services\api.py"
Extracted: "my_project"
Validated: .claude/skills/my_project/ exists
skill_name = "my_project"
// Step 2: Activate SKILL
Skill(command: "my_project")
// Step 3: Intent Analysis
Keywords: ["修改", "services", "接口逻辑"]
Action: modifying (implementation)
Scope: services module + examples
// Load documentation based on intent
Read(.workflow/docs/my_project/services/README.md)
Read(.workflow/docs/my_project/services/API.md)
Read(.workflow/docs/my_project/EXAMPLES.md)
```
## 5. Intent Keyword Mapping
**Quick Reference**:
- **Triggers**: "了解", "快速", "什么是", "简介"
- **Loads**: README.md only (~2K)
**Module-Specific**:
- **Triggers**: "XXX模块", "XXX组件", "分析XXX"
- **Loads**: Module README + API (~5K)
**Architecture**:
- **Triggers**: "架构", "设计", "整体结构", "系统设计"
- **Loads**: README + ARCHITECTURE (~10K)
**Implementation**:
- **Triggers**: "修改", "增强", "实现", "开发", "集成"
- **Loads**: Relevant module + EXAMPLES (~15K)
**Comprehensive**:
- **Triggers**: "完整", "深入", "全面", "学习整个"
- **Loads**: All documentation (~40K)

View File

@@ -5,7 +5,7 @@ argument-hint: "[--tool gemini|qwen] \"task context description\""
allowed-tools: Task(*), Bash(*)
examples:
- /memory:load "在当前前端基础上开发用户认证功能"
- /memory:load --tool qwen "重构支付模块API"
- /memory:load --tool qwen -p "重构支付模块API"
---
# Memory Load Command (/memory:load)
@@ -39,7 +39,7 @@ The command fully delegates to **universal-executor agent**, which autonomously:
1. **Analyzes Project Structure**: Executes `get_modules_by_depth.sh` to understand architecture
2. **Loads Documentation**: Reads CLAUDE.md, README.md and other key docs
3. **Extracts Keywords**: Derives core keywords from task description
4. **Discovers Files**: Uses CodexLens MCP or rg/find to locate relevant files
4. **Discovers Files**: Uses MCP code-index or rg/find to locate relevant files
5. **CLI Deep Analysis**: Executes Gemini/Qwen CLI for deep context analysis
6. **Generates Content Package**: Returns structured JSON core content package
@@ -109,7 +109,7 @@ Task(
1. **Project Structure**
\`\`\`bash
bash(ccw tool exec get_modules_by_depth '{}')
bash(~/.claude/scripts/get_modules_by_depth.sh)
\`\`\`
2. **Core Documentation**
@@ -136,7 +136,7 @@ Task(
Execute Gemini/Qwen CLI for deep analysis (saves main thread tokens):
\`\`\`bash
ccw cli -p "
cd . && ${tool} -p "
PURPOSE: Extract project core context for task: ${task_description}
TASK: Analyze project architecture, tech stack, key patterns, relevant files
MODE: analysis
@@ -147,7 +147,7 @@ RULES:
- Identify key architecture patterns and technical constraints
- Extract integration points and development standards
- Output concise, structured format
" --tool ${tool} --mode analysis
"
\`\`\`
### Step 4: Generate Core Content Package
@@ -212,7 +212,7 @@ Before returning:
### Example 2: Using Qwen Tool
```bash
/memory:load --tool qwen "重构支付模块API"
/memory:load --tool qwen -p "重构支付模块API"
```
Agent uses Qwen CLI for analysis, returns same structured package.

View File

@@ -0,0 +1,525 @@
---
name: skill-memory
description: 4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)
argument-hint: "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*)
---
# Memory SKILL Package Generator
## Orchestrator Role
**Pure Orchestrator**: Execute documentation generation workflow, then generate SKILL.md index. Does NOT create task JSON files.
**Auto-Continue Workflow**: This command runs **fully autonomously** once triggered. Each phase completes and automatically triggers the next phase without user interaction.
**Execution Paths**:
- **Full Path**: All 4 phases (no existing docs OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 4 (existing docs found AND no `--regenerate` flag)
- **Phase 4 Always Executes**: SKILL.md index is never skipped, always generated or updated
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **No Task JSON**: This command does not create task JSON files - delegates to /memory:docs
3. **Parse Every Output**: Extract required data from each command output (session_id, task_count, file paths)
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
5. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
6. **Direct Generation**: Phase 4 directly generates SKILL.md using Write tool
7. **No Manual Steps**: User should never be prompted for decisions between phases
---
## 4-Phase Execution
### Phase 1: Prepare Arguments
**Goal**: Parse command arguments and check existing documentation
**Step 1: Get Target Path and Project Name**
```bash
# Get current directory (or use provided path)
bash(pwd)
# Get project name from directory
bash(basename "$(pwd)")
# Get project root
bash(git rev-parse --show-toplevel 2>/dev/null || pwd)
```
**Output**:
- `target_path`: `/d/my_project`
- `project_name`: `my_project`
- `project_root`: `/d/my_project`
**Step 2: Set Default Parameters**
```bash
# Default values (use these unless user specifies otherwise):
# - tool: "gemini"
# - mode: "full"
# - regenerate: false (no --regenerate flag)
# - cli_execute: false (no --cli-execute flag)
```
**Step 3: Check Existing Documentation**
```bash
# Check if docs directory exists
bash(test -d .workflow/docs/my_project && echo "exists" || echo "not_exists")
# Count existing documentation files
bash(find .workflow/docs/my_project -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Output**:
- `docs_exists`: `exists` or `not_exists`
- `existing_docs`: `5` (or `0` if no docs)
**Step 4: Determine Execution Path**
**Decision Logic**:
```javascript
if (existing_docs > 0 && !regenerate_flag) {
// Documentation exists and no regenerate flag
SKIP_DOCS_GENERATION = true
message = "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
// Force regeneration: delete existing docs
bash(rm -rf .workflow/docs/my_project 2>/dev/null || true)
SKIP_DOCS_GENERATION = false
message = "Regenerating documentation from scratch."
} else {
// No existing docs
SKIP_DOCS_GENERATION = false
message = "No existing documentation found, generating new documentation."
}
```
**Summary Variables**:
- `PROJECT_NAME`: `my_project`
- `TARGET_PATH`: `/d/my_project`
- `DOCS_PATH`: `.workflow/docs/my_project`
- `TOOL`: `gemini` (default) or user-specified
- `MODE`: `full` (default) or user-specified
- `CLI_EXECUTE`: `false` (default) or `true` if --cli-execute flag
- `REGENERATE`: `false` (default) or `true` if --regenerate flag
- `EXISTING_DOCS`: Count of existing documentation files
- `SKIP_DOCS_GENERATION`: `true` if skipping Phase 2/3, `false` otherwise
**Completion & TodoWrite**:
- If `SKIP_DOCS_GENERATION = true`: Mark phase 1 completed, phase 2&3 completed (skipped), phase 4 in_progress
- If `SKIP_DOCS_GENERATION = false`: Mark phase 1 completed, phase 2 in_progress
**Next Action**:
- If skipping: Display skip message → Jump to Phase 4 (SKILL.md generation)
- If not skipping: Display preparation results → Continue to Phase 2 (documentation planning)
---
### Phase 2: Call /memory:docs
**Skip Condition**: This phase is **skipped if SKIP_DOCS_GENERATION = true** (documentation already exists without --regenerate flag)
**Goal**: Trigger documentation generation workflow
**Command**:
```bash
SlashCommand(command="/memory:docs [targetPath] --tool [tool] --mode [mode] [--cli-execute]")
```
**Example**:
```bash
/memory:docs /d/my_app --tool gemini --mode full
/memory:docs /d/my_app --tool gemini --mode full --cli-execute
```
**Note**: The `--regenerate` flag is handled in Phase 1 by deleting existing documentation. This command always calls `/memory:docs` without the regenerate flag, relying on docs.md's built-in update detection.
**Parse Output**:
- Extract session ID: `WFS-docs-[timestamp]` (store as `docsSessionId`)
- Extract task count (store as `taskCount`)
**Completion Criteria**:
- `/memory:docs` command executed successfully
- Session ID extracted and stored
- Task count retrieved
- Task files created in `.workflow/[docsSessionId]/.task/`
- workflow-session.json exists
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
**Next Action**: Display docs planning results (session ID, task count) → Auto-continue to Phase 3
---
### Phase 3: Execute Documentation Generation
**Skip Condition**: This phase is **skipped if SKIP_DOCS_GENERATION = true** (documentation already exists without --regenerate flag)
**Goal**: Execute documentation generation tasks
**Command**:
```bash
SlashCommand(command="/workflow:execute")
```
**Note**: `/workflow:execute` automatically discovers active session from Phase 2
**Completion Criteria**:
- `/workflow:execute` command executed successfully
- Documentation files generated in `.workflow/docs/[projectName]/`
- All tasks marked as completed in session
- At minimum: module documentation files exist (API.md and/or README.md)
- For full mode: Project README, ARCHITECTURE, EXAMPLES files generated
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
**Next Action**: Display execution results (file count, module count) → Auto-continue to Phase 4
---
### Phase 4: Generate SKILL.md Index
**Note**: This phase is **NEVER skipped** - it always executes to generate or update the SKILL index.
**Step 1: Read Key Files** (Use Read tool)
- `.workflow/docs/{project_name}/README.md` (required)
- `.workflow/docs/{project_name}/ARCHITECTURE.md` (optional)
**Step 2: Discover Structure**
```bash
bash(find .workflow/docs/{project_name} -name "*.md" | sed 's|.workflow/docs/{project_name}/||' | awk -F'/' '{if(NF>=2) print $1"/"$2}' | sort -u)
```
**Step 3: Generate Intelligent Description**
Extract from README + structure: Function (capabilities), Modules (names), Keywords (API/CLI/auth/etc.)
**Format**: `{Project} {core capabilities} (located at {project_path}). Load this SKILL when analyzing, modifying, or learning about {domain_description} or files under this path, especially when no relevant context exists in memory.`
**Key Elements**:
- **Path Reference**: Use `TARGET_PATH` from Phase 1 for precise location identification
- **Domain Description**: Extract human-readable domain/feature area from README (e.g., "workflow management", "thermal modeling")
- **Trigger Optimization**: Include project path, emphasize "especially when no relevant context exists in memory"
- **Action Coverage**: analyzing (分析), modifying (修改), learning (了解)
**Example**: "Workflow orchestration system with CLI tools and documentation generation (located at /d/Claude_dms3). Load this SKILL when analyzing, modifying, or learning about workflow management or files under this path, especially when no relevant context exists in memory."
**Step 4: Write SKILL.md** (Use Write tool)
```bash
bash(mkdir -p .claude/skills/{project_name})
```
`.claude/skills/{project_name}/SKILL.md`:
```yaml
---
name: {project_name}
description: {intelligent description from Step 3}
version: 1.0.0
---
# {Project Name} SKILL Package
## Documentation: `../../../.workflow/docs/{project_name}/`
## Progressive Loading
### Level 0: Quick Start (~2K)
- [README](../../../.workflow/docs/{project_name}/README.md)
### Level 1: Core Modules (~8K)
{Module READMEs}
### Level 2: Complete (~25K)
All modules + [Architecture](../../../.workflow/docs/{project_name}/ARCHITECTURE.md)
### Level 3: Deep Dive (~40K)
Everything + [Examples](../../../.workflow/docs/{project_name}/EXAMPLES.md)
```
**Completion Criteria**:
- SKILL.md file created at `.claude/skills/{project_name}/SKILL.md`
- Intelligent description generated from documentation
- Progressive loading levels (0-3) properly structured
- Module index includes all documented modules
- All file references use relative paths
**TodoWrite**: Mark phase 4 completed
**Final Action**: Report completion summary to user
**Return to User**:
```
SKILL Package Generation Complete
Project: {project_name}
Documentation: .workflow/docs/{project_name}/ ({doc_count} files)
SKILL Index: .claude/skills/{project_name}/SKILL.md
Generated:
- {task_count} documentation tasks completed
- SKILL.md with progressive loading (4 levels)
- Module index with {module_count} modules
Usage:
- Load Level 0: Quick project overview (~2K tokens)
- Load Level 1: Core modules (~8K tokens)
- Load Level 2: Complete docs (~25K tokens)
- Load Level 3: Everything (~40K tokens)
```
---
## Implementation Details
### Critical Rules
1. **No User Prompts Between Phases**: Never ask user questions or wait for input between phases
2. **Immediate Phase Transition**: After TodoWrite update, immediately execute next phase command
3. **Status-Driven Execution**: Check TodoList status after each phase:
- If next task is "pending" → Mark it "in_progress" and execute
- If all tasks are "completed" → Report final summary
4. **Phase Completion Pattern**:
```
Phase N completes → Update TodoWrite (N=completed, N+1=in_progress) → Execute Phase N+1
```
### TodoWrite Patterns
#### Initialization (Before Phase 1)
**FIRST ACTION**: Create TodoList with all 4 phases
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "in_progress", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "pending", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "pending", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
```
**SECOND ACTION**: Execute Phase 1 immediately
#### Full Path (SKIP_DOCS_GENERATION = false)
**After Phase 1**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "in_progress", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "pending", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 2
```
**After Phase 2**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "in_progress", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 3
```
**After Phase 3**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "in_progress", "activeForm": "Generating SKILL.md"}
]})
// Auto-continue to Phase 4
```
**After Phase 4**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "completed", "activeForm": "Generating SKILL.md"}
]})
// Report completion summary to user
```
#### Skip Path (SKIP_DOCS_GENERATION = true)
**After Phase 1** (detects existing docs, skips Phase 2 & 3):
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "in_progress", "activeForm": "Generating SKILL.md"}
]})
// Display skip message: "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
// Jump directly to Phase 4
```
**After Phase 4**:
```javascript
TodoWrite({todos: [
{"content": "Parse arguments and prepare", "status": "completed", "activeForm": "Parsing arguments"},
{"content": "Call /memory:docs to plan documentation", "status": "completed", "activeForm": "Calling /memory:docs"},
{"content": "Execute documentation generation", "status": "completed", "activeForm": "Executing documentation"},
{"content": "Generate SKILL.md index", "status": "completed", "activeForm": "Generating SKILL.md"}
]})
// Report completion summary to user
```
### Execution Flow Diagrams
#### Full Path Flow
```
User triggers command
[TodoWrite] Initialize 4 phases (Phase 1 = in_progress)
[Execute] Phase 1: Parse arguments
[TodoWrite] Phase 1 = completed, Phase 2 = in_progress
[Execute] Phase 2: Call /memory:docs
[TodoWrite] Phase 2 = completed, Phase 3 = in_progress
[Execute] Phase 3: Call /workflow:execute
[TodoWrite] Phase 3 = completed, Phase 4 = in_progress
[Execute] Phase 4: Generate SKILL.md
[TodoWrite] Phase 4 = completed
[Report] Display completion summary
```
#### Skip Path Flow
```
User triggers command
[TodoWrite] Initialize 4 phases (Phase 1 = in_progress)
[Execute] Phase 1: Parse arguments, detect existing docs
[TodoWrite] Phase 1 = completed, Phase 2&3 = completed (skipped), Phase 4 = in_progress
[Display] Skip message: "Documentation already exists, skipping Phase 2 and Phase 3"
[Execute] Phase 4: Generate SKILL.md (always runs)
[TodoWrite] Phase 4 = completed
[Report] Display completion summary
```
### Error Handling
- If any phase fails, mark it as "in_progress" (not completed)
- Report error details to user
- Do NOT auto-continue to next phase on failure
---
## Parameters
```bash
/memory:skill-memory [path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]
```
- **path**: Target directory (default: current directory)
- **--tool**: CLI tool for documentation (default: gemini)
- `gemini`: Comprehensive documentation
- `qwen`: Architecture analysis
- `codex`: Implementation validation
- **--regenerate**: Force regenerate all documentation
- When enabled: Deletes existing `.workflow/docs/{project_name}/` before regeneration
- Ensures fresh documentation from source code
- **--mode**: Documentation mode (default: full)
- `full`: Complete docs (modules + README + ARCHITECTURE + EXAMPLES)
- `partial`: Module docs only
- **--cli-execute**: Enable CLI-based documentation generation (optional)
- When enabled: CLI generates docs directly in implementation_approach
- When disabled (default): Agent generates documentation content
---
## Examples
### Example 1: Generate SKILL Package (Default)
```bash
/memory:skill-memory
```
**Workflow**:
1. Phase 1: Detects current directory, checks existing docs
2. Phase 2: Calls `/memory:docs . --tool gemini --mode full` (Agent Mode)
3. Phase 3: Executes documentation generation via `/workflow:execute`
4. Phase 4: Generates SKILL.md at `.claude/skills/{project_name}/SKILL.md`
### Example 2: Regenerate with Qwen
```bash
/memory:skill-memory /d/my_app --tool qwen --regenerate
```
**Workflow**:
1. Phase 1: Parses target path, detects regenerate flag, deletes existing docs
2. Phase 2: Calls `/memory:docs /d/my_app --tool qwen --mode full`
3. Phase 3: Executes documentation regeneration
4. Phase 4: Generates updated SKILL.md
### Example 3: Partial Mode (Modules Only)
```bash
/memory:skill-memory --mode partial
```
**Workflow**:
1. Phase 1: Detects partial mode
2. Phase 2: Calls `/memory:docs . --tool gemini --mode partial` (Agent Mode)
3. Phase 3: Executes module documentation only
4. Phase 4: Generates SKILL.md with module-only index
### Example 4: CLI Execute Mode
```bash
/memory:skill-memory --cli-execute
```
**Workflow**:
1. Phase 1: Detects CLI execute mode
2. Phase 2: Calls `/memory:docs . --tool gemini --mode full --cli-execute` (CLI Mode)
3. Phase 3: Executes CLI-based documentation generation
4. Phase 4: Generates SKILL.md at `.claude/skills/{project_name}/SKILL.md`
### Example 5: Skip Path (Existing Docs)
```bash
/memory:skill-memory
```
**Scenario**: Documentation already exists in `.workflow/docs/{project_name}/`
**Workflow**:
1. Phase 1: Detects existing docs (5 files), sets SKIP_DOCS_GENERATION = true
2. Display: "Documentation already exists, skipping Phase 2 and Phase 3. Use --regenerate to force regeneration."
3. Phase 4: Generates or updates SKILL.md index only (~5-10x faster)
---
## Architecture
```
skill-memory (orchestrator)
├─ Phase 1: Prepare (bash commands, skip decision)
├─ Phase 2: /memory:docs (task planning, skippable)
├─ Phase 3: /workflow:execute (task execution, skippable)
└─ Phase 4: Write SKILL.md (direct file generation, always runs)
No task JSON created by this command
All documentation tasks managed by /memory:docs
Smart skip logic: 5-10x faster when docs exist
```

View File

@@ -271,7 +271,7 @@ bash(mkdir -p .claude/skills/style-${package_name})
**⚠️ CRITICAL - Execute First**:
```bash
bash(cat ~/.ccw/workflows/cli-templates/memory/style-skill-memory/skill-md-template.md)
bash(cat ~/.claude/workflows/cli-templates/memory/style-skill-memory/skill-md-template.md)
```
**Template Processing**:
@@ -355,7 +355,7 @@ Variables: `{package_name}`, `{component_count}`, `{universal_count}`, `{special
1. **Check Before Generate**: Verify package exists before attempting SKILL generation
2. **Respect Existing SKILL**: Don't overwrite unless --regenerate flag provided
3. **Load Templates via cat**: Use `cat ~/.ccw/workflows/cli-templates/memory/style-skill-memory/{template}` to load templates
3. **Load Templates via cat**: Use `cat ~/.claude/workflows/cli-templates/memory/style-skill-memory/{template}` to load templates
4. **Variable Substitution**: Replace all `{variable}` placeholders with actual values
5. **Technical Feature Detection**: Analyze tokens for modern features (oklch, calc, dark mode) and generate appropriate Prerequisites section
6. **Dynamic Content Generation**: Generate sections based on DESIGN_ANALYSIS characteristics

View File

@@ -0,0 +1,477 @@
---
name: tech-research
description: 3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Tech Stack Research SKILL Generator
## Overview
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates ALL work to agent. Agent produces files directly.
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
**Execution Paths**:
- **Full Path**: All 3 phases (no existing SKILL OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing SKILL found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated
**Agent Responsibility**:
- Agent does ALL the work: context reading, Exa research, content synthesis, file writing
- Orchestrator only provides context paths and waits for completion
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Context Path Delegation**: Pass session directory or tech stack name to agent, let agent do discovery
3. **Agent Produces Files**: Agent directly writes all module files, orchestrator does NOT parse agent output
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
5. **No User Prompts**: Never ask user questions or wait for input between phases
6. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
7. **Lightweight Index**: Phase 3 only generates SKILL.md index by reading existing files
---
## 3-Phase Execution
### Phase 1: Prepare Context Paths
**Goal**: Detect input mode, prepare context paths for agent, check existing SKILL
**Input Mode Detection**:
```bash
# Get input parameter
input="$1"
# Detect mode
if [[ "$input" == WFS-* ]]; then
MODE="session"
SESSION_ID="$input"
CONTEXT_PATH=".workflow/${SESSION_ID}"
else
MODE="direct"
TECH_STACK_NAME="$input"
CONTEXT_PATH="$input" # Pass tech stack name as context
fi
```
**Check Existing SKILL**:
```bash
# For session mode, peek at session to get tech stack name
if [[ "$MODE" == "session" ]]; then
bash(test -f ".workflow/${SESSION_ID}/workflow-session.json")
Read(.workflow/${SESSION_ID}/workflow-session.json)
# Extract tech_stack_name (minimal extraction)
fi
# Normalize and check
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
bash(test -d ".claude/skills/${normalized_name}" && echo "exists" || echo "not_exists")
bash(find ".claude/skills/${normalized_name}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Skip Decision**:
```javascript
if (existing_files > 0 && !regenerate_flag) {
SKIP_GENERATION = true
message = "Tech stack SKILL already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
bash(rm -rf ".claude/skills/${normalized_name}")
SKIP_GENERATION = false
message = "Regenerating tech stack SKILL from scratch."
} else {
SKIP_GENERATION = false
message = "No existing SKILL found, generating new tech stack documentation."
}
```
**Output Variables**:
- `MODE`: `session` or `direct`
- `SESSION_ID`: Session ID (if session mode)
- `CONTEXT_PATH`: Path to session directory OR tech stack name
- `TECH_STACK_NAME`: Extracted or provided tech stack name
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress
---
### Phase 2: Agent Produces All Files
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Delegate EVERYTHING to agent - context reading, Exa research, content synthesis, and file writing
**Agent Task Specification**:
```
Task(
subagent_type: "general-purpose",
description: "Generate tech stack SKILL: {CONTEXT_PATH}",
prompt: "
Generate a complete tech stack SKILL package with Exa research.
**Context Provided**:
- Mode: {MODE}
- Context Path: {CONTEXT_PATH}
**Templates Available**:
- Module Format: ~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt
- SKILL Index: ~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt
**Your Responsibilities**:
1. **Extract Tech Stack Information**:
IF MODE == 'session':
- Read `.workflow/active/{session_id}/workflow-session.json`
- Read `.workflow/active/{session_id}/.process/context-package.json`
- Extract tech_stack: {language, frameworks, libraries}
- Build tech stack name: \"{language}-{framework1}-{framework2}\"
- Example: \"typescript-react-nextjs\"
IF MODE == 'direct':
- Tech stack name = CONTEXT_PATH
- Parse composite: split by '-' delimiter
- Example: \"typescript-react-nextjs\" → [\"typescript\", \"react\", \"nextjs\"]
2. **Execute Exa Research** (4-6 parallel queries):
Base Queries (always execute):
- mcp__exa__get_code_context_exa(query: \"{tech} core principles best practices 2025\", tokensNum: 8000)
- mcp__exa__get_code_context_exa(query: \"{tech} common patterns architecture examples\", tokensNum: 7000)
- mcp__exa__web_search_exa(query: \"{tech} configuration setup tooling 2025\", numResults: 5)
- mcp__exa__get_code_context_exa(query: \"{tech} testing strategies\", tokensNum: 5000)
Component Queries (if composite):
- For each additional component:
mcp__exa__get_code_context_exa(query: \"{main_tech} {component} integration\", tokensNum: 5000)
3. **Read Module Format Template**:
Read template for structure guidance:
```bash
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt)
```
4. **Synthesize Content into 6 Modules**:
Follow template structure from tech-module-format.txt:
- **principles.md** - Core concepts, philosophies (~3K tokens)
- **patterns.md** - Implementation patterns with code examples (~5K tokens)
- **practices.md** - Best practices, anti-patterns, pitfalls (~4K tokens)
- **testing.md** - Testing strategies, frameworks (~3K tokens)
- **config.md** - Setup, configuration, tooling (~3K tokens)
- **frameworks.md** - Framework integration (only if composite, ~4K tokens)
Each module follows template format:
- Frontmatter (YAML)
- Main sections with clear headings
- Code examples from Exa research
- Best practices sections
- References to Exa sources
5. **Write Files Directly**:
```javascript
// Create directory
bash(mkdir -p \".claude/skills/{tech_stack_name}\")
// Write each module file using Write tool
Write({ file_path: \".claude/skills/{tech_stack_name}/principles.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/patterns.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/practices.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/testing.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/config.md\", content: ... })
// Write frameworks.md only if composite
// Write metadata.json
Write({
file_path: \".claude/skills/{tech_stack_name}/metadata.json\",
content: JSON.stringify({
tech_stack_name,
components,
is_composite,
generated_at: timestamp,
source: \"exa-research\",
research_summary: { total_queries, total_sources }
})
})
```
6. **Report Completion**:
Provide summary:
- Tech stack name
- Files created (count)
- Exa queries executed
- Sources consulted
**CRITICAL**:
- MUST read external template files before generating content (step 3 for modules, step 4 for index)
- You have FULL autonomy - read files, execute Exa, synthesize content, write files
- Do NOT return JSON or structured data - produce actual .md files
- Handle errors gracefully (Exa failures, missing files, template read failures)
- If tech stack cannot be determined, ask orchestrator to clarify
"
)
```
**Completion Criteria**:
- Agent task executed successfully
- 5-6 modular files written to `.claude/skills/{tech_stack_name}/`
- metadata.json written
- Agent reports completion
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
---
### Phase 3: Generate SKILL.md Index
**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
**Goal**: Read generated module files and create SKILL.md index with loading recommendations
**Steps**:
1. **Verify Generated Files**:
```bash
bash(find ".claude/skills/${TECH_STACK_NAME}" -name "*.md" -type f | sort)
```
2. **Read metadata.json**:
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/metadata.json)
// Extract: tech_stack_name, components, is_composite, research_summary
```
3. **Read Module Headers** (optional, first 20 lines):
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/principles.md, limit: 20)
// Repeat for other modules
```
4. **Read SKILL Index Template**:
```javascript
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt)
```
5. **Generate SKILL.md Index**:
Follow template from tech-skill-index.txt with variable substitutions:
- `{TECH_STACK_NAME}`: From metadata.json
- `{MAIN_TECH}`: Primary technology
- `{ISO_TIMESTAMP}`: Current timestamp
- `{QUERY_COUNT}`: From research_summary
- `{SOURCE_COUNT}`: From research_summary
- Conditional sections for composite tech stacks
Template provides structure for:
- Frontmatter with metadata
- Overview and tech stack description
- Module organization (Core/Practical/Config sections)
- Loading recommendations (Quick/Implementation/Complete)
- Usage guidelines and auto-trigger keywords
- Research metadata and version history
6. **Write SKILL.md**:
```javascript
Write({
file_path: `.claude/skills/${TECH_STACK_NAME}/SKILL.md`,
content: generatedIndexMarkdown
})
```
**Completion Criteria**:
- SKILL.md index written
- All module files verified
- Loading recommendations included
**TodoWrite**: Mark phase 3 completed
**Final Report**:
```
Tech Stack SKILL Package Complete
Tech Stack: {TECH_STACK_NAME}
Location: .claude/skills/{TECH_STACK_NAME}/
Files: SKILL.md + 5-6 modules + metadata.json
Exa Research: {queries} queries, {sources} sources
Usage: Skill(command: "{TECH_STACK_NAME}")
```
---
## Implementation Details
### TodoWrite Patterns
**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "in_progress", "activeForm": "Preparing context paths"},
{"content": "Agent produces all module files", "status": "pending", "activeForm": "Agent producing files"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```
**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "in_progress", ...},
{"content": "Generate SKILL.md index", "status": "pending", ...}
]})
// After Phase 2
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
// After Phase 3
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```
**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...}, // Skipped
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```
### Execution Flow
**Full Path**:
```
User → TodoWrite Init → Phase 1 (prepare) → Phase 2 (agent writes files) → Phase 3 (write index) → Report
```
**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```
### Error Handling
**Phase 1 Errors**:
- Invalid session ID: Report error, verify session exists
- Missing context-package: Warn, fall back to direct mode
- No tech stack detected: Ask user to specify tech stack name
**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if fails again
- Exa API failures: Agent handles internally with retries
- Incomplete results: Warn user, proceed with partial data if minimum sections available
**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration
---
## Parameters
```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate] [--tool <gemini|qwen>]
```
**Arguments**:
- **session-id | tech-stack-name**: Input source (auto-detected by WFS- prefix)
- Session mode: `WFS-user-auth-v2` - Extract tech stack from workflow
- Direct mode: `"typescript"`, `"typescript-react-nextjs"` - User specifies
- **--regenerate**: Force regenerate existing SKILL (deletes and recreates)
- **--tool**: Reserved for future CLI integration (default: gemini)
---
## Examples
**Generated File Structure** (for all examples):
```
.claude/skills/{tech-stack}/
├── SKILL.md # Index (Phase 3)
├── principles.md # Agent (Phase 2)
├── patterns.md # Agent
├── practices.md # Agent
├── testing.md # Agent
├── config.md # Agent
├── frameworks.md # Agent (if composite)
└── metadata.json # Agent
```
### Direct Mode - Single Stack
```bash
/memory:tech-research "typescript"
```
**Workflow**:
1. Phase 1: Detects direct mode, checks existing SKILL
2. Phase 2: Agent executes 4 Exa queries, writes 5 modules
3. Phase 3: Generates SKILL.md index
### Direct Mode - Composite Stack
```bash
/memory:tech-research "typescript-react-nextjs"
```
**Workflow**:
1. Phase 1: Decomposes into ["typescript", "react", "nextjs"]
2. Phase 2: Agent executes 6 Exa queries (4 base + 2 components), writes 6 modules (adds frameworks.md)
3. Phase 3: Generates SKILL.md index with framework integration
### Session Mode - Extract from Workflow
```bash
/memory:tech-research WFS-user-auth-20251104
```
**Workflow**:
1. Phase 1: Reads session, extracts tech stack: `python-fastapi-sqlalchemy`
2. Phase 2: Agent researches Python + FastAPI + SQLAlchemy, writes 6 modules
3. Phase 3: Generates SKILL.md index
### Regenerate Existing
```bash
/memory:tech-research "react" --regenerate
```
**Workflow**:
1. Phase 1: Deletes existing SKILL due to --regenerate
2. Phase 2: Agent executes fresh Exa research (latest 2025 practices)
3. Phase 3: Generates updated SKILL.md
### Skip Path - Fast Update
```bash
/memory:tech-research "python"
```
**Scenario**: SKILL already exists with 7 files
**Workflow**:
1. Phase 1: Detects existing SKILL, sets SKIP_GENERATION = true
2. Phase 2: **SKIPPED**
3. Phase 3: Updates SKILL.md index only (5-10x faster)

View File

@@ -1,332 +0,0 @@
---
name: tips
description: Quick note-taking command to capture ideas, snippets, reminders, and insights for later reference
argument-hint: "<note content> [--tag <tag1,tag2>] [--context <context>]"
allowed-tools: mcp__ccw-tools__core_memory(*), Read(*)
examples:
- /memory:tips "Remember to use Redis for rate limiting"
- /memory:tips "Auth pattern: JWT with refresh tokens" --tag architecture,auth
- /memory:tips "Bug: memory leak in WebSocket handler after 24h" --context websocket-service
- /memory:tips "Performance: lazy loading reduced bundle by 40%" --tag performance
---
# Memory Tips Command (/memory:tips)
## 1. Overview
The `memory:tips` command provides **quick note-taking** for capturing:
- Quick ideas and insights
- Code snippets and patterns
- Reminders and follow-ups
- Bug notes and debugging hints
- Performance observations
- Architecture decisions
- Library/tool recommendations
**Core Philosophy**:
- **Speed First**: Minimal friction for capturing thoughts
- **Searchable**: Tagged for easy retrieval
- **Context-Aware**: Optional context linking
- **Lightweight**: No complex session analysis
## 2. Parameters
- `<note content>` (Required): The tip/note content to save
- `--tag <tags>` (Optional): Comma-separated tags for categorization
- `--context <context>` (Optional): Related context (file, module, feature)
**Examples**:
```bash
/memory:tips "Use Zod for runtime validation - better DX than class-validator"
/memory:tips "Redis connection pool: max 10, min 2" --tag config,redis
/memory:tips "Fix needed: race condition in payment processor" --tag bug,payment --context src/payments
```
## 3. Structured Output Format
```markdown
## Tip ID
TIP-YYYYMMDD-HHMMSS
## Timestamp
YYYY-MM-DD HH:MM:SS
## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]
## Content
[The tip/note content exactly as provided]
## Tags
[Comma-separated tags, or (none)]
## Context
[Optional context linking - file, module, or feature reference]
## Session Link
[WFS-ID if workflow session active, otherwise (none)]
## Auto-Detected Context
[Files/topics from current conversation if relevant]
```
## 4. Field Definitions
| Field | Purpose | Example |
|-------|---------|---------|
| **Tip ID** | Unique identifier with timestamp | TIP-20260128-143052 |
| **Timestamp** | When tip was created | 2026-01-28 14:30:52 |
| **Project Root** | Current project path | D:\Claude_dms3 |
| **Content** | The actual tip/note | "Use Redis for rate limiting" |
| **Tags** | Categorization labels | architecture, auth, performance |
| **Context** | Related code/feature | src/auth/**, payment-module |
| **Session Link** | Link to workflow session | WFS-auth-20260128 |
| **Auto-Detected Context** | Files from conversation | src/api/handler.ts |
## 5. Execution Flow
### Step 1: Parse Arguments
```javascript
const parseTipsCommand = (input) => {
// Extract note content (everything before flags)
const contentMatch = input.match(/^"([^"]+)"|^([^\s-]+)/);
const content = contentMatch ? (contentMatch[1] || contentMatch[2]) : '';
// Extract tags
const tagsMatch = input.match(/--tag\s+([^\s-]+)/);
const tags = tagsMatch ? tagsMatch[1].split(',').map(t => t.trim()) : [];
// Extract context
const contextMatch = input.match(/--context\s+([^\s-]+)/);
const context = contextMatch ? contextMatch[1] : '';
return { content, tags, context };
};
```
### Step 2: Gather Context
```javascript
const gatherTipContext = async () => {
// Get project root
const projectRoot = process.cwd(); // or detect from environment
// Get current session if active
const manifest = await mcp__ccw-tools__session_manager({
operation: "list",
location: "active"
});
const sessionId = manifest.sessions?.[0]?.id || null;
// Auto-detect files from recent conversation
const recentFiles = extractRecentFilesFromConversation(); // Last 5 messages
return {
projectRoot,
sessionId,
autoDetectedContext: recentFiles
};
};
```
### Step 3: Generate Structured Text
```javascript
const generateTipText = (parsed, context) => {
const timestamp = new Date().toISOString().replace('T', ' ').slice(0, 19);
const tipId = `TIP-${new Date().toISOString().slice(0,10).replace(/-/g, '')}-${new Date().toTimeString().slice(0,8).replace(/:/g, '')}`;
return `## Tip ID
${tipId}
## Timestamp
${timestamp}
## Project Root
${context.projectRoot}
## Content
${parsed.content}
## Tags
${parsed.tags.length > 0 ? parsed.tags.join(', ') : '(none)'}
## Context
${parsed.context || '(none)'}
## Session Link
${context.sessionId || '(none)'}
## Auto-Detected Context
${context.autoDetectedContext.length > 0
? context.autoDetectedContext.map(f => `- ${f}`).join('\n')
: '(none)'}`;
};
```
### Step 4: Save to Core Memory
```javascript
mcp__ccw-tools__core_memory({
operation: "import",
text: structuredText
})
```
**Response Format**:
```json
{
"operation": "import",
"id": "CMEM-YYYYMMDD-HHMMSS",
"message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```
### Step 5: Confirm to User
```
✓ Tip saved successfully
ID: CMEM-YYYYMMDD-HHMMSS
Tags: architecture, auth
Context: src/auth/**
To retrieve: /memory:search "auth patterns"
Or via MCP: core_memory(operation="search", query="auth")
```
## 6. Tag Categories (Suggested)
**Technical**:
- `architecture` - Design decisions and patterns
- `performance` - Optimization insights
- `security` - Security considerations
- `bug` - Bug notes and fixes
- `config` - Configuration settings
- `api` - API design patterns
**Development**:
- `testing` - Test strategies and patterns
- `debugging` - Debugging techniques
- `refactoring` - Refactoring notes
- `documentation` - Doc improvements
**Domain Specific**:
- `auth` - Authentication/authorization
- `database` - Database patterns
- `frontend` - UI/UX patterns
- `backend` - Backend logic
- `devops` - Infrastructure and deployment
**Organizational**:
- `reminder` - Follow-up items
- `research` - Research findings
- `idea` - Feature ideas
- `review` - Code review notes
## 7. Search Integration
Tips can be retrieved using:
```bash
# Via command (if /memory:search exists)
/memory:search "rate limiting"
# Via MCP tool
mcp__ccw-tools__core_memory({
operation: "search",
query: "rate limiting",
source_type: "core_memory",
top_k: 10
})
# Via CLI
ccw core-memory search --query "rate limiting" --top-k 10
```
## 8. Quality Checklist
Before saving:
- [ ] Content is clear and actionable
- [ ] Tags are relevant and consistent
- [ ] Context provides enough reference
- [ ] Auto-detected context is accurate
- [ ] Project root is absolute path
- [ ] Timestamp is properly formatted
## 9. Best Practices
### Good Tips Examples
**Specific and Actionable**:
```
"Use connection pooling for Redis: { max: 10, min: 2, acquireTimeoutMillis: 30000 }"
--tag config,redis
```
**With Context**:
```
"Auth middleware must validate both access and refresh tokens"
--tag security,auth --context src/middleware/auth.ts
```
**Problem + Solution**:
```
"Memory leak fixed by unsubscribing event listeners in componentWillUnmount"
--tag bug,react --context src/components/Chat.tsx
```
### Poor Tips Examples
**Too Vague**:
```
"Fix the bug" --tag bug
```
**Too Long** (use /memory:compact instead):
```
"Here's the complete implementation plan for the entire auth system... [3 paragraphs]"
```
**No Context**:
```
"Remember to update this later"
```
## 10. Use Cases
### During Development
```bash
/memory:tips "JWT secret must be 256-bit minimum" --tag security,auth
/memory:tips "Use debounce (300ms) for search input" --tag performance,ux
```
### After Bug Fixes
```bash
/memory:tips "Race condition in payment: lock with Redis SETNX" --tag bug,payment
```
### Code Review Insights
```bash
/memory:tips "Prefer early returns over nested ifs" --tag style,readability
```
### Architecture Decisions
```bash
/memory:tips "Chose PostgreSQL over MongoDB for ACID compliance" --tag architecture,database
```
### Library Recommendations
```bash
/memory:tips "Zod > Yup for TypeScript validation - better type inference" --tag library,typescript
```
## 11. Notes
- **Frequency**: Use liberally - capture all valuable insights
- **Retrieval**: Search by tags, content, or context
- **Lifecycle**: Tips persist across sessions
- **Organization**: Tags enable filtering and categorization
- **Integration**: Can reference tips in later workflows
- **Lightweight**: No complex session analysis required

View File

@@ -99,10 +99,10 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
// Get module structure
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
// OR with --path
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
@@ -185,7 +185,7 @@ for (let layer of [3, 2, 1]) {
let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"${strategy}","path":".","tool":"${tool}"}'`,
command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -244,7 +244,7 @@ MODULES:
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ccw tool exec update_module_claude
EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
- Accepts strategy parameter: multi-layer | single-layer
- Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -252,7 +252,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"{{strategy}}","path":".","tool":"${tool}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}"`,
run_in_background: false
})
exit_code=$?

View File

@@ -41,7 +41,7 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a
```javascript
// Detect changed modules
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -102,7 +102,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
return async () => {
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"${tool}"}'`,
command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -184,21 +184,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_1}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_2}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_3}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module

View File

@@ -0,0 +1,517 @@
---
name: workflow-skill-memory
description: Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)
argument-hint: "session <session-id> | all"
allowed-tools: Task(*), TodoWrite(*), Bash(*), Read(*), Write(*)
---
# Workflow SKILL Memory Generator
## Overview
Generate SKILL package from archived workflow sessions using agent-driven analysis. Supports single-session incremental updates or parallel processing of all sessions.
**Scope**: Only processes WFS-* workflow sessions. Other session types (e.g., doc sessions) are automatically ignored.
## Usage
```bash
/memory:workflow-skill-memory session WFS-<session-id> # Process single WFS session
/memory:workflow-skill-memory all # Process all WFS sessions in parallel
```
## Execution Modes
### Mode 1: Single Session (`session <session-id>`)
**Purpose**: Incremental update - process one archived session and merge into existing SKILL package
**Workflow**:
1. **Validate session**: Check if session exists in `.workflow/.archives/{session-id}/`
2. **Invoke agent**: Call `universal-executor` to analyze session and update SKILL documents
3. **Agent tasks**:
- Read session data from `.workflow/.archives/{session-id}/`
- Extract lessons, conflicts, and outcomes
- Use Gemini for intelligent aggregation (optional)
- Update or create SKILL documents using templates
- Regenerate SKILL.md index
**Command Example**:
```bash
/memory:workflow-skill-memory session WFS-user-auth
```
**Expected Output**:
```
Session WFS-user-auth processed
Updated:
- sessions-timeline.md (1 session added)
- lessons-learned.md (3 lessons merged)
- conflict-patterns.md (1 conflict added)
- SKILL.md (index regenerated)
```
---
### Mode 2: All Sessions (`all`)
**Purpose**: Full regeneration - process all archived sessions in parallel for complete SKILL package
**Workflow**:
1. **List sessions**: Read manifest.json to get all archived session IDs
2. **Parallel invocation**: Launch multiple `universal-executor` agents in parallel (one per session)
3. **Agent coordination**:
- Each agent processes one session independently
- Agents use Gemini for analysis
- Agents collect data into JSON (no direct file writes)
- Final aggregator agent merges results and generates SKILL documents
**Command Example**:
```bash
/memory:workflow-skill-memory all
```
**Expected Output**:
```
All sessions processed in parallel
Sessions: 8 total
Updated:
- sessions-timeline.md (8 sessions)
- lessons-learned.md (24 lessons aggregated)
- conflict-patterns.md (12 conflicts documented)
- SKILL.md (index regenerated)
```
---
## Implementation Flow
### Phase 1: Validation and Setup
**Step 1.1: Parse Command Arguments**
Extract mode and session ID:
```javascript
if (args === "all") {
mode = "all"
} else if (args.startsWith("session ")) {
mode = "session"
session_id = args.replace("session ", "").trim()
} else {
ERROR = "Invalid arguments. Usage: session <session-id> | all"
EXIT
}
```
**Step 1.2: Validate Archive Directory**
```bash
bash(test -d .workflow/.archives && echo "exists" || echo "missing")
```
If missing, report error and exit.
**Step 1.3: Mode-Specific Validation**
**Single Session Mode**:
```bash
# Validate session ID format (must start with WFS-)
if [[ ! "$session_id" =~ ^WFS- ]]; then
ERROR = "Invalid session ID format. Only WFS-* sessions are supported"
EXIT
fi
# Check if session exists
bash(test -d .workflow/.archives/{session_id} && echo "exists" || echo "missing")
```
If missing, report error: "Session {session_id} not found in archives"
**All Sessions Mode**:
```bash
# Read manifest and filter only WFS- sessions
bash(cat .workflow/.archives/manifest.json | jq -r '.archives[].session_id | select(startswith("WFS-"))')
```
Store filtered session IDs in array. Ignore doc sessions and other non-WFS sessions.
**Step 1.4: TodoWrite Initialization**
**Single Session Mode**:
```javascript
TodoWrite({todos: [
{"content": "Validate session existence", "status": "completed", "activeForm": "Validating session"},
{"content": "Invoke agent to process session", "status": "in_progress", "activeForm": "Invoking agent"},
{"content": "Verify SKILL package updated", "status": "pending", "activeForm": "Verifying update"}
]})
```
**All Sessions Mode**:
```javascript
TodoWrite({todos: [
{"content": "Read manifest and list sessions", "status": "completed", "activeForm": "Reading manifest"},
{"content": "Invoke agents in parallel", "status": "in_progress", "activeForm": "Invoking agents"},
{"content": "Verify SKILL package regenerated", "status": "pending", "activeForm": "Verifying regeneration"}
]})
```
---
### Phase 2: Agent Invocation
#### Single Session Mode - Agent Task
Invoke `universal-executor` with session-specific task:
**Agent Prompt Structure**:
```
Task: Process Workflow Session for SKILL Package
Context:
- Session ID: {session_id}
- Session Path: .workflow/.archives/{session_id}/
- Mode: Incremental update
Objectives:
1. Read session data:
- workflow-session.json (metadata)
- IMPL_PLAN.md (implementation summary)
- TODO_LIST.md (if exists)
- manifest.json entry for lessons
2. Extract key information:
- Description, tags, metrics
- Lessons (successes, challenges, watch_patterns)
- Context package path (reference only)
- Key outcomes from IMPL_PLAN
3. Use Gemini for aggregation (optional):
Command pattern:
cd .workflow/.archives/{session_id} && gemini -p "
PURPOSE: Extract lessons and conflicts from workflow session
TASK:
• Analyze IMPL_PLAN and lessons from manifest
• Identify success patterns and challenges
• Extract conflict patterns with resolutions
• Categorize by functional domain
MODE: analysis
CONTEXT: @IMPL_PLAN.md @workflow-session.json
EXPECTED: Structured lessons and conflicts in JSON format
RULES: Template reference from skill-aggregation.txt
"
3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):
Read skill-index.txt template Section: "Description Field Generation"
Execute command to get project root:
```bash
git rev-parse --show-toplevel # Example output: /d/Claude_dms3
```
Apply description format:
```
Progressive workflow development history (located at {project_root}).
Load this SKILL when continuing development, analyzing past implementations,
or learning from workflow history, especially when no relevant context exists in memory.
```
**Validation**:
- [ ] Path uses forward slashes (not backslashes)
- [ ] All three use cases present
- [ ] Trigger optimization phrase included
- [ ] Path is absolute (starts with / or drive letter)
4. Read templates for formatting guidance:
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-sessions-timeline.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-lessons-learned.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-conflict-patterns.txt
- ~/.claude/workflows/cli-templates/prompts/workflow/skill-index.txt
**CRITICAL**: From skill-index.txt, read these sections:
- "Description Field Generation" - Rules for generating description
- "Variable Substitution Guide" - All required variables
- "Generation Instructions" - Step-by-step generation process
- "Validation Checklist" - Final validation steps
5. Update SKILL documents:
- sessions-timeline.md: Append new session, update domain grouping
- lessons-learned.md: Merge lessons into categories, update frequencies
- conflict-patterns.md: Add conflicts, update recurring pattern frequencies
- SKILL.md: Regenerate index with updated counts
**For SKILL.md generation**:
- Follow "Generation Instructions" from skill-index.txt (Steps 1-7)
- Use git command for project_root: `git rev-parse --show-toplevel`
- Apply "Description Field Generation" rules
- Validate using "Validation Checklist"
- Increment version (patch level)
6. Return result JSON:
{
"status": "success",
"session_id": "{session_id}",
"updates": {
"sessions_added": 1,
"lessons_merged": count,
"conflicts_added": count
}
}
```
---
#### All Sessions Mode - Parallel Agent Tasks
**Step 2.1: Launch parallel session analyzers**
Invoke multiple agents in parallel (one message with multiple Task calls):
**Per-Session Agent Prompt**:
```
Task: Extract Session Data for SKILL Package
Context:
- Session ID: {session_id}
- Mode: Parallel analysis (no direct file writes)
Objectives:
1. Read session data (same as single mode)
2. Extract key information (same as single mode)
3. Use Gemini for analysis (same as single mode)
4. Return structured data JSON:
{
"status": "success",
"session_id": "{session_id}",
"data": {
"metadata": {
"description": "...",
"archived_at": "...",
"tags": [...],
"metrics": {...}
},
"lessons": {
"successes": [...],
"challenges": [...],
"watch_patterns": [...]
},
"conflicts": [
{
"type": "architecture|dependencies|testing|performance",
"pattern": "...",
"resolution": "...",
"code_impact": [...]
}
],
"impl_summary": "First 200 chars of IMPL_PLAN",
"context_package_path": "..."
}
}
```
**Step 2.2: Aggregate results**
After all session agents complete, invoke aggregator agent:
**Aggregator Agent Prompt**:
```
Task: Aggregate Session Results and Generate SKILL Package
Context:
- Mode: Full regeneration
- Input: JSON results from {session_count} session agents
Objectives:
1. Aggregate all session data:
- Collect metadata from all sessions
- Merge lessons by category
- Group conflicts by type
- Sort sessions by date
2. Use Gemini for final aggregation:
gemini -p "
PURPOSE: Aggregate lessons and conflicts from all workflow sessions
TASK:
• Group successes by functional domain
• Categorize challenges by severity (HIGH/MEDIUM/LOW)
• Identify recurring conflict patterns
• Calculate frequencies and prioritize
MODE: analysis
CONTEXT: [Provide aggregated JSON data]
EXPECTED: Final aggregated structure for SKILL documents
RULES: Template reference from skill-aggregation.txt
"
3. Read templates for formatting (same 4 templates as single mode)
4. Generate all SKILL documents:
- sessions-timeline.md (all sessions, sorted by date)
- lessons-learned.md (aggregated lessons with frequencies)
- conflict-patterns.md (recurring patterns with resolutions)
- SKILL.md (index with progressive loading)
5. Write files to .claude/skills/workflow-progress/
6. Return result JSON:
{
"status": "success",
"sessions_processed": count,
"files_generated": ["SKILL.md", "sessions-timeline.md", ...],
"summary": {
"total_sessions": count,
"functional_domains": [...],
"date_range": "...",
"lessons_count": count,
"conflicts_count": count
}
}
```
---
### Phase 3: Verification
**Step 3.1: Check SKILL Package Files**
```bash
bash(ls -lh .claude/skills/workflow-progress/)
```
Verify all 4 files exist:
- SKILL.md
- sessions-timeline.md
- lessons-learned.md
- conflict-patterns.md
**Step 3.2: TodoWrite Completion**
Mark all tasks as completed.
**Step 3.3: Display Summary**
**Single Session Mode**:
```
Session {session_id} processed successfully
Updated:
- sessions-timeline.md
- lessons-learned.md
- conflict-patterns.md
- SKILL.md
SKILL Location: .claude/skills/workflow-progress/SKILL.md
```
**All Sessions Mode**:
```
All sessions processed in parallel
Sessions: {count} total
Functional Domains: {domain_list}
Date Range: {earliest} - {latest}
Generated:
- sessions-timeline.md ({count} sessions)
- lessons-learned.md ({lessons_count} lessons)
- conflict-patterns.md ({conflicts_count} conflicts)
- SKILL.md (4-level progressive loading)
SKILL Location: .claude/skills/workflow-progress/SKILL.md
Usage:
- Level 0: Quick refresh (~2K tokens)
- Level 1: Recent history (~8K tokens)
- Level 2: Complete analysis (~25K tokens)
- Level 3: Deep dive (~40K tokens)
```
---
## Agent Guidelines
### Agent Capabilities
**universal-executor agents can**:
- Read files from `.workflow/.archives/`
- Execute bash commands
- Call Gemini CLI for intelligent analysis
- Read template files for formatting guidance
- Write SKILL package files (single mode) or return JSON (parallel mode)
- Return structured results
### Gemini Usage Pattern
**When to use Gemini**:
- Aggregating lessons from multiple sources
- Identifying recurring patterns
- Classifying conflicts by type and severity
- Extracting structured data from IMPL_PLAN
**Fallback Strategy**: If Gemini fails or times out, use direct file parsing with structured extraction logic.
---
## Template System
### Template Files
All templates located in: `~/.claude/workflows/cli-templates/prompts/workflow/`
1. **skill-sessions-timeline.txt**: Format for sessions-timeline.md
2. **skill-lessons-learned.txt**: Format for lessons-learned.md
3. **skill-conflict-patterns.txt**: Format for conflict-patterns.md
4. **skill-index.txt**: Format for SKILL.md index
5. **skill-aggregation.txt**: Rules for Gemini aggregation (existing)
### Template Usage in Agent
**Agents read templates to understand**:
- File structure and markdown format
- Data sources (which files to read)
- Update strategy (incremental vs full)
- Formatting rules and conventions
- Aggregation logic (for Gemini)
**Templates are NOT shown in this command documentation** - agents read them directly as needed.
---
## Error Handling
### Validation Errors
- **No archives directory**: "Error: No workflow archives found at .workflow/.archives/"
- **Invalid session ID format**: "Error: Invalid session ID format. Only WFS-* sessions are supported"
- **Session not found**: "Error: Session {session_id} not found in archives"
- **No WFS sessions in manifest**: "Error: No WFS-* workflow sessions found in manifest.json"
### Agent Errors
- If agent fails, report error message from agent result
- If Gemini times out, agents use fallback direct parsing
- If template read fails, agents use inline format
### Recovery
- Single session mode: Can be retried without affecting other sessions
- All sessions mode: If one agent fails, others continue; retry failed sessions individually
## Integration
### Called by `/workflow:session:complete`
Automatically invoked after session archival:
```bash
SlashCommand(command="/memory:workflow-skill-memory session {session_id}")
```
### Manual Invocation
Users can manually process sessions:
```bash
/memory:workflow-skill-memory session WFS-custom-feature # Single session
/memory:workflow-skill-memory all # Full regeneration
```

View File

@@ -0,0 +1,204 @@
---
name: breakdown
description: Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order
argument-hint: "task-id"
---
# Task Breakdown Command (/task:breakdown)
## Overview
Breaks down complex tasks into executable subtasks with context inheritance and agent assignment.
## Core Principles
**File Cohesion:** Related files must stay in same task
**10-Task Limit:** Total tasks cannot exceed 10 (triggers re-scoping)
## Core Features
**CRITICAL**: Manual breakdown with safety controls to prevent file conflicts and task limit violations.
### Breakdown Process
1. **Session Check**: Verify active session contains parent task
2. **Task Validation**: Ensure parent is `pending` status
3. **10-Task Limit Check**: Verify breakdown won't exceed total limit
4. **Manual Decomposition**: User defines subtasks with validation
5. **File Conflict Detection**: Warn if same files appear in multiple subtasks
6. **Similar Function Warning**: Alert if subtasks have overlapping functionality
7. **Context Distribution**: Inherit parent requirements and scope
8. **Agent Assignment**: Auto-assign agents based on subtask type
9. **TODO_LIST Update**: Regenerate TODO_LIST.md with new structure
### Breakdown Rules
- Only `pending` tasks can be broken down
- **Manual breakdown only**: Automated breakdown disabled to prevent violations
- Parent becomes `container` status (not executable)
- Subtasks use format: IMPL-N.M (max 2 levels)
- Context flows from parent to subtasks
- All relationships tracked in JSON
- **10-task limit enforced**: Breakdown rejected if total would exceed 10 tasks
- **File cohesion preserved**: Same files cannot be split across subtasks
## Usage
### Basic Breakdown
```bash
/task:breakdown impl-1
```
Interactive process:
```
Task: Build authentication module
Current total tasks: 6/10
MANUAL BREAKDOWN REQUIRED
Define subtasks manually (remaining capacity: 4 tasks):
1. Enter subtask title: User authentication core
Focus files: models/User.js, routes/auth.js, middleware/auth.js
2. Enter subtask title: OAuth integration
Focus files: services/OAuthService.js, routes/oauth.js
FILE CONFLICT DETECTED:
- routes/auth.js appears in multiple subtasks
- Recommendation: Merge related authentication routes
SIMILAR FUNCTIONALITY WARNING:
- "User authentication" and "OAuth integration" both handle auth
- Consider combining into single task
# Use AskUserQuestion for confirmation
AskUserQuestion({
questions: [{
question: "File conflicts and/or similar functionality detected. How do you want to proceed?",
header: "Confirm",
options: [
{ label: "Proceed with breakdown", description: "Accept the risks and create the subtasks as defined." },
{ label: "Restart breakdown", description: "Discard current subtasks and start over." },
{ label: "Cancel breakdown", description: "Abort the operation and leave the parent task as is." }
],
multiSelect: false
}]
})
User selected: "Proceed with breakdown"
Task IMPL-1 broken down:
IMPL-1: Build authentication module (container)
├── IMPL-1.1: User authentication core -> @code-developer
└── IMPL-1.2: OAuth integration -> @code-developer
Files updated: .task/IMPL-1.json + 2 subtask files + TODO_LIST.md
```
## Decomposition Logic
### Agent Assignment
- **Design/Planning** → `@planning-agent`
- **Implementation** → `@code-developer`
- **Testing** → `@code-developer` (type: "test-gen")
- **Test Validation** → `@test-fix-agent` (type: "test-fix")
- **Review** → `@universal-executor` (optional)
### Context Inheritance
- Subtasks inherit parent requirements
- Scope refined for specific subtask
- Implementation details distributed appropriately
## Safety Controls
### File Conflict Detection
**Validates file cohesion across subtasks:**
- Scans `focus_paths` in all subtasks
- Warns if same file appears in multiple subtasks
- Suggests merging subtasks with overlapping files
- Blocks breakdown if critical conflicts detected
### Similar Functionality Detection
**Prevents functional overlap:**
- Analyzes subtask titles for similar keywords
- Warns about potential functional redundancy
- Suggests consolidation of related functionality
- Examples: "user auth" + "login system" → merge recommendation
### 10-Task Limit Enforcement
**Hard limit compliance:**
- Counts current total tasks in session
- Calculates breakdown impact on total
- Rejects breakdown if would exceed 10 tasks
- Suggests re-scoping if limit reached
### Manual Control Requirements
**User-driven breakdown only:**
- No automatic subtask generation
- User must define each subtask title and scope
- Real-time validation during input
- Confirmation required before execution
## Implementation Details
- Complete task JSON schema
- Implementation field structure
- Context inheritance rules
- Agent assignment logic
## Validation
### Pre-breakdown Checks
1. Active session exists
2. Task found in session
3. Task status is `pending`
4. Not already broken down
5. **10-task limit compliance**: Total tasks + new subtasks ≤ 10
6. **Manual mode enabled**: No automatic breakdown allowed
### Post-breakdown Actions
1. Update parent to `container` status
2. Create subtask JSON files
3. Update parent subtasks list
4. Update session stats
5. **Regenerate TODO_LIST.md** with new hierarchy
6. Validate file paths in focus_paths
7. Update session task count
## Examples
### Basic Breakdown
```bash
/task:breakdown impl-1
impl-1: Build authentication (container)
├── impl-1.1: Design schema -> @planning-agent
├── impl-1.2: Implement logic + tests -> @code-developer
└── impl-1.3: Execute & fix tests -> @test-fix-agent
```
## Error Handling
```bash
# Task not found
Task IMPL-5 not found
# Already broken down
Task IMPL-1 already has subtasks
# Wrong status
Cannot breakdown completed task IMPL-2
# 10-task limit exceeded
Breakdown would exceed 10-task limit (current: 8, proposed: 4)
Suggestion: Re-scope project into smaller iterations
# File conflicts detected
File conflict: routes/auth.js appears in IMPL-1.1 and IMPL-1.2
Recommendation: Merge subtasks or redistribute files
# Similar functionality warning
Similar functions detected: "user login" and "authentication"
Consider consolidating related functionality
# Manual breakdown required
Automatic breakdown disabled. Use manual breakdown process.
```
**System ensures**: Manual breakdown control with file cohesion enforcement, similar functionality detection, and 10-task limit compliance

View File

@@ -0,0 +1,152 @@
---
name: create
description: Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis
argument-hint: "\"task title\""
---
# Task Create Command (/task:create)
## Overview
Creates new implementation tasks with automatic context awareness and ID generation.
## Core Principles
**Task System:** @~/.claude/workflows/task-core.md
## Core Features
### Automatic Behaviors
- **ID Generation**: Auto-generates IMPL-N format (max 2 levels)
- **Context Inheritance**: Inherits from active workflow session
- **JSON Creation**: Creates task JSON in active session
- **Status Setting**: Initial status = "pending"
- **Agent Assignment**: Suggests agent based on task type
- **Session Integration**: Updates workflow session stats
### Context Awareness
- Validates active workflow session exists
- Avoids duplicate task IDs
- Inherits session requirements and scope
- Suggests task relationships
## Usage
### Basic Creation
```bash
/task:create "Build authentication module"
```
Output:
```
Task created: IMPL-1
Title: Build authentication module
Type: feature
Agent: code-developer
Status: pending
```
### Task Types
- `feature` - New functionality (default)
- `bugfix` - Bug fixes
- `refactor` - Code improvements
- `test` - Test implementation
- `docs` - Documentation
## Task Creation Process
1. **Session Validation**: Check active workflow session
2. **ID Generation**: Auto-increment IMPL-N
3. **Context Inheritance**: Load workflow context
4. **Implementation Setup**: Initialize implementation field
5. **Agent Assignment**: Select appropriate agent
6. **File Creation**: Save JSON to .task/ directory
7. **Session Update**: Update workflow stats
**Task Schema**: See @~/.claude/workflows/task-core.md for complete JSON structure
## Implementation Field Setup
### Auto-Population Strategy
- **Detailed info**: Extract from task description and scope
- **Missing info**: Mark `pre_analysis` as multi-step array format for later pre-analysis
- **Basic structure**: Initialize with standard template
### Analysis Triggers
When implementation details incomplete:
```bash
Task requires analysis for implementation details
Suggest running: gemini analysis for file locations and dependencies
```
## File Management
### JSON Task File
- **Location**: `.task/IMPL-[N].json` in active session
- **Content**: Complete task with implementation field
- **Updates**: Session stats only
### Simple Process
1. Validate session and inputs
2. Generate task JSON
3. Update session stats
4. Notify completion
## Context Inheritance
Tasks inherit from:
1. **Active Session** - Requirements and scope from workflow-session.json
2. **Planning Document** - Context from IMPL_PLAN.md
3. **Parent Task** - For subtasks (IMPL-N.M format)
## Agent Assignment
Based on task type and title keywords:
- **Build/Implement** → @code-developer
- **Design/Plan** → @planning-agent
- **Test Generation** → @code-developer (type: "test-gen")
- **Test Execution/Fix** → @test-fix-agent (type: "test-fix")
- **Review/Audit** → @universal-executor (optional, only when explicitly requested)
## Validation Rules
1. **Session Check** - Active workflow session required
2. **Duplicate Check** - Avoid similar task titles
3. **ID Uniqueness** - Auto-increment task IDs
4. **Schema Validation** - Ensure proper JSON structure
## Error Handling
```bash
# No workflow session
No active workflow found
Use: /workflow init "project name"
# Duplicate task
Similar task exists: IMPL-3
Continue anyway? (y/n)
# Max depth exceeded
Cannot create IMPL-1.2.1 (max 2 levels)
Use: IMPL-2 for new main task
```
## Examples
### Feature Task
```bash
/task:create "Implement user authentication"
Created IMPL-1: Implement user authentication
Type: feature
Agent: code-developer
Status: pending
```
### Bug Fix
```bash
/task:create "Fix login validation bug" --type=bugfix
Created IMPL-2: Fix login validation bug
Type: bugfix
Agent: code-developer
Status: pending
```

View File

@@ -0,0 +1,270 @@
---
name: execute
description: Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking
argument-hint: "task-id"
---
## Command Overview: /task:execute
**Purpose**: Executes tasks using intelligent agent selection, context preparation, and progress tracking.
## Execution Modes
- **auto (Default)**
- Fully autonomous execution with automatic agent selection.
- Provides progress updates at each checkpoint.
- Automatically completes the task when done.
- **guided**
- Executes step-by-step, requiring user confirmation at each checkpoint.
- Allows for dynamic adjustments and manual review during the process.
- **review**
- Optional manual review using `@universal-executor`.
- Used only when explicitly requested by user.
## Agent Selection Logic
The system determines the appropriate agent for a task using the following logic.
```pseudo
FUNCTION select_agent(task, agent_override):
// A manual override always takes precedence.
// Corresponds to the --agent=<agent-type> flag.
IF agent_override IS NOT NULL:
RETURN agent_override
// If no override, select based on keywords in the task title.
ELSE:
CASE task.title:
WHEN CONTAINS "Build API", "Implement":
RETURN "@code-developer"
WHEN CONTAINS "Design schema", "Plan":
RETURN "@planning-agent"
WHEN CONTAINS "Write tests", "Generate tests":
RETURN "@code-developer" // type: test-gen
WHEN CONTAINS "Execute tests", "Fix tests", "Validate":
RETURN "@test-fix-agent" // type: test-fix
WHEN CONTAINS "Review code":
RETURN "@universal-executor" // Optional manual review
DEFAULT:
RETURN "@code-developer" // Default agent
END CASE
END FUNCTION
```
## Core Execution Protocol
`Pre-Execution` -> `Execution` -> `Post-Execution`
### Pre-Execution Protocol
`Validate Task & Dependencies` **->** `Prepare Execution Context` **->** `Coordinate with TodoWrite`
- **Validation**: Checks for the task's JSON file in `.task/` and resolves its dependencies.
- **Context Preparation**: Loads task and workflow context, preparing it for the selected agent.
- **Session Context Injection**: Provides workflow directory paths to agents for TODO_LIST.md and summary management.
- **TodoWrite Coordination**: Generates execution Todos and checkpoints, syncing with `TODO_LIST.md`.
### Post-Execution Protocol
`Update Task Status` **->** `Generate Summary` **->** `Save Artifacts` **->** `Sync All Progress` **->** `Validate File Integrity`
- Updates status in the task's JSON file and `TODO_LIST.md`.
- Creates a summary in `.summaries/`.
- Stores outputs and syncs progress across the entire workflow session.
### Task & Subtask Execution Logic
This logic defines how single, multiple, or parent tasks are handled.
```pseudo
FUNCTION execute_task_command(task_id, mode, parallel_flag):
// Handle parent tasks by executing their subtasks.
IF is_parent_task(task_id):
subtasks = get_subtasks(task_id)
EXECUTE_SUBTASK_BATCH(subtasks, mode)
// Handle wildcard execution (e.g., IMPL-001.*)
ELSE IF task_id CONTAINS "*":
subtasks = find_matching_tasks(task_id)
IF parallel_flag IS true:
EXECUTE_IN_PARALLEL(subtasks)
ELSE:
FOR each subtask in subtasks:
EXECUTE_SINGLE_TASK(subtask, mode)
// Default case for a single task ID.
ELSE:
EXECUTE_SINGLE_TASK(task_id, mode)
END FUNCTION
```
### Error Handling & Recovery Logic
```pseudo
FUNCTION pre_execution_check(task):
// Ensure dependencies are met before starting.
IF task.dependencies ARE NOT MET:
LOG_ERROR("Cannot execute " + task.id)
LOG_INFO("Blocked by: " + unmet_dependencies)
HALT_EXECUTION()
FUNCTION on_execution_failure(checkpoint):
// Provide user with recovery options upon failure.
LOG_WARNING("Execution failed at checkpoint " + checkpoint)
PRESENT_OPTIONS([
"Retry from checkpoint",
"Retry from beginning",
"Switch to guided mode",
"Abort execution"
])
AWAIT user_input
// System performs the selected action.
END FUNCTION
```
### Simplified Context Structure (JSON)
This is the simplified data structure loaded to provide context for task execution.
```json
{
"task": {
"id": "IMPL-1",
"title": "Build authentication module",
"type": "feature",
"status": "active",
"agent": "code-developer",
"context": {
"requirements": ["JWT authentication", "OAuth2 support"],
"scope": ["src/auth/*", "tests/auth/*"],
"acceptance": ["Module handles JWT tokens", "OAuth2 flow implemented"],
"inherited_from": "WFS-user-auth"
},
"relations": {
"parent": null,
"subtasks": ["IMPL-1.1", "IMPL-1.2"],
"dependencies": ["IMPL-0"]
},
"implementation": {
"files": [
{
"path": "src/auth/login.ts",
"location": {
"function": "authenticateUser",
"lines": "25-65",
"description": "Main authentication logic"
},
"original_code": "// Code snippet extracted via gemini analysis",
"modifications": {
"current_state": "Basic password authentication only",
"proposed_changes": [
"Add JWT token generation",
"Implement OAuth2 callback handling",
"Add multi-factor authentication support"
],
"logic_flow": [
"validateCredentials() ───► checkUserExists()",
"◊─── if password ───► generateJWT() ───► return token",
"◊─── if OAuth ───► validateOAuthCode() ───► exchangeForToken()",
"◊─── if MFA ───► sendMFACode() ───► awaitVerification()"
],
"reason": "Support modern authentication standards and security requirements",
"expected_outcome": "Comprehensive authentication system supporting multiple methods"
}
}
],
"context_notes": {
"dependencies": ["jsonwebtoken", "passport", "speakeasy"],
"affected_modules": ["user-session", "auth-middleware", "api-routes"],
"risks": [
"Breaking changes to existing login endpoints",
"Token storage and rotation complexity",
"OAuth provider configuration dependencies"
],
"performance_considerations": "JWT validation adds ~10ms per request, OAuth callbacks may timeout",
"error_handling": "Ensure sensitive authentication errors don't leak user enumeration data"
},
"pre_analysis": [
{
"action": "analyze patterns",
"template": "~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt",
"method": "gemini"
}
]
}
},
"workflow": {
"session": "WFS-user-auth",
"phase": "IMPLEMENT",
"session_context": {
"workflow_directory": ".workflow/active/WFS-user-auth/",
"todo_list_location": ".workflow/active/WFS-user-auth/TODO_LIST.md",
"summaries_directory": ".workflow/active/WFS-user-auth/.summaries/",
"task_json_location": ".workflow/active/WFS-user-auth/.task/"
}
},
"execution": {
"agent": "code-developer",
"mode": "auto",
"attempts": 0
}
}
```
### Agent-Specific Context
Different agents receive context tailored to their function, including implementation details:
**`@code-developer`**:
- Complete implementation.files array with file paths and locations
- original_code snippets and proposed_changes for precise modifications
- logic_flow diagrams for understanding data flow
- Dependencies and affected modules for integration planning
- Performance and error handling considerations
**`@planning-agent`**:
- High-level requirements, constraints, success criteria
- Implementation risks and mitigation strategies
- Architecture implications from implementation.context_notes
**`@test-fix-agent`**:
- Test files to execute from task.context.focus_paths
- Source files to fix from implementation.files[].path
- Expected behaviors from implementation.modifications.logic_flow
- Error conditions to validate from implementation.context_notes.error_handling
- Performance requirements from implementation.context_notes.performance_considerations
**`@universal-executor`**:
- Used for optional manual reviews when explicitly requested
- Code quality standards and implementation patterns
- Security considerations from implementation.context_notes.risks
- Dependency validation from implementation.context_notes.dependencies
- Architecture compliance checks
### Simplified File Output
- **Task JSON File (`.task/<task-id>.json`)**: Updated with status and last attempt time only.
- **Session File (`workflow-session.json`)**: Updated task stats (completed count).
- **Summary File**: Generated in `.summaries/` upon completion (optional).
### Simplified Summary Template
Optional summary file generated at `.summaries/IMPL-[task-id]-summary.md`.
```markdown
# Task Summary: IMPL-1 Build Authentication Module
## What Was Done
- Created src/auth/login.ts with JWT validation
- Added tests in tests/auth.test.ts
## Execution Results
- **Agent**: code-developer
- **Status**: completed
## Files Modified
- `src/auth/login.ts` (created)
- `tests/auth.test.ts` (created)
```

View File

@@ -0,0 +1,437 @@
---
name: replan
description: Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json
argument-hint: "task-id [\"text\"|file.md] | --batch [verification-report.md]"
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
---
# Task Replan Command (/task:replan)
> **⚠️ DEPRECATION NOTICE**: This command is maintained for backward compatibility. For new workflows, use `/workflow:replan` which provides:
> - Session-level replanning with comprehensive artifact updates
> - Interactive boundary clarification
> - Updates to IMPL_PLAN.md, TODO_LIST.md, and session metadata
> - Better integration with workflow sessions
>
> **Migration**: Replace `/task:replan IMPL-1 "changes"` with `/workflow:replan IMPL-1 "changes"`
## Overview
Replans individual tasks or batch processes multiple tasks with change tracking and backup management.
**Modes**:
- **Single Task Mode**: Replan one task with specific changes
- **Batch Mode**: Process multiple tasks from action-plan verification report
## Key Features
- **Single/Batch Operations**: Single task or multiple tasks from verification report
- **Multiple Input Sources**: Text, files, or verification report
- **Backup Management**: Automatic backup of previous versions
- **Change Documentation**: Track all modifications
- **Progress Tracking**: TodoWrite integration for batch operations
**CRITICAL**: Validates active session before replanning
## Operation Modes
### Single Task Mode
#### Direct Text (Default)
```bash
/task:replan IMPL-1 "Add OAuth2 authentication support"
```
#### File-based Input
```bash
/task:replan IMPL-1 updated-specs.md
```
Supports: .md, .txt, .json, .yaml
#### Interactive Mode
```bash
/task:replan IMPL-1 --interactive
```
Guided step-by-step modification process with validation
### Batch Mode
#### From Verification Report
```bash
/task:replan --batch ACTION_PLAN_VERIFICATION.md
```
**Workflow**:
1. Parse verification report to extract replan recommendations
2. Create TodoWrite task list for all modifications
3. Process each task sequentially with confirmation
4. Track progress and generate summary report
**Auto-detection**: If input file contains "Action Plan Verification Report" header, automatically enters batch mode
## Replanning Process
### Single Task Process
1. **Load & Validate**: Read task JSON and validate session
2. **Parse Input**: Process changes from input source
3. **Create Backup**: Save previous version to backup folder
4. **Update Task**: Modify JSON structure and relationships
5. **Save Changes**: Write updated task and increment version
6. **Update Session**: Reflect changes in workflow stats
### Batch Process
1. **Parse Verification Report**: Extract all replan recommendations
2. **Initialize TodoWrite**: Create task list for tracking
3. **For Each Task**:
- Mark todo as in_progress
- Load and validate task JSON
- Create backup
- Apply recommended changes
- Save updated task
- Mark todo as completed
4. **Generate Summary**: Report all changes and backup locations
## Backup Management
### Backup Tracking
Tasks maintain backup history:
```json
{
"id": "IMPL-1",
"version": "1.2",
"replan_history": [
{
"version": "1.2",
"reason": "Add OAuth2 support",
"input_source": "direct_text",
"backup_location": ".task/backup/IMPL-1-v1.1.json",
"timestamp": "2025-10-17T10:30:00Z"
}
]
}
```
**Complete schema**: See @~/.claude/workflows/task-core.md
### File Structure
```
.task/
├── IMPL-1.json # Current version
├── backup/
│ ├── IMPL-1-v1.0.json # Original version
│ ├── IMPL-1-v1.1.json # Previous backup
│ └── IMPL-1-v1.2.json # Latest backup
└── [new subtasks as needed]
```
**Backup Naming**: `{task-id}-v{version}.json`
## Implementation Updates
### Change Detection
Tracks modifications to:
- Files in implementation.files array
- Dependencies and affected modules
- Risk assessments and performance notes
- Logic flows and code locations
### Analysis Triggers
May require gemini re-analysis when:
- New files need code extraction
- Function locations change
- Dependencies require re-evaluation
## Document Updates
### Planning Document
May update IMPL_PLAN.md sections when task structure changes significantly
### TODO List Sync
If TODO_LIST.md exists, synchronizes:
- New subtasks (with [ ] checkbox)
- Modified tasks (marked as updated)
- Removed subtasks (deleted from list)
## Change Documentation
### Change Summary
Generates brief change log with:
- Version increment (1.1 → 1.2)
- Input source and reason
- Key modifications made
- Files updated/created
- Backup location
## Session Updates
Updates workflow-session.json with:
- Modified task tracking
- Task count changes (if subtasks added/removed)
- Last modification timestamps
## Rollback Support
```bash
/task:replan IMPL-1 --rollback v1.1
Rollback to version 1.1:
- Restore task from backup/.../IMPL-1-v1.1.json
- Remove new subtasks if any
- Update session stats
# Use AskUserQuestion for confirmation
AskUserQuestion({
questions: [{
question: "Are you sure you want to roll back this task to a previous version?",
header: "Confirm",
options: [
{ label: "Yes, rollback", description: "Restore the task from the selected backup." },
{ label: "No, cancel", description: "Keep the current version of the task." }
],
multiSelect: false
}]
})
User selected: "Yes, rollback"
Task rolled back to version 1.1
```
## Batch Processing with TodoWrite
### Progress Tracking
When processing multiple tasks, automatically creates TodoWrite task list:
```markdown
**Batch Replan Progress**:
- [x] IMPL-002: Add FR-12 draft saving acceptance criteria
- [x] IMPL-003: Add FR-14 history tracking acceptance criteria
- [ ] IMPL-004: Add FR-09 response surface explicit coverage
- [ ] IMPL-008: Add NFR performance validation steps
```
### Batch Report
After completion, generates summary:
```markdown
## Batch Replan Summary
**Total Tasks**: 4
**Successful**: 3
**Failed**: 1
**Skipped**: 0
### Changes Made
- IMPL-002 v1.0 → v1.1: Added FR-12 acceptance criteria
- IMPL-003 v1.0 → v1.1: Added FR-14 acceptance criteria
- IMPL-004 v1.0 → v1.1: Added FR-09 explicit coverage
### Backups Created
- .task/backup/IMPL-002-v1.0.json
- .task/backup/IMPL-003-v1.0.json
- .task/backup/IMPL-004-v1.0.json
### Errors
- IMPL-008: File not found (task may have been renamed)
```
## Examples
### Single Task - Text Input
```bash
/task:replan IMPL-1 "Add OAuth2 authentication support"
Processing changes...
Proposed updates:
+ Add OAuth2 integration
+ Update authentication flow
# Use AskUserQuestion for confirmation
AskUserQuestion({
questions: [{
question: "Do you want to apply these changes to the task?",
header: "Apply",
options: [
{ label: "Yes, apply", description: "Create new version with these changes." },
{ label: "No, cancel", description: "Discard changes and keep current version." }
],
multiSelect: false
}]
})
User selected: "Yes, apply"
Version 1.2 created
Context updated
Backup saved to .task/backup/IMPL-1-v1.1.json
```
### Single Task - File Input
```bash
/task:replan IMPL-2 requirements.md
Loading requirements.md...
Applying specification changes...
Task updated with new requirements
Version 1.1 created
Backup saved to .task/backup/IMPL-2-v1.0.json
```
### Batch Mode - From Verification Report
```bash
/task:replan --batch .workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md
Parsing verification report...
Found 4 tasks requiring replanning:
- IMPL-002: Add FR-12 draft saving acceptance criteria
- IMPL-003: Add FR-14 history tracking acceptance criteria
- IMPL-004: Add FR-09 response surface explicit coverage
- IMPL-008: Add NFR performance validation steps
Creating task tracking list...
Processing IMPL-002...
Backup created: .task/backup/IMPL-002-v1.0.json
Updated to v1.1
Processing IMPL-003...
Backup created: .task/backup/IMPL-003-v1.0.json
Updated to v1.1
Processing IMPL-004...
Backup created: .task/backup/IMPL-004-v1.0.json
Updated to v1.1
Processing IMPL-008...
Backup created: .task/backup/IMPL-008-v1.0.json
Updated to v1.1
Batch replan completed: 4/4 successful
Summary report saved
```
### Batch Mode - Auto-detection
```bash
# If file contains "Action Plan Verification Report", auto-enters batch mode
/task:replan ACTION_PLAN_VERIFICATION.md
Detected verification report format
Entering batch mode...
[same as above]
```
## Error Handling
### Single Task Errors
```bash
# Task not found
Task IMPL-5 not found
Check task ID with /workflow:status
# Task completed
Task IMPL-1 is completed (cannot replan)
Create new task for additional work
# File not found
File requirements.md not found
Check file path
# No input provided
Please specify changes needed
Provide text, file, or verification report
```
### Batch Mode Errors
```bash
# Invalid verification report
File does not contain valid verification report format
Check report structure or use single task mode
# Partial failures
Batch completed with errors: 3/4 successful
Review error details in summary report
# No replan recommendations found
Verification report contains no replan recommendations
Check report content or use /workflow:action-plan-verify first
```
## Batch Mode Integration
### Input Format Expectations
Batch mode parses verification reports looking for:
1. **Required Actions Section**: Commands like `/task:replan IMPL-X "changes"`
2. **Findings Table**: Task IDs with recommendations
3. **Next Actions Section**: Specific replan commands
**Example Patterns**:
```markdown
#### 1. HIGH Priority - Address FR Coverage Gaps
/task:replan IMPL-004 "
Add explicit acceptance criteria:
- FR-09: Response surface 3D visualization
"
#### 2. MEDIUM Priority - Enhance NFR Coverage
/task:replan IMPL-008 "
Add performance testing:
- NFR-01: Load test API endpoints
"
```
### Extraction Logic
1. Scan for `/task:replan` commands in report
2. Extract task ID and change description
3. Group by priority (HIGH, MEDIUM, LOW)
4. Process in priority order with TodoWrite tracking
### Confirmation Behavior
- **Default**: Confirm each task before applying
- **With `--auto-confirm`**: Apply all changes without prompting
```bash
/task:replan --batch report.md --auto-confirm
```
## Implementation Details
### Backup Management
```typescript
// Backup file naming convention
const backupPath = `.task/backup/${taskId}-v${previousVersion}.json`;
// Backup metadata in task JSON
{
"replan_history": [
{
"version": "1.2",
"timestamp": "2025-10-17T10:30:00Z",
"reason": "Add FR-09 explicit coverage",
"input_source": "batch_verification_report",
"backup_location": ".task/backup/IMPL-004-v1.1.json"
}
]
}
```
### TodoWrite Integration
```typescript
// Initialize tracking for batch mode
TodoWrite({
todos: taskList.map(task => ({
content: `${task.id}: ${task.changeDescription}`,
status: "pending",
activeForm: `Replanning ${task.id}`
}))
});
// Update progress during processing
TodoWrite({
todos: updateTaskStatus(taskId, "in_progress")
});
// Mark completed
TodoWrite({
todos: updateTaskStatus(taskId, "completed")
});
```

254
.claude/commands/version.md Normal file
View File

@@ -0,0 +1,254 @@
---
name: version
description: Display Claude Code version information and check for updates
allowed-tools: Bash(*)
---
# Version Command (/version)
## Purpose
Display local and global installation versions, check for the latest updates from GitHub, and provide upgrade recommendations.
## Execution Flow
1. **Local Version Check**: Read version information from `./.claude/version.json` if it exists.
2. **Global Version Check**: Read version information from `~/.claude/version.json` if it exists.
3. **Fetch Remote Versions**: Use GitHub API to get the latest stable release tag and the latest commit hash from the main branch.
4. **Compare & Suggest**: Compare installed versions with the latest remote versions and provide upgrade suggestions if applicable.
## Step 1: Check Local Version
### Check if local version.json exists
```bash
bash(test -f ./.claude/version.json && echo "found" || echo "not_found")
```
### Read local version (if exists)
```bash
bash(cat ./.claude/version.json)
```
### Extract version with jq (preferred)
```bash
bash(cat ./.claude/version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
```
### Extract installation date
```bash
bash(cat ./.claude/version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
```
**Output Format**:
```
Local Version: 3.2.1
Installed: 2025-10-03T12:00:00Z
```
## Step 2: Check Global Version
### Check if global version.json exists
```bash
bash(test -f ~/.claude/version.json && echo "found" || echo "not_found")
```
### Read global version
```bash
bash(cat ~/.claude/version.json)
```
### Extract version
```bash
bash(cat ~/.claude/version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
```
### Extract installation date
```bash
bash(cat ~/.claude/version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
```
**Output Format**:
```
Global Version: 3.2.1
Installed: 2025-10-03T12:00:00Z
```
## Step 3: Fetch Latest Stable Release
### Call GitHub API for latest release (with timeout)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null, timeout: 30000)
```
### Extract tag name (version)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"tag_name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
### Extract release name
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
### Extract published date
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"published_at": *"[^"]*"' | cut -d'"' -f4, timeout: 30000)
```
**Output Format**:
```
Latest Stable: v3.2.2
Release: v3.2.2: Independent Test-Gen Workflow with Cross-Session Context
Published: 2025-10-03T04:10:08Z
```
## Step 4: Fetch Latest Main Branch
### Call GitHub API for latest commit on main (with timeout)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null, timeout: 30000)
```
### Extract commit SHA (short)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"sha": *"[^"]*"' | head -1 | cut -d'"' -f4 | cut -c1-7, timeout: 30000)
```
### Extract commit message (first line only)
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep '"message":' | cut -d'"' -f4 | cut -d'\' -f1, timeout: 30000)
```
### Extract commit date
```bash
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"date": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
```
**Output Format**:
```
Latest Dev: a03415b
Message: feat: Add version tracking and upgrade check system
Date: 2025-10-03T04:46:44Z
```
## Step 5: Compare Versions and Suggest Upgrade
### Normalize versions (remove 'v' prefix)
```bash
bash(echo "v3.2.1" | sed 's/^v//')
```
### Compare two versions
```bash
bash(printf "%s\n%s" "3.2.1" "3.2.2" | sort -V | tail -n 1)
```
### Check if versions are equal
```bash
# If equal: Up to date
# If remote newer: Upgrade available
# If local newer: Development version
```
**Output Scenarios**:
**Scenario 1: Up to date**
```
You are on the latest stable version (3.2.1)
```
**Scenario 2: Upgrade available**
```
A newer stable version is available: v3.2.2
Your version: 3.2.1
To upgrade:
PowerShell: iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1)
Bash: bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
```
**Scenario 3: Development version**
```
You are running a development version (3.4.0-dev)
This is newer than the latest stable release (v3.3.0)
```
## Simple Bash Commands
### Basic Operations
```bash
# Check local version file
bash(test -f ./.claude/version.json && cat ./.claude/version.json)
# Check global version file
bash(test -f ~/.claude/version.json && cat ~/.claude/version.json)
# Extract version from JSON
bash(cat version.json | grep -o '"version": *"[^"]*"' | cut -d'"' -f4)
# Extract date from JSON
bash(cat version.json | grep -o '"installation_date_utc": *"[^"]*"' | cut -d'"' -f4)
# Fetch latest release (with timeout)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null, timeout: 30000)
# Extract tag name
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"tag_name": *"[^"]*"' | cut -d'"' -f4, timeout: 30000)
# Extract release name
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null | grep -o '"name": *"[^"]*"' | head -1 | cut -d'"' -f4, timeout: 30000)
# Fetch latest commit (with timeout)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null, timeout: 30000)
# Extract commit SHA
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep -o '"sha": *"[^"]*"' | head -1 | cut -d'"' -f4 | cut -c1-7, timeout: 30000)
# Extract commit message (first line)
bash(curl -fsSL "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null | grep '"message":' | cut -d'"' -f4 | cut -d'\' -f1, timeout: 30000)
# Compare versions
bash(printf "%s\n%s" "3.2.1" "3.2.2" | sort -V | tail -n 1)
# Remove 'v' prefix
bash(echo "v3.2.1" | sed 's/^v//')
```
## Error Handling
### No installation found
```
WARNING: Claude Code Workflow not installed
Install using:
PowerShell: iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.ps1)
```
### Network error
```
ERROR: Could not fetch latest version from GitHub
Check your network connection
```
### Invalid version.json
```
ERROR: version.json is invalid or corrupted
```
## Design Notes
- Uses simple, direct bash commands instead of complex functions
- Each step is independent and can be executed separately
- Fallback to grep/sed for JSON parsing (no jq dependency required)
- Network calls use curl with error suppression and 30-second timeout
- Version comparison uses `sort -V` for accurate semantic versioning
- Use `/commits/main` API instead of `/branches/main` for more reliable commit info
- Extract first line of commit message using `cut -d'\' -f1` to handle JSON escape sequences
## API Endpoints
### GitHub API Used
- **Latest Release**: `https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest`
- Fields: `tag_name`, `name`, `published_at`
- **Latest Commit**: `https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main`
- Fields: `sha`, `commit.message`, `commit.author.date`
### Timeout Configuration
All network calls should use `timeout: 30000` (30 seconds) to handle slow connections.

View File

@@ -0,0 +1,447 @@
---
name: action-plan-verify
description: Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation
argument-hint: "[optional: --session session-id]"
allowed-tools: Read(*), TodoWrite(*), Glob(*), Bash(*)
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Identify inconsistencies, duplications, ambiguities, and underspecified items between action planning artifacts (`IMPL_PLAN.md`, `task.json`) and brainstorming artifacts (`role analysis documents`) before implementation. This command MUST run only after `/workflow:plan` has successfully produced complete `IMPL_PLAN.md` and task JSON files.
## Operating Constraints
**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands).
**Synthesis Authority**: The `role analysis documents` is **authoritative** for requirements and design decisions. Any conflicts between IMPL_PLAN/tasks and synthesis are automatically CRITICAL and require adjustment of the plan/tasks—not reinterpretation of requirements.
## Execution Steps
### 1. Initialize Analysis Context
```bash
# Detect active workflow session
IF --session parameter provided:
session_id = provided session
ELSE:
# Auto-detect active session
active_sessions = bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
IF active_sessions is empty:
ERROR: "No active workflow session found. Use --session <session-id>"
EXIT
ELSE IF active_sessions has multiple entries:
# Use most recently modified session
session_id = bash(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs basename)
ELSE:
session_id = basename(active_sessions[0])
# Derive absolute paths
session_dir = .workflow/active/WFS-{session}
brainstorm_dir = session_dir/.brainstorming
task_dir = session_dir/.task
# Validate required artifacts
# Note: "role analysis documents" refers to [role]/analysis.md files (e.g., product-manager/analysis.md)
SYNTHESIS_DIR = brainstorm_dir # Contains role analysis files: */analysis.md
IMPL_PLAN = session_dir/IMPL_PLAN.md
TASK_FILES = Glob(task_dir/*.json)
# Abort if missing
SYNTHESIS_FILES = Glob(brainstorm_dir/*/analysis.md)
IF SYNTHESIS_FILES.count == 0:
ERROR: "No role analysis documents found in .brainstorming/*/analysis.md. Run /workflow:brainstorm:synthesis first"
EXIT
IF NOT EXISTS(IMPL_PLAN):
ERROR: "IMPL_PLAN.md not found. Run /workflow:plan first"
EXIT
IF TASK_FILES.count == 0:
ERROR: "No task JSON files found. Run /workflow:plan first"
EXIT
```
### 2. Load Artifacts (Progressive Disclosure)
Load only minimal necessary context from each artifact:
**From workflow-session.json** (NEW - PRIMARY REFERENCE):
- Original user prompt/intent (project or description field)
- User's stated goals and objectives
- User's scope definition
**From role analysis documents**:
- Functional Requirements (IDs, descriptions, acceptance criteria)
- Non-Functional Requirements (IDs, targets)
- Business Requirements (IDs, success metrics)
- Key Architecture Decisions
- Risk factors and mitigation strategies
- Implementation Roadmap (high-level phases)
**From IMPL_PLAN.md**:
- Summary and objectives
- Context Analysis
- Implementation Strategy
- Task Breakdown Summary
- Success Criteria
- Brainstorming Artifacts References (if present)
**From task.json files**:
- Task IDs
- Titles and descriptions
- Status
- Dependencies (depends_on, blocks)
- Context (requirements, focus_paths, acceptance, artifacts)
- Flow control (pre_analysis, implementation_approach)
- Meta (complexity, priority)
### 3. Build Semantic Models
Create internal representations (do not include raw artifacts in output):
**Requirements inventory**:
- Each functional/non-functional/business requirement with stable ID
- Requirement text, acceptance criteria, priority
**Architecture decisions inventory**:
- ADRs from synthesis
- Technology choices
- Data model references
**Task coverage mapping**:
- Map each task to one or more requirements (by ID reference or keyword inference)
- Map each requirement to covering tasks
**Dependency graph**:
- Task-to-task dependencies (depends_on, blocks)
- Requirement-level dependencies (from synthesis)
### 4. Detection Passes (Token-Efficient Analysis)
Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.
#### A. User Intent Alignment (NEW - CRITICAL)
- **Goal Alignment**: IMPL_PLAN objectives match user's original intent
- **Scope Drift**: Plan covers user's stated scope without unauthorized expansion
- **Success Criteria Match**: Plan's success criteria reflect user's expectations
- **Intent Conflicts**: Tasks contradicting user's original objectives
#### B. Requirements Coverage Analysis
- **Orphaned Requirements**: Requirements in synthesis with zero associated tasks
- **Unmapped Tasks**: Tasks with no clear requirement linkage
- **NFR Coverage Gaps**: Non-functional requirements (performance, security, scalability) not reflected in tasks
#### C. Consistency Validation
- **Requirement Conflicts**: Tasks contradicting synthesis requirements
- **Architecture Drift**: IMPL_PLAN architecture not matching synthesis ADRs
- **Terminology Drift**: Same concept named differently across IMPL_PLAN and tasks
- **Data Model Inconsistency**: Tasks referencing entities/fields not in synthesis data model
#### D. Dependency Integrity
- **Circular Dependencies**: Task A depends on B, B depends on C, C depends on A
- **Missing Dependencies**: Task requires outputs from another task but no explicit dependency
- **Broken Dependencies**: Task depends on non-existent task ID
- **Logical Ordering Issues**: Implementation tasks before foundational setup without dependency note
#### E. Synthesis Alignment
- **Priority Conflicts**: High-priority synthesis requirements mapped to low-priority tasks
- **Success Criteria Mismatch**: IMPL_PLAN success criteria not covering synthesis acceptance criteria
- **Risk Mitigation Gaps**: Critical risks in synthesis without corresponding mitigation tasks
#### F. Task Specification Quality
- **Ambiguous Focus Paths**: Tasks with vague or missing focus_paths
- **Underspecified Acceptance**: Tasks without clear acceptance criteria
- **Missing Artifacts References**: Tasks not referencing relevant brainstorming artifacts in context.artifacts
- **Weak Flow Control**: Tasks without clear implementation_approach or pre_analysis steps
- **Missing Target Files**: Tasks without flow_control.target_files specification
#### G. Duplication Detection
- **Overlapping Task Scope**: Multiple tasks with nearly identical descriptions
- **Redundant Requirements Coverage**: Same requirement covered by multiple tasks without clear partitioning
#### H. Feasibility Assessment
- **Complexity Misalignment**: Task marked "simple" but requires multiple file modifications
- **Resource Conflicts**: Parallel tasks requiring same resources/files
- **Skill Gap Risks**: Tasks requiring skills not in team capability assessment (from synthesis)
### 5. Severity Assignment
Use this heuristic to prioritize findings:
- **CRITICAL**:
- Violates user's original intent (goal misalignment, scope drift)
- Violates synthesis authority (requirement conflict)
- Core requirement with zero coverage
- Circular dependencies
- Broken dependencies
- **HIGH**:
- NFR coverage gaps
- Priority conflicts
- Missing risk mitigation tasks
- Ambiguous acceptance criteria
- **MEDIUM**:
- Terminology drift
- Missing artifacts references
- Weak flow control
- Logical ordering issues
- **LOW**:
- Style/wording improvements
- Minor redundancy not affecting execution
### 6. Produce Compact Analysis Report
**Report Generation**: Generate report content and save to file.
Output a Markdown report with the following structure:
```markdown
## Action Plan Verification Report
**Session**: WFS-{session-id}
**Generated**: {timestamp}
**Artifacts Analyzed**: role analysis documents, IMPL_PLAN.md, {N} task files
---
### Executive Summary
- **Overall Risk Level**: CRITICAL | HIGH | MEDIUM | LOW
- **Recommendation**: (See decision matrix below)
- BLOCK_EXECUTION: Critical issues exist (must fix before proceeding)
- PROCEED_WITH_FIXES: High issues exist, no critical (fix recommended before execution)
- PROCEED_WITH_CAUTION: Medium issues only (proceed with awareness)
- PROCEED: Low issues only or no issues (safe to execute)
- **Critical Issues**: {count}
- **High Issues**: {count}
- **Medium Issues**: {count}
- **Low Issues**: {count}
---
### Findings Summary
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| C1 | Coverage | CRITICAL | synthesis:FR-03 | Requirement "User auth" has zero task coverage | Add authentication implementation task |
| H1 | Consistency | HIGH | IMPL-1.2 vs synthesis:ADR-02 | Task uses REST while synthesis specifies GraphQL | Align task with ADR-02 decision |
| M1 | Specification | MEDIUM | IMPL-2.1 | Missing context.artifacts reference | Add @synthesis reference |
| L1 | Duplication | LOW | IMPL-3.1, IMPL-3.2 | Similar scope | Consider merging |
(Add one row per finding; generate stable IDs prefixed by severity initial.)
---
### Requirements Coverage Analysis
| Requirement ID | Requirement Summary | Has Task? | Task IDs | Priority Match | Notes |
|----------------|---------------------|-----------|----------|----------------|-------|
| FR-01 | User authentication | Yes | IMPL-1.1, IMPL-1.2 | Match | Complete |
| FR-02 | Data export | Yes | IMPL-2.3 | Mismatch | High req → Med priority task |
| FR-03 | Profile management | No | - | - | **CRITICAL: Zero coverage** |
| NFR-01 | Response time <200ms | No | - | - | **HIGH: No performance tasks** |
**Coverage Metrics**:
- Functional Requirements: 85% (17/20 covered)
- Non-Functional Requirements: 40% (2/5 covered)
- Business Requirements: 100% (5/5 covered)
---
### Unmapped Tasks
| Task ID | Title | Issue | Recommendation |
|---------|-------|-------|----------------|
| IMPL-4.5 | Refactor utils | No requirement linkage | Link to technical debt or remove |
---
### Dependency Graph Issues
**Circular Dependencies**: None detected
**Broken Dependencies**:
- IMPL-2.3 depends on "IMPL-2.4" (non-existent)
**Logical Ordering Issues**:
- IMPL-5.1 (integration test) has no dependency on IMPL-1.* (implementation tasks)
---
### Synthesis Alignment Issues
| Issue Type | Synthesis Reference | IMPL_PLAN/Task | Impact | Recommendation |
|------------|---------------------|----------------|--------|----------------|
| Architecture Conflict | synthesis:ADR-01 (JWT auth) | IMPL_PLAN uses session cookies | HIGH | Update IMPL_PLAN to use JWT |
| Priority Mismatch | synthesis:FR-02 (High) | IMPL-2.3 (Medium) | MEDIUM | Elevate task priority |
| Missing Risk Mitigation | synthesis:Risk-03 (API rate limits) | No mitigation tasks | HIGH | Add rate limiting implementation task |
---
### Task Specification Quality Issues
**Missing Artifacts References**: 12 tasks lack context.artifacts
**Weak Flow Control**: 5 tasks lack implementation_approach
**Missing Target Files**: 8 tasks lack flow_control.target_files
**Sample Issues**:
- IMPL-1.2: No context.artifacts reference to synthesis
- IMPL-3.1: Missing flow_control.target_files specification
- IMPL-4.2: Vague focus_paths ["src/"] - needs refinement
---
### Feasibility Concerns
| Concern | Tasks Affected | Issue | Recommendation |
|---------|----------------|-------|----------------|
| Skill Gap | IMPL-6.1, IMPL-6.2 | Requires Kubernetes expertise not in team | Add training task or external consultant |
| Resource Conflict | IMPL-3.1, IMPL-3.2 | Both modify src/auth/service.ts in parallel | Add dependency or serialize |
---
### Metrics
- **Total Requirements**: 30 (20 functional, 5 non-functional, 5 business)
- **Total Tasks**: 25
- **Overall Coverage**: 77% (23/30 requirements with ≥1 task)
- **Critical Issues**: 2
- **High Issues**: 5
- **Medium Issues**: 8
- **Low Issues**: 3
---
### Next Actions
#### Action Recommendations
**Recommendation Decision Matrix**:
| Condition | Recommendation | Action |
|-----------|----------------|--------|
| Critical > 0 | BLOCK_EXECUTION | Must resolve all critical issues before proceeding |
| Critical = 0, High > 0 | PROCEED_WITH_FIXES | Fix high-priority issues before execution |
| Critical = 0, High = 0, Medium > 0 | PROCEED_WITH_CAUTION | Proceed with awareness of medium issues |
| Only Low or None | PROCEED | Safe to execute workflow |
**If CRITICAL Issues Exist** (BLOCK_EXECUTION):
- Resolve all critical issues before proceeding
- Use TodoWrite to track required fixes
- Fix broken dependencies and circular references first
**If HIGH Issues Exist** (PROCEED_WITH_FIXES):
- Fix high-priority issues before execution
- Use TodoWrite to systematically track and complete improvements
**If Only MEDIUM/LOW Issues** (PROCEED_WITH_CAUTION / PROCEED):
- Can proceed with execution
- Address issues during or after implementation
#### TodoWrite-Based Remediation Workflow
**Report Location**: `.workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md`
**Recommended Workflow**:
1. **Create TodoWrite Task List**: Extract all findings from report
2. **Process by Priority**: CRITICAL → HIGH → MEDIUM → LOW
3. **Complete Each Fix**: Mark tasks as in_progress/completed as you work
4. **Validate Changes**: Verify each modification against requirements
**TodoWrite Task Structure Example**:
```markdown
Priority Order:
1. Fix coverage gaps (CRITICAL)
2. Resolve consistency conflicts (CRITICAL/HIGH)
3. Add missing specifications (MEDIUM)
4. Improve task quality (LOW)
```
**Notes**:
- TodoWrite provides real-time progress tracking
- Each finding becomes a trackable todo item
- User can monitor progress throughout remediation
- Architecture drift in IMPL_PLAN requires manual editing
```
### 7. Save Report and Execute TodoWrite-Based Remediation
**Step 7.1: Save Analysis Report**:
```bash
report_path = ".workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
Write(report_path, full_report_content)
```
**Step 7.2: Display Report Summary to User**:
- Show executive summary with counts
- Display recommendation (BLOCK/PROCEED_WITH_FIXES/PROCEED_WITH_CAUTION/PROCEED)
- List critical and high issues if any
**Step 7.3: After Report Generation**:
1. **Extract Findings**: Parse all issues by severity
2. **Create TodoWrite Task List**: Convert findings to actionable todos
3. **Execute Fixes**: Process each todo systematically
4. **Update Task Files**: Apply modifications directly to task JSON files
5. **Update IMPL_PLAN**: Apply strategic changes if needed
At end of report, provide remediation guidance:
```markdown
### 🔧 Remediation Workflow
**Recommended Approach**:
1. **Initialize TodoWrite**: Create comprehensive task list from all findings
2. **Process by Severity**: Start with CRITICAL, then HIGH, MEDIUM, LOW
3. **Apply Fixes Directly**: Modify task.json files and IMPL_PLAN.md as needed
4. **Track Progress**: Mark todos as completed after each fix
**TodoWrite Execution Pattern**:
```bash
# Step 1: Create task list from verification report
TodoWrite([
{ content: "Fix FR-03 coverage gap - add authentication task", status: "pending", activeForm: "Fixing FR-03 coverage gap" },
{ content: "Fix IMPL-1.2 consistency - align with ADR-02", status: "pending", activeForm: "Fixing IMPL-1.2 consistency" },
{ content: "Add context.artifacts to IMPL-1.2", status: "pending", activeForm: "Adding context.artifacts to IMPL-1.2" },
# ... additional todos for each finding
])
# Step 2: Process each todo systematically
# Mark as in_progress when starting
# Apply fix using Read/Edit tools
# Mark as completed when done
# Move to next priority item
```
**File Modification Workflow**:
```bash
# For task JSON modifications:
1. Read(.workflow/active/WFS-{session}/.task/IMPL-X.Y.json)
2. Edit() to apply fixes
3. Mark todo as completed
# For IMPL_PLAN modifications:
1. Read(.workflow/active/WFS-{session}/IMPL_PLAN.md)
2. Edit() to apply strategic changes
3. Mark todo as completed
```
**Note**: All fixes execute immediately after user confirmation without additional commands.

Some files were not shown because too many files have changed in this diff Show More