Remove obsolete role and skill router templates; update .gitignore to exclude generated files

catlog22
2026-02-16 13:23:03 +08:00
parent d7349f0540
commit 8ac4356d63
93 changed files with 92 additions and 25590 deletions


@@ -1,353 +0,0 @@
---
name: codex-skill-designer
description: Meta-skill for designing Codex-native skills with subagent orchestration (spawn_agent/wait/send_input/close_agent). Supports new skill creation and Claude→Codex conversion. Triggers on "design codex skill", "create codex skill", "codex skill designer", "convert to codex".
allowed-tools: Task, AskUserQuestion, TodoWrite, Read, Write, Edit, Bash, Glob, Grep
---
# Codex Skill Designer
Meta-skill for creating Codex-native skills that use the subagent API (`spawn_agent`/`wait`/`send_input`/`close_agent`). Generates complete skill packages with orchestrator coordination and agent role definitions.
## Architecture Overview
```
┌──────────────────────────────────────────────────────────────┐
│ Codex Skill Designer │
│ → Analyze requirements → Design orchestrator → Design agents│
└───────────────┬──────────────────────────────────────────────┘
┌───────────┼───────────┬───────────┐
↓ ↓ ↓ ↓
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Phase 1 │ │ Phase 2 │ │ Phase 3 │ │ Phase 4 │
│ Require │ │ Orch │ │ Agent │ │ Valid │
│ Analysis│ │ Design │ │ Design │ │ & Integ │
└─────────┘ └─────────┘ └─────────┘ └─────────┘
↓ ↓ ↓ ↓
codexSkill orchestrator agents/ Complete
Config .md generated *.md skill pkg
```
## Target Output Structure
The skill this meta-skill produces follows this structure:
### Mode A: Structured Skill Package (multi-agent orchestration)
```
.codex/skills/{skill-name}/
├── orchestrator.md # Main Codex orchestrator
│ ├── Frontmatter (name, description)
│ ├── Architecture (spawn/wait/close flow)
│ ├── Agent Registry (role → path mapping)
│ ├── Phase Execution (spawn_agent patterns)
│ ├── Result Aggregation (wait + merge)
│ └── Lifecycle Management (close_agent cleanup)
├── agents/ # Skill-specific agent definitions
│ ├── {agent-1}.md # → deploy to ~/.codex/agents/
│ └── {agent-2}.md # → deploy to ~/.codex/agents/
└── phases/ # [Optional] Phase execution detail
├── 01-{phase}.md
└── 02-{phase}.md
```
### Mode B: Single Prompt (simple or self-contained skills)
```
~/.codex/prompts/{skill-name}.md # Self-contained Codex prompt
```
## Key Design Principles — Codex-Native Patterns
### Pattern 1: Explicit Lifecycle Management
Every agent has a complete lifecycle: `spawn_agent` → `wait` → [`send_input`] → `close_agent`.
```javascript
// Standard lifecycle
const agentId = spawn_agent({ message: taskMessage })
const result = wait({ ids: [agentId], timeout_ms: 300000 })
// [Optional: send_input for multi-round]
close_agent({ id: agentId })
```
**Key Rules**:
- Use `wait()` to retrieve results; NEVER rely on the `close_agent` return value
- `close_agent` is irreversible: no further `wait` or `send_input` is possible afterward
- Delay `close_agent` until you are certain no further interaction is needed
### Pattern 2: Role Loading via Path Reference
Codex subagents cannot auto-load role definitions, so use the MANDATORY FIRST STEPS pattern:
```javascript
spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
## TASK CONTEXT
${taskContext}
## DELIVERABLES
${deliverables}
`
})
```
### Pattern 3: Parallel Fan-out with Batch Wait
Multiple independent agents → batch `wait({ ids: [...] })`:
```javascript
const agentIds = tasks.map(task =>
spawn_agent({ message: buildTaskMessage(task) })
)
const results = wait({ ids: agentIds, timeout_ms: 600000 })
agentIds.forEach(id => close_agent({ id }))
```
### Pattern 4: Deep Interaction (send_input Multi-round)
Single agent, multi-phase with context preservation:
```javascript
const agent = spawn_agent({ message: explorePrompt })
const round1 = wait({ ids: [agent] })
// Continue with clarification
send_input({ id: agent, message: clarificationAnswers })
const round2 = wait({ ids: [agent] })
close_agent({ id: agent }) // Only after all rounds complete
```
### Pattern 5: Two-Phase Workflow (Clarify → Execute)
```
Phase 1: spawn_agent → output Open Questions only
Phase 2: send_input (answers) → output full solution
```
### Pattern 6: Structured Output Template
All agents produce uniform output:
```text
Summary:
- One-sentence completion status
Findings:
- Finding 1: specific description
- Finding 2: specific description
Proposed changes:
- File: path/to/file
- Change: specific modification
- Risk: potential impact
Tests:
- New/updated test cases needed
- Test commands to run
Open questions:
1. Question needing clarification
2. Question needing clarification
```
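As a sketch, a small checker (a hypothetical helper, not part of the Codex API) can verify that an agent reply contains every required section before the orchestrator accepts it:

```javascript
// Required section headers from the structured output template above
const REQUIRED_SECTIONS = ["Summary:", "Findings:", "Proposed changes:", "Tests:", "Open questions:"];

// Returns the template sections missing from an agent's raw output
function missingSections(output) {
  return REQUIRED_SECTIONS.filter(header => !output.includes(header));
}

const reply = "Summary:\n- Done\nFindings:\n- None\nProposed changes:\n- None\nTests:\n- None\nOpen questions:\n1. None";
console.log(missingSections(reply)); // []
```

A non-empty result can trigger a `send_input` round asking the agent to reformat its answer.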
## Execution Flow
```
Phase 1: Requirements Analysis
└─ Ref: phases/01-requirements-analysis.md
├─ Input: text description / Claude skill / requirements doc / existing codex prompt
└─ Output: codexSkillConfig (agents, phases, patterns, interaction model)
Phase 2: Orchestrator Design
└─ Ref: phases/02-orchestrator-design.md
├─ Input: codexSkillConfig
└─ Output: .codex/skills/{name}/orchestrator.md (or ~/.codex/prompts/{name}.md)
Phase 3: Agent Design
└─ Ref: phases/03-agent-design.md
├─ Input: codexSkillConfig + source content
└─ Output: .codex/skills/{name}/agents/*.md + optional phases/*.md
Phase 4: Validation & Delivery
└─ Ref: phases/04-validation.md
└─ Output: Validated skill package + deployment instructions
```
**Phase Reference Documents** (read on-demand when phase executes):
| Phase | Document | Purpose |
|-------|----------|---------|
| 1 | [phases/01-requirements-analysis.md](phases/01-requirements-analysis.md) | Analyze inputs, determine skill config |
| 2 | [phases/02-orchestrator-design.md](phases/02-orchestrator-design.md) | Generate Codex-native orchestrator |
| 3 | [phases/03-agent-design.md](phases/03-agent-design.md) | Generate agent roles & command patterns |
| 4 | [phases/04-validation.md](phases/04-validation.md) | Validate structure, patterns, quality |
## Input Sources
| Source | Description | Example |
|--------|-------------|---------|
| **Text description** | User describes desired Codex skill | "Create a 3-agent code review skill for Codex" |
| **Claude skill** | Convert existing Claude skill to Codex | `.claude/skills/workflow-plan/SKILL.md` |
| **Requirements doc** | Structured requirements file | `requirements.md` with agents/phases/outputs |
| **Existing Codex prompt** | Refactor/enhance a Codex prompt | `~/.codex/prompts/plan.md` |
## Conversion Mode (Claude → Codex)
When source is a Claude skill, apply conversion rules:
| Claude Pattern | Codex Equivalent |
|----------------|-----------------|
| `Task({ subagent_type, prompt })` | `spawn_agent({ message })` + `wait()` |
| `Task({ run_in_background: false })` | `spawn_agent()` + immediate `wait()` |
| `Task({ resume: agentId })` | `send_input({ id: agentId })` |
| `TaskOutput({ task_id, block })` | `wait({ ids: [id], timeout_ms })` |
| Automatic agent cleanup | Explicit `close_agent({ id })` |
| `subagent_type` auto-loads role | MANDATORY FIRST STEPS role path |
| Multiple parallel `Task()` calls | Multiple `spawn_agent()` + batch `wait({ ids })` |
**Full conversion spec**: Ref: specs/conversion-rules.md
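For example, a pair of parallel Claude `Task()` calls maps to explicit spawns plus one batch wait (illustrative pseudocode; `withRoleHeader` is a hypothetical helper that prepends the MANDATORY FIRST STEPS block):
```javascript
// Claude source
const a = Task({ subagent_type: "code-reviewer", prompt: reviewPrompt, run_in_background: true })
const b = Task({ subagent_type: "test-writer", prompt: testPrompt, run_in_background: true })

// Codex equivalent
const ids = [
  spawn_agent({ message: withRoleHeader("~/.codex/agents/code-reviewer.md", reviewPrompt) }),
  spawn_agent({ message: withRoleHeader("~/.codex/agents/test-writer.md", testPrompt) })
]
const results = wait({ ids, timeout_ms: 600000 })
ids.forEach(id => close_agent({ id })) // Codex cleanup is explicit, not automatic
```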
## Data Flow
```
Phase 1 → codexSkillConfig:
{
name, description, outputMode (structured|single),
agents: [{ name, role_file, responsibility, patterns }],
phases: [{ name, agents_involved, interaction_model }],
parallelSplits: [{ strategy, agents }],
conversionSource: null | { type, path }
}
Phase 2 → orchestrator.md:
Generated Codex orchestrator with spawn/wait/close patterns
Phase 3 → agents/*.md:
Per-agent role definitions with Codex-native conventions
Phase 4 → validated package:
Structural completeness + pattern compliance + quality score
```
## TodoWrite Pattern
```
Phase starts:
→ Sub-tasks ATTACHED to TodoWrite (in_progress + pending)
→ Designer executes sub-tasks sequentially
Phase ends:
→ Sub-tasks COLLAPSED back to high-level summary (completed)
→ Next phase begins
```
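The attach/collapse cycle might look like the following TodoWrite calls (item wording and fields are illustrative assumptions):
```javascript
// Phase starts: expand the active phase into sub-tasks
TodoWrite({ todos: [
  { content: "Phase 2: Orchestrator Design", status: "in_progress" },
  { content: "2.1 Determine output path", status: "in_progress" },
  { content: "2.2 Generate frontmatter", status: "pending" },
  { content: "2.3 Generate phase execution blocks", status: "pending" }
]})

// Phase ends: collapse back to high-level summaries
TodoWrite({ todos: [
  { content: "Phase 2: Orchestrator Design", status: "completed" },
  { content: "Phase 3: Agent Design", status: "in_progress" }
]})
```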
## Interactive Preference Collection
Collect preferences via AskUserQuestion before dispatching to phases:
```javascript
const prefResponse = AskUserQuestion({
questions: [
{
question: "What is the output mode for this Codex skill?",
header: "Output Mode",
multiSelect: false,
options: [
{ label: "Structured Package (Recommended)", description: "Multi-file: orchestrator.md + agents/*.md + phases/*.md" },
{ label: "Single Prompt", description: "Self-contained ~/.codex/prompts/{name}.md" }
]
},
{
question: "What is the input source?",
header: "Input Source",
multiSelect: false,
options: [
{ label: "Text Description", description: "Describe the desired Codex skill in natural language" },
{ label: "Claude Skill (Convert)", description: "Convert existing .claude/skills/ to Codex-native" },
{ label: "Requirements Doc", description: "Structured requirements file" },
{ label: "Existing Codex Prompt", description: "Refactor/enhance existing ~/.codex/prompts/" }
]
}
]
})
const workflowPreferences = {
outputMode: prefResponse["Output Mode"].includes("Structured") ? "structured" : "single",
inputSource: prefResponse["Input Source"]
}
```
## Specification Documents
Read specs on-demand for pattern guidance:
| Spec | Document | Purpose |
|------|----------|---------|
| Agent Patterns | [specs/codex-agent-patterns.md](specs/codex-agent-patterns.md) | Core Codex subagent API patterns |
| Conversion Rules | [specs/conversion-rules.md](specs/conversion-rules.md) | Claude → Codex mapping rules |
| Quality Standards | [specs/quality-standards.md](specs/quality-standards.md) | Quality gates & validation criteria |
## Generation Templates
Apply templates during generation:
| Template | Document | Purpose |
|----------|----------|---------|
| Orchestrator | [templates/orchestrator-template.md](templates/orchestrator-template.md) | Codex orchestrator output template |
| Agent Role | [templates/agent-role-template.md](templates/agent-role-template.md) | Agent role definition template |
| Command Patterns | [templates/command-pattern-template.md](templates/command-pattern-template.md) | Pre-built Codex command patterns |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Source Claude skill has unsupported patterns | Log warning, provide manual conversion guidance |
| Agent role file path conflict | Append skill-name prefix to agent file |
| Output directory exists | Ask user: overwrite or new name |
| Validation score < 70% | Block delivery, report issues |
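The path-conflict rule from the table can be sketched as a pure helper (hypothetical, not part of the generator API):

```javascript
// Resolve an agent role filename collision by prefixing the skill name
function resolveRoleFilename(agentName, skillName, existingFiles) {
  const plain = `${agentName}.md`;
  if (!existingFiles.includes(plain)) return plain;
  return `${skillName}-${agentName}.md`;
}

console.log(resolveRoleFilename("reviewer", "code-audit", ["reviewer.md"])); // "code-audit-reviewer.md"
```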
## Post-Phase Updates
After each phase, update accumulated state:
```javascript
// After Phase 1
codexSkillConfig = { /* requirements-analysis output */ }
// After Phase 2
generatedFiles.orchestrator = "path/to/orchestrator.md"
// After Phase 3
generatedFiles.agents = ["path/to/agent1.md", "path/to/agent2.md"]
generatedFiles.phases = ["path/to/phase1.md"] // optional
// After Phase 4
validationResult = { score, issues, passed }
```
## Coordinator Checklist
### Pre-Phase Actions
- [ ] Verify input source exists and is readable
- [ ] Collect preferences via AskUserQuestion
- [ ] Read relevant specs based on input source
### Post-Phase Actions
- [ ] Verify phase output completeness
- [ ] Update TodoWrite status
- [ ] Pass accumulated state to next phase
### Final Delivery
- [ ] All generated files written to target directory
- [ ] Deployment instructions provided
- [ ] Agent files include `~/.codex/agents/` deployment paths


@@ -1,167 +0,0 @@
# Phase 1: Requirements Analysis
Analyze input source and extract Codex skill configuration.
## Objective
- Parse input source (text / Claude skill / requirements doc / Codex prompt)
- Identify agents, phases, interaction patterns
- Determine output mode (structured package vs single prompt)
- Produce codexSkillConfig for downstream phases
## Pre-Requisites
Read specification documents based on input source:
- **Always**: Read `specs/codex-agent-patterns.md` for available patterns
- **Claude conversion**: Also read `specs/conversion-rules.md`
- **Quality reference**: Read `specs/quality-standards.md` for target criteria
## Execution
### Step 1.1: Input Source Detection
```javascript
// Determine input type from workflowPreferences
const inputSource = workflowPreferences.inputSource
if (inputSource.includes("Claude Skill")) {
// Read source Claude skill
const sourceSkillPath = AskUserQuestion({
questions: [{
question: "Path to the Claude skill to convert?",
header: "Skill Path",
multiSelect: false,
options: [
{ label: "Browse", description: "I'll provide the path" }
]
}]
})
// Read SKILL.md + phases/*.md from source
const skillContent = Read(sourceSkillPath)
const sourceSkillDir = sourceSkillPath.replace(/\/SKILL\.md$/, "")
const phaseFiles = Glob(`${sourceSkillDir}/phases/*.md`)
} else if (inputSource.includes("Text Description")) {
// Collect description via user interaction
} else if (inputSource.includes("Requirements Doc")) {
// Read requirements file
} else if (inputSource.includes("Existing Codex")) {
// Read existing Codex prompt for refactoring
}
```
### Step 1.2: Skill Structure Extraction
For each input type, extract:
**From Text Description**:
```javascript
const codexSkillConfig = {
name: extractSkillName(userDescription),
description: extractDescription(userDescription),
outputMode: workflowPreferences.outputMode,
agents: inferAgents(userDescription),
phases: inferPhases(userDescription),
parallelSplits: inferParallelism(userDescription),
interactionModel: inferInteractionModel(userDescription),
conversionSource: null
}
```
**From Claude Skill** (conversion):
```javascript
// Parse Claude SKILL.md
const claudeConfig = {
phases: extractPhases(skillContent),
agents: extractTaskCalls(skillContent), // Find Task() invocations
dataFlow: extractDataFlow(skillContent),
todoPattern: extractTodoPattern(skillContent),
resumePatterns: findResumePatterns(skillContent) // For send_input mapping
}
const codexSkillConfig = {
name: claudeConfig.name,
description: claudeConfig.description,
outputMode: workflowPreferences.outputMode,
agents: claudeConfig.agents.map(a => ({
name: a.subagent_type,
role_file: mapToCodexRolePath(a.subagent_type),
responsibility: a.description,
patterns: determinePatterns(a)
})),
phases: claudeConfig.phases.map(p => ({
name: p.name,
agents_involved: p.agentCalls.map(a => a.subagent_type),
interaction_model: hasResume(p) ? "deep_interaction" : "standard"
})),
parallelSplits: detectParallelPatterns(claudeConfig),
conversionSource: { type: "claude_skill", path: sourceSkillPath }
}
```
### Step 1.3: Agent Inventory Check
Verify agent roles exist in `~/.codex/agents/`:
```javascript
const existingAgents = Glob("~/.codex/agents/*.md")
const requiredAgents = codexSkillConfig.agents.map(a => a.role_file)
const missingAgents = requiredAgents.filter(r =>
!existingAgents.includes(r)
)
if (missingAgents.length > 0) {
// Mark as "needs new agent role definition"
codexSkillConfig.newAgentDefinitions = missingAgents
}
```
### Step 1.4: Interaction Model Selection
Based on agent relationships, select interaction patterns:
| Pattern | Condition | Result |
|---------|-----------|--------|
| **Standard** | Single agent, single task | `spawn → wait → close` |
| **Parallel Fan-out** | Multiple independent agents | `spawn[] → batch wait → close[]` |
| **Deep Interaction** | Multi-phase with context | `spawn → wait → send_input → wait → close` |
| **Two-Phase** | Needs clarification first | `spawn(clarify) → wait → send_input(answers) → wait → close` |
| **Pipeline** | Sequential agent chain | `spawn(A) → wait → spawn(B, with A result) → wait → close` |
```javascript
codexSkillConfig.phases.forEach(phase => {
if (phase.agents_involved.length > 1) {
phase.interaction_model = "parallel_fanout"
} else if (phase.interaction_model === "deep_interaction") {
// Already set from resume pattern detection
} else {
phase.interaction_model = "standard"
}
})
```
### Step 1.5: User Confirmation
Present extracted configuration for user review:
```javascript
AskUserQuestion({
questions: [{
question: `Skill "${codexSkillConfig.name}" will have ${codexSkillConfig.agents.length} agent(s) and ${codexSkillConfig.phases.length} phase(s). ${codexSkillConfig.newAgentDefinitions?.length || 0} new agent definitions needed. Proceed?`,
header: "Confirm",
multiSelect: false,
options: [
{ label: "Proceed", description: "Generate Codex skill package" },
{ label: "Adjust", description: "Modify configuration first" }
]
}]
})
```
## Output
- **Variable**: `codexSkillConfig` — complete skill configuration
- **TodoWrite**: Mark Phase 1 completed, Phase 2 in_progress
## Next Phase
Return to orchestrator, then auto-continue to [Phase 2: Orchestrator Design](02-orchestrator-design.md).


@@ -1,291 +0,0 @@
# Phase 2: Orchestrator Design
Generate the main Codex orchestrator document using codexSkillConfig.
## Objective
- Generate orchestrator.md (structured mode) or {skill-name}.md (single mode)
- Apply Codex-native patterns: spawn_agent, wait, send_input, close_agent
- Include agent registry, phase execution, lifecycle management
- Preserve source content faithfully when converting from Claude
## Pre-Requisites
- Read `templates/orchestrator-template.md` for output structure
- Read `specs/codex-agent-patterns.md` for pattern reference
- If converting: Read `specs/conversion-rules.md` for mapping rules
## Execution
### Step 2.1: Determine Output Path
```javascript
const outputPath = codexSkillConfig.outputMode === "structured"
? `.codex/skills/${codexSkillConfig.name}/orchestrator.md`
: `~/.codex/prompts/${codexSkillConfig.name}.md`
```
### Step 2.2: Generate Frontmatter
```markdown
---
name: {{skill_name}}
description: {{description}}
agents: {{agent_count}}
phases: {{phase_count}}
output_template: structured # or "open_questions" for clarification-first
---
```
### Step 2.3: Generate Architecture Diagram
Map phases and agents to ASCII flow:
```javascript
// For parallel fan-out:
const diagram = `
┌──────────────────────────────────────────┐
│ ${codexSkillConfig.name} Orchestrator │
└──────────────┬───────────────────────────┘
┌───────────┼───────────┬────────────┐
↓ ↓ ↓ ↓
┌──────┐ ┌──────┐ ┌──────┐ ┌──────┐
│Agent1│ │Agent2│ │Agent3│ │AgentN│
│spawn │ │spawn │ │spawn │ │spawn │
└──┬───┘ └──┬───┘ └──┬───┘ └──┬───┘
└──────────┼───────────┘ │
↓ │
batch wait({ids}) ←──────────┘
Aggregate Results
close_agent (all)
`
```
### Step 2.4: Generate Agent Registry
```javascript
const agentRegistry = codexSkillConfig.agents.map(agent => ({
name: agent.name,
role_file: agent.role_file, // e.g., ~/.codex/agents/cli-explore-agent.md
responsibility: agent.responsibility,
is_new: agent.role_file.startsWith('.codex/skills/') // skill-specific new agent
}))
```
Output as registry table in orchestrator:
```markdown
## Agent Registry
| Agent | Role File | Responsibility | Status |
|-------|-----------|----------------|--------|
{{#each agents}}
| `{{name}}` | `{{role_file}}` | {{responsibility}} | {{#if is_new}}NEW{{else}}existing{{/if}} |
{{/each}}
```
### Step 2.5: Generate Phase Execution Blocks
For each phase in codexSkillConfig.phases, generate the appropriate pattern:
**Standard Pattern** (single agent, single task):
```javascript
// Phase N: {{phase.name}}
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: {{agent.role_file}} (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
Goal: {{phase.goal}}
Scope:
- Allowed: {{phase.scope.include}}
- Not allowed: {{phase.scope.exclude}}
Context:
{{phase.context}}
Deliverables:
- {{phase.deliverables}}
Quality bar:
- {{phase.quality_criteria}}
`
})
const result = wait({ ids: [agentId], timeout_ms: {{phase.timeout_ms || 300000}} })
close_agent({ id: agentId })
```
**Parallel Fan-out Pattern** (multiple independent agents):
```javascript
// Phase N: {{phase.name}} (Parallel)
const agentIds = {{phase.agents}}.map(agentConfig => {
return spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ${agentConfig.role_file} (MUST read first)
2. Read: .workflow/project-tech.json
---
Goal: ${agentConfig.specific_goal}
Scope: ${agentConfig.scope}
Deliverables: ${agentConfig.deliverables}
`
})
})
// Batch wait for all agents
const results = wait({
ids: agentIds,
timeout_ms: {{phase.timeout_ms || 600000}}
})
// Handle timeout
if (results.timed_out) {
const completed = agentIds.filter(id => results.status[id].completed)
const pending = agentIds.filter(id => !results.status[id].completed)
// Decision: continue waiting or use partial results
}
// Aggregate results
const aggregated = agentIds.map(id => results.status[id].completed)
// Cleanup
agentIds.forEach(id => close_agent({ id }))
```
**Deep Interaction Pattern** (multi-round with send_input):
```javascript
// Phase N: {{phase.name}} (Deep Interaction)
const agent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: {{agent.role_file}} (MUST read first)
---
### Phase A: {{phase.initial_goal}}
Goal: {{phase.initial_goal}}
Output: Findings + Open Questions (if any)
Output format for questions:
\`\`\`
CLARIFICATION_NEEDED:
Q1: [question] | Options: [A, B, C] | Recommended: [A]
\`\`\`
### Phase B: {{phase.followup_goal}}
Trigger: Receive answers via send_input
Output: Complete deliverable
`
})
// Round 1: Initial exploration
const round1 = wait({ ids: [agent], timeout_ms: {{phase.timeout_ms || 600000}} })
// Check for clarification needs
const needsClarification = round1.status[agent].completed.includes('CLARIFICATION_NEEDED')
if (needsClarification) {
// Collect user answers (orchestrator responsibility)
const answers = collectUserAnswers(round1)
// Continue interaction
send_input({
id: agent,
message: `
## CLARIFICATION ANSWERS
${answers}
## NEXT STEP
Proceed with Phase B: {{phase.followup_goal}}
`
})
const round2 = wait({ ids: [agent], timeout_ms: {{phase.timeout_ms || 900000}} })
}
close_agent({ id: agent })
```
**Pipeline Pattern** (sequential agent chain):
```javascript
// Phase N: {{phase.name}} (Pipeline)
// Stage 1
const agent1 = spawn_agent({ message: stage1Prompt })
const result1 = wait({ ids: [agent1] })
close_agent({ id: agent1 })
// Stage 2 (uses Stage 1 output)
const agent2 = spawn_agent({
message: `
## TASK ASSIGNMENT
...
## PREVIOUS STAGE OUTPUT
${result1.status[agent1].completed}
...
`
})
const result2 = wait({ ids: [agent2] })
close_agent({ id: agent2 })
```
### Step 2.6: Generate Lifecycle Management Section
```markdown
## Lifecycle Management
### Timeout Handling
| Timeout | Action |
|---------|--------|
| Agent completes within timeout | Process result, close_agent |
| Agent times out (partial) | Option 1: continue wait / Option 2: send_input to urge convergence / Option 3: close_agent and use partial |
| All agents timeout | Log warning, retry with extended timeout or abort |
### Cleanup Protocol
After ALL phases complete or on error:
1. Verify all agent IDs have been closed
2. Report any agents still running
3. Force close remaining agents
\`\`\`javascript
const allAgentIds = [] // accumulated during execution
allAgentIds.forEach(id => {
try { close_agent({ id }) } catch { /* already closed */ }
})
\`\`\`
```
### Step 2.7: Write Orchestrator File
Apply template from `templates/orchestrator-template.md` with generated content.
Write the complete orchestrator to the output path.
## Output
- **File**: `{outputPath}` — generated Codex orchestrator
- **Variable**: `generatedFiles.orchestrator` = outputPath
- **TodoWrite**: Mark Phase 2 completed, Phase 3 in_progress
## Next Phase
Return to orchestrator, then auto-continue to [Phase 3: Agent Design](03-agent-design.md).


@@ -1,277 +0,0 @@
# Phase 3: Agent Design
Generate agent role definitions and optional phase execution detail files.
## Objective
- Generate agent role files for `~/.codex/agents/` or `.codex/skills/{name}/agents/`
- Apply Codex-native conventions (MANDATORY FIRST STEPS, structured output)
- Preserve source content when converting from Claude
- Generate optional phase detail files for complex orchestrations
## Pre-Requisites
- Read `templates/agent-role-template.md` for role file structure
- Read `templates/command-pattern-template.md` for pre-built command patterns
- Read `specs/codex-agent-patterns.md` for API patterns
## Execution
### Step 3.1: Identify Agents to Generate
```javascript
// From codexSkillConfig
const agentsToGenerate = codexSkillConfig.agents.filter(a =>
a.role_file.startsWith('.codex/skills/') // new skill-specific agents
|| codexSkillConfig.newAgentDefinitions?.includes(a.role_file)
)
// Existing agents (already in ~/.codex/agents/) — skip generation
const existingAgents = codexSkillConfig.agents.filter(a =>
!agentsToGenerate.includes(a)
)
```
### Step 3.2: Generate Agent Role Files
For each agent to generate, apply the agent-role-template:
```javascript
for (const agent of agentsToGenerate) {
const roleContent = applyTemplate('templates/agent-role-template.md', {
agent_name: agent.name,
description: agent.responsibility,
capabilities: agent.capabilities || inferCapabilities(agent),
execution_process: agent.workflow || inferWorkflow(agent),
output_format: codexSkillConfig.outputTemplate || "structured",
key_reminders: generateReminders(agent)
})
const outputPath = agent.role_file.startsWith('~/')
? agent.role_file
: `.codex/skills/${codexSkillConfig.name}/agents/${agent.name}.md`
Write(outputPath, roleContent)
generatedFiles.agents.push(outputPath)
}
```
### Step 3.3: Agent Role File Content Structure
Each generated agent role file follows this structure:
```markdown
---
name: {{agent_name}}
description: |
{{description}}
color: {{color}}
skill: {{parent_skill_name}}
---
# {{agent_display_name}}
{{description_paragraph}}
## Core Capabilities
1. **{{capability_1}}**: {{description}}
2. **{{capability_2}}**: {{description}}
3. **{{capability_3}}**: {{description}}
## Execution Process
### Step 1: Context Loading
- Read role-specific configuration files
- Load project context (.workflow/project-tech.json)
- Understand task scope from TASK ASSIGNMENT
### Step 2: {{primary_action}}
{{primary_action_detail}}
### Step 3: {{secondary_action}}
{{secondary_action_detail}}
### Step 4: Output Delivery
Produce structured output following the template:
\`\`\`text
Summary:
- {{summary_format}}
Findings:
- {{findings_format}}
Proposed changes:
- {{changes_format}}
Tests:
- {{tests_format}}
Open questions:
- {{questions_format}}
\`\`\`
## Key Reminders
**ALWAYS**:
- Read role definition file as FIRST action
- Follow structured output template
- Stay within assigned scope
- Report open questions instead of guessing
**NEVER**:
- Modify files outside assigned scope
- Skip role definition loading
- Produce unstructured output
- Make assumptions about unclear requirements
```
### Step 3.4: Conversion from Claude Agent Definitions
When converting from Claude skill, extract agent behavior from:
1. **Task() prompts**: The `prompt` parameter contains the agent's task instructions
2. **Phase files**: Phase execution detail contains the full agent interaction
3. **subagent_type**: Maps to existing `~/.codex/agents/` roles
```javascript
// For each Task() call found in Claude source
for (const taskCall of claudeConfig.agents) {
const existingRole = roleMapping[taskCall.subagent_type]
if (existingRole) {
// Map to existing Codex agent — no new file needed
// Just reference in orchestrator's MANDATORY FIRST STEPS
codexSkillConfig.agents.push({
name: taskCall.subagent_type,
role_file: `~/.codex/agents/${taskCall.subagent_type}.md`,
responsibility: taskCall.description,
is_new: false
})
} else {
// Extract agent behavior from Claude prompt and create new role
const newRole = extractRoleFromPrompt(taskCall.prompt)
// Generate new role file
}
}
```
### Step 3.5: Command Pattern Selection
For agents that need specific command patterns, select from pre-built templates:
| Pattern | Use When | Template |
|---------|----------|----------|
| **Explore** | Agent needs codebase exploration | Parallel fan-out spawn_agent |
| **Analyze** | Agent performs multi-perspective analysis | Parallel spawn + merge |
| **Implement** | Agent writes code | Sequential spawn + validate |
| **Validate** | Agent runs tests | Iterative spawn + send_input fix cycle |
| **Review** | Agent reviews code/artifacts | Parallel spawn + aggregate |
| **Deep Interact** | Agent needs multi-round conversation | spawn + wait + send_input loop |
| **Two-Phase** | Agent needs clarification first | spawn(clarify) + send_input(execute) |
Read `templates/command-pattern-template.md` for full pattern implementations.
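The selection table above could be encoded as a simple lookup (the keyword heuristics are illustrative assumptions, not part of the spec):

```javascript
// Map an agent's responsibility text to a command-pattern name from the table above
const PATTERN_RULES = [
  { pattern: "Explore",   match: /explor|discover|survey/i },
  { pattern: "Validate",  match: /test|validat|verify/i },
  { pattern: "Implement", match: /implement|write code|build/i },
  { pattern: "Review",    match: /review|audit/i },
  { pattern: "Analyze",   match: /analy[sz]e|assess/i }
];

function selectCommandPattern(responsibility) {
  const hit = PATTERN_RULES.find(r => r.match.test(responsibility));
  return hit ? hit.pattern : "Standard"; // fall back to the standard lifecycle
}
```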
### Step 3.6: Generate Phase Detail Files (Optional)
For structured mode with complex phases, generate phase detail files:
```javascript
if (codexSkillConfig.outputMode === "structured") {
for (const phase of codexSkillConfig.phases) {
if (phase.complexity === "high" || phase.agents_involved.length > 2) {
const phaseContent = generatePhaseDetail(phase, codexSkillConfig)
const phasePath = `.codex/skills/${codexSkillConfig.name}/phases/${phase.index}-${phase.slug}.md`
Write(phasePath, phaseContent)
generatedFiles.phases.push(phasePath)
}
}
}
```
Phase detail structure:
```markdown
# Phase {{N}}: {{Phase Name}}
{{One-sentence description}}
## Agents Involved
| Agent | Role | Interaction Model |
|-------|------|-------------------|
{{#each phase.agents}}
| {{name}} | {{role_file}} | {{interaction_model}} |
{{/each}}
## Execution
### spawn_agent Configuration
\`\`\`javascript
const agent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: {{role_file}} (MUST read first)
...
---
Goal: {{goal}}
Scope: {{scope}}
Context: {{context}}
Deliverables: {{deliverables}}
Quality bar: {{quality}}
`
})
\`\`\`
### Wait & Result Processing
\`\`\`javascript
const result = wait({ ids: [agent], timeout_ms: {{timeout}} })
// Process: {{result_processing}}
close_agent({ id: agent })
\`\`\`
## Output
- **Result**: {{output_description}}
- **Passed to**: Phase {{N+1}}
```
### Step 3.7: Deployment Mapping
Generate deployment instructions:
```javascript
const deploymentMap = {
// Existing agents — no action needed
existing: existingAgents.map(a => ({
name: a.name,
path: a.role_file,
action: "already deployed"
})),
// New agents — need deployment
new: agentsToGenerate.map(a => ({
name: a.name,
sourcePath: `.codex/skills/${codexSkillConfig.name}/agents/${a.name}.md`,
targetPath: `~/.codex/agents/${a.name}.md`,
action: "copy to ~/.codex/agents/"
}))
}
```
## Output
- **Files**: `generatedFiles.agents[]` — agent role files
- **Files**: `generatedFiles.phases[]` — optional phase detail files
- **Variable**: `deploymentMap` — deployment instructions
- **TodoWrite**: Mark Phase 3 completed, Phase 4 in_progress
## Next Phase
Return to orchestrator, then auto-continue to [Phase 4: Validation & Delivery](04-validation.md).


@@ -1,254 +0,0 @@
# Phase 4: Validation & Delivery
Validate the generated Codex skill package and deliver to target location.
## Objective
- Verify structural completeness of all generated files
- Validate Codex pattern compliance (lifecycle, role loading, output format)
- Score quality against standards
- Deploy to target location with instructions
## Pre-Requisites
- Read `specs/quality-standards.md` for validation criteria
- Access `generatedFiles` from previous phases
- Access `codexSkillConfig` for expected structure
## Execution
### Step 4.1: Structural Completeness Check
```javascript
const structuralChecks = {
// Orchestrator exists
orchestrator: {
exists: fileExists(generatedFiles.orchestrator),
hasFrontmatter: checkFrontmatter(generatedFiles.orchestrator),
hasArchitecture: checkSection(generatedFiles.orchestrator, "Architecture"),
hasAgentRegistry: checkSection(generatedFiles.orchestrator, "Agent Registry"),
hasPhaseExecution: checkSection(generatedFiles.orchestrator, "Phase"),
hasLifecycleManagement: checkSection(generatedFiles.orchestrator, "Lifecycle"),
hasTimeoutHandling: checkSection(generatedFiles.orchestrator, "Timeout"),
passed: 0, total: 7
},
// Agent files exist and are well-formed
agents: codexSkillConfig.agents.map(agent => {
  const agentFile = generatedFiles.agents.find(f => f.includes(agent.name)) || agent.role_file
  return {
    name: agent.name,
    exists: fileExists(agentFile),
    hasFrontmatter: checkFrontmatter(agentFile),
    hasCapabilities: checkSection(agentFile, "Core Capabilities"),
    hasExecution: checkSection(agentFile, "Execution Process"),
    hasReminders: checkSection(agentFile, "Key Reminders"),
    passed: 0, total: 5
  }
}),
// Phase files (if structured mode)
phases: generatedFiles.phases?.map(phasePath => ({
path: phasePath,
exists: fileExists(phasePath),
hasAgentTable: checkSection(phasePath, "Agents Involved"),
hasSpawnConfig: checkSection(phasePath, "spawn_agent"),
hasWaitProcessing: checkSection(phasePath, "Wait"),
passed: 0, total: 4
})) || []
}
// Count passes: tally the boolean checks in each group, accumulate totals
let totalPassed = 0, totalChecks = 0
const groups = [structuralChecks.orchestrator, ...structuralChecks.agents, ...structuralChecks.phases]
for (const g of groups) {
  g.passed = Object.values(g).filter(v => v === true).length
  totalPassed += g.passed
  totalChecks += g.total
}
const structuralScore = Math.round((totalPassed / totalChecks) * 100)
```
### Step 4.2: Codex Pattern Compliance
Verify all Codex-native patterns are correctly applied:
```javascript
const spawnCount = countPattern(orchestratorContent, /spawn_agent/g)
const closeCount = countPattern(orchestratorContent, /close_agent/g)
const patternChecks = {
  // Lifecycle: every spawn has a close
  lifecycle: {
    spawnCount,
    closeCount,
    balanced: spawnCount <= closeCount, // close >= spawn (batch close is OK)
    description: "Every spawn_agent must have matching close_agent"
  },
// Role loading: MANDATORY FIRST STEPS present
roleLoading: {
hasPattern: orchestratorContent.includes("MANDATORY FIRST STEPS"),
allAgentsReferenced: codexSkillConfig.agents.every(a =>
orchestratorContent.includes(a.role_file)
),
usesPathNotInline: !orchestratorContent.includes("## ROLE DEFINITION"),
description: "Role files loaded via path reference, not inline content"
},
// Wait pattern: uses wait() not close_agent for results
waitPattern: {
usesWaitForResults: countPattern(orchestratorContent, /wait\(\s*\{/) > 0,
noCloseForResults: !hasPatternSequence(orchestratorContent, "close_agent", "result"),
description: "Results obtained via wait(), not close_agent"
},
// Batch wait: parallel agents use batch wait
batchWait: {
applicable: codexSkillConfig.parallelSplits?.length > 0,
usesBatchIds: orchestratorContent.includes("ids: [") ||
orchestratorContent.includes("ids: agentIds"),
description: "Parallel agents use batch wait({ ids: [...] })"
},
// Timeout handling: timeout_ms specified
timeout: {
hasTimeout: orchestratorContent.includes("timeout_ms"),
hasTimeoutHandling: orchestratorContent.includes("timed_out"),
description: "Timeout specified and timeout scenarios handled"
},
// Structured output: agents produce uniform output
structuredOutput: {
hasSummary: agentContents.every(c => c.includes("Summary:")),
hasDeliverables: agentContents.every(c => c.includes("Deliverables") || c.includes("Findings")),
description: "All agents produce structured output template"
},
// No Claude patterns: no Task(), no TaskOutput(), no resume
noClaudePatterns: {
noTask: !orchestratorContent.includes("Task("),
noTaskOutput: !orchestratorContent.includes("TaskOutput("),
noResume: !orchestratorContent.includes("resume:") && !orchestratorContent.includes("resume ="),
description: "No Claude-specific patterns remain"
}
}
const patternScore = calculatePatternScore(patternChecks)
```
### Step 4.3: Content Quality Check
```javascript
const qualityChecks = {
// Orchestrator quality
orchestratorQuality: {
hasSubstantiveContent: orchestratorContent.length > 500,
hasCodeBlocks: countPattern(orchestratorContent, /```/g) >= 4,
hasErrorHandling: orchestratorContent.includes("Error") || orchestratorContent.includes("error"),
noPlaceholders: !orchestratorContent.includes("{{") && !orchestratorContent.includes("TODO"),
description: "Orchestrator is complete and production-ready"
},
// Agent quality
agentQuality: agentContents.map(content => ({
hasSubstantiveContent: content.length > 300,
hasActionableSteps: countPattern(content, /Step \d/g) >= 2,
hasOutputFormat: content.includes("Output") || content.includes("Deliverables"),
noPlaceholders: !content.includes("{{") && !content.includes("TODO")
})),
// Conversion quality (if applicable)
conversionQuality: codexSkillConfig.conversionSource ? {
allTasksConverted: true, // verify all Claude Task() calls are mapped
noLostFunctionality: true, // verify no features dropped
interactionPreserved: true // verify resume → send_input mapping
} : null
}
const qualityScore = calculateQualityScore(qualityChecks)
```
### Step 4.4: Quality Gate
```javascript
const overallScore = (
structuralScore * 0.30 +
patternScore * 0.40 +
qualityScore * 0.30
)
const verdict = overallScore >= 80 ? "PASS" :
overallScore >= 60 ? "REVIEW" : "FAIL"
```
| Verdict | Score | Action |
|---------|-------|--------|
| **PASS** | >= 80% | Deliver to target location |
| **REVIEW** | 60-79% | Report issues, ask user to proceed or fix |
| **FAIL** | < 60% | Block delivery, list critical issues |
### Step 4.5: Validation Report
```javascript
const validationReport = {
skill: codexSkillConfig.name,
outputMode: codexSkillConfig.outputMode,
scores: {
structural: structuralScore,
pattern: patternScore,
quality: qualityScore,
overall: overallScore
},
verdict: verdict,
issues: collectIssues(structuralChecks, patternChecks, qualityChecks),
generatedFiles: generatedFiles,
deploymentMap: deploymentMap
}
```
### Step 4.6: Delivery
If verdict is PASS or user approves REVIEW:
```javascript
// For structured mode — files already in .codex/skills/{name}/
// Report deployment instructions for agent files
const deploymentInstructions = `
## Deployment Instructions
### Generated Files
${generatedFiles.orchestrator}
${generatedFiles.agents.join('\n')}
${generatedFiles.phases?.join('\n') || '(no phase files)'}
### Agent Deployment
${deploymentMap.new.map(a =>
`Copy: ${a.sourcePath}${a.targetPath}`
).join('\n')}
### Existing Agents (no action needed)
${deploymentMap.existing.map(a =>
`${a.name}: ${a.path}`
).join('\n')}
### Usage
Invoke the generated orchestrator via Codex:
- Read the orchestrator.md and follow its phase execution
- Or register as a Codex prompt in ~/.codex/prompts/
### Validation Score
Overall: ${overallScore}% (${verdict})
- Structural: ${structuralScore}%
- Pattern Compliance: ${patternScore}%
- Content Quality: ${qualityScore}%
`
```
### Step 4.7: Final Summary to User
Present:
1. Generated file list with paths
2. Validation scores
3. Deployment instructions
4. Any issues or warnings
5. Next steps (e.g., "test the skill by running the orchestrator")
## Output
- **Report**: Validation report with scores
- **Deployment**: Instructions for agent file deployment
- **TodoWrite**: Mark Phase 4 completed
## Completion
Skill package generation complete. All files written and validated.

# Codex Agent Patterns
Core Codex subagent API patterns reference for skill generation.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 0 | Read to understand available Codex patterns |
| Phase 2 | Reference when generating orchestrator patterns |
| Phase 3 | Reference when designing agent interactions |
---
## 1. API Reference
### 1.1 spawn_agent
Creates a new subagent with independent context.
```javascript
const agentId = spawn_agent({
message: "task message", // Required: task assignment
agent_type: "type" // Optional: preset baseline
})
// Returns: agent_id (string)
```
**Key Facts**:
- Each agent has isolated context (no shared state)
- `agent_type` selects preset behavior baseline
- Role definition must be loaded via MANDATORY FIRST STEPS
- Returns immediately — use `wait()` for results
### 1.2 wait
Retrieves results from one or more agents.
```javascript
const result = wait({
ids: [agentId1, agentId2], // Required: agent IDs to wait for
timeout_ms: 300000 // Optional: max wait time (ms)
})
// Returns: { timed_out: boolean, status: { [id]: { completed: string } } }
```
**Key Facts**:
- Primary result retrieval method (NOT close_agent)
- Supports batch wait for multiple agents
- `timed_out: true` means some agents haven't finished — can re-wait
- Can be called multiple times on same agent
### 1.3 send_input
Continues interaction with an active agent.
```javascript
send_input({
id: agentId, // Required: target agent
message: "follow-up", // Required: continuation message
interrupt: false // Optional: interrupt current processing
})
```
**Key Facts**:
- Agent must NOT be closed
- Preserves full conversation context
- Use for: clarification answers, phase transitions, iterative refinement
- `interrupt: true` — use with caution (stops current processing)
### 1.4 close_agent
Permanently terminates an agent.
```javascript
close_agent({ id: agentId })
```
**Key Facts**:
- Irreversible — no further wait/send_input possible
- Do NOT use to retrieve results (use wait instead)
- Delay until certain no more interaction needed
- Call for ALL agents at end of workflow (cleanup)
## 2. Interaction Patterns
### 2.1 Standard (Single Agent, Single Task)
```
spawn_agent → wait → close_agent
```
**Use When**: Simple, one-shot tasks with clear deliverables.
```javascript
const agent = spawn_agent({ message: taskPrompt })
const result = wait({ ids: [agent], timeout_ms: 300000 })
close_agent({ id: agent })
```
### 2.2 Parallel Fan-out (Multiple Independent Agents)
```
spawn_agent × N → batch wait({ ids: [...] }) → close_agent × N
```
**Use When**: Multiple independent tasks that can run concurrently.
```javascript
const agents = tasks.map(t => spawn_agent({ message: buildPrompt(t) }))
const results = wait({ ids: agents, timeout_ms: 600000 })
// Aggregate results
const merged = agents.map(id => results.status[id].completed)
// Cleanup all
agents.forEach(id => close_agent({ id }))
```
**Split Strategies**:
| Strategy | Description | Example |
|----------|-------------|---------|
| By responsibility | Each agent has different role | Research / Plan / Test |
| By module | Each agent handles different code area | auth / api / database |
| By perspective | Each agent analyzes from different angle | security / performance / maintainability |
### 2.3 Deep Interaction (Multi-round with send_input)
```
spawn_agent → wait (round 1) → send_input → wait (round 2) → ... → close_agent
```
**Use When**: Tasks needing iterative refinement or multi-phase execution within single agent context.
```javascript
const agent = spawn_agent({ message: initialPrompt })
// Round 1
const r1 = wait({ ids: [agent], timeout_ms: 300000 })
// Round 2 (refine based on r1)
send_input({ id: agent, message: refinementPrompt })
const r2 = wait({ ids: [agent], timeout_ms: 300000 })
// Round 3 (finalize)
send_input({ id: agent, message: finalizationPrompt })
const r3 = wait({ ids: [agent], timeout_ms: 300000 })
close_agent({ id: agent })
```
### 2.4 Two-Phase (Clarify → Execute)
```
spawn_agent → wait (questions) → send_input (answers) → wait (solution) → close_agent
```
**Use When**: Complex tasks where requirements need clarification before execution.
```javascript
const agent = spawn_agent({
message: `
## TASK ASSIGNMENT
...
### Phase A: Exploration & Clarification
Output findings + Open Questions (CLARIFICATION_NEEDED format)
### Phase B: Full Solution (after receiving answers)
Output complete deliverable
`
})
// Phase A
const exploration = wait({ ids: [agent], timeout_ms: 600000 })
if (exploration.status[agent].completed.includes('CLARIFICATION_NEEDED')) {
// Collect answers
const answers = getUserAnswers(exploration)
// Phase B
send_input({
id: agent,
message: `## CLARIFICATION ANSWERS\n${answers}\n\n## PROCEED\nGenerate full solution.`
})
const solution = wait({ ids: [agent], timeout_ms: 900000 })
}
close_agent({ id: agent })
```
### 2.5 Pipeline (Sequential Agent Chain)
```
spawn(A) → wait(A) → close(A) → spawn(B, with A's output) → wait(B) → close(B)
```
**Use When**: Tasks where each stage depends on the previous stage's output.
```javascript
// Stage 1: Research
const researcher = spawn_agent({ message: researchPrompt })
const research = wait({ ids: [researcher] })
close_agent({ id: researcher })
// Stage 2: Plan (uses research output)
const planner = spawn_agent({
message: `${planPrompt}\n\n## RESEARCH CONTEXT\n${research.status[researcher].completed}`
})
const plan = wait({ ids: [planner] })
close_agent({ id: planner })
// Stage 3: Execute (uses plan output)
const executor = spawn_agent({
message: `${executePrompt}\n\n## PLAN\n${plan.status[planner].completed}`
})
const execution = wait({ ids: [executor] })
close_agent({ id: executor })
```
### 2.6 Merged Exploration (Explore + Clarify + Plan in Single Agent)
```
spawn(dual-role) → wait(explore) → send_input(clarify) → wait(plan) → close
```
**Use When**: Exploration and planning are tightly coupled and benefit from shared context.
**Advantages over Pipeline**:
- 60-80% fewer agent creations
- No context loss between phases
- Higher result consistency
```javascript
const agent = spawn_agent({
message: `
## DUAL ROLE ASSIGNMENT
### Role A: Explorer
Explore codebase, identify patterns, generate questions
### Role B: Planner (activated after clarification)
Generate implementation plan based on exploration + answers
### Phase 1: Explore
Output: Findings + CLARIFICATION_NEEDED questions
### Phase 2: Plan (triggered by send_input)
Output: plan.json
`
})
const explore = wait({ ids: [agent] })
// ... handle clarification ...
send_input({ id: agent, message: answers })
const plan = wait({ ids: [agent] })
close_agent({ id: agent })
```
## 3. Message Design
### 3.1 TASK ASSIGNMENT Structure
```text
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
Goal: One-sentence objective
Scope:
- Include: allowed operations
- Exclude: forbidden operations
- Directory: target paths
- Dependencies: dependency constraints
Context:
- Key paths: relevant file paths
- Current state: system status
- Constraints: must-follow rules
Deliverables:
- Output structured following template
Quality bar:
- Criterion 1
- Criterion 2
```
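The assignment structure above can also be generated programmatically. A minimal sketch, assuming a helper named `buildTaskMessage` (illustrative only, not part of the Codex API):

```javascript
// Illustrative helper: builds a TASK ASSIGNMENT message from its parts.
// Field names mirror the template above.
function buildTaskMessage({ roleFile, goal, scope = [], context = [], deliverables = [], qualityBar = [] }) {
  const list = (items) => items.map(i => `- ${i}`).join("\n")
  return [
    "## TASK ASSIGNMENT",
    "### MANDATORY FIRST STEPS (Agent Execute)",
    `1. **Read role definition**: ${roleFile} (MUST read first)`,
    "2. Read: .workflow/project-tech.json",
    "3. Read: .workflow/project-guidelines.json",
    "---",
    `Goal: ${goal}`,
    "Scope:", list(scope),
    "Context:", list(context),
    "Deliverables:", list(deliverables),
    "Quality bar:", list(qualityBar)
  ].join("\n")
}

const msg = buildTaskMessage({
  roleFile: "~/.codex/agents/cli-explore-agent.md",
  goal: "Explore authentication patterns",
  scope: ["Include: read-only exploration"],
  deliverables: ["Structured findings following output template"]
})
```

The result is passed directly as the `message` argument of `spawn_agent`.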
### 3.2 Structured Output Template
```text
Summary:
- One-sentence completion status
Findings:
- Finding 1: description
- Finding 2: description
Proposed changes:
- File: path/to/file
- Change: modification detail
- Risk: impact assessment
Tests:
- Test cases needed
- Commands to run
Open questions:
1. Unresolved question 1
2. Unresolved question 2
```
### 3.3 Clarification Format
```text
CLARIFICATION_NEEDED:
Q1: [question] | Options: [A, B, C] | Recommended: [A]
Q2: [question] | Options: [A, B] | Recommended: [B]
```
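Before collecting answers, the orchestrator needs to extract the questions from this format. A minimal parsing sketch (the function name is illustrative):

```javascript
// Illustrative parser for the CLARIFICATION_NEEDED line format above.
function parseClarifications(output) {
  const lines = output.split("\n")
  const start = lines.findIndex(l => l.trim().startsWith("CLARIFICATION_NEEDED"))
  if (start === -1) return []
  const questions = []
  for (const line of lines.slice(start + 1)) {
    const m = line.match(/^Q(\d+):\s*(.+?)\s*\|\s*Options:\s*\[(.+?)\]\s*\|\s*Recommended:\s*\[(.+?)\]/)
    if (!m) break
    questions.push({
      id: Number(m[1]),
      question: m[2],
      options: m[3].split(",").map(s => s.trim()),
      recommended: m[4]
    })
  }
  return questions
}

const qs = parseClarifications(
  "CLARIFICATION_NEEDED:\nQ1: Which auth flow? | Options: [OAuth, JWT] | Recommended: [JWT]"
)
```

Each parsed question can then be presented to the user, with the recommended option as the default.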
## 4. Error Handling
### 4.1 Timeout
```javascript
const result = wait({ ids: [agent], timeout_ms: 30000 })
if (result.timed_out) {
// Option 1: Continue waiting
const retry = wait({ ids: [agent], timeout_ms: 60000 })
// Option 2: Urge convergence
send_input({ id: agent, message: "Please wrap up and output current findings." })
const urged = wait({ ids: [agent], timeout_ms: 30000 })
// Option 3: Abort
close_agent({ id: agent })
}
```
### 4.2 Agent Recovery (post close_agent)
```javascript
// Cannot recover closed agent — must recreate
const newAgent = spawn_agent({
message: `${originalPrompt}\n\n## PREVIOUS ATTEMPT OUTPUT\n${previousOutput}`
})
```
### 4.3 Partial Results (parallel fan-out)
```javascript
const results = wait({ ids: agents, timeout_ms: 300000 })
const completed = agents.filter(id => results.status[id]?.completed)
const pending = agents.filter(id => !results.status[id]?.completed)
if (completed.length >= Math.ceil(agents.length * 0.7)) {
// 70%+ complete — proceed with partial results
pending.forEach(id => close_agent({ id }))
}
```
## 5. Role Loading
### 5.1 Path Reference Pattern (Recommended)
```javascript
spawn_agent({
message: `
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/${agentType}.md (MUST read first)
...
`
})
```
**Why**: Keeps message lean, agent loads its own role context.
### 5.2 Role Mapping
| Agent Type | Role File |
|------------|-----------|
| cli-explore-agent | ~/.codex/agents/cli-explore-agent.md |
| cli-lite-planning-agent | ~/.codex/agents/cli-lite-planning-agent.md |
| code-developer | ~/.codex/agents/code-developer.md |
| context-search-agent | ~/.codex/agents/context-search-agent.md |
| debug-explore-agent | ~/.codex/agents/debug-explore-agent.md |
| doc-generator | ~/.codex/agents/doc-generator.md |
| action-planning-agent | ~/.codex/agents/action-planning-agent.md |
| test-fix-agent | ~/.codex/agents/test-fix-agent.md |
| universal-executor | ~/.codex/agents/universal-executor.md |
| tdd-developer | ~/.codex/agents/tdd-developer.md |
| ui-design-agent | ~/.codex/agents/ui-design-agent.md |
## 6. Design Principles
1. **Delay close_agent**: Only close when certain no more interaction needed
2. **Batch wait over sequential**: Use `wait({ ids: [...] })` for parallel agents
3. **Merge phases when context-dependent**: Use send_input over new agents
4. **Structured output always**: Enforce uniform output template
5. **Minimal message size**: Pass role file paths, not inline content
6. **Explicit lifecycle**: Every spawn must have a close (balanced)
7. **Timeout handling**: Always specify timeout_ms, always handle timed_out

# Claude → Codex Conversion Rules
Comprehensive mapping rules for converting Claude Code skills to Codex-native skills.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 1 | Reference when analyzing Claude source skill |
| Phase 2 | Apply when generating Codex orchestrator |
| Phase 3 | Apply when converting agent definitions |
---
## 1. API Mapping
### 1.1 Core API Conversion
| Claude Pattern | Codex Equivalent | Notes |
|----------------|-----------------|-------|
| `Task({ subagent_type, prompt })` | `spawn_agent({ message })` + `wait()` | Split create and result retrieval |
| `Task({ run_in_background: false })` | `spawn_agent()` + immediate `wait()` | Synchronous equivalent |
| `Task({ run_in_background: true })` | `spawn_agent()` (wait later) | Deferred wait |
| `Task({ resume: agentId })` | `send_input({ id: agentId })` | Agent must not be closed |
| `TaskOutput({ task_id, block: true })` | `wait({ ids: [id] })` | Blocking wait |
| `TaskOutput({ task_id, block: false })` | `wait({ ids: [id], timeout_ms: 1000 })` | Polling with short timeout |
| Agent auto-cleanup | `close_agent({ id })` | Must be explicit |
### 1.2 Parallel Task Conversion
**Claude**:
```javascript
// Multiple Task() calls in single message (parallel)
const result1 = Task({ subagent_type: "agent-a", prompt: promptA })
const result2 = Task({ subagent_type: "agent-b", prompt: promptB })
const result3 = Task({ subagent_type: "agent-c", prompt: promptC })
```
**Codex**:
```javascript
// Explicit parallel: spawn all, then batch wait
const idA = spawn_agent({ message: promptA_with_role })
const idB = spawn_agent({ message: promptB_with_role })
const idC = spawn_agent({ message: promptC_with_role })
const results = wait({ ids: [idA, idB, idC], timeout_ms: 600000 })
// Process results
const resultA = results.status[idA].completed
const resultB = results.status[idB].completed
const resultC = results.status[idC].completed
// Cleanup
;[idA, idB, idC].forEach(id => close_agent({ id }))
```
### 1.3 Resume/Continue Conversion
**Claude**:
```javascript
// Resume a previous agent
Task({ subagent_type: "agent-a", resume: previousAgentId, prompt: "Continue..." })
```
**Codex**:
```javascript
// send_input to continue (agent must still be alive)
send_input({
id: previousAgentId,
message: "Continue..."
})
const continued = wait({ ids: [previousAgentId] })
```
### 1.4 TaskOutput Polling Conversion
**Claude**:
```javascript
while (!done) {
const output = TaskOutput({ task_id: id, block: false })
if (output.status === 'completed') done = true
sleep(1000)
}
```
**Codex**:
```javascript
let result = wait({ ids: [id], timeout_ms: 30000 })
let attempts = 0
while (result.timed_out && ++attempts < 10) { // cap retries to avoid an indefinite hang
  result = wait({ ids: [id], timeout_ms: 30000 })
}
```
## 2. Role Loading Conversion
### 2.1 subagent_type → MANDATORY FIRST STEPS
**Claude**: Role automatically loaded via `subagent_type` parameter.
**Codex**: Role must be explicitly loaded by agent as first action.
**Conversion**:
```javascript
// Claude
Task({
subagent_type: "cli-explore-agent",
prompt: "Explore the codebase for authentication patterns"
})
// Codex
spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
Goal: Explore the codebase for authentication patterns
Deliverables: Structured findings following output template
`
})
```
### 2.2 Role Mapping Table
| Claude subagent_type | Codex Role Path |
|----------------------|-----------------|
| `Explore` | `~/.codex/agents/cli-explore-agent.md` |
| `Plan` | `~/.codex/agents/cli-lite-planning-agent.md` |
| `code-developer` | `~/.codex/agents/code-developer.md` |
| `context-search-agent` | `~/.codex/agents/context-search-agent.md` |
| `debug-explore-agent` | `~/.codex/agents/debug-explore-agent.md` |
| `doc-generator` | `~/.codex/agents/doc-generator.md` |
| `action-planning-agent` | `~/.codex/agents/action-planning-agent.md` |
| `test-fix-agent` | `~/.codex/agents/test-fix-agent.md` |
| `universal-executor` | `~/.codex/agents/universal-executor.md` |
| `tdd-developer` | `~/.codex/agents/tdd-developer.md` |
| `general-purpose` | `~/.codex/agents/universal-executor.md` |
| `Bash` | Direct shell execution (no agent needed) |
| `haiku` / `sonnet` / `opus` | Model selection via agent_type parameter |
## 3. Structural Conversion
### 3.1 SKILL.md → orchestrator.md
| Claude SKILL.md Section | Codex orchestrator.md Section |
|--------------------------|-------------------------------|
| Frontmatter (name, description, allowed-tools) | Frontmatter (name, description, agents, phases) |
| Architecture Overview | Architecture Overview (spawn/wait/close flow) |
| Execution Flow (Ref: markers) | Phase Execution (spawn_agent code blocks) |
| Data Flow (variables, files) | Data Flow (wait results, context passing) |
| TodoWrite Pattern | update_plan tracking (Codex convention) |
| Interactive Preference Collection | User interaction via orchestrator prompts |
| Error Handling | Timeout + Lifecycle error handling |
| Phase Reference Documents table | Agent Registry + Phase detail files |
### 3.2 Phase Files → Phase Detail or Inline
**Simple phases** (single agent, no branching): Inline in orchestrator.md
**Complex phases** (multi-agent, conditional): Separate `phases/0N-{name}.md`
### 3.3 Pattern-Level Conversion
| Claude Pattern | Codex Pattern |
|----------------|---------------|
| Orchestrator + Progressive Loading | Orchestrator + Agent Registry + on-demand phase loading |
| TodoWrite Attachment/Collapse | update_plan pending → in_progress → completed |
| Inter-Phase Data Flow (variables) | wait() result passing between phases |
| Conditional Phase Execution | if/else on wait() results |
| Direct Phase Handoff (Read phase doc) | Inline execution or separate phase files |
| AskUserQuestion | Direct user interaction in orchestrator |
## 4. Content Preservation Rules
When converting Claude skills:
1. **Agent prompts**: Preserve task descriptions, goals, scope, deliverables VERBATIM
2. **Bash commands**: Preserve all shell commands unchanged
3. **Code blocks**: Preserve implementation code unchanged
4. **Validation logic**: Preserve quality checks and success criteria
5. **Error handling**: Convert to Codex timeout/lifecycle patterns, preserve intent
**Transform** (structure changes):
- `Task()` calls → `spawn_agent()` + `wait()` + `close_agent()`
- `subagent_type` → MANDATORY FIRST STEPS role path
- Synchronous returns → Explicit `wait()` calls
- Auto-cleanup → Explicit `close_agent()` calls
**Preserve** (content unchanged):
- Task descriptions and goals
- Scope definitions
- Quality criteria
- File paths and patterns
- Shell commands
- Business logic
## 5. Anti-Patterns to Avoid
| Anti-Pattern | Why | Correct Pattern |
|-------------|-----|-----------------|
| Using close_agent for results | Returns are unreliable | Use wait() for results |
| Inline role content in message | Bloats message, wastes tokens | Pass role file path in MANDATORY FIRST STEPS |
| Early close_agent before potential follow-up | Cannot resume closed agent | Delay close until certain no more interaction |
| Sequential wait for parallel agents | Wasted time | Batch wait({ ids: [...] }) |
| No timeout_ms | Indefinite hang risk | Always specify timeout_ms |
| No timed_out handling | Silent failures | Always check result.timed_out |
| Claude Task() remaining in output | Runtime incompatibility | Convert all Task() to spawn_agent |
| Claude resume: in output | Runtime incompatibility | Convert to send_input() |
## 6. Conversion Checklist
Before delivering converted skill:
- [ ] All `Task()` calls converted to `spawn_agent()` + `wait()` + `close_agent()`
- [ ] All `subagent_type` mapped to MANDATORY FIRST STEPS role paths
- [ ] All `resume` converted to `send_input()`
- [ ] All `TaskOutput` polling converted to `wait()` with timeout
- [ ] No Claude-specific patterns remain (Task, TaskOutput, resume, subagent_type)
- [ ] Timeout handling added for all `wait()` calls
- [ ] Lifecycle balanced (spawn count ≤ close count)
- [ ] Structured output template enforced for all agents
- [ ] Agent prompts/goals/scope preserved verbatim
- [ ] Error handling converted to Codex patterns
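The first several checklist items can be verified mechanically. A minimal sketch that scans converted content for leftover Claude patterns (the helper name is illustrative; the pattern list mirrors Section 5):

```javascript
// Illustrative scan for Claude-specific patterns that must not survive conversion.
function findClaudeResidue(content) {
  const patterns = {
    "Task()": /\bTask\s*\(/,
    "TaskOutput()": /\bTaskOutput\s*\(/,
    "resume:": /\bresume\s*:/,
    "subagent_type": /\bsubagent_type\b/
  }
  return Object.entries(patterns)
    .filter(([, re]) => re.test(content))
    .map(([name]) => name)
}

const residue = findClaudeResidue("const id = spawn_agent({ message: prompt })")
```

An empty result means the content is clean; any returned names point at checklist items that still fail.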

# Quality Standards
Quality criteria and validation gates for generated Codex skills.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 3 | Reference during generation |
| Phase 4 | Apply during validation |
---
## 1. Quality Dimensions
### 1.1 Structural Completeness (30%)
| Check | Weight | Criteria |
|-------|--------|----------|
| Orchestrator exists | 5 | File present at expected path |
| Frontmatter valid | 3 | Contains name, description |
| Architecture diagram | 3 | ASCII flow showing spawn/wait/close |
| Agent Registry | 4 | Table with all agents, role paths, responsibilities |
| Phase Execution blocks | 5 | Code blocks for each phase with spawn/wait/close |
| Lifecycle Management | 5 | Timeout handling + cleanup protocol |
| Agent files complete | 5 | All new agent roles have complete role files |
**Scoring**: Each check passes (full weight) or fails (0). Total = sum / max.
### 1.2 Pattern Compliance (40%)
| Check | Weight | Criteria |
|-------|--------|----------|
| Lifecycle balanced | 6 | Every spawn_agent has matching close_agent |
| Role loading correct | 6 | MANDATORY FIRST STEPS pattern used (not inline content) |
| Wait for results | 5 | wait() used for results (not close_agent) |
| Batch wait for parallel | 5 | Parallel agents use wait({ ids: [...] }) |
| Timeout specified | 4 | All wait() calls have timeout_ms |
| Timeout handled | 4 | timed_out checked after every wait() |
| Structured output | 5 | Agents produce Summary/Findings/Changes/Tests/Questions |
| No Claude patterns | 5 | No Task(), TaskOutput(), resume: remaining |
**Scoring**: Each check passes (full weight) or fails (0). Total = sum / max.
### 1.3 Content Quality (30%)
| Check | Weight | Criteria |
|-------|--------|----------|
| Orchestrator substantive | 4 | Content > 500 chars, not boilerplate |
| Code blocks present | 3 | >= 4 code blocks with executable patterns |
| Error handling | 3 | Timeout + recovery + partial results handling |
| No placeholders | 4 | No `{{...}}` or `TODO` remaining in output |
| Agent roles substantive | 4 | Each agent role > 300 chars with actionable steps |
| Output format defined | 3 | Structured output template in each agent |
| Goals/scope clear | 4 | Every spawn_agent has Goal + Scope + Deliverables |
| Conversion faithful | 5 | Source content preserved (if converting) |
**Scoring**: Each check passes (full weight) or fails (0). Total = sum / max.
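The "sum / max" scoring used by all three dimension tables can be sketched as one helper (illustrative; the actual validation code may differ):

```javascript
// Illustrative weighted scorer: each check either earns its full weight or 0.
function dimensionScore(checks) {
  const max = checks.reduce((s, c) => s + c.weight, 0)
  const earned = checks.reduce((s, c) => s + (c.passed ? c.weight : 0), 0)
  return Math.round((earned / max) * 100)
}

const structural = dimensionScore([
  { name: "Orchestrator exists", weight: 5, passed: true },
  { name: "Frontmatter valid", weight: 3, passed: true },
  { name: "Architecture diagram", weight: 3, passed: false }
])
// earned 8 of 11 possible weight
```

The same function applies to the Pattern Compliance and Content Quality tables with their respective check lists.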
## 2. Quality Gates
| Verdict | Score | Action |
|---------|-------|--------|
| **PASS** | >= 80% | Deliver to target location |
| **REVIEW** | 60-79% | Report issues, user decides |
| **FAIL** | < 60% | Block delivery, list critical issues |
### 2.1 Critical Failures (Auto-FAIL)
These issues force FAIL regardless of overall score:
1. **No orchestrator file** — skill has no entry point
2. **Task() calls in output** — runtime incompatible with Codex
3. **No agent registry** — agents cannot be identified
4. **Missing close_agent** — resource leak risk
5. **Inline role content** — violates Codex pattern (message bloat)
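Section 3.1 below calls a `checkCriticalFailures` helper; a minimal sketch covering the five auto-FAIL conditions (the detection heuristics are illustrative):

```javascript
// Illustrative auto-FAIL detection over orchestrator content and the generated file map.
function checkCriticalFailures(content, files) {
  const criticals = []
  if (!files.orchestrator) criticals.push("No orchestrator file")
  if (/\bTask\s*\(/.test(content)) criticals.push("Task() calls in output")
  if (!content.includes("Agent Registry")) criticals.push("No agent registry")
  if (content.includes("spawn_agent") && !content.includes("close_agent"))
    criticals.push("Missing close_agent")
  if (content.includes("## ROLE DEFINITION")) criticals.push("Inline role content")
  return criticals
}

const fails = checkCriticalFailures("spawn_agent(...)", { orchestrator: null })
```

Any non-empty result forces a FAIL verdict regardless of the weighted score.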
### 2.2 Warnings (Non-blocking)
1. **Missing timeout handling** — degraded reliability
2. **No error handling section** — reduced robustness
3. **Placeholder text remaining** — needs manual completion
4. **Phase files missing** — acceptable for simple skills
## 3. Validation Process
### 3.1 Automated Checks
```javascript
function validateSkill(generatedFiles, codexSkillConfig) {
const checks = []
// Structural
checks.push(checkFileExists(generatedFiles.orchestrator))
checks.push(checkFrontmatter(generatedFiles.orchestrator))
checks.push(checkSection(generatedFiles.orchestrator, "Architecture"))
checks.push(checkSection(generatedFiles.orchestrator, "Agent Registry"))
// ...
// Pattern compliance
const content = Read(generatedFiles.orchestrator)
checks.push(checkBalancedLifecycle(content))
checks.push(checkRoleLoading(content))
checks.push(checkWaitPattern(content))
// ...
// Content quality
checks.push(checkNoPlaceholders(content))
checks.push(checkSubstantiveContent(content))
// ...
// Critical failures
const criticals = checkCriticalFailures(content, generatedFiles)
if (criticals.length > 0) return { verdict: "FAIL", criticals }
// Score
const score = calculateWeightedScore(checks)
const verdict = score >= 80 ? "PASS" : score >= 60 ? "REVIEW" : "FAIL"
return { score, verdict, checks, issues: checks.filter(c => !c.passed) }
}
```
### 3.2 Manual Review Points
For REVIEW verdict, highlight these for user attention:
1. Agent role completeness — are all capabilities covered?
2. Interaction model appropriateness — right pattern for use case?
3. Timeout values — appropriate for expected task duration?
4. Scope definitions — clear boundaries for each agent?
5. Output format — suitable for downstream consumers?
## 4. Scoring Formula
```
Overall = Structural × 0.30 + PatternCompliance × 0.40 + ContentQuality × 0.30
```
Pattern compliance is weighted highest because Codex runtime correctness is critical.
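The formula maps directly to code (weights as above; the function name is illustrative):

```javascript
// Illustrative overall score per the formula above.
function overallScore({ structural, pattern, quality }) {
  return structural * 0.30 + pattern * 0.40 + quality * 0.30
}

const overall = overallScore({ structural: 90, pattern: 85, quality: 80 })
// weighted: 27 + 34 + 24 = 85
```

Dimension scores are expected on a 0-100 scale, so the overall score lands on the same scale used by the quality gates.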
## 5. Quality Improvement Guidance
### Low Structural Score
- Add missing sections to orchestrator
- Create missing agent role files
- Add frontmatter to all files
### Low Pattern Score
- Add MANDATORY FIRST STEPS to all spawn_agent messages
- Replace inline role content with path references
- Add close_agent for every spawn_agent
- Add timeout_ms and timed_out handling to all wait calls
- Remove any remaining Claude patterns
### Low Content Score
- Expand agent role definitions with more specific steps
- Add concrete Goal/Scope/Deliverables to spawn messages
- Replace placeholders with actual content
- Add error handling for each phase

# Agent Role Template
Template for generating per-agent role definition files.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 0 | Read to understand agent role file structure |
| Phase 3 | Apply with agent-specific content |
---
## Template
```markdown
---
name: {{agent_name}}
description: |
{{description}}
color: {{color}}
skill: {{parent_skill_name}}
---
# {{agent_display_name}}
{{description_paragraph}}
## Core Capabilities
{{#each capabilities}}
{{@index}}. **{{this.name}}**: {{this.description}}
{{/each}}
## Execution Process
### Step 1: Context Loading
**MANDATORY**: Execute these steps FIRST before any other action.
1. Read this role definition file (already done if you're reading this)
2. Read: `.workflow/project-tech.json` — understand project technology stack
3. Read: `.workflow/project-guidelines.json` — understand project conventions
4. Parse the TASK ASSIGNMENT from the spawn message for:
- **Goal**: What to achieve
- **Scope**: What's allowed and forbidden
- **Context**: Relevant background information
- **Deliverables**: Expected output format
- **Quality bar**: Success criteria
### Step 2: {{primary_action_name}}
{{primary_action_detail}}
\`\`\`javascript
// {{primary_action_description}}
{{primary_action_code}}
\`\`\`
### Step 3: {{secondary_action_name}}
{{secondary_action_detail}}
\`\`\`javascript
// {{secondary_action_description}}
{{secondary_action_code}}
\`\`\`
### Step 4: Output Delivery
Produce structured output following this EXACT template:
\`\`\`text
Summary:
- One-sentence completion summary
Findings:
- Finding 1: [specific description with file:line references]
- Finding 2: [specific description]
Proposed changes:
- File: [path/to/file]
- Change: [specific modification description]
- Risk: [low/medium/high] - [impact description]
Tests:
- Test cases: [list of needed test cases]
- Commands: [test commands to verify]
Open questions:
1. [Question needing clarification, if any]
2. [Question needing clarification, if any]
\`\`\`
**Important**: If there are open questions that block progress, prepend output with:
\`\`\`
CLARIFICATION_NEEDED:
Q1: [question] | Options: [A, B, C] | Recommended: [A]
Q2: [question] | Options: [A, B] | Recommended: [B]
\`\`\`
## Key Reminders
**ALWAYS**:
- Read role definition file as FIRST action (Step 1)
- Follow structured output template EXACTLY
- Stay within the assigned Scope boundaries
- Include file:line references in Findings
- Report open questions via CLARIFICATION_NEEDED format
- Provide actionable, specific deliverables
**NEVER**:
- Modify files outside the assigned Scope
- Skip context loading (Step 1)
- Produce unstructured or free-form output
- Make assumptions about unclear requirements (ask instead)
- Exceed the defined Quality bar without explicit approval
- Ignore the Goal/Scope/Deliverables from TASK ASSIGNMENT
## Error Handling
| Scenario | Action |
|----------|--------|
| Cannot access required file | Report in Open questions, continue with available data |
| Task scope unclear | Output CLARIFICATION_NEEDED, provide best-effort findings |
| Unexpected error | Report error details in Summary, include partial results |
| Quality bar not achievable | Report gap in Summary, explain constraints |
```
---
## Template Variants by Responsibility Type
### Exploration Agent
**Step 2**: Codebase Discovery
```javascript
// Search for relevant code patterns
const files = Glob("src/**/*.{ts,js,tsx,jsx}")
const matches = Grep(targetPattern, files)
// Trace call chains, identify entry points
```
**Step 3**: Pattern Analysis
```javascript
// Analyze discovered patterns
// Cross-reference with project conventions
// Identify similar implementations
```
### Implementation Agent
**Step 2**: Code Implementation
```javascript
// Implement changes according to plan
// Follow existing code patterns
// Maintain backward compatibility
```
**Step 3**: Self-Validation
```javascript
// Run relevant tests
// Check for syntax/type errors
// Verify changes match acceptance criteria
```
### Analysis Agent
**Step 2**: Multi-Dimensional Analysis
```javascript
// Analyze from assigned perspective (security/perf/quality/etc.)
// Collect evidence with file:line references
// Classify findings by severity
```
**Step 3**: Recommendation Generation
```javascript
// Propose fixes for each finding
// Assess risk and effort
// Prioritize by impact
```
### Testing Agent
**Step 2**: Test Design
```javascript
// Identify test scenarios from requirements
// Design test cases with expected results
// Map to test frameworks
```
**Step 3**: Test Execution & Validation
```javascript
// Run tests
// Collect pass/fail results
// Iterate on failures
```
---
## Variable Reference
| Variable | Source | Description |
|----------|--------|-------------|
| `{{agent_name}}` | config.name | Agent identifier (lowercase, hyphenated) |
| `{{agent_display_name}}` | Derived from name | Human-readable title |
| `{{description}}` | config.description | Short description (1-3 lines) |
| `{{description_paragraph}}` | config.description | Full paragraph description |
| `{{color}}` | Auto-assigned | Terminal color for output |
| `{{parent_skill_name}}` | codexSkillConfig.name | Parent skill identifier |
| `{{capabilities}}` | Inferred from responsibility | Array of capability objects |
| `{{primary_action_name}}` | Derived from responsibility | Step 2 title |
| `{{primary_action_detail}}` | Generated or from source | Step 2 content |
| `{{secondary_action_name}}` | Derived from responsibility | Step 3 title |
| `{{secondary_action_detail}}` | Generated or from source | Step 3 content |
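As a rough illustration of how these variables reach the template, a minimal substitution sketch (a hypothetical helper, not the real engine; block helpers like `{{#each}}` are out of scope here):

```javascript
// Replace simple {{variable}} placeholders from a vars object.
// Unknown variables render as empty strings.
function renderTemplate(tpl, vars) {
  return tpl.replace(/\{\{(\w+)\}\}/g, (_, key) => vars[key] ?? "")
}

// renderTemplate("# {{agent_display_name}}", { agent_display_name: "Code Reviewer" })
// → "# Code Reviewer"
```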


@@ -1,414 +0,0 @@
# Command Pattern Template
Pre-built Codex command patterns for common agent interaction scenarios.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 0 | Read to understand available command patterns |
| Phase 2 | Select appropriate patterns for orchestrator |
| Phase 3 | Apply patterns to agent definitions |
---
## Pattern 1: Explore (Parallel Fan-out)
**Use When**: Multi-angle codebase exploration needed.
```javascript
// ==================== Explore Pattern ====================
// Step 1: Define exploration angles
const angles = ["architecture", "dependencies", "patterns", "testing"]
// Step 2: Create parallel exploration agents
const agents = angles.map(angle =>
spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
---
Goal: Execute ${angle} exploration for ${task_description}
Scope:
- Include: All source files relevant to ${angle}
- Exclude: node_modules, dist, build artifacts
Context:
- Task: ${task_description}
- Angle: ${angle}
Deliverables:
- Structured findings following output template
- File:line references for key discoveries
- Open questions for unclear areas
Quality bar:
- At least 3 relevant files identified
- Findings backed by concrete evidence
`
})
)
// Step 3: Batch wait
const results = wait({ ids: agents, timeout_ms: 600000 })
// Step 4: Aggregate
const findings = agents.map((id, i) => ({
angle: angles[i],
result: results.status[id].completed
}))
// Step 5: Cleanup
agents.forEach(id => close_agent({ id }))
```
## Pattern 2: Analyze (Multi-Perspective)
**Use When**: Code analysis from multiple dimensions needed.
```javascript
// ==================== Analyze Pattern ====================
const perspectives = [
{ name: "security", focus: "OWASP Top 10, injection, auth bypass" },
{ name: "performance", focus: "O(n²), memory leaks, blocking I/O" },
{ name: "maintainability", focus: "complexity, coupling, duplication" }
]
const agents = perspectives.map(p =>
spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
---
Goal: Analyze ${targetModule} from ${p.name} perspective
Focus: ${p.focus}
Scope:
- Include: ${targetPaths}
- Exclude: Test files, generated code
Deliverables:
- Severity-classified findings (Critical/High/Medium/Low)
- File:line references for each finding
- Remediation recommendations
Quality bar:
- Every finding must have evidence (code reference)
- Remediation must be actionable
`
})
)
const results = wait({ ids: agents, timeout_ms: 600000 })
// Merge findings by severity
const merged = {
critical: [], high: [], medium: [], low: []
}
agents.forEach((id, i) => {
const parsed = parseFindings(results.status[id].completed)
Object.keys(merged).forEach(sev => merged[sev].push(...(parsed[sev] || [])))
})
agents.forEach(id => close_agent({ id }))
```
## Pattern 3: Implement (Sequential Delegation)
**Use When**: Code implementation following a plan.
```javascript
// ==================== Implement Pattern ====================
const implementAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/code-developer.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
Goal: Implement ${featureDescription}
Scope:
- Include: ${targetPaths}
- Exclude: Unrelated modules
- Constraints: No breaking changes, follow existing patterns
Context:
- Plan: ${planContent}
- Dependencies: ${dependencies}
- Existing patterns: ${patterns}
Deliverables:
- Working implementation following plan
- Updated/new test files
- Summary of changes with file:line references
Quality bar:
- All existing tests pass
- New code follows project conventions
- No TypeScript errors
- Backward compatible
`
})
const result = wait({ ids: [implementAgent], timeout_ms: 900000 })
// Check for open questions (might need clarification)
if (result.status[implementAgent].completed.includes('CLARIFICATION_NEEDED')) {
// Handle clarification via send_input
const answers = getUserAnswers(result)
send_input({ id: implementAgent, message: `## ANSWERS\n${answers}\n\n## CONTINUE\nProceed with implementation.` })
const final = wait({ ids: [implementAgent], timeout_ms: 900000 })
}
close_agent({ id: implementAgent })
```
## Pattern 4: Validate (Test-Fix Cycle)
**Use When**: Running tests and fixing failures iteratively.
```javascript
// ==================== Validate Pattern ====================
const validateAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/test-fix-agent.md (MUST read first)
---
Goal: Validate ${component} — run tests, fix failures, iterate
Scope:
- Include: ${testPaths}
- Exclude: Unrelated test suites
Context:
- Recent changes: ${changedFiles}
- Test framework: ${testFramework}
Deliverables:
- All tests passing (or documented blocked tests)
- Fix summary with file:line references
- Coverage report
Quality bar:
- Pass rate >= 95%
- No new test regressions
- Max 5 fix iterations
`
})
let round = wait({ ids: [validateAgent], timeout_ms: 600000 })
// Check if more iterations needed; re-read the result each round
let iteration = 1
while (
  iteration < 5 &&
  round.status[validateAgent].completed.includes('TESTS_FAILING')
) {
  send_input({
    id: validateAgent,
    message: `## ITERATION ${iteration + 1}\nContinue fixing remaining failures. Focus on:\n${remainingFailures}`
  })
  round = wait({ ids: [validateAgent], timeout_ms: 300000 })
  iteration++
}
close_agent({ id: validateAgent })
```
## Pattern 5: Review (Multi-Dimensional)
**Use When**: Code review from multiple dimensions.
```javascript
// ==================== Review Pattern ====================
const dimensions = [
{ name: "correctness", agent: "cli-explore-agent" },
{ name: "security", agent: "cli-explore-agent" },
{ name: "performance", agent: "cli-explore-agent" },
{ name: "style", agent: "cli-explore-agent" }
]
const agents = dimensions.map(d =>
spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/${d.agent}.md (MUST read first)
---
Goal: Review ${targetCode} for ${d.name}
Scope: ${changedFiles}
Deliverables: Findings with severity, file:line, remediation
`
})
)
const results = wait({ ids: agents, timeout_ms: 600000 })
// Aggregate review findings
const review = {
approved: true,
findings: [],
blockers: []
}
agents.forEach((id, i) => {
const parsed = parseReview(results.status[id].completed)
review.findings.push(...parsed.findings)
if (parsed.blockers.length > 0) {
review.approved = false
review.blockers.push(...parsed.blockers)
}
})
agents.forEach(id => close_agent({ id }))
```
## Pattern 6: Deep Interact (Merged Explore + Plan)
**Use When**: Exploration and planning are tightly coupled.
```javascript
// ==================== Deep Interact Pattern ====================
const agent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. **Also read**: ~/.codex/agents/cli-lite-planning-agent.md (dual role)
---
### Phase A: Exploration
Goal: Explore codebase for ${task_description}
Output: Structured findings + CLARIFICATION_NEEDED questions (if any)
### Phase B: Planning (activated after clarification)
Goal: Generate implementation plan based on exploration + answers
Output: Structured plan following plan schema
Deliverables:
- Phase A: exploration findings (Summary/Findings/Open questions)
- Phase B: implementation plan (after receiving clarification answers)
`
})
// Phase A: Exploration
const exploration = wait({ ids: [agent], timeout_ms: 600000 })
if (exploration.status[agent].completed.includes('CLARIFICATION_NEEDED')) {
const answers = getUserAnswers(exploration)
// Phase B: Planning (same agent, preserved context)
send_input({
id: agent,
message: `
## CLARIFICATION ANSWERS
${answers}
## PROCEED TO PHASE B
Generate implementation plan based on your exploration findings and these answers.
`
})
const plan = wait({ ids: [agent], timeout_ms: 900000 })
}
close_agent({ id: agent })
```
## Pattern 7: Two-Phase (Clarify → Execute)
**Use When**: Task requires explicit clarification before execution.
```javascript
// ==================== Two-Phase Pattern ====================
// Phase 1: Clarification
const agent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/${agentType}.md (MUST read first)
---
### PHASE: CLARIFICATION ONLY
Goal: Understand ${task_description} and identify unclear points
Output ONLY:
1. Your understanding of the task (2-3 sentences)
2. CLARIFICATION_NEEDED questions (if any)
3. Recommended approach (1-2 sentences)
DO NOT execute any changes yet.
`
})
const clarification = wait({ ids: [agent], timeout_ms: 300000 })
// Collect user confirmation/answers
const userResponse = processUserInput(clarification)
// Phase 2: Execution
send_input({
id: agent,
message: `
## USER CONFIRMATION
${userResponse}
## PROCEED TO EXECUTION
Now execute the task with full implementation.
Output: Complete deliverable following structured output template.
`
})
const execution = wait({ ids: [agent], timeout_ms: 900000 })
close_agent({ id: agent })
```
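Several of the patterns above branch on the `CLARIFICATION_NEEDED` marker but only test for its presence. A hypothetical parser for the `Q1: … | Options: … | Recommended: …` line format could look like this (field names are assumptions based on the format shown in the agent role template):

```javascript
// Parse CLARIFICATION_NEEDED blocks into structured question objects.
function parseClarifications(output) {
  if (!output.includes("CLARIFICATION_NEEDED")) return []
  return output
    .split("\n")
    .filter(line => /^Q\d+:/.test(line.trim()))
    .map(line => {
      // Each question line: "Qn: <question> | Options: <opts> | Recommended: <rec>"
      const [q, opts, rec] = line.split("|").map(s => s.trim())
      return {
        question: q.replace(/^Q\d+:\s*/, ""),
        options: opts?.replace(/^Options:\s*/, "") ?? "",
        recommended: rec?.replace(/^Recommended:\s*/, "") ?? ""
      }
    })
}
```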
---
## Pattern Selection Guide
| Scenario | Recommended Pattern | Reason |
|----------|-------------------|--------|
| Explore codebase from N angles | Pattern 1: Explore | Parallel fan-out, independent angles |
| Analyze code quality | Pattern 2: Analyze | Multi-perspective, severity classification |
| Implement from plan | Pattern 3: Implement | Sequential, plan-driven |
| Run tests + fix | Pattern 4: Validate | Iterative send_input loop |
| Code review | Pattern 5: Review | Multi-dimensional, aggregated verdict |
| Explore then plan | Pattern 6: Deep Interact | Context preservation, merged phases |
| Complex/unclear task | Pattern 7: Two-Phase | Clarify first, reduce rework |
| Simple one-shot task | Standard (no pattern) | spawn → wait → close |
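As a rough illustration, the table can be folded into a lookup helper (the scenario keys are invented labels for this sketch, not part of any skill API):

```javascript
// Map a scenario label to the recommended pattern; anything unmatched
// falls back to the standard spawn → wait → close flow.
function recommendPattern(scenario) {
  const table = {
    "multi-angle-exploration": "explore",
    "quality-analysis": "analyze",
    "plan-implementation": "implement",
    "test-fix": "validate",
    "code-review": "review",
    "explore-then-plan": "deep-interact",
    "unclear-task": "two-phase"
  }
  return table[scenario] ?? "standard"
}
```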


@@ -1,306 +0,0 @@
# Orchestrator Template
Template for the generated Codex orchestrator document.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 0 | Read to understand orchestrator output structure |
| Phase 2 | Apply with skill-specific content |
---
## Template
```markdown
---
name: {{skill_name}}
description: |
{{description}}
agents: {{agent_count}}
phases: {{phase_count}}
---
# {{skill_display_name}}
{{one_paragraph_description}}
## Architecture Overview
\`\`\`
{{architecture_diagram}}
\`\`\`
## Agent Registry
| Agent | Role File | Responsibility | New/Existing |
|-------|-----------|----------------|--------------|
{{#each agents}}
| `{{this.name}}` | `{{this.role_file}}` | {{this.responsibility}} | {{this.status}} |
{{/each}}
## Phase Execution
{{#each phases}}
### Phase {{this.index}}: {{this.name}}
{{this.description}}
{{#if this.is_parallel}}
#### Parallel Fan-out
\`\`\`javascript
// Create parallel agents
const agentIds = [
{{#each this.agents}}
spawn_agent({
message: \`
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: {{this.role_file}} (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
Goal: {{this.goal}}
Scope:
- Include: {{this.scope_include}}
- Exclude: {{this.scope_exclude}}
Context:
{{this.context}}
Deliverables:
{{this.deliverables}}
Quality bar:
{{this.quality_bar}}
\`
}),
{{/each}}
]
// Batch wait
const results = wait({
ids: agentIds,
timeout_ms: {{this.timeout_ms}}
})
// Handle timeout
if (results.timed_out) {
const completed = agentIds.filter(id => results.status[id]?.completed)
const pending = agentIds.filter(id => !results.status[id]?.completed)
// Use completed results, log pending
}
// Aggregate results
const phaseResults = agentIds.map(id => results.status[id].completed)
// Cleanup
agentIds.forEach(id => close_agent({ id }))
\`\`\`
{{/if}}
{{#if this.is_standard}}
#### Standard Execution
\`\`\`javascript
const agentId = spawn_agent({
message: \`
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: {{this.agent.role_file}} (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
Goal: {{this.goal}}
Scope:
- Include: {{this.scope_include}}
- Exclude: {{this.scope_exclude}}
Context:
{{this.context}}
Deliverables:
{{this.deliverables}}
Quality bar:
{{this.quality_bar}}
\`
})
const result = wait({ ids: [agentId], timeout_ms: {{this.timeout_ms}} })
if (result.timed_out) {
// Timeout handling: continue wait or urge convergence
send_input({ id: agentId, message: "Please finalize and output current findings." })
const retry = wait({ ids: [agentId], timeout_ms: 60000 })
}
close_agent({ id: agentId })
\`\`\`
{{/if}}
{{#if this.is_deep_interaction}}
#### Deep Interaction (Multi-round)
\`\`\`javascript
const agent = spawn_agent({
message: \`
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: {{this.agent.role_file}} (MUST read first)
---
### Phase A: {{this.initial_goal}}
Output: Findings + Open Questions (CLARIFICATION_NEEDED format)
### Phase B: {{this.followup_goal}} (after clarification)
Output: Complete deliverable
\`
})
// Round 1: Initial exploration
const round1 = wait({ ids: [agent], timeout_ms: {{this.timeout_ms}} })
// Check for clarification needs
if (round1.status[agent].completed.includes('CLARIFICATION_NEEDED')) {
// Parse questions, collect user answers
const answers = collectUserAnswers(round1.status[agent].completed)
// Round 2: Continue with answers
send_input({
id: agent,
message: \`
## CLARIFICATION ANSWERS
\${answers}
## NEXT STEP
Proceed with Phase B.
\`
})
const round2 = wait({ ids: [agent], timeout_ms: {{this.followup_timeout_ms}} })
}
close_agent({ id: agent })
\`\`\`
{{/if}}
{{#if this.is_pipeline}}
#### Pipeline (Sequential Chain)
\`\`\`javascript
{{#each this.stages}}
// Stage {{this.index}}: {{this.name}}
const stage{{this.index}}Agent = spawn_agent({
message: \`
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: {{this.role_file}} (MUST read first)
---
Goal: {{this.goal}}
{{#if this.previous_output}}
## PREVIOUS STAGE OUTPUT
\${stage{{this.previous_index}}Result}
{{/if}}
Deliverables: {{this.deliverables}}
\`
})
const stage{{this.index}}Wait = wait({ ids: [stage{{this.index}}Agent], timeout_ms: {{this.timeout_ms}} })
const stage{{this.index}}Result = stage{{this.index}}Wait.status[stage{{this.index}}Agent].completed
close_agent({ id: stage{{this.index}}Agent })
{{/each}}
\`\`\`
{{/if}}
{{/each}}
## Result Aggregation
\`\`\`javascript
// Merge results from all phases
const finalResult = {
{{#each phases}}
phase{{this.index}}: phase{{this.index}}Results,
{{/each}}
}
// Output summary
console.log(\`
## Skill Execution Complete
{{#each phases}}
### Phase {{this.index}}: {{this.name}}
Status: \${phase{{this.index}}Results.status}
{{/each}}
\`)
\`\`\`
## Lifecycle Management
### Timeout Handling
| Timeout Scenario | Action |
|-----------------|--------|
| Single agent timeout | send_input to urge convergence, retry wait |
| Parallel partial timeout | Use completed results if >= 70%, close pending |
| All agents timeout | Log error, abort with partial state |
### Cleanup Protocol
\`\`\`javascript
// Track all agents created during execution
const allAgentIds = []
// ... (agents added during phase execution) ...
// Final cleanup (end of orchestrator or on error)
allAgentIds.forEach(id => {
try { close_agent({ id }) } catch { /* already closed */ }
})
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Agent produces invalid output | Retry with clarified instructions via send_input |
| Agent timeout | Urge convergence, retry, or abort |
| Missing role file | Log error, skip agent or use fallback |
| Partial results | Proceed with available data, log gaps |
```
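The "parallel partial timeout" rule in the template's Lifecycle Management table can be sketched as a small helper. The `results.status[id].completed` shape follows the `wait` calls used throughout this document; the 70% threshold comes from the timeout table (treat the helper itself as an assumption, not skill API):

```javascript
// Split agents into completed vs. pending after a wait() deadline, and
// decide whether the completed share clears the 70% usability bar.
function partitionResults(agentIds, results, threshold = 0.7) {
  const completed = agentIds.filter(id => results.status[id]?.completed)
  const pending = agentIds.filter(id => !results.status[id]?.completed)
  return {
    completed,
    pending,
    usable: completed.length / agentIds.length >= threshold
  }
}
```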
---
## Variable Reference
| Variable | Source | Description |
|----------|--------|-------------|
| `{{skill_name}}` | codexSkillConfig.name | Skill identifier |
| `{{skill_display_name}}` | Derived from name | Human-readable title |
| `{{description}}` | codexSkillConfig.description | Skill description |
| `{{agent_count}}` | codexSkillConfig.agents.length | Number of agents |
| `{{phase_count}}` | codexSkillConfig.phases.length | Number of phases |
| `{{architecture_diagram}}` | Generated from phase/agent topology | ASCII flow diagram |
| `{{agents}}` | codexSkillConfig.agents | Array of agent configs |
| `{{phases}}` | codexSkillConfig.phases | Array of phase configs |
| `{{phases[].is_parallel}}` | phase.interaction_model === "parallel_fanout" | Boolean |
| `{{phases[].is_standard}}` | phase.interaction_model === "standard" | Boolean |
| `{{phases[].is_deep_interaction}}` | phase.interaction_model === "deep_interaction" | Boolean |
| `{{phases[].is_pipeline}}` | phase.interaction_model === "pipeline" | Boolean |
| `{{phases[].timeout_ms}}` | Phase-specific timeout | Default: 300000 |


@@ -1,132 +0,0 @@
---
name: copyright-docs
description: Generate software copyright design specification documents compliant with China Copyright Protection Center (CPCC) standards. Creates complete design documents with Mermaid diagrams based on source code analysis. Use for software copyright registration, generating design specification, creating CPCC-compliant documents, or documenting software for intellectual property protection. Triggers on "软件著作权", "设计说明书", "版权登记", "CPCC", "软著申请".
allowed-tools: Task, AskUserQuestion, Read, Bash, Glob, Grep, Write
---
# Software Copyright Documentation Skill
Generate CPCC-compliant software design specification documents (软件设计说明书) through multi-phase code analysis.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Context-Optimized Architecture │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Metadata → project-metadata.json │
│ ↓ │
│ Phase 2: 6 Parallel → sections/section-N.md (write MD directly) │
│ Agents ↓ return brief JSON │
│ ↓ │
│ Phase 2.5: Consolidation → cross-module-summary.md │
│ Agent ↓ returns issue list │
│ ↓ │
│ Phase 4: Assembly → merged MD + cross-module summary │
│ ↓ │
│ Phase 5: Refinement → final document │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Key Design Principles
1. **Agents emit MD directly**: avoids the context overhead of a JSON → MD conversion step
2. **Brief returns**: agents return only a path + summary, never full section content
3. **Consolidation agent**: a dedicated agent handles cross-module issue detection
4. **Merge by reference**: Phase 4 reads and merges files from disk instead of passing content through context
## Execution Flow
```
┌─────────────────────────────────────────────────────────────────┐
│ Phase 1: Metadata Collection │
│ → Read: phases/01-metadata-collection.md │
│ → Collect: software name, version, category, scope │
│ → Output: project-metadata.json │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2: Deep Code Analysis (6 Parallel Agents) │
│ → Read: phases/02-deep-analysis.md │
│ → Reference: specs/cpcc-requirements.md │
│ → Each Agent: analyze code → write sections/section-N.md directly │
│ → Return: {"status", "output_file", "summary", "cross_notes"} │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2.5: Consolidation (New!) │
│ → Read: phases/02.5-consolidation.md │
│ → Input: brief agent returns + cross_module_notes │
│ → Analyze: consistency / completeness / cross-references / quality │
│ → Output: cross-module-summary.md │
│ → Return: {"issues": {errors, warnings, info}, "stats"} │
├─────────────────────────────────────────────────────────────────┤
│ Phase 4: Document Assembly │
│ → Read: phases/04-document-assembly.md │
│ → Check: if errors exist, prompt the user to resolve them │
│ → Merge: Section 1 + sections/*.md + cross-module appendix │
│ → Output: {软件名称}-软件设计说明书.md │
├─────────────────────────────────────────────────────────────────┤
│ Phase 5: Compliance Review & Refinement │
│ → Read: phases/05-compliance-refinement.md │
│ → Reference: specs/cpcc-requirements.md │
│ → Loop: find issues → ask → fix → re-check │
└─────────────────────────────────────────────────────────────────┘
```
## Document Sections (7 Required)
| Section | Title | Diagram | Agent |
|---------|-------|---------|-------|
| 1 | 软件概述 | - | Generated in Phase 4 |
| 2 | 系统架构图 | graph TD | architecture |
| 3 | 功能模块设计 | flowchart TD | functions |
| 4 | 核心算法与流程 | flowchart TD | algorithms |
| 5 | 数据结构设计 | classDiagram | data_structures |
| 6 | 接口设计 | sequenceDiagram | interfaces |
| 7 | 异常处理设计 | flowchart TD | exceptions |
## Directory Setup
```javascript
// Generate a timestamped directory name
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const dir = `.workflow/.scratchpad/copyright-${timestamp}`;
// Windows (cmd)
Bash(`mkdir "${dir}\\sections"`);
Bash(`mkdir "${dir}\\iterations"`);
// Unix/macOS
// Bash(`mkdir -p "${dir}/sections" "${dir}/iterations"`);
```
## Output Structure
```
.workflow/.scratchpad/copyright-{timestamp}/
├── project-metadata.json # Phase 1
├── sections/ # Phase 2 (agents write directly)
│ ├── section-2-architecture.md
│ ├── section-3-functions.md
│ ├── section-4-algorithms.md
│ ├── section-5-data-structures.md
│ ├── section-6-interfaces.md
│ └── section-7-exceptions.md
├── cross-module-summary.md # Phase 2.5
├── iterations/ # Phase 5
│ ├── v1.md
│ └── v2.md
└── {软件名称}-软件设计说明书.md # Final Output
```
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/01-metadata-collection.md](phases/01-metadata-collection.md) | Software info collection |
| [phases/02-deep-analysis.md](phases/02-deep-analysis.md) | 6-agent parallel analysis |
| [phases/02.5-consolidation.md](phases/02.5-consolidation.md) | Cross-module consolidation |
| [phases/04-document-assembly.md](phases/04-document-assembly.md) | Document merge & assembly |
| [phases/05-compliance-refinement.md](phases/05-compliance-refinement.md) | Iterative refinement loop |
| [specs/cpcc-requirements.md](specs/cpcc-requirements.md) | CPCC compliance checklist |
| [templates/agent-base.md](templates/agent-base.md) | Agent prompt templates |
| [../_shared/mermaid-utils.md](../_shared/mermaid-utils.md) | Shared Mermaid utilities |


@@ -1,78 +0,0 @@
# Phase 1: Metadata Collection
Collect software metadata for document header and context.
## Execution
### Step 1: Software Name & Version
```javascript
AskUserQuestion({
questions: [{
question: "请输入软件名称(将显示在文档页眉):",
header: "软件名称",
multiSelect: false,
options: [
{label: "自动检测", description: "从 package.json 或项目配置读取"},
{label: "手动输入", description: "输入自定义名称"}
]
}]
})
```
### Step 2: Software Category
```javascript
AskUserQuestion({
questions: [{
question: "软件属于哪种类型?",
header: "软件类型",
multiSelect: false,
options: [
{label: "命令行工具 (CLI)", description: "重点描述命令、参数"},
{label: "后端服务/API", description: "重点描述端点、协议"},
{label: "SDK/库", description: "重点描述接口、集成"},
{label: "数据处理系统", description: "重点描述数据流、转换"},
{label: "自动化脚本", description: "重点描述工作流、触发器"}
]
}]
})
```
### Step 3: Scope Definition
```javascript
AskUserQuestion({
questions: [{
question: "分析范围是什么?",
header: "分析范围",
multiSelect: false,
options: [
{label: "整个项目", description: "分析全部源代码"},
{label: "指定目录", description: "仅分析 src/ 或其他目录"},
{label: "自定义路径", description: "手动指定路径"}
]
}]
})
```
## Output
Save metadata to `project-metadata.json`:
```json
{
"software_name": "智能数据分析系统",
"version": "V1.0.0",
"category": "后端服务/API",
"scope_path": "src/",
"tech_stack": {
"language": "TypeScript",
"runtime": "Node.js 18+",
"framework": "Express.js",
"dependencies": ["mongoose", "redis", "bull"]
},
"entry_points": ["src/index.ts", "src/cli.ts"],
"main_modules": ["auth", "data", "api", "worker"]
}
```


@@ -1,150 +0,0 @@
# Phase 1.5: Project Exploration
Based on the collected metadata, launch parallel exploration agents to gather code information.
## Execution
### Step 1: Intelligent Angle Selection
```javascript
// Select exploration angles based on the software category
const ANGLE_PRESETS = {
'CLI': ['architecture', 'commands', 'algorithms', 'exceptions'],
'API': ['architecture', 'endpoints', 'data-structures', 'interfaces'],
'SDK': ['architecture', 'interfaces', 'data-structures', 'algorithms'],
'DataProcessing': ['architecture', 'algorithms', 'data-structures', 'dataflow'],
'Automation': ['architecture', 'algorithms', 'exceptions', 'dataflow']
};
// Map metadata.category to a preset key
function getCategoryKey(category) {
if (category.includes('CLI') || category.includes('命令行')) return 'CLI';
if (category.includes('API') || category.includes('后端')) return 'API';
if (category.includes('SDK') || category.includes('库')) return 'SDK';
if (category.includes('数据处理')) return 'DataProcessing';
if (category.includes('自动化')) return 'Automation';
return 'API'; // default
}
const categoryKey = getCategoryKey(metadata.category);
const selectedAngles = ANGLE_PRESETS[categoryKey];
console.log(`
## Exploration Plan
Software: ${metadata.software_name}
Category: ${metadata.category} → ${categoryKey}
Selected Angles: ${selectedAngles.join(', ')}
Launching ${selectedAngles.length} parallel explorations...
`);
```
### Step 2: Launch Parallel Agents (Direct Output)
**⚠️ CRITICAL**: Agents write output files directly.
```javascript
const explorationTasks = selectedAngles.map((angle, index) =>
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Explore: ${angle}`,
prompt: `
## Exploration Objective
为 CPCC 软著申请文档执行 **${angle}** 探索。
## Assigned Context
- **Exploration Angle**: ${angle}
- **Software Name**: ${metadata.software_name}
- **Scope Path**: ${metadata.scope_path}
- **Category**: ${metadata.category}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json
## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{relevant_keyword}" --type ts (locate relevant files)
3. Analyze from ${angle} perspective
## Exploration Strategy (${angle} focus)
**Step 1: Structural Scan**
- 识别与 ${angle} 相关的模块和文件
- 分析导入/导出关系
**Step 2: Pattern Recognition**
- ${angle} 相关的设计模式
- 代码组织方式
**Step 3: Write Output**
- 输出 JSON 到指定路径
## Expected Output Schema
**File**: ${sessionFolder}/exploration-${angle}.json
\`\`\`json
{
"angle": "${angle}",
"findings": {
"structure": [
{ "component": "...", "type": "module|layer|service", "path": "...", "description": "..." }
],
"patterns": [
{ "name": "...", "usage": "...", "files": ["path1", "path2"] }
],
"key_files": [
{ "path": "src/file.ts", "relevance": 0.85, "rationale": "Core ${angle} logic" }
]
},
"insights": [
{ "observation": "...", "cpcc_section": "2|3|4|5|6|7", "recommendation": "..." }
],
"_metadata": {
"exploration_angle": "${angle}",
"exploration_index": ${index + 1},
"software_name": "${metadata.software_name}",
"timestamp": "ISO8601"
}
}
\`\`\`
## Success Criteria
- [ ] get_modules_by_depth 执行完成
- [ ] 至少识别 3 个相关文件
- [ ] patterns 包含具体代码示例
- [ ] insights 关联到 CPCC 章节 (2-7)
- [ ] JSON 输出到指定路径
- [ ] Return: 2-3 句话总结 ${angle} 发现
`
})
);
// Execute all exploration tasks in parallel
```
## Output
Session folder structure after exploration:
```
${sessionFolder}/
├── exploration-architecture.json
├── exploration-{angle2}.json
├── exploration-{angle3}.json
└── exploration-{angle4}.json
```
## Downstream Usage (Phase 2 Analysis Input)
Phase 2 agents read exploration files as context:
```javascript
// Discover exploration files by known angle pattern
const explorationData = {};
selectedAngles.forEach(angle => {
const filePath = `${sessionFolder}/exploration-${angle}.json`;
explorationData[angle] = JSON.parse(Read(filePath));
});
```


@@ -1,664 +0,0 @@
# Phase 2: Deep Code Analysis
6 个并行 Agent,各自直接写入 MD 章节文件。
> **模板参考**: [../templates/agent-base.md](../templates/agent-base.md)
> **规范参考**: [../specs/cpcc-requirements.md](../specs/cpcc-requirements.md)
## Exploration → Agent 自动分配
根据 Phase 1.5 生成的 exploration 文件名自动分配对应的 analysis agent。
### 映射规则
```javascript
// Exploration 角度 → Agent 映射(基于文件名识别,不读取内容)
const EXPLORATION_TO_AGENT = {
'architecture': 'architecture',
'commands': 'functions', // CLI 命令 → 功能模块
'endpoints': 'interfaces', // API 端点 → 接口设计
'algorithms': 'algorithms',
'data-structures': 'data_structures',
'dataflow': 'data_structures', // 数据流 → 数据结构
'interfaces': 'interfaces',
'exceptions': 'exceptions'
};
// 从文件名提取角度
function extractAngle(filename) {
// exploration-architecture.json → architecture
const match = filename.match(/exploration-(.+)\.json$/);
return match ? match[1] : null;
}
// 分配 agent
function assignAgent(explorationFile) {
const angle = extractAngle(path.basename(explorationFile));
return EXPLORATION_TO_AGENT[angle] || null;
}
// Agent 配置(用于 buildAgentPrompt
const AGENT_CONFIGS = {
architecture: {
role: '系统架构师,专注于分层设计和模块依赖',
section: '2',
output: 'section-2-architecture.md',
focus: '分层结构、模块依赖、数据流向'
},
functions: {
role: '功能分析师,专注于功能点识别和交互',
section: '3',
output: 'section-3-functions.md',
focus: '功能点枚举、模块分组、入口文件、功能交互'
},
algorithms: {
role: '算法工程师,专注于核心逻辑和复杂度分析',
section: '4',
output: 'section-4-algorithms.md',
focus: '核心算法、流程步骤、复杂度、输入输出'
},
data_structures: {
role: '数据建模师,专注于实体关系和类型定义',
section: '5',
output: 'section-5-data-structures.md',
focus: '实体定义、属性类型、关系映射、枚举'
},
interfaces: {
role: 'API设计师专注于接口契约和协议',
section: '6',
output: 'section-6-interfaces.md',
focus: 'API端点、参数校验、响应格式、时序'
},
exceptions: {
role: '可靠性工程师,专注于异常处理和恢复策略',
section: '7',
output: 'section-7-exceptions.md',
focus: '异常类型、错误码、处理模式、恢复策略'
}
};
```
### 自动发现与分配流程
```javascript
// 1. 发现所有 exploration 文件(仅看文件名)
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
.split('\n')
.filter(f => f.trim());
// 2. 按文件名自动分配 agent
const agentAssignments = explorationFiles.map(file => {
const angle = extractAngle(path.basename(file));
const agentName = EXPLORATION_TO_AGENT[angle];
return {
exploration_file: file,
angle: angle,
agent: agentName,
output_file: AGENT_CONFIGS[agentName]?.output
};
}).filter(a => a.agent);
// 3. 补充未被 exploration 覆盖的必需 agent分配相关 exploration
const coveredAgents = new Set(agentAssignments.map(a => a.agent));
const requiredAgents = ['architecture', 'functions', 'algorithms', 'data_structures', 'interfaces', 'exceptions'];
const missingAgents = requiredAgents.filter(a => !coveredAgents.has(a));
// 相关性映射:为缺失 agent 分配最相关的 exploration
const RELATED_EXPLORATIONS = {
architecture: ['architecture', 'dataflow', 'interfaces'],
functions: ['commands', 'endpoints', 'architecture'],
algorithms: ['algorithms', 'dataflow', 'architecture'],
data_structures: ['data-structures', 'dataflow', 'architecture'],
interfaces: ['interfaces', 'endpoints', 'architecture'],
exceptions: ['exceptions', 'algorithms', 'architecture']
};
function findRelatedExploration(agent, availableFiles) {
const preferences = RELATED_EXPLORATIONS[agent] || ['architecture'];
for (const pref of preferences) {
const match = availableFiles.find(f => f.includes(`exploration-${pref}.json`));
if (match) return { file: match, angle: pref, isRelated: true };
}
// 最后兜底:任意 exploration 都比没有强
return availableFiles.length > 0
? { file: availableFiles[0], angle: extractAngle(path.basename(availableFiles[0])), isRelated: true }
: { file: null, angle: null, isRelated: false };
}
missingAgents.forEach(agent => {
const related = findRelatedExploration(agent, explorationFiles);
agentAssignments.push({
exploration_file: related.file,
angle: related.angle,
agent: agent,
output_file: AGENT_CONFIGS[agent].output,
is_related: related.isRelated // 标记为相关而非直接匹配
});
});
console.log(`
## Agent Auto-Assignment
Found ${explorationFiles.length} exploration files:
${agentAssignments.map(a => {
if (!a.exploration_file) return `- ${a.agent} agent (no exploration)`;
if (a.is_related) return `- ${a.agent} agent ← ${a.angle} (related)`;
return `- ${a.agent} agent ← ${a.angle} (direct)`;
}).join('\n')}
`);
```
---
## Agent 执行前置条件
**每个 Agent 接收 exploration 文件路径,自行读取内容**
```javascript
// Agent prompt 中包含文件路径
// Agent 启动后的操作顺序:
// 1. Read exploration 文件(如有)
// 2. Read CPCC 规范文件
// 3. 执行分析任务
```
规范文件路径(相对于 skill 根目录):
- `specs/cpcc-requirements.md` - CPCC 软著申请规范要求
---
## Agent 配置
| Agent | 输出文件 | 章节 |
|-------|----------|------|
| architecture | section-2-architecture.md | 系统架构图 |
| functions | section-3-functions.md | 功能模块设计 |
| algorithms | section-4-algorithms.md | 核心算法与流程 |
| data_structures | section-5-data-structures.md | 数据结构设计 |
| interfaces | section-6-interfaces.md | 接口设计 |
| exceptions | section-7-exceptions.md | 异常处理设计 |
## CPCC 规范要点 (所有 Agent 共用)
```
[CPCC_SPEC]
1. 内容基于代码分析,无臆测或未来计划
2. 图表编号格式: 图N-M (如图2-1, 图3-1)
3. 每个子章节内容不少于100字
4. Mermaid 语法必须正确可渲染
5. 包含具体文件路径引用
6. 中文输出,技术术语可用英文
```
## 执行流程
```javascript
// 1. 发现 exploration 文件并自动分配 agent
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
.split('\n')
.filter(f => f.trim());
const agentAssignments = explorationFiles.map(file => {
const angle = extractAngle(path.basename(file));
const agentName = EXPLORATION_TO_AGENT[angle];
return { exploration_file: file, angle, agent: agentName };
}).filter(a => a.agent);
// 补充必需 agent
const coveredAgents = new Set(agentAssignments.map(a => a.agent));
const requiredAgents = ['architecture', 'functions', 'algorithms', 'data_structures', 'interfaces', 'exceptions'];
requiredAgents.filter(a => !coveredAgents.has(a)).forEach(agent => {
agentAssignments.push({ exploration_file: null, angle: null, agent });
});
// 2. 准备目录
Bash(`mkdir -p ${outputDir}/sections`);
// 3. 并行启动所有 Agent传递 exploration 文件路径)
const results = await Promise.all(
agentAssignments.map(assignment =>
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Analyze: ${assignment.agent}`,
prompt: buildAgentPrompt(assignment, metadata, outputDir)
})
)
);
// 4. 收集返回信息
const summaries = results.map(r => JSON.parse(r));
// 5. 传递给 Phase 2.5
return { summaries, cross_notes: summaries.flatMap(s => s.cross_module_notes) };
```
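第 4 步的 `JSON.parse(r)` 假定每个 Agent 严格返回 JSON;实际执行时返回值可能夹带多余文本。以下是一个假设性的容错封装(`parseAgentReturn` 非原流程定义,仅作思路示意):

```javascript
// 假设性辅助函数:容错解析 Agent 返回,解析失败时降级为占位摘要,
// 避免单个 Agent 的输出格式问题阻塞 Phase 2.5 汇总。
function parseAgentReturn(raw, agentName) {
  try {
    const parsed = JSON.parse(raw);
    // 保证 cross_module_notes 字段存在,便于后续 flatMap
    return { cross_module_notes: [], ...parsed };
  } catch (e) {
    return {
      status: 'parse_error',
      agent: agentName,
      summary: String(raw).slice(0, 80),
      cross_module_notes: []
    };
  }
}

parseAgentReturn('{"status":"completed","summary":"ok"}', 'architecture').status; // → 'completed'
parseAgentReturn('抱歉,输出如下…', 'functions').status;                           // → 'parse_error'
```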
### Agent Prompt 构建
```javascript
function buildAgentPrompt(assignment, metadata, outputDir) {
const config = AGENT_CONFIGS[assignment.agent];
let contextSection = '';
if (assignment.exploration_file) {
const matchType = assignment.is_related ? '相关' : '直接匹配';
contextSection = `[CONTEXT]
**Exploration 文件**: ${assignment.exploration_file}
**匹配类型**: ${matchType}
首先读取此文件获取 ${assignment.angle} 探索结果作为分析上下文。
${assignment.is_related ? `注意:这是相关探索结果(非直接匹配),请提取与 ${config.focus} 相关的信息。` : ''}
`;
}
return `
${contextSection}
[SPEC]
读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
[ROLE] ${config.role}
[TASK]
分析 ${metadata.scope_path},生成 Section ${config.section}
输出: ${outputDir}/sections/${config.output}
[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图${config.section}-1, 图${config.section}-2...
- 每个子章节 ≥100字
- 包含文件路径引用
[FOCUS]
${config.focus}
[RETURN JSON]
{"status":"completed","output_file":"${config.output}","summary":"<50字>","cross_module_notes":[],"stats":{}}
`;
}
```
---
## Agent 提示词
### Architecture
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。
[ROLE] 系统架构师,专注于分层设计和模块依赖。
[TASK]
分析 ${meta.scope_path},生成 Section 2: 系统架构图。
输出: ${outDir}/sections/section-2-architecture.md
[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图2-1, 图2-2...
- 每个子章节 ≥100字
- 包含文件路径引用
[TEMPLATE]
## 2. 系统架构图
本章节展示${meta.software_name}的系统架构设计。
\`\`\`mermaid
graph TD
subgraph Layer1["层名"]
Comp1[组件1]
end
Comp1 --> Comp2
\`\`\`
**图2-1 系统架构图**
### 2.1 分层说明
| 层级 | 组件 | 职责 |
|------|------|------|
### 2.2 模块依赖
| 模块 | 依赖 | 说明 |
|------|------|------|
[FOCUS]
1. 分层: 识别代码层次 (Controller/Service/Repository 或其他)
2. 模块: 核心模块及职责边界
3. 依赖: 模块间依赖方向
4. 数据流: 请求/数据的流动路径
[RETURN JSON]
{"status":"completed","output_file":"section-2-architecture.md","summary":"<50字摘要>","cross_module_notes":["跨模块发现"],"stats":{"diagrams":1,"subsections":2}}
`
})
```
### Functions
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。
[ROLE] 功能分析师,专注于功能点识别和交互。
[TASK]
分析 ${meta.scope_path},生成 Section 3: 功能模块设计。
输出: ${outDir}/sections/section-3-functions.md
[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图3-1, 图3-2...
- 每个子章节 ≥100字
- 包含文件路径引用
[TEMPLATE]
## 3. 功能模块设计
本章节展示${meta.software_name}的功能模块结构。
\`\`\`mermaid
flowchart TD
ROOT["${meta.software_name}"]
subgraph Group1["模块组1"]
F1["功能1"]
end
ROOT --> Group1
\`\`\`
**图3-1 功能模块结构图**
### 3.1 功能清单
| ID | 功能名称 | 模块 | 入口文件 | 说明 |
|----|----------|------|----------|------|
### 3.2 功能交互
| 调用方 | 被调用方 | 触发条件 |
|--------|----------|----------|
[FOCUS]
1. 功能点: 枚举所有用户可见功能
2. 模块分组: 按业务域分组
3. 入口: 每个功能的代码入口 \`src/path/file.ts\`
4. 交互: 功能间的调用关系
[RETURN JSON]
{"status":"completed","output_file":"section-3-functions.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
`
})
```
### Algorithms
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。
[ROLE] 算法工程师,专注于核心逻辑和复杂度分析。
[TASK]
分析 ${meta.scope_path},生成 Section 4: 核心算法与流程。
输出: ${outDir}/sections/section-4-algorithms.md
[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图4-1, 图4-2... (每个算法一个流程图)
- 每个算法说明 ≥100字
- 包含文件路径和行号引用
[TEMPLATE]
## 4. 核心算法与流程
本章节展示${meta.software_name}的核心算法设计。
### 4.1 {算法名称}
**说明**: {描述≥100字}
**位置**: \`src/path/file.ts:line\`
**输入**: param1 (type) - 说明
**输出**: result (type) - 说明
\`\`\`mermaid
flowchart TD
Start([开始]) --> Input[/输入/]
Input --> Check{判断}
Check -->|是| P1[步骤1]
Check -->|否| P2[步骤2]
P1 --> End([结束])
P2 --> End
\`\`\`
**图4-1 {算法名称}流程图**
### 4.N 复杂度分析
| 算法 | 时间 | 空间 | 文件 |
|------|------|------|------|
[FOCUS]
1. 核心算法: 业务逻辑的关键算法 (>10行或含分支循环)
2. 流程步骤: 分支/循环/条件逻辑
3. 复杂度: 时间/空间复杂度估算
4. 输入输出: 参数类型和返回值
[RETURN JSON]
{"status":"completed","output_file":"section-4-algorithms.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
`
})
```
### Data Structures
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。
[ROLE] 数据建模师,专注于实体关系和类型定义。
[TASK]
分析 ${meta.scope_path},生成 Section 5: 数据结构设计。
输出: ${outDir}/sections/section-5-data-structures.md
[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图5-1 (数据结构类图)
- 每个子章节 ≥100字
- 包含文件路径引用
[TEMPLATE]
## 5. 数据结构设计
本章节展示${meta.software_name}的核心数据结构。
\`\`\`mermaid
classDiagram
class Entity1 {
+type field1
+method1()
}
Entity1 "1" --> "*" Entity2 : 关系
\`\`\`
**图5-1 数据结构类图**
### 5.1 实体说明
| 实体 | 类型 | 文件 | 说明 |
|------|------|------|------|
### 5.2 关系说明
| 源 | 目标 | 类型 | 基数 |
|----|------|------|------|
[FOCUS]
1. 实体: class/interface/type 定义
2. 属性: 字段类型和可见性 (+public/-private/#protected)
3. 关系: 继承(--|>)/组合(*--)/关联(-->)
4. 枚举: enum 类型及其值
[RETURN JSON]
{"status":"completed","output_file":"section-5-data-structures.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
`
})
```
### Interfaces
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。
[ROLE] API设计师专注于接口契约和协议。
[TASK]
分析 ${meta.scope_path},生成 Section 6: 接口设计。
输出: ${outDir}/sections/section-6-interfaces.md
[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图6-1, 图6-2... (每个核心接口一个时序图)
- 每个接口详情 ≥100字
- 包含文件路径引用
[TEMPLATE]
## 6. 接口设计
本章节展示${meta.software_name}的接口设计。
\`\`\`mermaid
sequenceDiagram
participant C as Client
participant A as API
participant S as Service
C->>A: POST /api/xxx
A->>S: method()
S-->>A: result
A-->>C: 200 OK
\`\`\`
**图6-1 {接口名}时序图**
### 6.1 接口清单
| 接口 | 方法 | 路径 | 说明 |
|------|------|------|------|
### 6.2 接口详情
#### METHOD /path
**请求**:
| 参数 | 类型 | 必填 | 说明 |
|------|------|------|------|
**响应**:
| 字段 | 类型 | 说明 |
|------|------|------|
[FOCUS]
1. API端点: 路径/方法/说明
2. 参数: 请求参数类型和校验规则
3. 响应: 响应格式、状态码、错误码
4. 时序: 典型调用流程 (选2-3个核心接口)
[RETURN JSON]
{"status":"completed","output_file":"section-6-interfaces.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
`
})
```
### Exceptions
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。
[ROLE] 可靠性工程师,专注于异常处理和恢复策略。
[TASK]
分析 ${meta.scope_path},生成 Section 7: 异常处理设计。
输出: ${outDir}/sections/section-7-exceptions.md
[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图7-1 (异常处理流程图)
- 每个子章节 ≥100字
- 包含文件路径引用
[TEMPLATE]
## 7. 异常处理设计
本章节展示${meta.software_name}的异常处理机制。
\`\`\`mermaid
flowchart TD
Req[请求] --> Try{Try-Catch}
Try -->|正常| Process[处理]
Try -->|异常| ErrType{类型}
ErrType -->|E1| H1[处理1]
ErrType -->|E2| H2[处理2]
H1 --> Log[日志]
H2 --> Log
Process --> Resp[响应]
\`\`\`
**图7-1 异常处理流程图**
### 7.1 异常类型
| 异常类 | 错误码 | HTTP状态 | 说明 |
|--------|--------|----------|------|
### 7.2 恢复策略
| 场景 | 策略 | 说明 |
|------|------|------|
[FOCUS]
1. 异常类型: 自定义异常类及继承关系
2. 错误码: 错误码定义和分类
3. 处理模式: try-catch/中间件/装饰器
4. 恢复策略: 重试/降级/熔断/告警
[RETURN JSON]
{"status":"completed","output_file":"section-7-exceptions.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
`
})
```
---
## Output
各 Agent 写入 `sections/section-N-xxx.md`,返回简要 JSON 供 Phase 2.5 汇总。


@@ -1,192 +0,0 @@
# Phase 2.5: Consolidation Agent
汇总所有分析 Agent 的产出,生成设计综述,为 Phase 4 索引文档提供内容。
> **规范参考**: [../specs/cpcc-requirements.md](../specs/cpcc-requirements.md)
## 核心职责
1. **设计综述**:生成 synthesis(软件整体设计思路)
2. **章节摘要**:生成 section_summaries(导航表格内容)
3. **跨模块分析**:识别问题和关联
4. **质量检查**:验证 CPCC 合规性
## 输入
```typescript
interface ConsolidationInput {
output_dir: string;
agent_summaries: AgentReturn[];
cross_module_notes: string[];
metadata: ProjectMetadata;
}
```
## 执行
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
## 规范前置
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。
## 任务
作为汇总 Agent读取所有章节文件生成设计综述和跨模块分析报告。
## 输入
- 章节文件: ${outputDir}/sections/section-*.md
- Agent 摘要: ${JSON.stringify(agent_summaries)}
- 跨模块备注: ${JSON.stringify(cross_module_notes)}
- 软件信息: ${JSON.stringify(metadata)}
## 核心产出
### 1. 设计综述 (synthesis)
用 2-3 段落描述软件整体设计思路:
- 第一段:软件定位与核心设计理念
- 第二段:模块划分与协作机制
- 第三段:技术选型与设计特点
### 2. 章节摘要 (section_summaries)
为每个章节提取一句话说明,用于导航表格:
| 章节 | 文件 | 一句话说明 |
|------|------|------------|
| 2. 系统架构设计 | section-2-architecture.md | ... |
| 3. 功能模块设计 | section-3-functions.md | ... |
| 4. 核心算法与流程 | section-4-algorithms.md | ... |
| 5. 数据结构设计 | section-5-data-structures.md | ... |
| 6. 接口设计 | section-6-interfaces.md | ... |
| 7. 异常处理设计 | section-7-exceptions.md | ... |
### 3. 跨模块分析
- 一致性:术语、命名规范
- 完整性:功能-接口对应、异常覆盖
- 关联性:模块依赖、数据流向
## 输出文件
写入: ${outputDir}/cross-module-summary.md
### 文件格式
\`\`\`markdown
# 跨模块分析报告
## 设计综述
[2-3 段落的软件设计思路描述]
## 章节摘要
| 章节 | 文件 | 说明 |
|------|------|------|
| 2. 系统架构设计 | section-2-architecture.md | 一句话说明 |
| ... | ... | ... |
## 文档统计
| 章节 | 图表数 | 字数 |
|------|--------|------|
| ... | ... | ... |
## 发现的问题
### 严重问题 (必须修复)
| ID | 类型 | 位置 | 描述 | 建议 |
|----|------|------|------|------|
| E001 | ... | ... | ... | ... |
### 警告 (建议修复)
| ID | 类型 | 位置 | 描述 | 建议 |
|----|------|------|------|------|
| W001 | ... | ... | ... | ... |
### 提示 (可选修复)
| ID | 类型 | 位置 | 描述 |
|----|------|------|------|
| I001 | ... | ... | ... |
## 跨模块关联图
\`\`\`mermaid
graph LR
S2[架构] --> S3[功能]
S3 --> S4[算法]
S3 --> S6[接口]
S5[数据结构] --> S6
S6 --> S7[异常]
\`\`\`
## 修复建议优先级
[按优先级排序的建议,段落式描述]
\`\`\`
## 返回格式 (JSON)
{
"status": "completed",
"output_file": "cross-module-summary.md",
// Phase 4 索引文档所需
"synthesis": "2-3 段落的设计综述文本",
"section_summaries": [
{"file": "section-2-architecture.md", "title": "2. 系统架构设计", "summary": "一句话说明"},
{"file": "section-3-functions.md", "title": "3. 功能模块设计", "summary": "一句话说明"},
{"file": "section-4-algorithms.md", "title": "4. 核心算法与流程", "summary": "一句话说明"},
{"file": "section-5-data-structures.md", "title": "5. 数据结构设计", "summary": "一句话说明"},
{"file": "section-6-interfaces.md", "title": "6. 接口设计", "summary": "一句话说明"},
{"file": "section-7-exceptions.md", "title": "7. 异常处理设计", "summary": "一句话说明"}
],
// 质量信息
"stats": {
"total_sections": 6,
"total_diagrams": 8,
"total_words": 3500
},
"issues": {
"errors": [...],
"warnings": [...],
"info": [...]
},
"cross_refs": {
"found": 12,
"missing": 3
}
}
`
})
```
## 问题分类
| 严重级别 | 前缀 | 含义 | 处理方式 |
|----------|------|------|----------|
| Error | E | 阻塞合规检查 | 必须修复 |
| Warning | W | 影响文档质量 | 建议修复 |
| Info | I | 可改进项 | 可选修复 |
## 问题类型
| 类型 | 说明 |
|------|------|
| missing | 缺失内容(功能-接口对应、异常覆盖)|
| inconsistency | 不一致(术语、命名、编号)|
| circular | 循环依赖 |
| orphan | 孤立内容(未被引用)|
| syntax | Mermaid 语法错误 |
| enhancement | 增强建议 |
## Output
- **文件**: `cross-module-summary.md`(完整汇总报告)
- **返回**: JSON 包含 Phase 4 所需的 synthesis 和 section_summaries


@@ -1,261 +0,0 @@
# Phase 4: Document Assembly
生成索引式文档,通过 markdown 链接引用章节文件。
> **规范参考**: [../specs/cpcc-requirements.md](../specs/cpcc-requirements.md)
## 设计原则
1. **引用而非嵌入**:主文档通过链接引用章节,不复制内容
2. **索引 + 综述**:主文档提供导航和软件概述
3. **CPCC 合规**:保持章节编号符合软著申请要求
4. **独立可读**:各章节文件可单独阅读
## 输入
```typescript
interface AssemblyInput {
output_dir: string;
metadata: ProjectMetadata;
consolidation: {
synthesis: string; // 跨章节综合分析
section_summaries: Array<{
file: string;
title: string;
summary: string;
}>;
issues: { errors: Issue[], warnings: Issue[], info: Issue[] };
stats: { total_sections: number, total_diagrams: number };
};
}
```
## 执行流程
```javascript
// 1. 检查是否有阻塞性问题
if (consolidation.issues.errors.length > 0) {
const response = await AskUserQuestion({
questions: [{
question: `发现 ${consolidation.issues.errors.length} 个严重问题,如何处理?`,
header: "阻塞问题",
multiSelect: false,
options: [
{label: "查看并修复", description: "显示问题列表,手动修复后重试"},
{label: "忽略继续", description: "跳过问题检查,继续装配"},
{label: "终止", description: "停止文档生成"}
]
}]
});
if (response['阻塞问题'] === "查看并修复") {
return { action: "fix_required", errors: consolidation.issues.errors };
}
if (response['阻塞问题'] === "终止") {
return { action: "abort" };
}
}
// 2. 生成索引式文档(不读取章节内容)
const doc = generateIndexDocument(metadata, consolidation);
// 3. 写入最终文件
Write(`${outputDir}/${metadata.software_name}-软件设计说明书.md`, doc);
```
## 文档模板
```markdown
<!-- 页眉:{软件名称} - 版本号:{版本号} -->
# {软件名称} 软件设计说明书
## 文档信息
| 项目 | 内容 |
|------|------|
| 软件名称 | {software_name} |
| 版本号 | {version} |
| 生成日期 | {date} |
---
## 1. 软件概述
### 1.1 软件背景与用途
[从 metadata 生成的软件背景描述]
### 1.2 开发目标与特点
[从 metadata 生成的目标和特点]
### 1.3 运行环境与技术架构
[从 metadata.tech_stack 生成]
---
## 文档导航
{consolidation.synthesis - 软件整体设计思路综述}
| 章节 | 说明 | 详情 |
|------|------|------|
| 2. 系统架构设计 | {summary} | [查看](./sections/section-2-architecture.md) |
| 3. 功能模块设计 | {summary} | [查看](./sections/section-3-functions.md) |
| 4. 核心算法与流程 | {summary} | [查看](./sections/section-4-algorithms.md) |
| 5. 数据结构设计 | {summary} | [查看](./sections/section-5-data-structures.md) |
| 6. 接口设计 | {summary} | [查看](./sections/section-6-interfaces.md) |
| 7. 异常处理设计 | {summary} | [查看](./sections/section-7-exceptions.md) |
---
## 附录
- [跨模块分析报告](./cross-module-summary.md)
- [章节文件目录](./sections/)
---
<!-- 页脚:生成时间 {timestamp} -->
```
## 生成函数
```javascript
function generateIndexDocument(metadata, consolidation) {
const date = new Date().toLocaleDateString('zh-CN');
// 章节导航表格
const sectionTable = consolidation.section_summaries
.map(s => `| ${s.title} | ${s.summary} | [查看](./sections/${s.file}) |`)
.join('\n');
return `<!-- 页眉:${metadata.software_name} - 版本号:${metadata.version} -->
# ${metadata.software_name} 软件设计说明书
## 文档信息
| 项目 | 内容 |
|------|------|
| 软件名称 | ${metadata.software_name} |
| 版本号 | ${metadata.version} |
| 生成日期 | ${date} |
---
## 1. 软件概述
### 1.1 软件背景与用途
${generateBackground(metadata)}
### 1.2 开发目标与特点
${generateObjectives(metadata)}
### 1.3 运行环境与技术架构
${generateTechStack(metadata)}
---
## 设计综述
${consolidation.synthesis}
---
## 文档导航
| 章节 | 说明 | 详情 |
|------|------|------|
${sectionTable}
---
## 附录
- [跨模块分析报告](./cross-module-summary.md)
- [章节文件目录](./sections/)
---
<!-- 页脚:生成时间 ${new Date().toISOString()} -->
`;
}
function generateBackground(metadata) {
const categoryDescriptions = {
"命令行工具 (CLI)": "提供命令行界面,用户通过终端命令与系统交互",
"后端服务/API": "提供 RESTful/GraphQL API 接口,支持前端或其他服务调用",
"SDK/库": "提供可复用的代码库,供其他项目集成使用",
"数据处理系统": "处理数据导入、转换、分析和导出",
"自动化脚本": "自动执行重复性任务,提高工作效率"
};
return `${metadata.software_name}是一款${metadata.category}软件。${categoryDescriptions[metadata.category] || ''}
本软件基于${metadata.tech_stack.language}语言开发,运行于${metadata.tech_stack.runtime}环境,采用${metadata.tech_stack.framework || '原生'}框架实现核心功能。`;
}
function generateObjectives(metadata) {
return `本软件旨在${metadata.purpose || '解决特定领域的技术问题'}
主要技术特点包括${metadata.tech_stack.framework ? `采用 ${metadata.tech_stack.framework} 框架` : '模块化设计'},具备良好的可扩展性和可维护性。`;
}
function generateTechStack(metadata) {
return `**运行环境**
- 操作系统:${metadata.os || 'Windows/Linux/macOS'}
- 运行时:${metadata.tech_stack.runtime}
- 依赖环境:${metadata.tech_stack.dependencies?.join(', ') || '无特殊依赖'}
**技术架构**
- 架构模式:${metadata.architecture_pattern || '分层架构'}
- 核心框架:${metadata.tech_stack.framework || '原生实现'}
- 主要模块详见第2章系统架构设计`;
}
```
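`generateBackground` 对缺失字段的兜底行为可以单独验证。以下最小示例抽取该函数并用假设的 metadata 调用(注意 `framework` 故意缺省,`categoryDescriptions` 只取一项):

```javascript
// 最小验证示例(metadata 为假设数据):framework 缺失时回退到"原生"
const categoryDescriptions = {
  '命令行工具 (CLI)': '提供命令行界面,用户通过终端命令与系统交互'
};

function generateBackground(metadata) {
  return `${metadata.software_name}是一款${metadata.category}软件。${categoryDescriptions[metadata.category] || ''}
本软件基于${metadata.tech_stack.language}语言开发,运行于${metadata.tech_stack.runtime}环境,采用${metadata.tech_stack.framework || '原生'}框架实现核心功能。`;
}

const metadata = {
  software_name: '示例工具',
  category: '命令行工具 (CLI)',
  tech_stack: { language: 'TypeScript', runtime: 'Node.js' } // framework 缺省
};

const text = generateBackground(metadata);
// text 包含 '示例工具' 与兜底的 '原生'
```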
## 输出结构
```
.workflow/.scratchpad/copyright-{timestamp}/
├── sections/ # 独立章节(Phase 2 产出)
│ ├── section-2-architecture.md
│ ├── section-3-functions.md
│ └── ...
├── cross-module-summary.md # 跨模块报告(Phase 2.5 产出)
└── {软件名称}-软件设计说明书.md # 索引文档(本阶段产出)
```
## 与 Phase 2.5 的协作
Phase 2.5 consolidation agent 需要提供:
```typescript
interface ConsolidationOutput {
synthesis: string; // 设计思路综述2-3 段落)
section_summaries: Array<{
file: string; // 文件名
title: string; // 章节标题(如"2. 系统架构设计")
summary: string; // 一句话说明
}>;
issues: {...};
stats: {...};
}
```
## 关键变更
| 原设计 | 新设计 |
|--------|--------|
| 读取章节内容并拼接 | 链接引用,不读取内容 |
| 嵌入完整章节 | 仅提供导航索引 |
| 重复生成统计 | 引用 cross-module-summary.md |
| 大文件 | 精简索引文档 |


@@ -1,192 +0,0 @@
# Phase 5: Compliance Review & Iterative Refinement
Discovery-driven refinement loop until CPCC compliance is met.
## Execution
### Step 1: Extract Compliance Issues
```javascript
function extractComplianceIssues(validationResult, deepAnalysis) {
return {
// Missing or incomplete sections
missingSections: validationResult.details
.filter(d => !d.pass)
.map(d => ({
section: d.name,
severity: 'critical',
suggestion: `需要补充 ${d.name} 相关内容`
})),
// Features with weak descriptions (< 50 chars)
weakDescriptions: (deepAnalysis.functions?.feature_list || [])
.filter(f => !f.description || f.description.length < 50)
.map(f => ({
feature: f.name,
current: f.description || '(无描述)',
severity: 'warning'
})),
// Complex algorithms without detailed flowcharts
complexAlgorithms: (deepAnalysis.algorithms?.algorithms || [])
.filter(a => (a.complexity || 0) > 10 && (a.steps?.length || 0) < 5)
.map(a => ({
algorithm: a.name,
complexity: a.complexity,
file: a.file,
severity: 'warning'
})),
// Data relationships without descriptions
incompleteRelationships: (deepAnalysis.data_structures?.relationships || [])
.filter(r => !r.description)
.map(r => ({from: r.from, to: r.to, severity: 'info'})),
// Diagram validation issues
diagramIssues: (deepAnalysis.diagrams?.validation || [])
.filter(d => !d.valid)
.map(d => ({file: d.file, issues: d.issues, severity: 'critical'}))
};
}
```
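以 weakDescriptions 为例,提取逻辑可以单独验证。以下最小示例抽取该过滤器并用假设的分析数据调用:

```javascript
// 最小验证示例(输入为假设数据):描述不足 50 字的功能被标记为 warning
function extractWeakDescriptions(deepAnalysis) {
  return (deepAnalysis.functions?.feature_list || [])
    .filter(f => !f.description || f.description.length < 50)
    .map(f => ({
      feature: f.name,
      current: f.description || '(无描述)',
      severity: 'warning'
    }));
}

const issues = extractWeakDescriptions({
  functions: {
    feature_list: [
      { name: '导出报告', description: '太短' },
      { name: '数据导入', description: 'x'.repeat(60) } // 描述足够长,不会被标记
    ]
  }
});
// issues 仅包含 '导出报告' 一项
```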
### Step 2: Build Dynamic Questions
```javascript
function buildComplianceQuestions(issues) {
const questions = [];
if (issues.missingSections.length > 0) {
questions.push({
question: `发现 ${issues.missingSections.length} 个章节内容不完整,需要补充哪些?`,
header: "章节补充",
multiSelect: true,
options: issues.missingSections.slice(0, 4).map(s => ({
label: s.section,
description: s.suggestion
}))
});
}
if (issues.weakDescriptions.length > 0) {
questions.push({
question: `以下 ${issues.weakDescriptions.length} 个功能描述过于简短,请选择需要详细说明的:`,
header: "功能描述",
multiSelect: true,
options: issues.weakDescriptions.slice(0, 4).map(f => ({
label: f.feature,
description: `当前:${f.current.substring(0, 30)}...`
}))
});
}
if (issues.complexAlgorithms.length > 0) {
questions.push({
question: `发现 ${issues.complexAlgorithms.length} 个复杂算法缺少详细流程图,是否生成?`,
header: "算法详解",
multiSelect: false,
options: [
{label: "全部生成 (推荐)", description: "为所有复杂算法生成含分支/循环的流程图"},
{label: "仅最复杂的", description: `仅为 ${issues.complexAlgorithms[0]?.algorithm} 生成`},
{label: "跳过", description: "保持当前简单流程图"}
]
});
}
questions.push({
question: "如何处理当前文档?",
header: "操作",
multiSelect: false,
options: [
{label: "应用修改并继续", description: "应用上述选择,继续检查"},
{label: "完成文档", description: "当前文档满足要求,生成最终版本"},
{label: "重新分析", description: "使用不同配置重新分析代码"}
]
});
return questions.slice(0, 4);
}
```
### Step 3: Apply Updates
```javascript
async function applyComplianceUpdates(responses, issues, analyses, outputDir) {
const updates = [];
if (responses['章节补充']) {
for (const section of responses['章节补充']) {
const sectionAnalysis = await Task({
subagent_type: "cli-explore-agent",
prompt: `深入分析 ${section.section} 所需内容...`
});
updates.push({type: 'section_supplement', section: section.section, data: sectionAnalysis});
}
}
if (responses['算法详解'] === '全部生成 (推荐)') {
for (const algo of issues.complexAlgorithms) {
const detailedSteps = await analyzeAlgorithmInDepth(algo, analyses);
const flowchart = generateAlgorithmFlowchart({
name: algo.algorithm,
inputs: detailedSteps.inputs,
outputs: detailedSteps.outputs,
steps: detailedSteps.steps
});
Write(`${outputDir}/diagrams/algorithm-${sanitizeId(algo.algorithm)}-detailed.mmd`, flowchart);
updates.push({type: 'algorithm_flowchart', algorithm: algo.algorithm});
}
}
return updates;
}
```
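上面引用的 `sanitizeId` 在本文件中未给出定义。以下是一个假设性实现(具体规则以实际代码为准),将算法名转为可安全用于文件名的 id:

```javascript
// 假设性实现:算法名 → 文件名安全的 id(保留汉字与小写字母数字)
function sanitizeId(name) {
  return String(name)
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-') // 其余字符压缩为连字符
    .replace(/^-+|-+$/g, '');                 // 去除首尾连字符
}

sanitizeId('Dijkstra 最短路径 (v2)'); // → 'dijkstra-最短路径-v2'
```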
### Step 4: Iteration Loop
```javascript
async function runComplianceLoop(documentPath, analyses, metadata, outputDir) {
let iteration = 0;
const maxIterations = 5;
while (iteration < maxIterations) {
iteration++;
// Validate current document
const document = Read(documentPath);
const validation = validateCPCCCompliance(document, analyses);
// Extract issues
const issues = extractComplianceIssues(validation, analyses);
const totalIssues = Object.values(issues).flat().length;
if (totalIssues === 0) {
console.log("✅ 所有检查通过,文档符合 CPCC 要求");
break;
}
// Ask user
const questions = buildComplianceQuestions(issues);
const responses = await AskUserQuestion({questions});
if (responses['操作'] === '完成文档') break;
if (responses['操作'] === '重新分析') return {action: 'restart'};
// Apply updates
const updates = await applyComplianceUpdates(responses, issues, analyses, outputDir);
// Regenerate document
const updatedDocument = regenerateDocument(document, updates, analyses);
Write(documentPath, updatedDocument);
// Archive iteration
Write(`${outputDir}/iterations/v${iteration}.md`, document);
}
return {action: 'finalized', iterations: iteration};
}
```
## Output
Final compliant document + iteration history in `iterations/`.


@@ -1,121 +0,0 @@
# CPCC Compliance Requirements
China Copyright Protection Center (CPCC) requirements for software design specification.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| Phase 4 | Check document structure before assembly | Document Requirements, Mandatory Sections |
| Phase 4 | Apply correct figure numbering | Figure Numbering Convention |
| Phase 5 | Validate before each iteration | Validation Function |
| Phase 5 | Handle failures during refinement | Error Handling |
---
## Document Requirements
### Format
- [ ] 页眉包含软件名称和版本号
- [ ] 页码位于右上角(需在文档中说明)
- [ ] 每页不少于30行文字(图表页除外)
- [ ] A4 纵向排版,文字从左至右
### Mandatory Sections (7 章节)
- [ ] 1. 软件概述
- [ ] 2. 系统架构图
- [ ] 3. 功能模块设计
- [ ] 4. 核心算法与流程
- [ ] 5. 数据结构设计
- [ ] 6. 接口设计
- [ ] 7. 异常处理设计
### Content Requirements
- [ ] 所有内容基于代码分析
- [ ] 无臆测或未来计划
- [ ] 无原始指令性文字
- [ ] Mermaid 语法正确
- [ ] 图表编号和说明完整
## Validation Function
```javascript
function validateCPCCCompliance(document, analyses) {
const checks = [
{name: "软件概述完整性", pass: document.includes("## 1. 软件概述")},
{name: "系统架构图存在", pass: document.includes("图2-1 系统架构图")},
{name: "功能模块设计完整", pass: document.includes("## 3. 功能模块设计")},
{name: "核心算法描述", pass: document.includes("## 4. 核心算法与流程")},
{name: "数据结构设计", pass: document.includes("## 5. 数据结构设计")},
{name: "接口设计说明", pass: document.includes("## 6. 接口设计")},
{name: "异常处理设计", pass: document.includes("## 7. 异常处理设计")},
{name: "Mermaid图表语法", pass: !document.includes("mermaid error")},
{name: "页眉信息", pass: document.includes("页眉")},
{name: "页码说明", pass: document.includes("页码")}
];
return {
passed: checks.filter(c => c.pass).length,
total: checks.length,
details: checks
};
}
```
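调用方式示意如下。为保持示例可独立运行,这里只保留前两项检查(完整检查项见上方函数;`analyses` 参数在上述检查中未被使用,示例中省略):

```javascript
// 最小调用示例(document 为假设文本):只含前两项检查
function validateCPCCCompliance(document) {
  const checks = [
    { name: '软件概述完整性', pass: document.includes('## 1. 软件概述') },
    { name: '系统架构图存在', pass: document.includes('图2-1 系统架构图') }
  ];
  return { passed: checks.filter(c => c.pass).length, total: checks.length, details: checks };
}

const doc = '## 1. 软件概述\n本软件……';
const result = validateCPCCCompliance(doc);
// result.passed === 1,result.total === 2,未通过项为"系统架构图存在"
```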
## Software Categories
| Category | Document Focus |
|----------|----------------|
| 命令行工具 (CLI) | 命令、参数、使用流程 |
| 后端服务/API | 端点、协议、数据流 |
| SDK/库 | 接口、集成、使用示例 |
| 数据处理系统 | 数据流、转换、ETL |
| 自动化脚本 | 工作流、触发器、调度 |
## Figure Numbering Convention
| Section | Figure | Title |
|---------|--------|-------|
| 2 | 图2-1 | 系统架构图 |
| 3 | 图3-1 | 功能模块结构图 |
| 4 | 图4-N | {算法名称}流程图 |
| 5 | 图5-1 | 数据结构类图 |
| 6 | 图6-N | {接口名称}时序图 |
| 7 | 图7-1 | 异常处理流程图 |
## Error Handling
| Error | Recovery |
|-------|----------|
| Analysis timeout | Reduce scope, retry |
| Missing section data | Re-run targeted agent |
| Diagram validation fails | Regenerate with fixes |
| User abandons iteration | Save progress, allow resume |
---
## Integration with Phases
**Phase 4 - Document Assembly**:
```javascript
// Before assembling document
const docChecks = [
{check: "页眉格式", value: `<!-- 页眉:${metadata.software_name} - 版本号:${metadata.version} -->`},
{check: "页码说明", value: `<!-- 注:最终文档页码位于每页右上角 -->`}
];
// Apply figure numbering from convention table
const figureNumbers = getFigureNumbers(sectionIndex);
```
**Phase 5 - Compliance Refinement**:
```javascript
// In 05-compliance-refinement.md
const validation = validateCPCCCompliance(document, analyses);
if (validation.passed < validation.total) {
// Failed checks become discovery questions
const failedChecks = validation.details.filter(d => !d.pass);
discoveries.complianceIssues = failedChecks;
}
```


@@ -1,200 +0,0 @@
# Agent Base Template
所有分析 Agent 的基础模板,确保一致性和高效执行。
## 通用提示词结构
```
[ROLE] 你是{角色},专注于{职责}。
[TASK]
分析代码库,生成 CPCC 合规的章节文档。
- 输出: {output_dir}/sections/{output_file}
- 格式: Markdown + Mermaid
- 范围: {scope_path}
[CONSTRAINTS]
- 只描述已实现的代码,不臆测
- 中文输出,技术术语可用英文
- Mermaid 图表必须可渲染
- 文件/类/函数需包含路径引用
[OUTPUT_FORMAT]
1. 直接写入 MD 文件
2. 返回 JSON 简要信息
[QUALITY_CHECKLIST]
- [ ] 包含至少1个 Mermaid 图表
- [ ] 每个子章节有实质内容 (>100字)
- [ ] 代码引用格式: `src/path/file.ts:line`
- [ ] 图表编号正确 (图N-M)
```
## 变量说明
| 变量 | 来源 | 示例 |
|------|------|------|
| {output_dir} | Phase 1 创建 | .workflow/.scratchpad/copyright-xxx |
| {software_name} | metadata.software_name | 智能数据分析系统 |
| {scope_path} | metadata.scope_path | src/ |
| {tech_stack} | metadata.tech_stack | TypeScript/Node.js |
## Agent 提示词模板
### 精简版 (推荐)
```javascript
const agentPrompt = (agent, meta, outDir) => `
[ROLE] ${AGENT_ROLES[agent]}
[TASK]
分析 ${meta.scope_path},生成 ${AGENT_SECTIONS[agent]}
输出: ${outDir}/sections/${AGENT_FILES[agent]}
[TEMPLATE]
${AGENT_TEMPLATES[agent]}
[FOCUS]
${AGENT_FOCUS[agent].join('\n')}
[RETURN]
{"status":"completed","output_file":"${AGENT_FILES[agent]}","summary":"<50字>","cross_module_notes":[],"stats":{}}
`;
```
### 配置映射
```javascript
const AGENT_ROLES = {
architecture: "系统架构师,专注于分层设计和模块依赖",
functions: "功能分析师,专注于功能点识别和交互",
algorithms: "算法工程师,专注于核心逻辑和复杂度",
data_structures: "数据建模师,专注于实体关系和类型",
interfaces: "API设计师专注于接口契约和协议",
exceptions: "可靠性工程师,专注于异常处理和恢复"
};
const AGENT_SECTIONS = {
architecture: "Section 2: 系统架构图",
functions: "Section 3: 功能模块设计",
algorithms: "Section 4: 核心算法与流程",
data_structures: "Section 5: 数据结构设计",
interfaces: "Section 6: 接口设计",
exceptions: "Section 7: 异常处理设计"
};
const AGENT_FILES = {
architecture: "section-2-architecture.md",
functions: "section-3-functions.md",
algorithms: "section-4-algorithms.md",
data_structures: "section-5-data-structures.md",
interfaces: "section-6-interfaces.md",
exceptions: "section-7-exceptions.md"
};
const AGENT_FOCUS = {
architecture: [
"1. 分层: 识别代码层次 (Controller/Service/Repository)",
"2. 模块: 核心模块及职责边界",
"3. 依赖: 模块间依赖方向",
"4. 数据流: 请求/数据的流动路径"
],
functions: [
"1. 功能点: 枚举所有用户可见功能",
"2. 模块分组: 按业务域分组",
"3. 入口: 每个功能的代码入口",
"4. 交互: 功能间的调用关系"
],
algorithms: [
"1. 核心算法: 业务逻辑的关键算法",
"2. 流程步骤: 分支/循环/条件",
"3. 复杂度: 时间/空间复杂度",
"4. 输入输出: 参数和返回值"
],
data_structures: [
"1. 实体: class/interface/type 定义",
"2. 属性: 字段类型和可见性",
"3. 关系: 继承/组合/关联",
"4. 枚举: 枚举类型及其值"
],
interfaces: [
"1. API端点: 路径/方法/说明",
"2. 参数: 请求参数类型和校验",
"3. 响应: 响应格式和状态码",
"4. 时序: 典型调用流程"
],
exceptions: [
"1. 异常类型: 自定义异常类",
"2. 错误码: 错误码定义和含义",
"3. 处理模式: try-catch/中间件",
"4. 恢复策略: 重试/降级/告警"
]
};
```
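Given the maps above, composing a concrete prompt is a lookup-and-interpolate step. A minimal runnable sketch (with a trimmed-down, single-agent config; the stubbed `AGENT_TEMPLATES` entry and the `/tmp/out` path are illustrative assumptions):

```javascript
// Minimal sketch: compose one agent prompt from the configuration maps.
const AGENT_ROLES = { architecture: "系统架构师,专注于分层设计和模块依赖" };
const AGENT_SECTIONS = { architecture: "Section 2: 系统架构图" };
const AGENT_FILES = { architecture: "section-2-architecture.md" };
const AGENT_FOCUS = { architecture: ["1. 分层: 识别代码层次 (Controller/Service/Repository)"] };
const AGENT_TEMPLATES = { architecture: "## 2. 系统架构图\n{intro}" }; // stub for illustration

const agentPrompt = (agent, meta, outDir) => `
[ROLE] ${AGENT_ROLES[agent]}
[TASK]
分析 ${meta.scope_path},生成 ${AGENT_SECTIONS[agent]}
输出: ${outDir}/sections/${AGENT_FILES[agent]}
[TEMPLATE]
${AGENT_TEMPLATES[agent]}
[FOCUS]
${AGENT_FOCUS[agent].join('\n')}
[RETURN]
{"status":"completed","output_file":"${AGENT_FILES[agent]}","summary":"<50字>","cross_module_notes":[],"stats":{}}
`;

// Resolved prompt for the architecture agent
const prompt = agentPrompt('architecture', { scope_path: 'src/' }, '/tmp/out');
```

Every line the model sees is either a fixed tag or a map lookup, which keeps per-agent prompt drift to zero.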
## Efficiency Optimizations
### 1. Reduce Redundancy
**Before (redundant)**:
```
You are a professional system architect with extensive software design experience.
You need to analyze the codebase and identify the system's layered structure...
```
**After (compact)**:
```
[ROLE] System architect, focused on layered design and module dependencies.
[TASK] Analyze src/ and generate the system architecture section.
```
### 2. Template-Driven
**Before (descriptive)**:
```
Please output in the following format:
First write a level-2 heading...
Then add a Mermaid diagram...
```
**After (template)**:
```
[TEMPLATE]
## 2. 系统架构图
{intro}
\`\`\`mermaid
{diagram}
\`\`\`
**图2-1 系统架构图**
### 2.1 {subsection}
{content}
```
### 3. Clear Focus
**Before (vague)**:
```
Analyze all aspects of the project, including architecture, modules, dependencies, etc.
```
**After (specific)**:
```
[FOCUS]
1. Layers: Controller/Service/Repository
2. Modules: responsibility boundaries
3. Dependencies: direction
4. Data flow: paths
```
### 4. Concise Returns
**Before (verbose)**:
```
Please return detailed analysis results, including every issue found...
```
**After (structured)**:
```
[RETURN]
{"status":"completed","output_file":"xxx.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
```


@@ -1,636 +0,0 @@
---
name: flow-coordinator
description: Template-driven workflow coordinator with minimal state tracking. Executes command chains from workflow templates OR unified PromptTemplate workflows. Supports slash-command and DAG-based execution. Triggers on "flow-coordinator", "workflow template", "orchestrate".
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep
---
# Flow Coordinator
Lightweight workflow coordinator supporting two workflow formats:
1. **Legacy Templates**: Command chains with slash-command execution
2. **Unified Workflows**: DAG-based PromptTemplate nodes (spec: `spec/unified-workflow-spec.md`)
## Specification Reference
- **Unified Workflow Spec**: @spec/unified-workflow-spec.md
- **Demo Workflow**: `ccw/data/flows/demo-unified-workflow.json`
## Architecture
```
User Task → Detect Format → Select Workflow → Init Status → Execute → Complete
│ │
├─ Legacy Template │
│ └─ Sequential cmd execution │
│ │
└─ Unified Workflow │
└─ DAG traversal with contextRefs │
└──────────────── Resume (from status.json) ──────────────┘
Execution Modes:
├─ analysis → Read-only, CLI --mode analysis
├─ write → File changes, CLI --mode write
├─ mainprocess → Blocking, synchronous
└─ async → Background, ccw cli
```
## Core Concepts
**Dual Format Support**:
- Legacy: `templates/*.json` with `cmd`, `args`, `execution`
- Unified: `ccw/data/flows/*.json` with `nodes`, `edges`, `contextRefs`
**Unified PromptTemplate Model**: All workflow steps are natural language instructions with:
- `instruction`: What to execute (natural language)
- `slashCommand`: Optional slash command name (e.g., "workflow:plan")
- `slashArgs`: Optional arguments for slash command (supports {{variable}})
- `outputName`: Name for output reference
- `contextRefs`: References to previous step outputs
- `tool`: Optional CLI tool (gemini/qwen/codex/claude)
- `mode`: Execution mode (analysis/write/mainprocess/async)
**DAG Execution**: Unified workflows execute as directed acyclic graphs with parallel branches and conditional edges.
**Dynamic Discovery**: Both formats are discovered at runtime via Glob.
---
## Execution Flow
```javascript
async function execute(task) {
// 1. Discover and select template
const templates = await discoverTemplates();
const template = await selectTemplate(templates);
// 2. Init status
const sessionId = `fc-${timestamp()}`;
const statusPath = `.workflow/.flow-coordinator/${sessionId}/status.json`;
const status = initStatus(template, task);
write(statusPath, JSON.stringify(status, null, 2));
// 3. Execute steps based on execution config
await executeSteps(status, statusPath);
}
async function executeSteps(status, statusPath) {
for (let i = status.current; i < status.steps.length; i++) {
const step = status.steps[i];
status.current = i;
// Execute based on step mode (all steps use slash-command type)
const execConfig = step.execution || { type: 'slash-command', mode: 'mainprocess' };
if (execConfig.mode === 'async') {
// Async execution - stop and wait for hook callback
await executeSlashCommandAsync(step, status, statusPath);
break;
} else {
// Mainprocess execution - continue immediately
await executeSlashCommandSync(step, status);
step.status = 'done';
write(statusPath, JSON.stringify(status, null, 2));
}
}
// All steps complete
if (status.current >= status.steps.length) {
status.complete = true;
write(statusPath, JSON.stringify(status, null, 2));
}
}
```
---
## Unified Workflow Execution
For workflows using the unified PromptTemplate format (`ccw/data/flows/*.json`):
```javascript
async function executeUnifiedWorkflow(workflow, task) {
// 1. Initialize execution state
const sessionId = `ufc-${timestamp()}`;
const statusPath = `.workflow/.flow-coordinator/${sessionId}/status.json`;
const state = {
id: sessionId,
workflow: workflow.id,
goal: task,
nodeStates: {}, // nodeId -> { status, result, error }
outputs: {}, // outputName -> result
complete: false
};
// 2. Topological sort for execution order
const executionOrder = topologicalSort(workflow.nodes, workflow.edges);
// 3. Execute nodes respecting DAG dependencies
await executeDAG(workflow, executionOrder, state, statusPath);
}
async function executeDAG(workflow, order, state, statusPath) {
for (const nodeId of order) {
const node = workflow.nodes.find(n => n.id === nodeId);
const data = node.data;
// Check if all dependencies are satisfied
if (!areDependenciesSatisfied(nodeId, workflow.edges, state)) {
continue; // Will be executed when dependencies complete
}
// Build instruction from slashCommand or raw instruction
let instruction = buildNodeInstruction(data, state.outputs);
// Execute based on mode
state.nodeStates[nodeId] = { status: 'running' };
write(statusPath, JSON.stringify(state, null, 2));
const result = await executeNode(instruction, data.tool, data.mode);
// Store output for downstream nodes
state.nodeStates[nodeId] = { status: 'completed', result };
if (data.outputName) {
state.outputs[data.outputName] = result;
}
write(statusPath, JSON.stringify(state, null, 2));
}
state.complete = true;
write(statusPath, JSON.stringify(state, null, 2));
}
/**
* Build node instruction from slashCommand or raw instruction
* Handles slashCommand/slashArgs fields from frontend orchestrator
*/
function buildNodeInstruction(data, outputs) {
const refs = data.contextRefs || [];
// If slashCommand is set, construct instruction from it
if (data.slashCommand) {
// Resolve variables in slashArgs
const args = data.slashArgs
? resolveContextRefs(data.slashArgs, refs, outputs)
: '';
// Build slash command instruction
let instruction = `/${data.slashCommand}${args ? ' ' + args : ''}`;
// Append additional instruction if provided
if (data.instruction) {
const additionalInstruction = resolveContextRefs(data.instruction, refs, outputs);
instruction = `${instruction}\n\n${additionalInstruction}`;
}
return instruction;
}
// Fallback: use raw instruction with context refs resolved
return resolveContextRefs(data.instruction || '', refs, outputs);
}
function resolveContextRefs(instruction, refs, outputs) {
let resolved = instruction;
for (const ref of refs) {
const value = outputs[ref];
const placeholder = `{{${ref}}}`;
resolved = resolved.replace(new RegExp(placeholder, 'g'),
typeof value === 'object' ? JSON.stringify(value) : String(value));
}
return resolved;
}
async function executeNode(instruction, tool, mode) {
// Build CLI command based on tool and mode
const cliTool = tool || 'gemini';
const cliMode = mode === 'write' ? 'write' : 'analysis';
if (mode === 'async') {
// Background execution
return Bash(
`ccw cli -p "${escapePrompt(instruction)}" --tool ${cliTool} --mode ${cliMode}`,
{ run_in_background: true }
);
} else {
// Synchronous execution
return Bash(
`ccw cli -p "${escapePrompt(instruction)}" --tool ${cliTool} --mode ${cliMode}`
);
}
}
```
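The executor above calls `topologicalSort` and `areDependenciesSatisfied` without defining them. A hedged sketch of both helpers, assuming the Flow schema's `{ nodes: [{id}], edges: [{source, target}] }` shape and the `nodeStates` map used above (the exact shipped implementations may differ):

```javascript
// Kahn's algorithm: linearize the DAG, failing loudly on cycles.
function topologicalSort(nodes, edges) {
  const indegree = new Map(nodes.map(n => [n.id, 0]));
  for (const e of edges) indegree.set(e.target, (indegree.get(e.target) || 0) + 1);
  const queue = nodes.filter(n => indegree.get(n.id) === 0).map(n => n.id);
  const order = [];
  while (queue.length) {
    const id = queue.shift();
    order.push(id);
    for (const e of edges.filter(e => e.source === id)) {
      indegree.set(e.target, indegree.get(e.target) - 1);
      if (indegree.get(e.target) === 0) queue.push(e.target);
    }
  }
  if (order.length !== nodes.length) throw new Error('Cycle detected: not a DAG');
  return order;
}

// A node is runnable once every incoming edge's source has completed.
function areDependenciesSatisfied(nodeId, edges, state) {
  return edges
    .filter(e => e.target === nodeId)
    .every(e => state.nodeStates[e.source]?.status === 'completed');
}
```

Because the topological order already respects edges, the dependency check only matters for parallel branches that complete out of order.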
### Unified Workflow Discovery
```javascript
async function discoverUnifiedWorkflows() {
const files = Glob('*.json', { path: 'ccw/data/flows/' });
const workflows = [];
for (const file of files) {
const content = JSON.parse(Read(file));
// Detect unified format by checking for 'nodes' array
if (content.nodes && Array.isArray(content.nodes)) {
workflows.push({
id: content.id,
name: content.name,
description: content.description,
nodeCount: content.nodes.length,
format: 'unified',
file: file
});
}
}
return workflows;
}
```
### Format Detection
```javascript
function detectWorkflowFormat(content) {
if (content.nodes && content.edges) {
return 'unified'; // PromptTemplate DAG format
}
if (content.steps && content.steps[0]?.cmd) {
return 'legacy'; // Command chain format
}
throw new Error('Unknown workflow format');
}
```
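The detector can be exercised directly; a quick sanity check with illustrative inputs (not real workflow files):

```javascript
// Same detector as above, repeated here so the sketch is self-contained.
function detectWorkflowFormat(content) {
  if (content.nodes && content.edges) {
    return 'unified'; // PromptTemplate DAG format
  }
  if (content.steps && content.steps[0]?.cmd) {
    return 'legacy'; // Command chain format
  }
  throw new Error('Unknown workflow format');
}

const unified = detectWorkflowFormat({ nodes: [], edges: [] });
const legacy = detectWorkflowFormat({ steps: [{ cmd: 'workflow-lite-plan' }] });
```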
---
## Legacy Template Discovery
**Dynamic query** (never hardcode the template list):
```javascript
async function discoverTemplates() {
// Discover all JSON templates
const files = Glob('*.json', { path: 'templates/' });
// Parse each template
const templates = [];
for (const file of files) {
const content = JSON.parse(Read(file));
templates.push({
name: content.name,
description: content.description,
steps: content.steps.map(s => s.cmd).join(' → '),
file: file
});
}
return templates;
}
```
---
## Template Selection
User chooses from discovered templates:
```javascript
async function selectTemplate(templates) {
// Build options from discovered templates
const options = templates.slice(0, 4).map(t => ({
label: t.name,
description: t.steps
}));
const response = await AskUserQuestion({
questions: [{
question: 'Select workflow template:',
header: 'Template',
options: options,
multiSelect: false
}]
});
// Handle "Other" - show remaining templates or custom input
if (response.template === 'Other') {
return await selectFromRemainingTemplates(templates.slice(4));
}
return templates.find(t => t.name === response.template);
}
```
---
## Status Schema
**Creation**: Copy template JSON → Update `id`, `template`, `goal`, set all steps `status: "pending"`
**Location**: `.workflow/.flow-coordinator/{session-id}/status.json`
**Core Fields**:
- `id`: Session ID (fc-YYYYMMDD-HHMMSS)
- `template`: Template name
- `goal`: User task description
- `current`: Current step index
- `steps[]`: Step array from template (with runtime `status`, `session`, `taskId`)
- `complete`: All steps done?
**Step Status**: `pending` → `running` → `done` | `failed` | `skipped`
---
## Extended Template Schema
**Templates stored in**: `templates/*.json` (discovered at runtime via Glob)
**TemplateStep Fields**:
- `cmd`: Skill name or command path (e.g., `workflow-lite-plan`, `workflow:debug-with-file`, `issue:discover`)
- `route?`: Sub-mode for multi-mode Skills (e.g., `lite-execute`, `plan-verify`, `test-cycle-execute`)
- `args?`: Arguments with `{{goal}}` and `{{prev}}` placeholders
- `unit?`: Minimum execution unit name (groups related commands)
- `optional?`: Can be skipped by user
- `execution`: Type and mode configuration
- `type`: Always `'slash-command'` (invoked via Skill tool)
- `mode`: `'mainprocess'` (blocking) or `'async'` (background)
- `contextHint?`: Natural language guidance for context assembly
**cmd naming rules**:
- **Migrated Skills**: use hyphenated Skill names, e.g. `workflow-lite-plan`, `review-cycle`
- **Remaining Commands**: use colon-separated command paths, e.g. `workflow:brainstorm-with-file`, `issue:discover`
**route field**:
Multi-mode Skills distinguish sub-modes via `route`. Different steps of the same Skill share `cmd` and are routed by `route`:
| Skill | Default mode (no route) | route values |
|-------|-------------------|----------|
| `workflow-lite-plan` | lite-plan | `lite-execute` |
| `workflow-plan` | plan | `plan-verify`, `replan` |
| `workflow-test-fix` | test-fix-gen | `test-cycle-execute` |
| `workflow-tdd` | tdd-plan | `tdd-verify` |
| `review-cycle` | - | `session`, `module`, `fix` |
**Template Example**:
```json
{
"name": "rapid",
"steps": [
{
"cmd": "workflow-lite-plan",
"args": "\"{{goal}}\"",
"unit": "quick-implementation",
"execution": { "type": "slash-command", "mode": "mainprocess" },
"contextHint": "Create lightweight implementation plan"
},
{
"cmd": "workflow-lite-plan",
"route": "lite-execute",
"args": "--in-memory",
"unit": "quick-implementation",
"execution": { "type": "slash-command", "mode": "async" },
"contextHint": "Execute plan from previous step"
}
]
}
```
---
## Execution Implementation
### Mainprocess Mode (Blocking)
```javascript
async function executeSlashCommandSync(step, status) {
// Build Skill invocation args
const args = buildSkillArgs(step, status);
// Invoke via Skill tool: step.cmd is skill name or command path
const result = await Skill({ skill: step.cmd, args: args });
step.session = result.session_id;
step.status = 'done';
return result;
}
```
### Async Mode (Background)
```javascript
async function executeSlashCommandAsync(step, status, statusPath) {
// Build prompt for ccw cli: /<cmd> [--route <route>] args + context
const prompt = buildCommandPrompt(step, status);
step.status = 'running';
write(statusPath, JSON.stringify(status, null, 2));
// Execute via ccw cli in background
const taskId = Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool claude --mode write`,
{ run_in_background: true }
).task_id;
step.taskId = taskId;
write(statusPath, JSON.stringify(status, null, 2));
console.log(`Executing: ${step.cmd}${step.route ? ' --route ' + step.route : ''} (async)`);
console.log(`Resume: /flow-coordinator --resume ${status.id}`);
}
```
---
## Prompt Building
Prompts are built in the format `/<cmd> [--route <route>] -y args`, with context appended:
```javascript
function buildCommandPrompt(step, status) {
// step.cmd is skill name or command path
let prompt = `/${step.cmd}`;
// Add route for multi-mode Skills
if (step.route) {
prompt += ` --route ${step.route}`;
}
prompt += ' -y';
// Add arguments (with placeholder replacement)
if (step.args) {
const args = step.args
.replace('{{goal}}', status.goal)
.replace('{{prev}}', getPreviousSessionId(status));
prompt += ` ${args}`;
}
// Add context based on contextHint
if (step.contextHint) {
const context = buildContextFromHint(step.contextHint, status);
prompt += `\n\nContext:\n${context}`;
} else {
// Default context: previous session IDs
const previousContext = collectPreviousResults(status);
if (previousContext) {
prompt += `\n\nPrevious results:\n${previousContext}`;
}
}
return prompt;
}
/**
* Build args for Skill() invocation (mainprocess mode)
*/
function buildSkillArgs(step, status) {
let args = '';
// Add route for multi-mode Skills
if (step.route) {
args += `--route ${step.route} `;
}
args += '-y';
// Add step arguments
if (step.args) {
const resolvedArgs = step.args
.replace('{{goal}}', status.goal)
.replace('{{prev}}', getPreviousSessionId(status));
args += ` ${resolvedArgs}`;
}
return args;
}
function buildContextFromHint(hint, status) {
// Parse contextHint instruction and build context accordingly
return parseAndBuildContext(hint, status);
}
```
### Example Prompt Output
```
/workflow-lite-plan -y "Implement user registration"
Context:
Task: Implement user registration
Previous results:
- None (first step)
```
```
/workflow-lite-plan --route lite-execute -y --in-memory
Context:
Task: Implement user registration
Previous results:
- lite-plan: WFS-plan-20250130 (planning-context.md)
```
---
## User Interaction
### Step 1: Select Template
```
Select workflow template:
○ rapid lite-plan → lite-execute → test-cycle-execute
○ coupled plan → plan-verify → execute → review → test
○ bugfix lite-plan --bugfix → lite-execute → test-cycle-execute
○ tdd tdd-plan → execute → tdd-verify
○ Other (more templates or custom)
```
### Step 2: Review Execution Plan
```
Template: coupled
Steps:
1. workflow-plan (mainprocess)
2. workflow-plan --route plan-verify (mainprocess)
3. workflow-execute (async)
4. review-cycle --route session (mainprocess)
5. review-cycle --route fix (mainprocess)
6. workflow-test-fix (mainprocess)
7. workflow-test-fix --route test-cycle-execute (async)
Proceed? [Confirm / Cancel]
```
---
## Resume Capability
```javascript
async function resume(sessionId) {
const statusPath = `.workflow/.flow-coordinator/${sessionId}/status.json`;
const status = JSON.parse(Read(statusPath));
// Find first incomplete step
status.current = status.steps.findIndex(s => s.status !== 'done');
if (status.current === -1) {
console.log('All steps complete');
return;
}
// Continue executing steps
await executeSteps(status, statusPath);
}
```
---
## Available Templates
Templates discovered from `templates/*.json`:
| Template | Use Case | Steps |
|----------|----------|-------|
| rapid | Simple feature | workflow-lite-plan → workflow-lite-plan[lite-execute] → workflow-test-fix → workflow-test-fix[test-cycle-execute] |
| coupled | Complex feature | workflow-plan → workflow-plan[plan-verify] → workflow-execute → review-cycle[session] → review-cycle[fix] → workflow-test-fix → workflow-test-fix[test-cycle-execute] |
| bugfix | Bug fix | workflow-lite-plan --bugfix → workflow-lite-plan[lite-execute] → workflow-test-fix → workflow-test-fix[test-cycle-execute] |
| bugfix-hotfix | Urgent hotfix | workflow-lite-plan --hotfix |
| tdd | Test-driven | workflow-tdd → workflow-execute → workflow-tdd[tdd-verify] |
| test-fix | Fix failing tests | workflow-test-fix → workflow-test-fix[test-cycle-execute] |
| review | Code review | review-cycle[session] → review-cycle[fix] → workflow-test-fix → workflow-test-fix[test-cycle-execute] |
| multi-cli-plan | Multi-perspective planning | workflow-multi-cli-plan → workflow-lite-plan[lite-execute] → workflow-test-fix → workflow-test-fix[test-cycle-execute] |
| full | Complete workflow | brainstorm → workflow-plan → workflow-plan[plan-verify] → workflow-execute → workflow-test-fix → workflow-test-fix[test-cycle-execute] |
| docs | Documentation | workflow-lite-plan → workflow-lite-plan[lite-execute] |
| brainstorm | Exploration | workflow:brainstorm-with-file |
| debug | Debug with docs | workflow:debug-with-file |
| analyze | Collaborative analysis | workflow:analyze-with-file |
| issue | Issue workflow | issue:discover → issue:plan → issue:queue → issue:execute |
| rapid-to-issue | Plan to issue bridge | workflow-lite-plan → issue:convert-to-plan → issue:queue → issue:execute |
| brainstorm-to-issue | Brainstorm to issue | issue:from-brainstorm → issue:queue → issue:execute |
**Note**: `[route]` means the step uses the `route` field to route to a specific sub-mode of a multi-mode Skill.
---
## Design Principles
1. **Minimal fields**: Only essential tracking data
2. **Flat structure**: No nested objects beyond steps array
3. **Step-level execution**: Each step defines how it's executed
4. **Resumable**: Any step can be resumed from status
5. **Human readable**: Clear JSON format
---
## Reference Documents
| Document | Purpose |
|----------|---------|
| spec/unified-workflow-spec.md | Unified PromptTemplate workflow specification |
| ccw/data/flows/*.json | Unified workflows (DAG format, dynamic discovery) |
| templates/*.json | Legacy workflow templates (command chain format) |
### Demo Workflows (Unified Format)
| File | Description | Nodes |
|------|-------------|-------|
| `demo-unified-workflow.json` | Auth implementation | 7 nodes: Analyze → Plan → Implement → Review → Tests → Report |
| `parallel-ci-workflow.json` | CI/CD pipeline | 8 nodes: Parallel checks → Merge → Conditional notify |
| `simple-analysis-workflow.json` | Analysis pipeline | 3 nodes: Explore → Analyze → Report |


@@ -1,332 +0,0 @@
# Unified Workflow Specification v1.0
> Standard format for PromptTemplate-based workflow definitions
## Overview
This specification defines the JSON schema for unified workflows where **all nodes are prompt templates** with natural language instructions. This replaces the previous multi-type node system with a single, flexible model.
**Design Philosophy**: Every workflow step is a natural language instruction that can optionally specify execution tool and mode. Data flows through named outputs referenced by subsequent steps.
---
## Schema Definition
### Root Object: `Flow`
```typescript
interface Flow {
id: string; // Unique identifier (kebab-case)
name: string; // Display name
description?: string; // Human-readable description
version: number; // Schema version (currently 1)
created_at: string; // ISO 8601 timestamp
updated_at: string; // ISO 8601 timestamp
nodes: FlowNode[]; // Workflow steps
edges: FlowEdge[]; // Step connections (DAG)
variables: Record<string, unknown>; // Global workflow variables
metadata: FlowMetadata; // Classification and source info
}
```
### FlowNode
```typescript
interface FlowNode {
id: string; // Unique node ID
type: 'prompt-template'; // Always 'prompt-template'
position: { x: number; y: number }; // Canvas position
data: PromptTemplateNodeData; // Node configuration
}
```
### PromptTemplateNodeData
```typescript
interface PromptTemplateNodeData {
// === Required ===
label: string; // Display label in editor
instruction: string; // Natural language instruction
// === Slash Command (optional, overrides instruction) ===
slashCommand?: string; // Slash command name (e.g., "workflow:plan")
slashArgs?: string; // Arguments for slash command (supports {{variable}})
// === Data Flow ===
outputName?: string; // Name for output reference
contextRefs?: string[]; // References to previous outputs
// === Execution Config ===
tool?: CliTool; // 'gemini' | 'qwen' | 'codex' | 'claude'
mode?: ExecutionMode; // 'analysis' | 'write' | 'mainprocess' | 'async'
// === Runtime State (populated during execution) ===
executionStatus?: ExecutionStatus;
executionError?: string;
executionResult?: unknown;
}
```
**Instruction Resolution Priority**:
1. If `slashCommand` is set: `/{slashCommand} {slashArgs}` + optional `instruction` as context
2. Otherwise: `instruction` directly
### FlowEdge
```typescript
interface FlowEdge {
id: string; // Unique edge ID
source: string; // Source node ID
target: string; // Target node ID
type?: string; // Edge type (default: 'default')
data?: {
label?: string; // Edge label (e.g., 'parallel')
condition?: string; // Conditional expression
};
}
```
### FlowMetadata
```typescript
interface FlowMetadata {
source?: 'template' | 'custom' | 'imported';
tags?: string[];
category?: string;
}
```
---
## Instruction Syntax
### Context References
Use `{{outputName}}` syntax to reference outputs from previous steps:
```
Analyze {{requirements_analysis}} and create implementation plan.
```
### Nested Property Access
```
If {{ci_report.status}} === 'failed', stop execution.
```
### Multiple References
```
Combine {{lint_result}}, {{typecheck_result}}, and {{test_result}} into report.
```
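The Flow Coordinator's `resolveContextRefs` handles flat names only; supporting the nested access shown above requires a path walk. A hedged sketch (the name `resolveRefs` and the leave-placeholder-intact behavior for unknown refs are assumptions):

```javascript
// Resolve {{outputName}} and nested {{output.path.to.value}} references
// against an outputs map.
function resolveRefs(instruction, outputs) {
  return instruction.replace(/\{\{([\w.]+)\}\}/g, (match, path) => {
    const [name, ...rest] = path.split('.');
    let value = outputs[name];
    for (const key of rest) {
      if (value == null) return match; // unknown ref: keep placeholder intact
      value = value[key];
    }
    if (value === undefined) return match;
    return typeof value === 'object' ? JSON.stringify(value) : String(value);
  });
}
```

Leaving unresolved placeholders in place (rather than substituting `undefined`) makes missing outputs visible in the rendered instruction.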
---
## Execution Modes
| Mode | Behavior | Use Case |
|------|----------|----------|
| `analysis` | Read-only, no file changes | Code review, exploration |
| `write` | Can create/modify/delete files | Implementation, fixes |
| `mainprocess` | Blocking, synchronous | Interactive steps |
| `async` | Background, non-blocking | Long-running tasks |
---
## DAG Execution Semantics
### Sequential Execution
Nodes with a single input edge execute after their predecessor completes.
```
[A] ──▶ [B] ──▶ [C]
```
### Parallel Execution
Multiple edges from the same source trigger parallel execution:
```
┌──▶ [B]
[A] ──┤
└──▶ [C]
```
### Merge Point
A node with multiple input edges waits for all of its predecessors:
```
[B] ──┐
├──▶ [D]
[C] ──┘
```
### Conditional Branching
An edge's `data.condition` specifies the branch condition:
```json
{
"id": "e-decision-success",
"source": "decision",
"target": "notify-success",
"data": { "condition": "decision.result === 'pass'" }
}
```
---
## Example: Minimal Workflow
```json
{
"id": "simple-analysis",
"name": "Simple Analysis",
"version": 1,
"created_at": "2026-02-04T00:00:00.000Z",
"updated_at": "2026-02-04T00:00:00.000Z",
"nodes": [
{
"id": "analyze",
"type": "prompt-template",
"position": { "x": 100, "y": 100 },
"data": {
"label": "Analyze Code",
"instruction": "Analyze the authentication module for security issues.",
"outputName": "analysis",
"tool": "gemini",
"mode": "analysis"
}
},
{
"id": "report",
"type": "prompt-template",
"position": { "x": 100, "y": 250 },
"data": {
"label": "Generate Report",
"instruction": "Based on {{analysis}}, generate a security report with recommendations.",
"outputName": "report",
"contextRefs": ["analysis"]
}
}
],
"edges": [
{ "id": "e1", "source": "analyze", "target": "report" }
],
"variables": {},
"metadata": { "source": "custom", "tags": ["security"] }
}
```
---
## Example: Parallel with Merge
```json
{
"nodes": [
{
"id": "start",
"type": "prompt-template",
"position": { "x": 200, "y": 50 },
"data": {
"label": "Prepare",
"instruction": "Set up build environment",
"outputName": "env"
}
},
{
"id": "lint",
"type": "prompt-template",
"position": { "x": 100, "y": 200 },
"data": {
"label": "Lint",
"instruction": "Run linter checks",
"outputName": "lint_result",
"tool": "codex",
"mode": "analysis",
"contextRefs": ["env"]
}
},
{
"id": "test",
"type": "prompt-template",
"position": { "x": 300, "y": 200 },
"data": {
"label": "Test",
"instruction": "Run unit tests",
"outputName": "test_result",
"tool": "codex",
"mode": "analysis",
"contextRefs": ["env"]
}
},
{
"id": "merge",
"type": "prompt-template",
"position": { "x": 200, "y": 350 },
"data": {
"label": "Merge Results",
"instruction": "Combine {{lint_result}} and {{test_result}} into CI report",
"outputName": "ci_report",
"contextRefs": ["lint_result", "test_result"]
}
}
],
"edges": [
{ "id": "e1", "source": "start", "target": "lint", "data": { "label": "parallel" } },
{ "id": "e2", "source": "start", "target": "test", "data": { "label": "parallel" } },
{ "id": "e3", "source": "lint", "target": "merge" },
{ "id": "e4", "source": "test", "target": "merge" }
]
}
```
---
## Migration from Old Format
### Old Template Step
```json
{
"cmd": "/workflow:lite-plan",
"args": "\"{{goal}}\"",
"execution": { "type": "slash-command", "mode": "mainprocess" }
}
```
### New PromptTemplate Node
```json
{
"id": "plan",
"type": "prompt-template",
"data": {
"label": "Create Plan",
"instruction": "Execute /workflow:lite-plan for: {{goal}}",
"outputName": "plan_result",
"mode": "mainprocess"
}
}
```
---
## Validation Rules
1. **Unique IDs**: All node and edge IDs must be unique within the flow
2. **Valid References**: `contextRefs` must reference existing `outputName` values
3. **DAG Structure**: No circular dependencies allowed
4. **Required Fields**: `id`, `name`, `version`, `nodes`, `edges` are required
5. **Node Type**: All nodes must have `type: 'prompt-template'`
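The five rules above can be checked mechanically; a sketch of a validator under those rules (the function name and error-message wording are assumptions):

```javascript
// Validate a Flow object against rules 1-5, returning a list of errors.
function validateFlow(flow) {
  const errors = [];
  // Rule 4: required fields
  for (const f of ['id', 'name', 'version', 'nodes', 'edges']) {
    if (flow[f] === undefined) errors.push(`missing required field: ${f}`);
  }
  // Rule 1: unique node and edge IDs
  const ids = [...(flow.nodes || []), ...(flow.edges || [])].map(x => x.id);
  if (new Set(ids).size !== ids.length) errors.push('duplicate node/edge ids');
  // Rules 2 and 5: valid contextRefs, prompt-template type
  const outputs = new Set((flow.nodes || []).map(n => n.data?.outputName).filter(Boolean));
  for (const n of flow.nodes || []) {
    if (n.type !== 'prompt-template') errors.push(`node ${n.id}: type must be 'prompt-template'`);
    for (const ref of n.data?.contextRefs || []) {
      if (!outputs.has(ref)) errors.push(`node ${n.id}: unknown contextRef '${ref}'`);
    }
  }
  // Rule 3: DAG check by repeatedly removing nodes with no incoming edges
  let remaining = (flow.nodes || []).map(n => n.id);
  let edges = flow.edges || [];
  while (remaining.length) {
    const free = remaining.filter(id => !edges.some(e => e.target === id));
    if (free.length === 0) { errors.push('circular dependency detected'); break; }
    remaining = remaining.filter(id => !free.includes(id));
    edges = edges.filter(e => !free.includes(e.source));
  }
  return errors;
}
```

Running this at discovery time (before execution) surfaces broken flows as a readable error list instead of a mid-DAG failure.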
---
## File Location
Workflow files stored in: `ccw/data/flows/*.json`
Template discovery: `Glob('*.json', { path: 'ccw/data/flows/' })`


@@ -1,17 +0,0 @@
{
"name": "analyze",
"description": "Collaborative analysis with multi-round discussion - deep exploration and understanding",
"level": 3,
"steps": [
{
"cmd": "workflow:analyze-with-file",
"args": "\"{{goal}}\"",
"unit": "analyze-with-file",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-round collaborative analysis with iterative understanding. Generate discussion.md with comprehensive analysis and conclusions"
}
]
}


@@ -1,36 +0,0 @@
{
"name": "brainstorm-to-issue",
"description": "Bridge brainstorm session to issue workflow - convert exploration insights to executable issues",
"level": 4,
"steps": [
{
"cmd": "issue:from-brainstorm",
"args": "--auto",
"unit": "brainstorm-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Convert brainstorm session findings into issue plans and solutions"
},
{
"cmd": "issue:queue",
"unit": "brainstorm-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Build execution queue from converted brainstorm issues"
},
{
"cmd": "issue:execute",
"args": "--queue auto",
"unit": "brainstorm-to-issue",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute issues from queue with state tracking"
}
]
}


@@ -1,17 +0,0 @@
{
"name": "brainstorm",
"description": "Multi-perspective ideation with documentation - explore possibilities with multiple analytical viewpoints",
"level": 4,
"steps": [
{
"cmd": "workflow:brainstorm-with-file",
"args": "\"{{goal}}\"",
"unit": "brainstorm-with-file",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective ideation with interactive diverge-converge cycles. Generate brainstorm.md with synthesis of ideas and recommendations"
}
]
}


@@ -1,17 +0,0 @@
{
"name": "bugfix-hotfix",
"description": "Urgent production fix - immediate diagnosis and fix with minimal overhead",
"level": 1,
"steps": [
{
"cmd": "workflow-lite-plan",
"args": "--hotfix \"{{goal}}\"",
"unit": "standalone",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Urgent hotfix mode: quick diagnosis and immediate fix for critical production issue"
}
]
}


@@ -1,49 +0,0 @@
{
"name": "bugfix",
"description": "Standard bug fix workflow - lightweight diagnosis and execution with testing",
"level": 2,
"steps": [
{
"cmd": "workflow-lite-plan",
"args": "--bugfix \"{{goal}}\"",
"unit": "bug-fix",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Analyze bug report, trace execution flow, identify root cause with fix strategy"
},
{
"cmd": "workflow-lite-plan",
"route": "lite-execute",
"args": "--in-memory",
"unit": "bug-fix",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Implement fix based on diagnosis. Execute against in-memory state from lite-plan analysis."
},
{
"cmd": "workflow-test-fix",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks to verify bug fix and prevent regression"
},
{
"cmd": "workflow-test-fix",
"route": "test-cycle-execute",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until all tests pass"
}
]
}


@@ -1,75 +0,0 @@
{
"name": "coupled",
"description": "Full workflow for complex features - detailed planning with verification, execution, review, and testing",
"level": 3,
"steps": [
{
"cmd": "workflow-plan",
"args": "\"{{goal}}\"",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create detailed implementation plan with architecture design, file structure, dependencies, and milestones"
},
{
"cmd": "workflow-plan",
"route": "plan-verify",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify IMPL_PLAN.md against requirements, check for missing details, conflicts, and quality gates"
},
{
"cmd": "workflow-execute",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute implementation based on verified plan. Resume from planning session with all context preserved."
},
{
"cmd": "review-cycle",
"route": "session",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Perform multi-dimensional code review across correctness, security, performance, maintainability. Reference execution session for full code context."
},
{
"cmd": "review-cycle",
"route": "fix",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Fix issues identified in review findings with prioritization by severity levels"
},
{
"cmd": "workflow-test-fix",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate comprehensive test tasks for the implementation with coverage analysis"
},
{
"cmd": "workflow-test-fix",
"route": "test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute iterative test-fix cycle until pass rate >= 95%"
}
]
}

View File

@@ -1,17 +0,0 @@
{
"name": "debug",
"description": "Hypothesis-driven debugging with documentation - systematic troubleshooting and logging",
"level": 3,
"steps": [
{
"cmd": "workflow:debug-with-file",
"args": "\"{{goal}}\"",
"unit": "debug-with-file",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Systematic debugging with hypothesis formation and verification. Generate understanding.md with root cause analysis and fix recommendations"
}
]
}
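Templates like the one above all share a common step schema (`cmd` required, `execution.type`/`execution.mode` constrained). A minimal validator sketch — the schema here is inferred from these template files, not taken from a published spec:

```javascript
// Hypothetical router-side validator; the schema is inferred from the
// workflow templates in this directory, not from a published spec.
function validateTemplate(tpl) {
  const errors = [];
  if (!tpl.name) errors.push('missing name');
  if (!Array.isArray(tpl.steps) || tpl.steps.length === 0) errors.push('steps must be a non-empty array');
  for (const [i, step] of (tpl.steps || []).entries()) {
    if (!step.cmd) errors.push(`step ${i}: missing cmd`);
    if (step.execution?.type !== 'slash-command') errors.push(`step ${i}: unknown execution.type`);
    if (!['mainprocess', 'async'].includes(step.execution?.mode)) errors.push(`step ${i}: unknown execution.mode`);
  }
  return errors;
}

// The "debug" template above, reduced to its step skeleton
const debugTemplate = {
  name: 'debug',
  steps: [{ cmd: 'workflow:debug-with-file', execution: { type: 'slash-command', mode: 'mainprocess' } }]
};
console.log(validateTemplate(debugTemplate)); // []
```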

View File

@@ -1,28 +0,0 @@
{
"name": "docs",
"description": "Documentation generation workflow",
"level": 2,
"steps": [
{
"cmd": "workflow-lite-plan",
"args": "\"{{goal}}\"",
"unit": "quick-documentation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Plan documentation structure and content organization"
},
{
"cmd": "workflow-lite-plan",
"route": "lite-execute",
"args": "--in-memory",
"unit": "quick-documentation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute documentation generation from plan"
}
]
}

View File

@@ -1,63 +0,0 @@
{
"name": "full",
"description": "Comprehensive workflow - brainstorm exploration, planning verification, execution, and testing",
"level": 4,
"steps": [
{
"cmd": "brainstorm",
"args": "\"{{goal}}\"",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective exploration of requirements and possible approaches"
},
{
"cmd": "workflow-plan",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create detailed implementation plan based on brainstorm insights"
},
{
"cmd": "workflow-plan",
"route": "plan-verify",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify plan quality and completeness"
},
{
"cmd": "workflow-execute",
"unit": "verified-planning-execution",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute implementation from verified plan"
},
{
"cmd": "workflow-test-fix",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate comprehensive test tasks"
},
{
"cmd": "workflow-test-fix",
"route": "test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until pass rate >= 95%"
}
]
}

View File

@@ -1,43 +0,0 @@
{
"name": "issue",
"description": "Issue workflow - discover issues, create plans, queue execution, and resolve",
"level": "issue",
"steps": [
{
"cmd": "issue:discover",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Discover pending issues from codebase for potential fixes"
},
{
"cmd": "issue:plan",
"args": "--all-pending",
"unit": "issue-workflow",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create execution plans for all discovered pending issues"
},
{
"cmd": "issue:queue",
"unit": "issue-workflow",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Build execution queue with issue prioritization and dependencies"
},
{
"cmd": "issue:execute",
"unit": "issue-workflow",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute issues from queue with state tracking and completion reporting"
}
]
}

View File

@@ -1,49 +0,0 @@
{
"name": "multi-cli-plan",
"description": "Multi-perspective planning with cross-tool verification and execution",
"level": 3,
"steps": [
{
"cmd": "workflow-multi-cli-plan",
"args": "\"{{goal}}\"",
"unit": "multi-cli-planning",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Multi-perspective analysis comparing different implementation approaches with trade-off analysis"
},
{
"cmd": "workflow-lite-plan",
"route": "lite-execute",
"args": "--in-memory",
"unit": "multi-cli-planning",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute best approach selected from multi-perspective analysis"
},
{
"cmd": "workflow-test-fix",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks for the implementation"
},
{
"cmd": "workflow-test-fix",
"route": "test-cycle-execute",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until pass rate >= 95%"
}
]
}

View File

@@ -1,46 +0,0 @@
{
"name": "rapid-to-issue",
"description": "Bridge lite workflow to issue workflow - convert simple plan to structured issue execution",
"level": 2.5,
"steps": [
{
"cmd": "workflow-lite-plan",
"args": "\"{{goal}}\"",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create lightweight plan for the task"
},
{
"cmd": "issue:convert-to-plan",
"args": "--latest-lite-plan -y",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Convert lite plan to structured issue plan"
},
{
"cmd": "issue:queue",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Build execution queue from converted plan"
},
{
"cmd": "issue:execute",
"args": "--queue auto",
"unit": "rapid-to-issue",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute issues from queue with state tracking"
}
]
}

View File

@@ -1,49 +0,0 @@
{
"name": "rapid",
"description": "Quick implementation - lightweight plan and immediate execution for simple features",
"level": 2,
"steps": [
{
"cmd": "workflow-lite-plan",
"args": "\"{{goal}}\"",
"unit": "quick-implementation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Analyze requirements and create a lightweight implementation plan with key decisions and file structure"
},
{
"cmd": "workflow-lite-plan",
"route": "lite-execute",
"args": "--in-memory",
"unit": "quick-implementation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Use the plan from previous step to implement code. Execute against in-memory state."
},
{
"cmd": "workflow-test-fix",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks from the implementation session"
},
{
"cmd": "workflow-test-fix",
"route": "test-cycle-execute",
"unit": "test-validation",
"optional": true,
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute test-fix cycle until all tests pass"
}
]
}

View File

@@ -1,46 +0,0 @@
{
"name": "review",
"description": "Code review workflow - multi-dimensional review, fix issues, and test validation",
"level": 3,
"steps": [
{
"cmd": "review-cycle",
"route": "session",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Perform comprehensive multi-dimensional code review across correctness, security, performance, maintainability dimensions"
},
{
"cmd": "review-cycle",
"route": "fix",
"unit": "code-review",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Fix all review findings prioritized by severity level (critical -> high -> medium -> low)"
},
{
"cmd": "workflow-test-fix",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Generate test tasks for fixed code with coverage analysis"
},
{
"cmd": "workflow-test-fix",
"route": "test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute iterative test-fix cycle until pass rate >= 95%"
}
]
}

View File

@@ -1,35 +0,0 @@
{
"name": "tdd",
"description": "Test-driven development - write tests first, implement to pass tests, verify Red-Green-Refactor cycles",
"level": 3,
"steps": [
{
"cmd": "workflow-tdd",
"args": "\"{{goal}}\"",
"unit": "tdd-planning-execution",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Create TDD task plan with Red-Green-Refactor cycles, test specifications, and implementation strategy"
},
{
"cmd": "workflow-execute",
"unit": "tdd-planning-execution",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute TDD tasks following Red-Green-Refactor workflow with test-first development"
},
{
"cmd": "workflow-tdd",
"route": "tdd-verify",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Verify TDD cycle compliance, test coverage, and code quality against Red-Green-Refactor principles"
}
]
}

View File

@@ -1,27 +0,0 @@
{
"name": "test-fix",
"description": "Fix failing tests - generate test tasks and execute iterative test-fix cycle",
"level": 2,
"steps": [
{
"cmd": "workflow-test-fix",
"args": "\"{{goal}}\"",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "mainprocess"
},
"contextHint": "Analyze failing tests, generate targeted test tasks with root cause and fix strategy"
},
{
"cmd": "workflow-test-fix",
"route": "test-cycle-execute",
"unit": "test-validation",
"execution": {
"type": "slash-command",
"mode": "async"
},
"contextHint": "Execute iterative test-fix cycle with pass rate tracking until >= 95% pass rate achieved"
}
]
}

View File

@@ -1,281 +0,0 @@
---
name: issue-discover
description: Unified issue discovery and creation. Create issues from GitHub/text, discover issues via multi-perspective analysis, or prompt-driven iterative exploration. Triggers on "issue:new", "issue:discover", "issue:discover-by-prompt", "create issue", "discover issues", "find issues".
allowed-tools: Task, AskUserQuestion, TodoWrite, Read, Write, Edit, Bash, Glob, Grep, Skill, mcp__ace-tool__search_context, mcp__exa__search
---
# Issue Discover
Unified issue discovery and creation skill covering three entry points: manual issue creation, perspective-based discovery, and prompt-driven exploration.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Issue Discover Orchestrator (SKILL.md) │
│ → Action selection → Route to phase → Execute → Summary │
└───────────────┬─────────────────────────────────────────────────┘
├─ AskUserQuestion: Select action
┌───────────┼───────────┬───────────┐
↓ ↓ ↓ │
┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ Phase 1 │ │ Phase 2 │ │ Phase 3 │ │
│ Create │ │Discover │ │Discover │ │
│ New │ │ Multi │ │by Prompt│ │
└─────────┘ └─────────┘ └─────────┘ │
↓ ↓ ↓ │
Issue Discoveries Discoveries │
(registered) (export) (export) │
│ │ │ │
└───────────┴───────────┘ │
↓ │
issue-resolve (plan/queue) │
↓ │
/issue:execute │
```
## Key Design Principles
1. **Action-Driven Routing**: AskUserQuestion selects action, then load single phase
2. **Progressive Phase Loading**: Only read the selected phase document
3. **CLI-First Data Access**: All issue CRUD via `ccw issue` CLI commands
4. **Auto Mode Support**: `-y` flag skips action selection with auto-detection
## Auto Mode
When `--yes` or `-y`: Skip action selection, auto-detect action from input type.
## Usage
```
Skill(skill="issue-discover", args="<input>")
Skill(skill="issue-discover", args="[FLAGS] \"<input>\"")
# Flags
-y, --yes Skip all confirmations (auto mode)
--action <type> Pre-select action: new|discover|discover-by-prompt
# Phase-specific flags
--priority <1-5> Issue priority (new mode)
--perspectives <list> Comma-separated perspectives (discover mode)
--external Enable Exa research (discover mode)
--scope <pattern> File scope (discover/discover-by-prompt mode)
--depth <level> standard|deep (discover-by-prompt mode)
--max-iterations <n> Max exploration iterations (discover-by-prompt mode)
# Examples
Skill(skill="issue-discover", args="https://github.com/org/repo/issues/42") # Create from GitHub
Skill(skill="issue-discover", args="\"Login fails with special chars\"") # Create from text
Skill(skill="issue-discover", args="--action discover src/auth/**") # Multi-perspective discovery
Skill(skill="issue-discover", args="--action discover src/api/** --perspectives=security,bug") # Focused discovery
Skill(skill="issue-discover", args="--action discover-by-prompt \"Check API contracts\"") # Prompt-driven discovery
Skill(skill="issue-discover", args="-y \"auth broken\"") # Auto mode create
```
## Execution Flow
```
Input Parsing:
└─ Parse flags (--action, -y, --perspectives, etc.) and positional args
Action Selection:
├─ --action flag provided → Route directly
├─ Auto-detect from input:
│ ├─ GitHub URL or #number → Create New (Phase 1)
│ ├─ Path pattern (src/**, *.ts) → Discover (Phase 2)
│ ├─ Short text (< 80 chars) → Create New (Phase 1)
│ └─ Long descriptive text (≥ 80 chars) → Discover by Prompt (Phase 3)
└─ Otherwise → AskUserQuestion to select action
Phase Execution (load one phase):
├─ Phase 1: Create New → phases/01-issue-new.md
├─ Phase 2: Discover → phases/02-discover.md
└─ Phase 3: Discover by Prompt → phases/03-discover-by-prompt.md
Post-Phase:
└─ Summary + Next steps recommendation
```
### Phase Reference Documents
| Phase | Document | Load When | Purpose |
|-------|----------|-----------|---------|
| Phase 1 | [phases/01-issue-new.md](phases/01-issue-new.md) | Action = Create New | Create issue from GitHub URL or text description |
| Phase 2 | [phases/02-discover.md](phases/02-discover.md) | Action = Discover | Multi-perspective issue discovery (bug, security, test, etc.) |
| Phase 3 | [phases/03-discover-by-prompt.md](phases/03-discover-by-prompt.md) | Action = Discover by Prompt | Prompt-driven iterative exploration with Gemini planning |
## Core Rules
1. **Action Selection First**: Always determine action before loading any phase
2. **Single Phase Load**: Only read the selected phase document, never load all phases
3. **CLI Data Access**: Use `ccw issue` CLI for all issue operations, NEVER read files directly
4. **Content Preservation**: Each phase contains complete execution logic from original commands
5. **Auto-Detect Input**: Smart input parsing reduces need for explicit --action flag
## Input Processing
### Auto-Detection Logic
```javascript
function detectAction(input, flags) {
// 1. Explicit --action flag
if (flags.action) return flags.action;
const trimmed = input.trim();
// 2. GitHub URL → new
if (trimmed.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/) || trimmed.match(/^#\d+$/)) {
return 'new';
}
  // 3. Path pattern (contains **, starts with src/, or --perspectives given) → discover
if (trimmed.match(/\*\*/) || trimmed.match(/^src\//) || flags.perspectives) {
return 'discover';
}
// 4. Short text (< 80 chars, no special patterns) → new
if (trimmed.length > 0 && trimmed.length < 80 && !trimmed.includes('--')) {
return 'new';
}
// 5. Long descriptive text → discover-by-prompt
if (trimmed.length >= 80) {
return 'discover-by-prompt';
}
// Cannot auto-detect → ask user
return null;
}
```
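The routing above can be exercised against a few sample inputs; a condensed copy of `detectAction` is included so the sketch runs standalone:

```javascript
// Condensed copy of detectAction (from the section above) so this
// sketch is self-contained.
function detectAction(input, flags = {}) {
  if (flags.action) return flags.action;
  const trimmed = input.trim();
  if (trimmed.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/) || trimmed.match(/^#\d+$/)) {
    return 'new';
  }
  if (trimmed.match(/\*\*/) || trimmed.match(/^src\//) || flags.perspectives) {
    return 'discover';
  }
  if (trimmed.length > 0 && trimmed.length < 80 && !trimmed.includes('--')) {
    return 'new';
  }
  if (trimmed.length >= 80) {
    return 'discover-by-prompt';
  }
  return null; // cannot auto-detect → ask user
}

console.log(detectAction('https://github.com/org/repo/issues/42')); // new
console.log(detectAction('src/auth/**'));                           // discover
console.log(detectAction('auth broken'));                           // new
console.log(detectAction('x'.repeat(90)));                          // discover-by-prompt
console.log(detectAction(''));                                      // null
```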
### Action Selection (AskUserQuestion)
```javascript
// When action cannot be auto-detected
const answer = AskUserQuestion({
questions: [{
question: "What would you like to do?",
header: "Action",
multiSelect: false,
options: [
{
label: "Create New Issue (Recommended)",
description: "Create issue from GitHub URL, text description, or structured input"
},
{
label: "Discover Issues",
description: "Multi-perspective discovery: bug, security, test, quality, performance, etc."
},
{
label: "Discover by Prompt",
description: "Describe what to find — Gemini plans the exploration strategy iteratively"
}
]
}]
});
// Route based on selection (match the chosen label by prefix;
// labels may carry suffixes such as "(Recommended)")
const actionMap = {
"Create New Issue": "new",
"Discover Issues": "discover",
"Discover by Prompt": "discover-by-prompt"
};
```
## Data Flow
```
User Input (URL / text / path pattern / descriptive prompt)
[Parse Flags + Auto-Detect Action]
[Action Selection] ← AskUserQuestion (if needed)
[Read Selected Phase Document]
[Execute Phase Logic]
[Summary + Next Steps]
├─ After Create → Suggest issue-resolve (plan solution)
└─ After Discover → Suggest export to issues, then issue-resolve
```
## TodoWrite Pattern
```json
[
{"content": "Select action", "status": "completed"},
{"content": "Execute: [selected phase name]", "status": "in_progress"},
{"content": "Summary & next steps", "status": "pending"}
]
```
Phase-specific sub-tasks are attached when the phase executes (see individual phase docs for details).
## Core Guidelines
**Data Access Principle**: Issue files can grow very large. To avoid context overflow:
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status pending --brief` | `Read('issues.jsonl')` |
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
| Create issue | `echo '...' \| ccw issue create` | Direct file write |
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` directly.
## Error Handling
| Error | Resolution |
|-------|------------|
| No action detected | Show AskUserQuestion with all 3 options |
| Invalid action type | Show available actions, re-prompt |
| Phase execution fails | Report error, suggest manual intervention |
| No files matched (discover) | Check target pattern, verify path exists |
| Gemini planning failed (discover-by-prompt) | Retry with qwen fallback |
## Post-Phase Next Steps
After successful phase execution, recommend next action:
```javascript
// After Create New (issue created)
AskUserQuestion({
questions: [{
question: "Issue created. What next?",
header: "Next",
multiSelect: false,
options: [
{ label: "Plan Solution", description: "Generate solution via issue-resolve" },
{ label: "Create Another", description: "Create more issues" },
{ label: "View Issues", description: "Review all issues" },
{ label: "Done", description: "Exit workflow" }
]
}]
});
// After Discover / Discover by Prompt (discoveries generated)
AskUserQuestion({
questions: [{
question: "Discovery complete. What next?",
header: "Next",
multiSelect: false,
options: [
{ label: "Export to Issues", description: "Convert discoveries to issues" },
{ label: "Plan Solutions", description: "Plan solutions for exported issues via issue-resolve" },
{ label: "Done", description: "Exit workflow" }
]
}]
});
```
## Related Skills & Commands
- `issue-resolve` - Plan solutions, convert artifacts, and form queues (including from brainstorm outputs)
- `issue-manage` - Interactive issue CRUD operations
- `/issue:execute` - Execute queue with DAG-based parallel orchestration
- `ccw issue list` - List all issues
- `ccw issue status <id>` - View issue details

View File

@@ -1,348 +0,0 @@
# Phase 1: Create New Issue
> Source: `commands/issue/new.md`
## Overview
Create structured issue from GitHub URL or text description with clarity-based flow control.
**Core workflow**: Input Analysis → Clarity Detection → Data Extraction → Optional Clarification → GitHub Publishing → Create Issue
**Input sources**:
- **GitHub URL** - `https://github.com/owner/repo/issues/123` or `#123`
- **Structured text** - Text with expected/actual/affects keywords
- **Vague text** - Short description that needs clarification
**Output**:
- **Issue** (GH-xxx or ISS-YYYYMMDD-HHMMSS) - Registered issue ready for planning
## Prerequisites
- `gh` CLI available (for GitHub URLs)
- `ccw issue` CLI available
## Auto Mode
When `--yes` or `-y`: Skip clarification questions, create issue with inferred details.
## Arguments
| Argument | Required | Type | Default | Description |
|----------|----------|------|---------|-------------|
| input | Yes | String | - | GitHub URL, `#number`, or text description |
| --priority | No | Integer | auto | Priority 1-5 (auto-inferred if omitted) |
| -y, --yes | No | Flag | false | Skip all confirmations |
## Issue Structure
```typescript
interface Issue {
id: string; // GH-123 or ISS-YYYYMMDD-HHMMSS
title: string;
status: 'registered' | 'planned' | 'queued' | 'in_progress' | 'completed' | 'failed';
priority: number; // 1 (critical) to 5 (low)
context: string; // Problem description (single source of truth)
source: 'github' | 'text' | 'discovery';
source_url?: string;
labels?: string[];
// GitHub binding (for non-GitHub sources that publish to GitHub)
github_url?: string;
github_number?: number;
// Optional structured fields
expected_behavior?: string;
actual_behavior?: string;
affected_components?: string[];
// Feedback history
feedback?: {
type: 'failure' | 'clarification' | 'rejection';
stage: string;
content: string;
created_at: string;
}[];
bound_solution_id: string | null;
created_at: string;
updated_at: string;
}
```
## Execution Steps
### Step 1.1: Input Analysis & Clarity Detection
```javascript
const input = userInput.trim();
const flags = parseFlags(userInput);
// Detect input type and clarity
const isGitHubUrl = input.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
const isGitHubShort = input.match(/^#(\d+)$/);
const hasStructure = input.match(/(expected|actual|affects|steps):/i);
// Clarity score: 0-3
let clarityScore = 0;
if (isGitHubUrl || isGitHubShort) clarityScore = 3; // GitHub = fully clear
else if (hasStructure) clarityScore = 2; // Structured text = clear
else if (input.length > 50) clarityScore = 1; // Long text = somewhat clear
else clarityScore = 0; // Vague
let issueData = {};
```
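The scoring above can be sketched as a standalone function (assumed behavior, mirroring Step 1.1):

```javascript
// Standalone sketch of the clarity scoring in Step 1.1; the thresholds
// mirror the code above.
function clarityScore(input) {
  const isGitHub = /github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/.test(input) || /^#\d+$/.test(input);
  const hasStructure = /(expected|actual|affects|steps):/i.test(input);
  if (isGitHub) return 3;          // GitHub reference = fully clear
  if (hasStructure) return 2;      // structured text = clear
  if (input.length > 50) return 1; // long free text = somewhat clear
  return 0;                        // vague
}

console.log(clarityScore('#123'));                       // 3
console.log(clarityScore('Expected: 200. Actual: 500')); // 2
console.log(clarityScore('auth broken'));                // 0
```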
### Step 1.2: Data Extraction (GitHub or Text)
```javascript
if (isGitHubUrl || isGitHubShort) {
// GitHub - fetch via gh CLI
const result = Bash(`gh issue view ${extractIssueRef(input)} --json number,title,body,labels,url`);
const gh = JSON.parse(result);
issueData = {
id: `GH-${gh.number}`,
title: gh.title,
source: 'github',
source_url: gh.url,
labels: gh.labels.map(l => l.name),
context: gh.body?.substring(0, 500) || gh.title,
...parseMarkdownBody(gh.body)
};
} else {
// Text description
issueData = {
id: `ISS-${new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14)}`,
source: 'text',
...parseTextDescription(input)
};
}
```
### Step 1.3: Lightweight Context Hint (Conditional)
```javascript
// ACE search ONLY for medium clarity (1-2) AND missing components
// Skip for: GitHub (has context), vague (needs clarification first)
if (clarityScore >= 1 && clarityScore <= 2 && !issueData.affected_components?.length) {
const keywords = extractKeywords(issueData.context);
if (keywords.length >= 2) {
try {
const aceResult = mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: keywords.slice(0, 3).join(' ')
});
issueData.affected_components = aceResult.files?.slice(0, 3) || [];
} catch {
// ACE failure is non-blocking
}
}
}
```
### Step 1.4: Conditional Clarification (Only if Unclear)
```javascript
// ONLY ask questions if clarity is low
if (clarityScore < 2 && (!issueData.context || issueData.context.length < 20)) {
const answer = AskUserQuestion({
questions: [{
question: 'Please describe the issue in more detail:',
header: 'Clarify',
multiSelect: false,
options: [
{ label: 'Provide details', description: 'Describe what, where, and expected behavior' }
]
}]
});
if (answer.customText) {
issueData.context = answer.customText;
issueData.title = answer.customText.split(/[.\n]/)[0].substring(0, 60);
issueData.feedback = [{
type: 'clarification',
stage: 'new',
content: answer.customText,
created_at: new Date().toISOString()
}];
}
}
```
### Step 1.5: GitHub Publishing Decision (Non-GitHub Sources)
```javascript
// For non-GitHub sources, ask if user wants to publish to GitHub
let publishToGitHub = false;
if (issueData.source !== 'github') {
const publishAnswer = AskUserQuestion({
questions: [{
question: 'Would you like to publish this issue to GitHub?',
header: 'Publish',
multiSelect: false,
options: [
{ label: 'Yes, publish to GitHub', description: 'Create issue on GitHub and link it' },
{ label: 'No, keep local only', description: 'Store as local issue without GitHub sync' }
]
}]
});
publishToGitHub = publishAnswer.answers?.['Publish']?.includes('Yes');
}
```
### Step 1.6: Create Issue
**Issue Creation** (via CLI endpoint):
```bash
# Option 1: Pipe input (recommended for complex JSON)
echo '{"title":"...", "context":"...", "priority":3}' | ccw issue create
# Option 2: Heredoc (for multi-line JSON)
ccw issue create << 'EOF'
{"title":"...", "context":"含\"引号\"的内容", "priority":3}
EOF
```
**GitHub Publishing** (if user opted in):
```javascript
// Step 1: Create local issue FIRST
const localIssue = createLocalIssue(issueData); // ccw issue create
// Step 2: Publish to GitHub if requested
if (publishToGitHub) {
const ghResult = Bash(`gh issue create --title "${issueData.title}" --body "${issueData.context}"`);
const ghUrl = ghResult.match(/https:\/\/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/)?.[0];
const ghNumber = parseInt(ghUrl?.match(/\/issues\/(\d+)/)?.[1]);
if (ghNumber) {
Bash(`ccw issue update ${localIssue.id} --github-url "${ghUrl}" --github-number ${ghNumber}`);
}
}
```
**Workflow:**
```
1. Create local issue (ISS-YYYYMMDD-HHMMSS) → stored in .workflow/issues.jsonl
2. If publishToGitHub:
a. gh issue create → returns GitHub URL
b. Update local issue with github_url + github_number binding
3. Both local and GitHub issues exist, linked together
```
## Execution Flow
```
Phase 1: Input Analysis
└─ Detect clarity score (GitHub URL? Structured text? Keywords?)
Phase 2: Data Extraction (branched by clarity)
┌────────────┬─────────────────┬──────────────┐
│ Score 3 │ Score 1-2 │ Score 0 │
│ GitHub │ Text + ACE │ Vague │
├────────────┼─────────────────┼──────────────┤
│ gh CLI │ Parse struct │ AskQuestion │
│ → parse │ + quick hint │ (1 question) │
│ │ (3 files max) │ → feedback │
└────────────┴─────────────────┴──────────────┘
Phase 3: GitHub Publishing Decision (non-GitHub only)
├─ Source = github: Skip (already from GitHub)
└─ Source ≠ github: AskUserQuestion
├─ Yes → publishToGitHub = true
└─ No → publishToGitHub = false
Phase 4: Create Issue
├─ Score ≥ 2: Direct creation
└─ Score < 2: Confirm first → Create
└─ If publishToGitHub: gh issue create → link URL
Note: Deep exploration & lifecycle deferred to /issue:plan
```
## Helper Functions
```javascript
function extractKeywords(text) {
const stopWords = new Set(['the', 'a', 'an', 'is', 'are', 'was', 'were', 'not', 'with']);
return text
.toLowerCase()
.split(/\W+/)
.filter(w => w.length > 3 && !stopWords.has(w))
.slice(0, 5);
}
function parseTextDescription(text) {
const result = { title: '', context: '' };
const sentences = text.split(/\.(?=\s|$)/);
result.title = sentences[0]?.trim().substring(0, 60) || 'Untitled';
result.context = text.substring(0, 500);
const expected = text.match(/expected:?\s*([^.]+)/i);
const actual = text.match(/actual:?\s*([^.]+)/i);
const affects = text.match(/affects?:?\s*([^.]+)/i);
if (expected) result.expected_behavior = expected[1].trim();
if (actual) result.actual_behavior = actual[1].trim();
if (affects) {
result.affected_components = affects[1].split(/[,\s]+/).filter(c => c.includes('/') || c.includes('.'));
}
return result;
}
function parseMarkdownBody(body) {
if (!body) return {};
const result = {};
const problem = body.match(/##?\s*(problem|description)[:\s]*([\s\S]*?)(?=##|$)/i);
const expected = body.match(/##?\s*expected[:\s]*([\s\S]*?)(?=##|$)/i);
const actual = body.match(/##?\s*actual[:\s]*([\s\S]*?)(?=##|$)/i);
if (problem) result.context = problem[2].trim().substring(0, 500);
  if (expected) result.expected_behavior = expected[1].trim();
  if (actual) result.actual_behavior = actual[1].trim();
return result;
}
```
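The text parser above can be checked against the structured-input example from the Usage section; a condensed copy of `parseTextDescription` is included so the example runs standalone:

```javascript
// Condensed copy of parseTextDescription so the example is self-contained.
function parseTextDescription(text) {
  const result = { title: '', context: '' };
  const sentences = text.split(/\.(?=\s|$)/);
  result.title = sentences[0]?.trim().substring(0, 60) || 'Untitled';
  result.context = text.substring(0, 500);
  const expected = text.match(/expected:?\s*([^.]+)/i);
  const actual = text.match(/actual:?\s*([^.]+)/i);
  if (expected) result.expected_behavior = expected[1].trim();
  if (actual) result.actual_behavior = actual[1].trim();
  return result;
}

const parsed = parseTextDescription('Login fails with special chars. Expected: success. Actual: 500');
console.log(parsed.title);             // "Login fails with special chars"
console.log(parsed.expected_behavior); // "success"
console.log(parsed.actual_behavior);   // "500"
```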
## Error Handling
| Error | Message | Resolution |
|-------|---------|------------|
| GitHub fetch failed | gh CLI error | Check gh auth, verify URL |
| Clarity too low | Input unclear | Ask clarification question |
| Issue creation failed | CLI error | Verify ccw issue endpoint |
| GitHub publish failed | gh issue create error | Create local-only, skip GitHub |
## Examples
### Clear Input (No Questions)
```bash
Skill(skill="issue-discover", args="https://github.com/org/repo/issues/42")
# → Fetches, parses, creates immediately
Skill(skill="issue-discover", args="\"Login fails with special chars. Expected: success. Actual: 500\"")
# → Parses structure, creates immediately
```
### Vague Input (1 Question)
```bash
Skill(skill="issue-discover", args="\"auth broken\"")
# → Asks: "Please describe the issue in more detail"
# → User provides details → saved to feedback[]
# → Creates issue
```
## Post-Phase Update
After issue creation:
- Issue created with `status: registered`
- Report: issue ID, title, source, affected components
- Show GitHub URL (if published)
- Recommend next step: `/issue:plan <id>` or `Skill(skill="issue-resolve", args="<id>")`

View File

@@ -1,337 +0,0 @@
# Phase 2: Discover Issues (Multi-Perspective)
> Source: `commands/issue/discover.md`
## Overview
Multi-perspective issue discovery orchestrator that explores code from different angles to identify potential bugs, UX improvements, test gaps, and other actionable items.
**Core workflow**: Initialize → Select Perspectives → Parallel Analysis → Aggregate → Generate Issues → User Action
**Discovery Scope**: Specified modules/files only
**Output Directory**: `.workflow/issues/discoveries/{discovery-id}/`
**Available Perspectives**: bug, ux, test, quality, security, performance, maintainability, best-practices
**Exa Integration**: Auto-enabled for security and best-practices perspectives
**CLI Tools**: Gemini → Qwen → Codex (fallback chain)
## Prerequisites
- Target file/module pattern (e.g., `src/auth/**`)
- `ccw issue` CLI available
## Auto Mode
When `--yes` or `-y`: Auto-select all perspectives, skip confirmations.
## Arguments
| Argument | Required | Type | Default | Description |
|----------|----------|------|---------|-------------|
| target | Yes | String | - | File/module glob pattern (e.g., `src/auth/**`) |
| --perspectives | No | String | interactive | Comma-separated: bug,ux,test,quality,security,performance,maintainability,best-practices |
| --external | No | Flag | false | Enable Exa research for all perspectives |
| -y, --yes | No | Flag | false | Skip all confirmations |
## Perspectives
| Perspective | Focus | Categories | Exa |
|-------------|-------|------------|-----|
| **bug** | Potential Bugs | edge-case, null-check, resource-leak, race-condition, boundary, exception-handling | - |
| **ux** | User Experience | error-message, loading-state, feedback, accessibility, interaction, consistency | - |
| **test** | Test Coverage | missing-test, edge-case-test, integration-gap, coverage-hole, assertion-quality | - |
| **quality** | Code Quality | complexity, duplication, naming, documentation, code-smell, readability | - |
| **security** | Security Issues | injection, auth, encryption, input-validation, data-exposure, access-control | ✓ |
| **performance** | Performance | n-plus-one, memory-usage, caching, algorithm, blocking-operation, resource | - |
| **maintainability** | Maintainability | coupling, cohesion, tech-debt, extensibility, module-boundary, interface-design | - |
| **best-practices** | Best Practices | convention, pattern, framework-usage, anti-pattern, industry-standard | ✓ |
## Execution Steps
### Step 2.1: Discovery & Initialization
```javascript
// Parse target pattern and resolve files
const resolvedFiles = await expandGlobPattern(targetPattern);
if (resolvedFiles.length === 0) {
throw new Error(`No files matched pattern: ${targetPattern}`);
}
// Generate discovery ID
const discoveryId = `DSC-${formatDate(new Date(), 'YYYYMMDD-HHmmss')}`;
// Create output directory
const outputDir = `.workflow/issues/discoveries/${discoveryId}`;
await mkdir(outputDir, { recursive: true });
await mkdir(`${outputDir}/perspectives`, { recursive: true });
// Initialize unified discovery state
await writeJson(`${outputDir}/discovery-state.json`, {
discovery_id: discoveryId,
target_pattern: targetPattern,
phase: "initialization",
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
target: { files_count: { total: resolvedFiles.length }, project: {} },
perspectives: [],
external_research: { enabled: false, completed: false },
results: { total_findings: 0, issues_generated: 0, priority_distribution: {} }
});
```
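The `formatDate` helper used to build the discovery ID is not defined in this document; a minimal sketch, assuming it only needs the tokens in the `'YYYYMMDD-HHmmss'` pattern used above (the name and signature come from the call site, the implementation is an assumption):

```javascript
// Hypothetical formatDate sketch — supports only the tokens this skill uses.
function formatDate(date, pattern) {
  const pad = (n) => String(n).padStart(2, '0');
  const parts = {
    YYYY: String(date.getFullYear()),
    MM: pad(date.getMonth() + 1), // getMonth() is 0-based
    DD: pad(date.getDate()),
    HH: pad(date.getHours()),
    mm: pad(date.getMinutes()),
    ss: pad(date.getSeconds())
  };
  return pattern.replace(/YYYY|MM|DD|HH|mm|ss/g, (token) => parts[token]);
}
```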
### Step 2.2: Interactive Perspective Selection
```javascript
let selectedPerspectives = [];
if (args.perspectives) {
selectedPerspectives = args.perspectives.split(',').map(p => p.trim());
} else {
// Interactive selection via AskUserQuestion
const response = await AskUserQuestion({
questions: [{
question: "Select primary discovery focus:",
header: "Focus",
multiSelect: false,
options: [
{ label: "Bug + Test + Quality", description: "Quick scan: potential bugs, test gaps, code quality (Recommended)" },
{ label: "Security + Performance", description: "System audit: security issues, performance bottlenecks" },
{ label: "Maintainability + Best-practices", description: "Long-term health: coupling, tech debt, conventions" },
{ label: "Full analysis", description: "All 8 perspectives (comprehensive, takes longer)" }
]
}]
});
selectedPerspectives = parseSelectedPerspectives(response);
}
```
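The `parseSelectedPerspectives` helper referenced above is not defined here; a minimal sketch, assuming the AskUserQuestion response maps the question header to the chosen label (both the response shape and the fallback preset are assumptions):

```javascript
// Hypothetical mapping from focus labels to perspective lists.
const FOCUS_PRESETS = {
  "Bug + Test + Quality": ["bug", "test", "quality"],
  "Security + Performance": ["security", "performance"],
  "Maintainability + Best-practices": ["maintainability", "best-practices"],
  "Full analysis": [
    "bug", "ux", "test", "quality",
    "security", "performance", "maintainability", "best-practices"
  ]
};

function parseSelectedPerspectives(response) {
  // Assumed response shape: { "Focus": "<selected label>" }
  const label = response["Focus"];
  // Fall back to the recommended quick-scan preset for unknown labels
  return FOCUS_PRESETS[label] || FOCUS_PRESETS["Bug + Test + Quality"];
}
```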
### Step 2.3: Parallel Perspective Analysis
Launch N agents in parallel (one per selected perspective):
```javascript
const agentPromises = selectedPerspectives.map(perspective =>
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Discover ${perspective} issues`,
prompt: buildPerspectivePrompt(perspective, discoveryId, resolvedFiles, outputDir)
})
);
const results = await Promise.all(agentPromises);
```
### Step 2.4: Aggregation & Prioritization
```javascript
// Load all perspective JSON files written by agents
const allFindings = [];
for (const perspective of selectedPerspectives) {
const jsonPath = `${outputDir}/perspectives/${perspective}.json`;
if (await fileExists(jsonPath)) {
const data = await readJson(jsonPath);
allFindings.push(...data.findings.map(f => ({ ...f, perspective })));
}
}
// Deduplicate and prioritize
const prioritizedFindings = deduplicateAndPrioritize(allFindings);
```
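`deduplicateAndPrioritize` is referenced but not defined; one plausible sketch, assuming findings from different perspectives can collide on the same `file:line:category` key (the dedup key and ranking order are assumptions):

```javascript
// Hypothetical dedup + sort: keep the highest-priority duplicate, then
// order by priority rank and priority_score.
const PRIORITY_RANK = { critical: 0, high: 1, medium: 2, low: 3 };

function deduplicateAndPrioritize(findings) {
  const seen = new Map();
  for (const f of findings) {
    const key = `${f.file}:${f.line}:${f.category}`;
    const existing = seen.get(key);
    if (!existing || PRIORITY_RANK[f.priority] < PRIORITY_RANK[existing.priority]) {
      seen.set(key, f);
    }
  }
  return [...seen.values()].sort((a, b) =>
    PRIORITY_RANK[a.priority] - PRIORITY_RANK[b.priority] ||
    (b.priority_score || 0) - (a.priority_score || 0)
  );
}
```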
### Step 2.5: Issue Generation & Summary
```javascript
// Convert high-priority findings to issues
const issueWorthy = prioritizedFindings.filter(f =>
  f.priority === 'critical' || f.priority === 'high' || f.priority_score >= 0.7
);
const issues = issueWorthy.map(f => ({
  ...f,
  status: 'discovered',
  source: { discovery_id: discoveryId, finding_id: f.id, perspective: f.perspective }
}));
// Write discovery-issues.jsonl
await writeJsonl(`${outputDir}/discovery-issues.jsonl`, issues);
// Generate summary from agent returns
await writeSummaryFromAgentReturns(outputDir, results, prioritizedFindings, issues);
// Update final state
await updateDiscoveryState(outputDir, {
phase: 'complete',
updated_at: new Date().toISOString(),
'results.issues_generated': issues.length
});
```
### Step 2.6: User Action Prompt
```javascript
const hasHighPriority = issues.some(i => i.priority === 'critical' || i.priority === 'high');
const response = await AskUserQuestion({
questions: [{
question: `Discovery complete: ${issues.length} issues generated, ${prioritizedFindings.length} total findings. What next?`,
header: "Next Step",
multiSelect: false,
options: hasHighPriority ? [
{ label: "Export to Issues (Recommended)", description: `${issues.length} high-priority issues found - export to tracker` },
{ label: "Open Dashboard", description: "Review findings in ccw view before exporting" },
{ label: "Skip", description: "Complete discovery without exporting" }
] : [
{ label: "Open Dashboard (Recommended)", description: "Review findings in ccw view to decide which to export" },
{ label: "Export to Issues", description: `Export ${issues.length} issues to tracker` },
{ label: "Skip", description: "Complete discovery without exporting" }
]
}]
});
if (response.startsWith("Export to Issues")) { // label may carry "(Recommended)" suffix
await appendJsonl('.workflow/issues/issues.jsonl', issues);
}
```
## Agent Invocation Template
### Perspective Analysis Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Discover ${perspective} issues`,
prompt: `
## Task Objective
Discover potential ${perspective} issues in specified module files.
## Discovery Context
- Discovery ID: ${discoveryId}
- Perspective: ${perspective}
- Target Pattern: ${targetPattern}
- Resolved Files: ${resolvedFiles.length} files
- Output Directory: ${outputDir}
## MANDATORY FIRST STEPS
1. Read discovery state: ${outputDir}/discovery-state.json
2. Read schema: ~/.ccw/workflows/cli-templates/schemas/discovery-finding-schema.json
3. Analyze target files for ${perspective} concerns
## Output Requirements
**1. Write JSON file**: ${outputDir}/perspectives/${perspective}.json
- Follow discovery-finding-schema.json exactly
- Each finding: id, title, priority, category, description, file, line, snippet, suggested_issue, confidence
**2. Return summary** (DO NOT write report file):
- Total findings, priority breakdown, key issues
## Perspective-Specific Guidance
${getPerspectiveGuidance(perspective)}
## Success Criteria
- [ ] JSON written to ${outputDir}/perspectives/${perspective}.json
- [ ] Summary returned with findings count and key issues
- [ ] Each finding includes actionable suggested_issue
- [ ] Priority uses lowercase enum: critical/high/medium/low
`
})
```
### Exa Research Agent (for security and best-practices)
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `External research for ${perspective} via Exa`,
prompt: `
## Task Objective
Research industry best practices for ${perspective} using Exa search
## Research Steps
1. Read project tech stack: .workflow/project-tech.json
2. Use Exa to search for best practices
3. Synthesize findings relevant to this project
## Output Requirements
**1. Write JSON file**: ${outputDir}/external-research.json
**2. Return summary** (DO NOT write report file)
## Success Criteria
- [ ] JSON written to ${outputDir}/external-research.json
- [ ] Findings are relevant to project's tech stack
`
})
```
## Perspective Guidance Reference
```javascript
function getPerspectiveGuidance(perspective) {
const guidance = {
bug: `Focus: Null checks, edge cases, resource leaks, race conditions, boundary conditions, exception handling
Priority: Critical=data corruption/crash, High=malfunction, Medium=edge case issues, Low=minor`,
ux: `Focus: Error messages, loading states, feedback, accessibility, interaction patterns, form validation
Priority: Critical=inaccessible, High=confusing, Medium=inconsistent, Low=cosmetic`,
test: `Focus: Missing unit tests, edge case coverage, integration gaps, assertion quality, test isolation
Priority: Critical=no security tests, High=no core logic tests, Medium=weak coverage, Low=minor gaps`,
quality: `Focus: Complexity, duplication, naming, documentation, code smells, readability
Priority: Critical=unmaintainable, High=significant issues, Medium=naming/docs, Low=minor refactoring`,
security: `Focus: Input validation, auth/authz, injection, XSS/CSRF, data exposure, access control
Priority: Critical=auth bypass/injection, High=missing authz, Medium=weak validation, Low=headers`,
performance: `Focus: N+1 queries, memory leaks, caching, algorithm efficiency, blocking operations
Priority: Critical=memory leaks, High=N+1/inefficient, Medium=missing cache, Low=minor optimization`,
maintainability: `Focus: Coupling, interface design, tech debt, extensibility, module boundaries, configuration
Priority: Critical=unrelated code changes, High=unclear boundaries, Medium=coupling, Low=refactoring`,
'best-practices': `Focus: Framework conventions, language patterns, anti-patterns, deprecated APIs, coding standards
Priority: Critical=anti-patterns causing bugs, High=convention violations, Medium=style, Low=cosmetic`
};
return guidance[perspective] || 'General code discovery analysis';
}
```
## Output File Structure
```
.workflow/issues/discoveries/
├── index.json # Discovery session index
└── {discovery-id}/
├── discovery-state.json # Unified state
├── perspectives/
│ └── {perspective}.json # Per-perspective findings
├── external-research.json # Exa research results (if enabled)
├── discovery-issues.jsonl # Generated candidate issues
└── summary.md # Summary from agent returns
```
## Schema References
| Schema | Path | Purpose |
|--------|------|---------|
| **Discovery State** | `~/.ccw/workflows/cli-templates/schemas/discovery-state-schema.json` | Session state machine |
| **Discovery Finding** | `~/.ccw/workflows/cli-templates/schemas/discovery-finding-schema.json` | Perspective analysis results |
## Error Handling
| Error | Message | Resolution |
|-------|---------|------------|
| No files matched | Pattern empty | Check target pattern, verify path exists |
| Agent failure | Perspective analysis error | Retry failed perspective, check agent logs |
| No findings | All perspectives clean | Report clean status, no issues to generate |
## Examples
```bash
# Quick scan with default perspectives
Skill(skill="issue-discover", args="--action discover src/auth/**")
# Security-focused audit
Skill(skill="issue-discover", args="--action discover src/payment/** --perspectives=security,bug")
# Full analysis with external research
Skill(skill="issue-discover", args="--action discover src/api/** --external")
```
## Post-Phase Update
After discovery:
- Findings aggregated with priority distribution
- Issue candidates written to discovery-issues.jsonl
- Report: total findings, issues generated, priority breakdown
- Recommend next step: Export to issues → `/issue:plan` or `issue-resolve`

# Phase 3: Discover by Prompt
> Source: `commands/issue/discover-by-prompt.md`
## Overview
Prompt-driven issue discovery with intelligent planning. Instead of fixed perspectives, this command analyzes user intent via Gemini, plans an exploration strategy dynamically, and executes iterative multi-agent exploration with ACE semantic search.
**Core workflow**: Prompt Analysis → ACE Context → Gemini Planning → Iterative Exploration → Cross-Analysis → Issue Generation
**Core Difference from Phase 2 (Discover)**:
- Phase 2: Pre-defined perspectives (bug, security, etc.), parallel execution
- Phase 3: User-driven prompt, Gemini-planned strategy, iterative exploration
## Prerequisites
- User prompt describing what to discover
- `ccw cli` available (for Gemini planning)
- `ccw issue` CLI available
## Auto Mode
When `--yes` or `-y`: Auto-continue all iterations, skip confirmations.
## Arguments
| Argument | Required | Type | Default | Description |
|----------|----------|------|---------|-------------|
| prompt | Yes | String | - | Natural language description of what to find |
| --scope | No | String | `**/*` | File pattern to explore |
| --depth | No | String | `standard` | `standard` (3 iterations) or `deep` (5+ iterations) |
| --max-iterations | No | Integer | 5 | Maximum exploration iterations |
| --plan-only | No | Flag | false | Stop after Gemini planning, show plan |
| -y, --yes | No | Flag | false | Skip all confirmations |
## Use Cases
| Scenario | Example Prompt |
|----------|----------------|
| API Contract | "Check if frontend calls match backend endpoints" |
| Error Handling | "Find inconsistent error handling patterns" |
| Migration Gap | "Compare old auth with new auth implementation" |
| Feature Parity | "Verify mobile has all web features" |
| Schema Drift | "Check if TypeScript types match API responses" |
| Integration | "Find mismatches between service A and service B" |
## Execution Steps
### Step 3.1: Prompt Analysis & Initialization
```javascript
// Parse arguments
const { prompt, scope, depth, maxIterations } = parseArgs(args);
// Generate discovery ID
const discoveryId = `DBP-${formatDate(new Date(), 'YYYYMMDD-HHmmss')}`;
// Create output directory
const outputDir = `.workflow/issues/discoveries/${discoveryId}`;
await mkdir(outputDir, { recursive: true });
await mkdir(`${outputDir}/iterations`, { recursive: true });
// Detect intent type from prompt
const intentType = detectIntent(prompt);
// Returns: 'comparison' | 'search' | 'verification' | 'audit'
// Initialize discovery state
await writeJson(`${outputDir}/discovery-state.json`, {
discovery_id: discoveryId,
type: 'prompt-driven',
prompt: prompt,
intent_type: intentType,
scope: scope || '**/*',
depth: depth || 'standard',
max_iterations: maxIterations || 5,
phase: 'initialization',
created_at: new Date().toISOString(),
iterations: [],
cumulative_findings: [],
comparison_matrix: null
});
```
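`detectIntent` is only described by its return type; a keyword-heuristic sketch consistent with the Use Cases table above — the keyword lists are assumptions, not the actual implementation:

```javascript
// Hypothetical intent detection via keyword prefixes.
// Returns: 'comparison' | 'search' | 'verification' | 'audit'
function detectIntent(prompt) {
  const p = prompt.toLowerCase();
  // Prefix patterns so "compare/comparison", "match/matches/mismatches" all hit
  if (/\b(compar|match|mismatch|versus|vs|drift|parity)/.test(p)) return 'comparison';
  if (/\b(verify|check|confirm|ensure)/.test(p)) return 'verification';
  if (/\b(audit|scan)/.test(p)) return 'audit';
  return 'search';
}
```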
### Step 3.2: ACE Context Gathering
```javascript
// Extract keywords from prompt for semantic search
const keywords = extractKeywords(prompt);
// Use ACE to understand codebase structure
const aceQueries = [
`Project architecture and module structure for ${keywords.join(', ')}`,
`Where are ${keywords[0]} implementations located?`,
`How does ${keywords.slice(0, 2).join(' ')} work in this codebase?`
];
const aceResults = [];
for (const query of aceQueries) {
const result = await mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: query
});
aceResults.push({ query, result });
}
// Build context package for Gemini (kept in memory)
const aceContext = {
prompt_keywords: keywords,
codebase_structure: aceResults[0].result,
relevant_modules: aceResults.slice(1).map(r => r.result),
detected_patterns: extractPatterns(aceResults)
};
```
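`extractKeywords` is undefined here; a minimal stop-word sketch (the stop-word list and keyword limit are assumptions):

```javascript
// Hypothetical keyword extraction: drop short words and common verbs/articles,
// keep the first few salient terms from the discovery prompt.
const STOP_WORDS = new Set([
  "the", "and", "for", "with", "between", "all", "has", "are",
  "find", "check", "compare", "verify", "old", "new"
]);

function extractKeywords(prompt, limit = 5) {
  return prompt
    .toLowerCase()
    .split(/\W+/)
    .filter(w => w.length > 2 && !STOP_WORDS.has(w))
    .slice(0, limit);
}
```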
**ACE Query Strategy by Intent Type**:
| Intent | ACE Queries |
|--------|-------------|
| **comparison** | "frontend API calls", "backend API handlers", "API contract definitions" |
| **search** | "{keyword} implementations", "{keyword} usage patterns" |
| **verification** | "expected behavior for {feature}", "test coverage for {feature}" |
| **audit** | "all {category} patterns", "{category} security concerns" |
### Step 3.3: Gemini Strategy Planning
```javascript
// Build Gemini planning prompt with ACE context
const planningPrompt = `
PURPOSE: Analyze discovery prompt and create exploration strategy based on codebase context
TASK:
• Parse user intent from prompt: "${prompt}"
• Use codebase context to identify specific modules and files to explore
• Create exploration dimensions with precise search targets
• Define comparison matrix structure (if comparison intent)
• Set success criteria and iteration strategy
MODE: analysis
CONTEXT: @${scope || '**/*'} | Discovery type: ${intentType}
## Codebase Context (from ACE semantic search)
${JSON.stringify(aceContext, null, 2)}
EXPECTED: JSON exploration plan:
{
"intent_analysis": { "type": "${intentType}", "primary_question": "...", "sub_questions": [...] },
"dimensions": [{ "name": "...", "description": "...", "search_targets": [...], "focus_areas": [...], "agent_prompt": "..." }],
"comparison_matrix": { "dimension_a": "...", "dimension_b": "...", "comparison_points": [...] },
"success_criteria": [...],
"estimated_iterations": N,
"termination_conditions": [...]
}
CONSTRAINTS: Use ACE context to inform targets | Focus on actionable plan
`;
// Execute Gemini planning and capture output for parsing
const geminiResult = await Bash({
  command: `ccw cli -p "${planningPrompt}" --tool gemini --mode analysis`,
  timeout: 300000
});
// Parse and validate
const explorationPlan = await parseGeminiPlanOutput(geminiResult);
```
**Gemini Planning Output Schema**:
```json
{
"intent_analysis": {
"type": "comparison|search|verification|audit",
"primary_question": "string",
"sub_questions": ["string"]
},
"dimensions": [
{
"name": "frontend",
"description": "Client-side API calls and error handling",
"search_targets": ["src/api/**", "src/hooks/**"],
"focus_areas": ["fetch calls", "error boundaries", "response parsing"],
"agent_prompt": "Explore frontend API consumption patterns..."
}
],
"comparison_matrix": {
"dimension_a": "frontend",
"dimension_b": "backend",
"comparison_points": [
{"aspect": "endpoints", "frontend_check": "fetch URLs", "backend_check": "route paths"},
{"aspect": "methods", "frontend_check": "HTTP methods used", "backend_check": "methods accepted"},
{"aspect": "payloads", "frontend_check": "request body structure", "backend_check": "expected schema"},
{"aspect": "responses", "frontend_check": "response parsing", "backend_check": "response format"},
{"aspect": "errors", "frontend_check": "error handling", "backend_check": "error responses"}
]
},
"success_criteria": ["All API endpoints mapped", "Discrepancies identified with file:line"],
"estimated_iterations": 3,
"termination_conditions": ["All comparison points verified", "Confidence > 0.8"]
}
```
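`parseGeminiPlanOutput` (called in Step 3.3) is not defined; a defensive sketch that extracts the first JSON object from raw CLI output and validates the fields this skill relies on — the validated field set is an assumption:

```javascript
// Hypothetical parser: tolerate surrounding CLI noise, then check required keys.
function parseGeminiPlanOutput(raw) {
  const start = raw.indexOf('{');
  const end = raw.lastIndexOf('}');
  if (start === -1 || end <= start) throw new Error('No JSON in planning output');
  const plan = JSON.parse(raw.slice(start, end + 1));
  for (const field of ['intent_analysis', 'dimensions', 'success_criteria']) {
    if (!(field in plan)) throw new Error(`Planning output missing: ${field}`);
  }
  return plan;
}
```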
### Step 3.4: Iterative Agent Exploration (with ACE)
```javascript
let iteration = 0;
let cumulativeFindings = [];
let sharedContext = { aceDiscoveries: [], crossReferences: [] };
let shouldContinue = true;
while (shouldContinue && iteration < maxIterations) {
iteration++;
const iterationDir = `${outputDir}/iterations/${iteration}`;
await mkdir(iterationDir, { recursive: true });
// ACE-assisted iteration planning
const iterationAceQueries = iteration === 1
? explorationPlan.dimensions.map(d => d.focus_areas[0])
: deriveQueriesFromFindings(cumulativeFindings);
const iterationAceResults = [];
for (const query of iterationAceQueries) {
const result = await mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: `${query} in ${explorationPlan.scope}`
});
iterationAceResults.push({ query, result });
}
sharedContext.aceDiscoveries.push(...iterationAceResults);
// Plan this iteration
const iterationPlan = planIteration(iteration, explorationPlan, cumulativeFindings, iterationAceResults);
// Launch dimension agents with ACE context
const agentPromises = iterationPlan.dimensions.map(dimension =>
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Explore ${dimension.name} (iteration ${iteration})`,
prompt: buildDimensionPromptWithACE(dimension, iteration, cumulativeFindings, iterationAceResults, iterationDir)
})
);
const iterationResults = await Promise.all(agentPromises);
// Collect and analyze iteration findings
const iterationFindings = await collectIterationFindings(iterationDir, iterationPlan.dimensions);
// Cross-reference findings between dimensions
if (iterationPlan.dimensions.length > 1) {
const crossRefs = findCrossReferences(iterationFindings, iterationPlan.dimensions);
sharedContext.crossReferences.push(...crossRefs);
}
cumulativeFindings.push(...iterationFindings);
// Decide whether to continue
const convergenceCheck = checkConvergence(iterationFindings, cumulativeFindings, explorationPlan);
shouldContinue = !convergenceCheck.converged;
// Update state (re-read current state to append this iteration's record)
const state = await readJson(`${outputDir}/discovery-state.json`);
await updateDiscoveryState(outputDir, {
iterations: [...state.iterations, {
number: iteration,
findings_count: iterationFindings.length,
ace_queries: iterationAceQueries.length,
cross_references: sharedContext.crossReferences.length,
new_discoveries: convergenceCheck.newDiscoveries,
confidence: convergenceCheck.confidence,
continued: shouldContinue
}],
cumulative_findings: cumulativeFindings
});
}
```
**Iteration Loop**:
```
┌─────────────────────────────────────────────────────────────┐
│ Iteration Loop │
├─────────────────────────────────────────────────────────────┤
│ 1. Plan: What to explore this iteration │
│ └─ Based on: previous findings + unexplored areas │
│ │
│ 2. Execute: Launch agents for this iteration │
│ └─ Each agent: explore → collect → return summary │
│ │
│ 3. Analyze: Process iteration results │
│ └─ New findings? Gaps? Contradictions? │
│ │
│ 4. Decide: Continue or terminate │
│ └─ Terminate if: max iterations OR convergence OR │
│ high confidence on all questions │
└─────────────────────────────────────────────────────────────┘
```
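The `checkConvergence` helper that drives the termination decision is not defined; a sketch implementing the termination conditions in the loop diagram above (the thresholds are assumptions):

```javascript
// Hypothetical convergence check: stop when an iteration contributes little
// relative to everything found so far. cumulativeFindings already includes
// this iteration's findings when the orchestrator calls this.
function checkConvergence(iterationFindings, cumulativeFindings, plan) {
  const newDiscoveries = iterationFindings.length;
  const noveltyRate = cumulativeFindings.length === 0
    ? 1
    : newDiscoveries / cumulativeFindings.length;
  const confidence = 1 - noveltyRate;
  return {
    // plan.termination_conditions could also be consulted here
    converged: newDiscoveries === 0 || noveltyRate < 0.1 || confidence > 0.8,
    newDiscoveries,
    confidence
  };
}
```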
### Step 3.5: Cross-Analysis & Synthesis
```javascript
// For comparison intent, perform cross-analysis
let comparisonResults = null; // hoisted so Step 3.6 can safely reference it
if (intentType === 'comparison' && explorationPlan.comparison_matrix) {
  comparisonResults = [];
for (const point of explorationPlan.comparison_matrix.comparison_points) {
const dimensionAFindings = cumulativeFindings.filter(f =>
f.related_dimension === explorationPlan.comparison_matrix.dimension_a &&
f.category.includes(point.aspect)
);
const dimensionBFindings = cumulativeFindings.filter(f =>
f.related_dimension === explorationPlan.comparison_matrix.dimension_b &&
f.category.includes(point.aspect)
);
const discrepancies = findDiscrepancies(dimensionAFindings, dimensionBFindings, point);
comparisonResults.push({
aspect: point.aspect,
dimension_a_count: dimensionAFindings.length,
dimension_b_count: dimensionBFindings.length,
discrepancies: discrepancies,
match_rate: calculateMatchRate(dimensionAFindings, dimensionBFindings)
});
}
await writeJson(`${outputDir}/comparison-analysis.json`, {
matrix: explorationPlan.comparison_matrix,
results: comparisonResults,
summary: {
total_discrepancies: comparisonResults.reduce((sum, r) => sum + r.discrepancies.length, 0),
overall_match_rate: average(comparisonResults.map(r => r.match_rate)),
critical_mismatches: comparisonResults.filter(r => r.match_rate < 0.5)
}
});
}
const prioritizedFindings = prioritizeFindings(cumulativeFindings, explorationPlan);
```
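`calculateMatchRate` is referenced but undefined; a minimal sketch, assuming findings are matched across dimensions by a shared key such as title (the key function is an assumption):

```javascript
// Hypothetical match rate: fraction of dimension-A findings that have a
// counterpart in dimension B under the given key function.
function calculateMatchRate(aFindings, bFindings, keyFn = (f) => f.title) {
  if (aFindings.length === 0) return 1; // nothing to match against
  const bKeys = new Set(bFindings.map(keyFn));
  const matched = aFindings.filter(f => bKeys.has(keyFn(f))).length;
  return matched / aFindings.length;
}
```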
### Step 3.6: Issue Generation & Summary
```javascript
// Convert high-confidence findings to issues
const issueWorthy = prioritizedFindings.filter(f =>
f.confidence >= 0.7 || f.priority === 'critical' || f.priority === 'high'
);
const issues = issueWorthy.map(finding => ({
id: `ISS-${discoveryId}-${finding.id}`,
title: finding.title,
description: finding.description,
source: { discovery_id: discoveryId, finding_id: finding.id, dimension: finding.related_dimension },
file: finding.file,
line: finding.line,
priority: finding.priority,
category: finding.category,
confidence: finding.confidence,
status: 'discovered',
created_at: new Date().toISOString()
}));
await writeJsonl(`${outputDir}/discovery-issues.jsonl`, issues);
// Update final state
await updateDiscoveryState(outputDir, {
phase: 'complete',
updated_at: new Date().toISOString(),
results: {
total_iterations: iteration,
total_findings: cumulativeFindings.length,
issues_generated: issues.length,
comparison_match_rate: comparisonResults
? average(comparisonResults.map(r => r.match_rate))
: null
}
});
// Prompt user for next action
await AskUserQuestion({
questions: [{
question: `Discovery complete: ${issues.length} issues from ${cumulativeFindings.length} findings across ${iteration} iterations. What next?`,
header: "Next Step",
multiSelect: false,
options: [
{ label: "Export to Issues (Recommended)", description: `Export ${issues.length} issues for planning` },
{ label: "Review Details", description: "View comparison analysis and iteration details" },
{ label: "Run Deeper", description: "Continue with more iterations" },
{ label: "Skip", description: "Complete without exporting" }
]
}]
});
```
## Dimension Agent Prompt Template
```javascript
function buildDimensionPromptWithACE(dimension, iteration, previousFindings, aceResults, outputDir) {
const relevantAceResults = aceResults.filter(r =>
r.query.includes(dimension.name) || dimension.focus_areas.some(fa => r.query.includes(fa))
);
return `
## Task Objective
Explore ${dimension.name} dimension for issue discovery (Iteration ${iteration})
## Context
- Dimension: ${dimension.name}
- Description: ${dimension.description}
- Search Targets: ${dimension.search_targets.join(', ')}
- Focus Areas: ${dimension.focus_areas.join(', ')}
## ACE Semantic Search Results (Pre-gathered)
${JSON.stringify(relevantAceResults.map(r => ({ query: r.query, files: r.result.slice(0, 5) })), null, 2)}
**Use ACE for deeper exploration**: mcp__ace-tool__search_context available.
${iteration > 1 ? `
## Previous Findings to Build Upon
${summarizePreviousFindings(previousFindings, dimension.name)}
## This Iteration Focus
- Explore areas not yet covered
- Verify/deepen previous findings
- Follow leads from previous discoveries
` : ''}
## MANDATORY FIRST STEPS
1. Read schema: ~/.ccw/workflows/cli-templates/schemas/discovery-finding-schema.json
2. Review ACE results above for starting points
3. Explore files identified by ACE
## Exploration Instructions
${dimension.agent_prompt}
## Output Requirements
**1. Write JSON file**: ${outputDir}/${dimension.name}.json
- findings: [{id, title, category, description, file, line, snippet, confidence, related_dimension}]
- coverage: {files_explored, areas_covered, areas_remaining}
- leads: [{description, suggested_search}]
- ace_queries_used: [{query, result_count}]
**2. Return summary**: Total findings, key discoveries, recommended next areas
`;
}
```
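`summarizePreviousFindings` used in the template is not defined; a compact sketch, assuming findings carry `related_dimension`, `category`, `title`, `file`, and `line` (consistent with the output schema above) and that prompts should stay small:

```javascript
// Hypothetical summarizer: one bullet per prior finding for this dimension,
// capped to keep the agent prompt compact.
function summarizePreviousFindings(findings, dimensionName, limit = 10) {
  return findings
    .filter(f => f.related_dimension === dimensionName)
    .slice(0, limit)
    .map(f => `- [${f.category}] ${f.title} (${f.file}:${f.line})`)
    .join('\n');
}
```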
## Output File Structure
```
.workflow/issues/discoveries/
└── {DBP-YYYYMMDD-HHmmss}/
├── discovery-state.json # Session state with iteration tracking
├── iterations/
│ ├── 1/
│ │ └── {dimension}.json # Dimension findings
│ ├── 2/
│ │ └── {dimension}.json
│ └── ...
├── comparison-analysis.json # Cross-dimension comparison (if applicable)
└── discovery-issues.jsonl # Generated issue candidates
```
## Configuration Options
| Flag | Default | Description |
|------|---------|-------------|
| `--scope` | `**/*` | File pattern to explore |
| `--depth` | `standard` | `standard` (3 iterations) or `deep` (5+ iterations) |
| `--max-iterations` | 5 | Maximum exploration iterations |
| `--tool` | `gemini` | Planning tool (gemini/qwen) |
| `--plan-only` | `false` | Stop after Gemini planning, show plan |
## Schema References
| Schema | Path | Used By |
|--------|------|---------|
| **Discovery State** | `discovery-state-schema.json` | Orchestrator (state tracking) |
| **Discovery Finding** | `discovery-finding-schema.json` | Dimension agents (output) |
| **Exploration Plan** | `exploration-plan-schema.json` | Gemini output validation (memory only) |
## Error Handling
| Error | Message | Resolution |
|-------|---------|------------|
| Gemini planning failed | CLI error | Retry with qwen fallback |
| ACE search failed | No results | Fall back to file glob patterns |
| No findings after iterations | Convergence at 0 | Report clean status |
| Agent timeout | Exploration too large | Narrow scope, reduce iterations |
## Examples
```bash
# Single module deep dive
Skill(skill="issue-discover", args="--action discover-by-prompt \"Find all potential issues in auth\" --scope=src/auth/**")
# API contract comparison
Skill(skill="issue-discover", args="--action discover-by-prompt \"Check if API calls match implementations\" --scope=src/**")
# Plan only mode
Skill(skill="issue-discover", args="--action discover-by-prompt \"Find inconsistent patterns\" --plan-only")
```
## Post-Phase Update
After prompt-driven discovery:
- Findings aggregated across iterations with confidence scores
- Comparison analysis generated (if comparison intent)
- Issue candidates written to discovery-issues.jsonl
- Report: total iterations, findings, issues, match rate
- Recommend next step: Export → issue-resolve (plan solutions)

---
name: issue-resolve
description: Unified issue resolution pipeline with source selection. Plan issues via AI exploration, convert from artifacts, import from brainstorm sessions, or form execution queues. Triggers on "issue:plan", "issue:queue", "issue:convert-to-plan", "issue:from-brainstorm", "resolve issue", "plan issue", "queue issues", "convert plan to issue".
allowed-tools: Task, AskUserQuestion, TodoWrite, Read, Write, Edit, Bash, Glob, Grep, Skill
---
# Issue Resolve
Unified issue resolution pipeline that orchestrates solution creation from multiple sources and queue formation for execution.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Issue Resolve Orchestrator (SKILL.md) │
│ → Source selection → Route to phase → Execute → Summary │
└───────────────┬─────────────────────────────────────────────────┘
├─ AskUserQuestion: Select issue source
┌───────────┼───────────┬───────────┬───────────┐
↓ ↓ ↓ ↓ │
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ Phase 1 │ │ Phase 2 │ │ Phase 3 │ │ Phase 4 │ │
│ Explore │ │ Convert │ │ From │ │ Form │ │
│ & Plan │ │Artifact │ │Brainstorm│ │ Queue │ │
└─────────┘ └─────────┘ └─────────┘ └─────────┘ │
↓ ↓ ↓ ↓ │
Solutions Solutions Issue+Sol Exec Queue │
(bound) (bound) (bound) (ordered) │
┌────────────────────────────────┘
/issue:execute
```
## Key Design Principles
1. **Source-Driven Routing**: AskUserQuestion selects workflow, then load single phase
2. **Progressive Phase Loading**: Only read the selected phase document
3. **CLI-First Data Access**: All issue/solution CRUD via `ccw issue` CLI commands
4. **Auto Mode Support**: `-y` flag skips source selection (defaults to Explore & Plan)
## Auto Mode
When `--yes` or `-y`: Skip source selection, use Explore & Plan for issue IDs, or auto-detect source type for paths.
## Usage
```
Skill(skill="issue-resolve", args="<task description or issue IDs>")
Skill(skill="issue-resolve", args="[FLAGS] \"<input>\"")
# Flags
-y, --yes Skip all confirmations (auto mode)
--source <type> Pre-select source: plan|convert|brainstorm|queue
--batch-size <n> Max issues per agent batch (plan mode, default: 3)
--issue <id> Bind to existing issue (convert mode)
--supplement Add tasks to existing solution (convert mode)
--queues <n> Number of parallel queues (queue mode, default: 1)
# Examples
Skill(skill="issue-resolve", args="GH-123,GH-124") # Explore & plan issues
Skill(skill="issue-resolve", args="--source plan --all-pending") # Plan all pending issues
Skill(skill="issue-resolve", args="--source convert \".workflow/.lite-plan/my-plan\"") # Convert artifact
Skill(skill="issue-resolve", args="--source brainstorm SESSION=\"BS-rate-limiting\"") # From brainstorm
Skill(skill="issue-resolve", args="--source queue") # Form execution queue
Skill(skill="issue-resolve", args="-y GH-123") # Auto mode, plan single issue
```
## Execution Flow
```
Input Parsing:
└─ Parse flags (--source, -y, --issue, etc.) and positional args
Source Selection:
├─ --source flag provided → Route directly
├─ Auto-detect from input:
│ ├─ Issue IDs (GH-xxx, ISS-xxx) → Explore & Plan
│ ├─ SESSION="..." → From Brainstorm
│ ├─ File/folder path → Convert from Artifact
│ └─ No input or --all-pending → Explore & Plan (all pending)
└─ Otherwise → AskUserQuestion to select source
Phase Execution (load one phase):
├─ Phase 1: Explore & Plan → phases/01-issue-plan.md
├─ Phase 2: Convert Artifact → phases/02-convert-to-plan.md
├─ Phase 3: From Brainstorm → phases/03-from-brainstorm.md
└─ Phase 4: Form Queue → phases/04-issue-queue.md
Post-Phase:
└─ Summary + Next steps recommendation
```
### Phase Reference Documents
| Phase | Document | Load When | Purpose |
|-------|----------|-----------|---------|
| Phase 1 | [phases/01-issue-plan.md](phases/01-issue-plan.md) | Source = Explore & Plan | Batch plan issues via issue-plan-agent |
| Phase 2 | [phases/02-convert-to-plan.md](phases/02-convert-to-plan.md) | Source = Convert Artifact | Convert lite-plan/session/markdown to solutions |
| Phase 3 | [phases/03-from-brainstorm.md](phases/03-from-brainstorm.md) | Source = From Brainstorm | Convert brainstorm ideas to issue + solution |
| Phase 4 | [phases/04-issue-queue.md](phases/04-issue-queue.md) | Source = Form Queue | Order bound solutions into execution queue |
## Core Rules
1. **Source Selection First**: Always determine source before loading any phase
2. **Single Phase Load**: Only read the selected phase document, never load all phases
3. **CLI Data Access**: Use `ccw issue` CLI for all issue/solution operations, NEVER read files directly
4. **Content Preservation**: Each phase contains complete execution logic from original commands
5. **Auto-Detect Input**: Smart input parsing reduces need for explicit --source flag
## Input Processing
### Auto-Detection Logic
```javascript
function detectSource(input, flags) {
// 1. Explicit --source flag
if (flags.source) return flags.source;
// 2. Auto-detect from input content
const trimmed = input.trim();
// Issue IDs pattern (GH-xxx, ISS-xxx, comma-separated)
if (trimmed.match(/^[A-Z]+-\d+/i) || trimmed.includes(',')) {
return 'plan';
}
// --all-pending or empty input → plan all pending
if (flags.allPending || trimmed === '') {
return 'plan';
}
// SESSION="..." pattern → brainstorm
if (trimmed.includes('SESSION=')) {
return 'brainstorm';
}
// File/folder path → convert
if (trimmed.match(/\.(md|json)$/) || trimmed.includes('.workflow/')) {
return 'convert';
}
// Cannot auto-detect → ask user
return null;
}
```
### Source Selection (AskUserQuestion)
```javascript
// When source cannot be auto-detected
const answer = AskUserQuestion({
questions: [{
question: "How would you like to create/manage issue solutions?",
header: "Source",
multiSelect: false,
options: [
{
label: "Explore & Plan (Recommended)",
description: "AI explores codebase and generates solutions for issues"
},
{
label: "Convert from Artifact",
description: "Convert existing lite-plan, workflow session, or markdown to solution"
},
{
label: "From Brainstorm",
description: "Convert brainstorm session ideas into issue with solution"
},
{
label: "Form Execution Queue",
description: "Order bound solutions into execution queue for /issue:execute"
}
]
}]
});
// Route based on selection
const sourceMap = {
"Explore & Plan": "plan",
"Convert from Artifact": "convert",
"From Brainstorm": "brainstorm",
"Form Execution Queue": "queue"
};
```
## Data Flow
```
User Input (issue IDs / artifact path / session ID / flags)
[Parse Flags + Auto-Detect Source]
[Source Selection] ← AskUserQuestion (if needed)
[Read Selected Phase Document]
[Execute Phase Logic]
[Summary + Next Steps]
├─ After Plan/Convert/Brainstorm → Suggest /issue:queue or /issue:execute
└─ After Queue → Suggest /issue:execute
```
## TodoWrite Pattern
```json
[
{"content": "Select issue source", "status": "completed"},
{"content": "Execute: [selected phase name]", "status": "in_progress"},
{"content": "Summary & next steps", "status": "pending"}
]
```
Phase-specific sub-tasks are attached when the phase executes (see individual phase docs for details).
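One way to attach those sub-tasks is to expand the generic phase todo in place. A hedged sketch (`attachPhaseTodos` is a hypothetical helper; actual phases define their own sub-task lists):

```javascript
// Hypothetical helper: expand the generic "Execute: ..." todo with phase-specific sub-tasks.
function attachPhaseTodos(todos, phaseName, subTasks) {
  const idx = todos.findIndex(t => t.content.startsWith('Execute:'));
  if (idx === -1) return todos; // no phase slot to expand
  const expanded = subTasks.map(s => ({
    content: `${phaseName}: ${s}`,
    status: 'pending'
  }));
  // Keep the phase todo, insert its sub-tasks right after it.
  return [...todos.slice(0, idx + 1), ...expanded, ...todos.slice(idx + 1)];
}
```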
## Core Guidelines
**Data Access Principle**: Issue and solution files can grow very large. To avoid context overflow:
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status pending --brief` | `Read('issues.jsonl')` |
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
| Bind solution | `ccw issue bind <id> <sol-id>` | Direct file edit |
| Batch solutions | `ccw issue solutions --status planned --brief` | Loop individual queries |
**Output Options**:
- `--brief`: JSON with minimal fields (orchestrator use)
- `--json`: Full JSON (agent use only)
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `solutions/*.jsonl` directly.
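The `--brief` projection can be thought of as keeping only the minimal fields before anything reaches orchestrator context. A sketch (field list taken from the output options above; `toBrief` is illustrative, the real projection happens inside the CLI):

```javascript
// The --brief tier keeps only the fields the orchestrator needs for grouping/routing.
const BRIEF_FIELDS = ['id', 'title', 'status', 'priority', 'tags'];

function toBrief(issue) {
  // Drop large fields (context, feedback, history) before they hit orchestrator context.
  return Object.fromEntries(
    BRIEF_FIELDS.filter(k => k in issue).map(k => [k, issue[k]])
  );
}
```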
## Error Handling
| Error | Resolution |
|-------|------------|
| No source detected | Show AskUserQuestion with all 4 options |
| Invalid source type | Show available sources, re-prompt |
| Phase execution fails | Report error, suggest manual intervention |
| No pending issues (plan) | Suggest `/issue:new` to create issues first |
| No bound solutions (queue) | Suggest running plan/convert/brainstorm first |
## Post-Phase Next Steps
After successful phase execution, recommend next action:
```javascript
// After Plan/Convert/Brainstorm (solutions created)
AskUserQuestion({
questions: [{
question: "Solutions created. What next?",
header: "Next",
multiSelect: false,
options: [
{ label: "Form Queue", description: "Order solutions for execution (/issue:queue)" },
{ label: "Plan More Issues", description: "Continue creating solutions" },
{ label: "View Issues", description: "Review issue details" },
{ label: "Done", description: "Exit workflow" }
]
}]
});
// After Queue (queue formed)
// → Suggest /issue:execute directly
```
## Related Skills & Commands
- `issue-manage` - Interactive issue CRUD operations
- `/issue:new` - Create structured issue from GitHub or text
- `/issue:execute` - Execute queue with DAG-based parallel orchestration
- `ccw issue list` - List all issues
- `ccw issue status <id>` - View issue details

# Phase 1: Explore & Plan
> Source: `commands/issue/plan.md`
## Overview
Batch-plans issue resolution using the **issue-plan-agent**, which combines exploration and planning into a single closed-loop workflow.
**Behavior:**
- Single solution per issue → auto-bind
- Multiple solutions → return for user selection
- Agent handles file generation
## Prerequisites
- Issue IDs provided (comma-separated) or `--all-pending` flag
- `ccw issue` CLI available
- `.workflow/issues/` directory exists or will be created
## Auto Mode
When `--yes` or `-y`: Auto-bind solutions without confirmation, use recommended settings.
## Core Guidelines
**⚠️ Data Access Principle**: Issue and solution files can grow very large. To avoid context overflow:
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status pending --brief` | `Read('issues.jsonl')` |
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
| Bind solution | `ccw issue bind <id> <sol-id>` | Direct file edit |
**Output Options**:
- `--brief`: JSON with minimal fields (id, title, status, priority, tags)
- `--json`: Full JSON (agent use only)
**Orchestration vs Execution**:
- **Command (orchestrator)**: Use `--brief` for minimal context
- **Agent (executor)**: Fetch full details → `ccw issue status <id> --json`
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `solutions/*.jsonl` directly.
## Execution Steps
### Step 1.1: Issue Loading (Brief Info Only)
```javascript
const batchSize = flags.batchSize || 3;
let issues = []; // {id, title, tags} - brief info for grouping only
// Default to --all-pending if no input provided
const useAllPending = flags.allPending || !userInput || userInput.trim() === '';
if (useAllPending) {
// Get pending issues with brief metadata via CLI (brief tier — see Core Guidelines)
const result = Bash(`ccw issue list --status "pending,registered" --brief`).trim();
const parsed = result ? JSON.parse(result) : [];
issues = parsed.map(i => ({ id: i.id, title: i.title || '', tags: i.tags || [] }));
if (issues.length === 0) {
console.log('No pending issues found.');
return;
}
console.log(`Found ${issues.length} pending issues`);
} else {
// Parse comma-separated issue IDs, fetch brief metadata
const ids = userInput.includes(',')
? userInput.split(',').map(s => s.trim())
: [userInput.trim()];
for (const id of ids) {
Bash(`ccw issue init ${id} --title "Issue ${id}" 2>/dev/null || true`);
const info = Bash(`ccw issue status ${id} --json`).trim();
const parsed = info ? JSON.parse(info) : {};
issues.push({ id, title: parsed.title || '', tags: parsed.tags || [] });
}
}
// Note: Agent fetches full issue content via `ccw issue status <id> --json`
// Intelligent grouping: analyze issues by title/tags, group semantically similar ones
// Strategy: same module/component, related bugs, feature clusters
// Constraint: max batchSize issues per batch
// Fallback when no semantic grouping applies: fixed-size chunking
const batches = [];
for (let i = 0; i < issues.length; i += batchSize) {
  batches.push(issues.slice(i, i + batchSize));
}
console.log(`Processing ${issues.length} issues in ${batches.length} batch(es)`);
TodoWrite({
todos: batches.map((_, i) => ({
content: `Plan batch ${i+1}`,
status: 'pending',
activeForm: `Planning batch ${i+1}`
}))
});
```
### Step 1.2: Unified Explore + Plan (issue-plan-agent) - PARALLEL
```javascript
Bash(`mkdir -p .workflow/issues/solutions`);
const pendingSelections = []; // Collect multi-solution issues for user selection
const agentResults = []; // Collect all agent results for conflict aggregation
// Build prompts for all batches
const agentTasks = batches.map((batch, batchIndex) => {
const issueList = batch.map(i => `- ${i.id}: ${i.title}${i.tags.length ? ` [${i.tags.join(', ')}]` : ''}`).join('\n');
const batchIds = batch.map(i => i.id);
const issuePrompt = `
## Plan Issues
**Issues** (grouped by similarity):
${issueList}
**Project Root**: ${process.cwd()}
### Project Context (MANDATORY)
1. Read: .workflow/project-tech.json (technology stack, architecture)
2. Read: .workflow/project-guidelines.json (constraints and conventions)
### Workflow
1. Fetch issue details: ccw issue status <id> --json
2. **Analyze failure history** (if issue.feedback exists):
- Extract failure details from issue.feedback (type='failure', stage='execute')
- Parse error_type, message, task_id, solution_id from content JSON
- Identify failure patterns: repeated errors, root causes, blockers
- **Constraint**: Avoid repeating failed approaches
3. Load project context files
4. Explore codebase (ACE semantic search)
5. Plan solution with tasks (schema: solution-schema.json)
- **If previous solution failed**: Reference failure analysis in solution.approach
- Add explicit verification steps to prevent same failure mode
6. **If github_url exists**: Add final task to comment on GitHub issue
7. Write solution to: .workflow/issues/solutions/{issue-id}.jsonl
8. **CRITICAL - Binding Decision**:
- Single solution → **MUST execute**: ccw issue bind <issue-id> <solution-id>
- Multiple solutions → Return pending_selection only (no bind)
### Failure-Aware Planning Rules
- **Extract failure patterns**: Parse issue.feedback where type='failure' and stage='execute'
- **Identify root causes**: Analyze error_type (test_failure, compilation, timeout, etc.)
- **Design alternative approach**: Create solution that addresses root cause
- **Add prevention steps**: Include explicit verification to catch same error earlier
- **Document lessons**: Reference previous failures in solution.approach
### Rules
- Solution ID format: SOL-{issue-id}-{uid} (uid: 4 random alphanumeric chars, e.g., a7x9)
- Single solution per issue → auto-bind via ccw issue bind
- Multiple solutions → register only, return pending_selection
- Tasks must have quantified convergence.criteria
### Return Summary
{"bound":[{"issue_id":"...","solution_id":"...","task_count":N}],"pending_selection":[{"issue_id":"...","solutions":[{"id":"...","description":"...","task_count":N}]}]}
`;
return { batchIndex, batchIds, issuePrompt, batch };
});
// Launch agents in parallel (max 10 concurrent)
const MAX_PARALLEL = 10;
for (let i = 0; i < agentTasks.length; i += MAX_PARALLEL) {
const chunk = agentTasks.slice(i, i + MAX_PARALLEL);
const taskIds = [];
// Launch chunk in parallel
for (const { batchIndex, batchIds, issuePrompt, batch } of chunk) {
updateTodo(`Plan batch ${batchIndex + 1}`, 'in_progress');
const taskId = Task(
subagent_type="issue-plan-agent",
run_in_background=true,
description=`Explore & plan ${batch.length} issues: ${batchIds.join(', ')}`,
prompt=issuePrompt
);
taskIds.push({ taskId, batchIndex });
}
console.log(`Launched ${taskIds.length} agents (batch ${i/MAX_PARALLEL + 1}/${Math.ceil(agentTasks.length/MAX_PARALLEL)})...`);
// Collect results from this chunk
for (const { taskId, batchIndex } of taskIds) {
const result = TaskOutput(task_id=taskId, block=true);
// Extract JSON from potential markdown code blocks (agent may wrap in ```json...```)
const jsonText = extractJsonFromMarkdown(result);
let summary;
try {
summary = JSON.parse(jsonText);
} catch (e) {
console.log(`⚠ Batch ${batchIndex + 1}: Failed to parse agent result, skipping`);
updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
continue;
}
agentResults.push(summary); // Store for Phase 3 conflict aggregation
// Verify binding for bound issues (agent should have executed bind)
for (const item of summary.bound || []) {
const status = JSON.parse(Bash(`ccw issue status ${item.issue_id} --json`).trim());
if (status.bound_solution_id === item.solution_id) {
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
} else {
// Fallback: agent failed to bind, execute here
Bash(`ccw issue bind ${item.issue_id} ${item.solution_id}`);
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks) [recovered]`);
}
}
// Collect pending selections for Phase 3
for (const pending of summary.pending_selection || []) {
pendingSelections.push(pending);
}
updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
}
}
```
### Step 1.3: Solution Selection (if pending)
```javascript
// Handle multi-solution issues
for (const pending of pendingSelections) {
if (pending.solutions.length === 0) continue;
const options = pending.solutions.slice(0, 4).map(sol => ({
label: `${sol.id} (${sol.task_count} tasks)`,
description: sol.description || sol.approach || 'No description'
}));
const answer = AskUserQuestion({
questions: [{
question: `Issue ${pending.issue_id}: which solution to bind?`,
header: pending.issue_id,
options: options,
multiSelect: false
}]
});
const selected = answer[Object.keys(answer)[0]];
if (!selected || selected === 'Other') continue;
const solId = selected.split(' ')[0];
Bash(`ccw issue bind ${pending.issue_id} ${solId}`);
console.log(`${pending.issue_id}: ${solId} bound`);
}
```
### Step 1.4: Summary
```javascript
// Count planned issues via CLI
const planned = JSON.parse(Bash(`ccw issue list --status planned --brief`) || '[]');
const plannedCount = planned.length;
console.log(`
## Done: ${issues.length} issues → ${plannedCount} planned
Next: \`/issue:queue\` → \`/issue:execute\`
`);
```
## Error Handling
| Error | Resolution |
|-------|------------|
| Issue not found | Auto-create in issues.jsonl |
| ACE search fails | Agent falls back to ripgrep |
| No solutions generated | Display error, suggest manual planning |
| User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order |
## Bash Compatibility
**Avoid**: `$(cmd)`, `$var`, `for` loops — will be escaped incorrectly
**Use**: Simple commands + `&&` chains, quote comma params `"pending,registered"`
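The rule above can be illustrated with a minimal sketch (the commented-out `Bash(...)` calls show the pattern only; `quoteCommaParam` is a hypothetical helper):

```javascript
// Avoid: shell expansion inside the tool call — it gets escaped incorrectly.
// Bash('for id in $(ccw issue list); do ccw issue status $id; done');

// Use: one simple command, && chains, comma params quoted as a single argument.
// Bash('mkdir -p .workflow/issues && ccw issue list --status "pending,registered" --brief');

function quoteCommaParam(values) {
  // Join and quote so the shell sees one argument, not a word list.
  return `"${values.join(',')}"`;
}
```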
## Quality Checklist
Before completing, verify:
- [ ] All input issues have solutions in `solutions/{issue-id}.jsonl`
- [ ] Single solution issues are auto-bound (`bound_solution_id` set)
- [ ] Multi-solution issues returned in `pending_selection` for user choice
- [ ] Each solution has executable tasks with `files`
- [ ] Task convergence criteria are quantified (not vague)
- [ ] Conflicts detected and reported (if multiple issues touch same files)
- [ ] Issue status updated to `planned` after binding
## Post-Phase Update
After plan completion:
- All processed issues should have `status: planned` and `bound_solution_id` set
- Report: total issues processed, solutions bound, pending selections resolved
- Recommend next step: Form execution queue via Phase 4 or `Skill(skill="issue-resolve", args="--source queue")`

# Phase 2: Convert from Artifact
> Source: `commands/issue/convert-to-plan.md`
## Overview
Converts various planning artifact formats into issue workflow solutions with intelligent detection and automatic binding.
**Supported Sources** (auto-detected):
- **lite-plan**: `.workflow/.lite-plan/{slug}/plan.json`
- **workflow-session**: `WFS-xxx` ID or `.workflow/active/{session}/` folder
- **markdown**: Any `.md` file with implementation/task content
- **json**: Direct JSON files matching plan-json-schema
## Prerequisites
- Source artifact path or WFS-xxx ID provided
- `ccw issue` CLI available
- `.workflow/issues/` directory exists or will be created
## Auto Mode
When `--yes` or `-y`: Skip confirmation, auto-create issue and bind solution.
## Command Options
| Option | Description | Default |
|--------|-------------|---------|
| `<SOURCE>` | Planning artifact path or WFS-xxx ID | Required |
| `--issue <id>` | Bind to existing issue instead of creating new | Auto-create |
| `--supplement` | Add tasks to existing solution (requires --issue) | false |
| `-y, --yes` | Skip all confirmations | false |
## Core Data Access Principle
**⚠️ Important**: Use CLI commands for all issue/solution operations.
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| Get issue | `ccw issue status <id> --json` | Read issues.jsonl directly |
| Create issue | `ccw issue init <id> --title "..."` | Write to issues.jsonl |
| Bind solution | `ccw issue bind <id> <sol-id>` | Edit issues.jsonl |
| List solutions | `ccw issue solutions --issue <id> --brief` | Read solutions/*.jsonl |
## Solution Schema Reference
Target format for all extracted data (from solution-schema.json):
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description?: string; // High-level summary
approach?: string; // Technical strategy
tasks: Task[]; // Required: at least 1 task
exploration_context?: object; // Optional: source context
analysis?: { risk, impact, complexity };
score?: number; // 0.0-1.0
is_bound: boolean;
created_at: string;
bound_at?: string;
}
interface Task {
id: string; // T1, T2, T3... (pattern: ^T[0-9]+$)
title: string; // Required: action verb + target
scope: string; // Required: module path or feature area
action: Action; // Required: Create|Update|Implement|...
description?: string;
files?: Array<{path, target?, change?, action?, conflict_risk?}>;
modification_points?: Array<{file, target, change}>; // Legacy, prefer files
implementation: string[]; // Required: step-by-step guide
test?: { unit?, integration?, commands?, coverage_target?, manual_checks? };
convergence: { criteria: string[], verification?: string | string[] }; // Required
acceptance?: { criteria: string[], verification: string[] }; // Legacy, prefer convergence
commit?: { type, scope, message_template, breaking? };
depends_on?: string[];
priority?: string | number; // "critical"|"high"|"medium"|"low" or 1-5
}
type Action = 'Create' | 'Update' | 'Implement' | 'Refactor' | 'Add' | 'Delete' | 'Configure' | 'Test' | 'Fix';
```
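A minimal instance conforming to this schema might look like the following (all values are illustrative, not taken from a real issue):

```javascript
// Illustrative minimal Solution — one task, not yet bound.
const exampleSolution = {
  id: 'SOL-GH-42-a7x9',                      // SOL-{issue-id}-{4-char-uid}
  description: 'Fix login timeout on slow networks',
  approach: 'Raise client timeout and add bounded retry',
  tasks: [{
    id: 'T1',
    title: 'Update HTTP client timeout',
    scope: 'src/net',
    action: 'Update',
    files: [{ path: 'src/net/client.ts', change: 'raise DEFAULT_TIMEOUT_MS to 30000' }],
    implementation: ['Change DEFAULT_TIMEOUT_MS', 'Wrap request in retry helper'],
    convergence: {
      criteria: ['login succeeds under simulated 3G throttling'],
      verification: 'npm test'
    }
  }],
  is_bound: false,
  created_at: new Date().toISOString()
};
```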
## Execution Steps
### Step 2.1: Parse Arguments & Detect Source Type
```javascript
const input = userInput.trim();
const flags = parseFlags(userInput); // --issue, --supplement, -y/--yes
// Extract source path (first non-flag argument)
const source = extractSourceArg(input);
// Detect source type
function detectSourceType(source) {
// Check for WFS-xxx pattern (workflow session ID)
if (source.match(/^WFS-[\w-]+$/)) {
return { type: 'workflow-session-id', path: `.workflow/active/${source}` };
}
// Check if directory
const isDir = Bash(`test -d "${source}" && echo "dir" || echo "file"`).trim() === 'dir';
if (isDir) {
// Check for lite-plan indicator
const hasPlanJson = Bash(`test -f "${source}/plan.json" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasPlanJson) {
return { type: 'lite-plan', path: source };
}
// Check for workflow session indicator
const hasSession = Bash(`test -f "${source}/workflow-session.json" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasSession) {
return { type: 'workflow-session', path: source };
}
}
// Check file extensions
if (source.endsWith('.json')) {
return { type: 'json-file', path: source };
}
if (source.endsWith('.md')) {
return { type: 'markdown-file', path: source };
}
// Check if path exists at all
const exists = Bash(`test -e "${source}" && echo "yes" || echo "no"`).trim() === 'yes';
if (!exists) {
throw new Error(`E001: Source not found: ${source}`);
}
return { type: 'unknown', path: source };
}
const sourceInfo = detectSourceType(source);
if (sourceInfo.type === 'unknown') {
throw new Error(`E002: Unable to detect source format for: ${source}`);
}
console.log(`Detected source type: ${sourceInfo.type}`);
```
### Step 2.2: Extract Data Using Format-Specific Extractor
```javascript
let extracted = { title: '', approach: '', tasks: [], metadata: {} };
switch (sourceInfo.type) {
case 'lite-plan':
extracted = extractFromLitePlan(sourceInfo.path);
break;
case 'workflow-session':
case 'workflow-session-id':
extracted = extractFromWorkflowSession(sourceInfo.path);
break;
case 'markdown-file':
extracted = await extractFromMarkdownAI(sourceInfo.path);
break;
case 'json-file':
extracted = extractFromJsonFile(sourceInfo.path);
break;
}
// Validate extraction
if (!extracted.tasks || extracted.tasks.length === 0) {
throw new Error('E006: No tasks extracted from source');
}
// Ensure task IDs are normalized to T1, T2, T3...
extracted.tasks = normalizeTaskIds(extracted.tasks);
console.log(`Extracted: ${extracted.tasks.length} tasks`);
```
#### Extractor: Lite-Plan
```javascript
function extractFromLitePlan(folderPath) {
const planJson = Read(`${folderPath}/plan.json`);
const plan = JSON.parse(planJson);
return {
title: plan.summary?.split('.')[0]?.trim() || 'Untitled Plan',
description: plan.summary,
approach: plan.approach,
tasks: plan.tasks.map(t => ({
id: t.id,
title: t.title,
scope: t.scope || '',
action: t.action || 'Implement',
description: t.description || t.title,
files: t.files || (t.modification_points || []).map(mp => ({path: mp.file, target: mp.target, change: mp.change})),
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.verification ? {
unit: t.verification.unit_tests,
integration: t.verification.integration_tests,
commands: t.verification.manual_checks
} : {},
convergence: normalizeConvergence(t.acceptance, t.convergence),
depends_on: t.depends_on || [],
priority: t.priority || 'medium'
})),
metadata: {
source_type: 'lite-plan',
source_path: folderPath,
complexity: plan.complexity,
estimated_time: plan.estimated_time,
exploration_angles: plan._metadata?.exploration_angles || [],
original_timestamp: plan._metadata?.timestamp
}
};
}
```
#### Extractor: Workflow Session
```javascript
function extractFromWorkflowSession(sessionPath) {
// Load session metadata
const sessionJson = Read(`${sessionPath}/workflow-session.json`);
const session = JSON.parse(sessionJson);
// Load IMPL_PLAN.md for approach (if exists)
let approach = '';
const implPlanPath = `${sessionPath}/IMPL_PLAN.md`;
const hasImplPlan = Bash(`test -f "${implPlanPath}" && echo "yes" || echo "no"`).trim() === 'yes';
if (hasImplPlan) {
const implPlan = Read(implPlanPath);
// Extract overview/approach section
const overviewMatch = implPlan.match(/##\s*(?:Overview|Approach|Strategy)\s*\n([\s\S]*?)(?=\n##|$)/i);
approach = overviewMatch?.[1]?.trim() || implPlan.split('\n').slice(0, 10).join('\n');
}
// Load all task JSONs from .task folder
const taskFiles = Glob({ pattern: `${sessionPath}/.task/IMPL-*.json` });
const tasks = taskFiles.map(f => {
const taskJson = Read(f);
const task = JSON.parse(taskJson);
return {
id: task.id?.replace(/^IMPL-0*/, 'T') || 'T1', // IMPL-001 → T1
title: task.title,
scope: task.scope || inferScopeFromTask(task),
action: capitalizeAction(task.type) || 'Implement',
description: task.description,
files: task.files || (task.implementation?.modification_points || []).map(mp => ({path: mp.file, target: mp.target, change: mp.change})),
implementation: task.implementation?.steps || [],
test: task.implementation?.test || {},
convergence: normalizeConvergence(task.acceptance_criteria, task.convergence),
commit: task.commit,
depends_on: (task.depends_on || []).map(d => d.replace(/^IMPL-0*/, 'T')),
priority: task.priority || 3
};
});
return {
title: session.name || session.description?.split('.')[0] || 'Workflow Session',
description: session.description || session.name,
approach: approach || session.description,
tasks: tasks,
metadata: {
source_type: 'workflow-session',
source_path: sessionPath,
session_id: session.id,
created_at: session.created_at
}
};
}
function inferScopeFromTask(task) {
// Prefer new files[] field, fall back to legacy modification_points
const filePaths = task.files?.map(f => f.path) ||
task.implementation?.modification_points?.map(m => m.file) || [];
if (filePaths.length) {
const dirs = filePaths.map(f => f.split('/').slice(0, -1).join('/'));
return [...new Set(dirs)][0] || '';
}
return '';
}
function capitalizeAction(type) {
if (!type) return 'Implement';
const map = { feature: 'Implement', bugfix: 'Fix', refactor: 'Refactor', test: 'Test', docs: 'Update' };
return map[type.toLowerCase()] || type.charAt(0).toUpperCase() + type.slice(1);
}
```
#### Extractor: Markdown (AI-Assisted via Gemini)
```javascript
async function extractFromMarkdownAI(filePath) {
const fileContent = Read(filePath);
// Use Gemini CLI for intelligent extraction
const cliPrompt = `PURPOSE: Extract implementation plan from markdown document for issue solution conversion. Must output ONLY valid JSON.
TASK: • Analyze document structure • Identify title/summary • Extract approach/strategy section • Parse tasks from any format (lists, tables, sections, code blocks) • Normalize each task to solution schema
MODE: analysis
CONTEXT: Document content provided below
EXPECTED: Valid JSON object with format:
{
"title": "extracted title",
"approach": "extracted approach/strategy",
"tasks": [
{
"id": "T1",
"title": "task title",
"scope": "module or feature area",
"action": "Implement|Update|Create|Fix|Refactor|Add|Delete|Configure|Test",
"description": "what to do",
"implementation": ["step 1", "step 2"],
"acceptance": ["criteria 1", "criteria 2"]
}
]
}
CONSTRAINTS: Output ONLY valid JSON - no markdown, no explanation | Action must be one of: Create, Update, Implement, Refactor, Add, Delete, Configure, Test, Fix | Tasks must have id, title, scope, action, implementation (array), acceptance (array)
DOCUMENT CONTENT:
${fileContent}`;
// Execute Gemini CLI
const result = Bash(`ccw cli -p '${cliPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis`, { timeout: 120000 });
// Parse JSON from result (may be wrapped in markdown code block)
let jsonText = result.trim();
const jsonMatch = jsonText.match(/```(?:json)?\s*([\s\S]*?)```/);
if (jsonMatch) {
jsonText = jsonMatch[1].trim();
}
try {
const extracted = JSON.parse(jsonText);
// Normalize tasks
const tasks = (extracted.tasks || []).map((t, i) => ({
id: t.id || `T${i + 1}`,
title: t.title || 'Untitled task',
scope: t.scope || '',
action: validateAction(t.action) || 'Implement',
description: t.description || t.title,
files: t.files || (t.modification_points || []).map(mp => ({path: mp.file, target: mp.target, change: mp.change})),
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.test || {},
convergence: normalizeConvergence(t.acceptance, t.convergence),
depends_on: t.depends_on || [],
priority: t.priority || 'medium'
}));
return {
title: extracted.title || 'Extracted Plan',
description: extracted.summary || extracted.title,
approach: extracted.approach || '',
tasks: tasks,
metadata: {
source_type: 'markdown',
source_path: filePath,
extraction_method: 'gemini-ai'
}
};
} catch (e) {
// Provide more context for debugging
throw new Error(`E005: Failed to extract tasks from markdown. Gemini response was not valid JSON. Error: ${e.message}. Response preview: ${jsonText.substring(0, 200)}...`);
}
}
function validateAction(action) {
const validActions = ['Create', 'Update', 'Implement', 'Refactor', 'Add', 'Delete', 'Configure', 'Test', 'Fix'];
if (!action) return null;
const normalized = action.charAt(0).toUpperCase() + action.slice(1).toLowerCase();
return validActions.includes(normalized) ? normalized : null;
}
```
#### Extractor: JSON File
```javascript
function extractFromJsonFile(filePath) {
const content = Read(filePath);
const plan = JSON.parse(content);
// Detect if it's already solution format or plan format
if (plan.tasks && Array.isArray(plan.tasks)) {
// Map tasks to normalized format
const tasks = plan.tasks.map((t, i) => ({
id: t.id || `T${i + 1}`,
title: t.title,
scope: t.scope || '',
action: t.action || 'Implement',
description: t.description || t.title,
files: t.files || (t.modification_points || []).map(mp => ({path: mp.file, target: mp.target, change: mp.change})),
implementation: Array.isArray(t.implementation) ? t.implementation : [t.implementation || ''],
test: t.test || t.verification || {},
convergence: normalizeConvergence(t.acceptance, t.convergence),
depends_on: t.depends_on || [],
priority: t.priority || 'medium'
}));
return {
title: plan.summary?.split('.')[0] || plan.title || 'JSON Plan',
description: plan.summary || plan.description,
approach: plan.approach,
tasks: tasks,
metadata: {
source_type: 'json',
source_path: filePath,
complexity: plan.complexity,
original_metadata: plan._metadata
}
};
}
throw new Error('E002: JSON file does not contain valid plan structure (missing tasks array)');
}
function normalizeConvergence(acceptance, convergence) {
// Prefer new convergence field; fall back to legacy acceptance
const source = convergence || acceptance;
if (!source) return { criteria: [], verification: [] };
if (typeof source === 'object' && source.criteria) return source;
if (Array.isArray(source)) return { criteria: source, verification: [] };
return { criteria: [String(source)], verification: [] };
}
```
### Step 2.3: Normalize Task IDs
```javascript
function normalizeTaskIds(tasks) {
return tasks.map((t, i) => ({
...t,
id: `T${i + 1}`,
// Also normalize depends_on references
depends_on: (t.depends_on || []).map(d => {
// Handle various ID formats: IMPL-001, T1, 1, etc.
const num = d.match(/\d+/)?.[0];
return num ? `T${parseInt(num)}` : d;
})
}));
}
```
### Step 2.4: Resolve Issue (Create or Find)
```javascript
let issueId = flags.issue;
let existingSolution = null;
if (issueId) {
// Validate issue exists
let issueCheck;
try {
issueCheck = Bash(`ccw issue status ${issueId} --json 2>/dev/null`).trim();
if (!issueCheck || issueCheck === '') {
throw new Error('empty response');
}
} catch (e) {
throw new Error(`E003: Issue not found: ${issueId}`);
}
const issue = JSON.parse(issueCheck);
// Check if issue already has bound solution
if (issue.bound_solution_id && !flags.supplement) {
throw new Error(`E004: Issue ${issueId} already has bound solution (${issue.bound_solution_id}). Use --supplement to add tasks.`);
}
// Load existing solution for supplement mode
if (flags.supplement && issue.bound_solution_id) {
try {
const solResult = Bash(`ccw issue solution ${issue.bound_solution_id} --json`).trim();
existingSolution = JSON.parse(solResult);
console.log(`Loaded existing solution with ${existingSolution.tasks.length} tasks`);
} catch (e) {
throw new Error(`Failed to load existing solution: ${e.message}`);
}
}
} else {
// Create new issue via ccw issue create (auto-generates correct ID)
// Smart extraction: title from content, priority from complexity
const title = extracted.title || 'Converted Plan';
const context = extracted.description || extracted.approach || title;
// Auto-determine priority based on complexity
const complexityMap = { high: 2, medium: 3, low: 4 };
const priority = complexityMap[extracted.metadata.complexity?.toLowerCase()] || 3;
try {
// Use heredoc to avoid shell escaping issues
const createResult = Bash(`ccw issue create << 'EOF'
{
"title": ${JSON.stringify(title)},
"context": ${JSON.stringify(context)},
"priority": ${priority},
"source": "converted"
}
EOF`).trim();
// Parse result to get created issue ID
const created = JSON.parse(createResult);
issueId = created.id;
console.log(`Created issue: ${issueId} (priority: ${priority})`);
} catch (e) {
throw new Error(`Failed to create issue: ${e.message}`);
}
}
```
### Step 2.5: Generate Solution
```javascript
// Generate solution ID
function generateSolutionId(issueId) {
const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
let uid = '';
for (let i = 0; i < 4; i++) {
uid += chars[Math.floor(Math.random() * chars.length)];
}
return `SOL-${issueId}-${uid}`;
}
let solution;
const solutionId = generateSolutionId(issueId);
if (flags.supplement && existingSolution) {
// Supplement mode: merge with existing solution
const maxTaskId = Math.max(...existingSolution.tasks.map(t => parseInt(t.id.slice(1))));
const newTasks = extracted.tasks.map((t, i) => ({
...t,
id: `T${maxTaskId + i + 1}`
}));
solution = {
...existingSolution,
tasks: [...existingSolution.tasks, ...newTasks],
approach: existingSolution.approach + '\n\n[Supplementary] ' + (extracted.approach || ''),
updated_at: new Date().toISOString()
};
console.log(`Supplementing: ${existingSolution.tasks.length} existing + ${newTasks.length} new = ${solution.tasks.length} total tasks`);
} else {
// New solution
solution = {
id: solutionId,
description: extracted.description || extracted.title,
approach: extracted.approach,
tasks: extracted.tasks,
exploration_context: extracted.metadata.exploration_angles ? {
exploration_angles: extracted.metadata.exploration_angles
} : undefined,
analysis: {
risk: 'medium',
impact: 'medium',
complexity: extracted.metadata.complexity?.toLowerCase() || 'medium'
},
is_bound: false,
created_at: new Date().toISOString(),
_conversion_metadata: {
source_type: extracted.metadata.source_type,
source_path: extracted.metadata.source_path,
converted_at: new Date().toISOString()
}
};
}
```
### Step 2.6: Confirm & Persist
```javascript
// Display preview
console.log(`
## Conversion Summary
**Issue**: ${issueId}
**Solution**: ${flags.supplement ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Mode**: ${flags.supplement ? 'Supplement' : 'New'}
### Tasks:
${solution.tasks.map(t => `- ${t.id}: ${t.title} [${t.action}]`).join('\n')}
`);
// Confirm if not auto mode
if (!flags.yes && !flags.y) {
const confirm = AskUserQuestion({
questions: [{
question: `Create solution for issue ${issueId} with ${solution.tasks.length} tasks?`,
header: 'Confirm',
multiSelect: false,
options: [
{ label: 'Yes, create solution', description: 'Create and bind solution' },
{ label: 'Cancel', description: 'Abort without changes' }
]
}]
});
if (!confirm.answers?.['Confirm']?.includes('Yes')) {
console.log('Cancelled.');
return;
}
}
// Persist solution (following issue-plan-agent pattern)
Bash(`mkdir -p .workflow/issues/solutions`);
const solutionFile = `.workflow/issues/solutions/${issueId}.jsonl`;
if (flags.supplement) {
// Supplement mode: update existing solution line atomically
try {
const existingContent = Read(solutionFile);
const lines = existingContent.trim().split('\n').filter(l => l);
const updatedLines = lines.map(line => {
const sol = JSON.parse(line);
if (sol.id === existingSolution.id) {
return JSON.stringify(solution);
}
return line;
});
// Atomic write: write entire content at once
Write({ file_path: solutionFile, content: updatedLines.join('\n') + '\n' });
console.log(`✓ Updated solution: ${existingSolution.id}`);
} catch (e) {
throw new Error(`Failed to update solution: ${e.message}`);
}
// Note: No need to rebind - solution is already bound to issue
} else {
// New solution: append to JSONL file (following issue-plan-agent pattern)
try {
const solutionLine = JSON.stringify(solution);
// Read existing content, append new line, write atomically
const existing = Bash(`test -f "${solutionFile}" && cat "${solutionFile}" || echo ""`).trim();
const newContent = existing ? existing + '\n' + solutionLine + '\n' : solutionLine + '\n';
Write({ file_path: solutionFile, content: newContent });
console.log(`✓ Created solution: ${solutionId}`);
} catch (e) {
throw new Error(`Failed to write solution: ${e.message}`);
}
// Bind solution to issue
try {
Bash(`ccw issue bind ${issueId} ${solutionId}`);
console.log(`✓ Bound solution to issue`);
} catch (e) {
// Cleanup: remove solution file on bind failure
try {
Bash(`rm -f "${solutionFile}"`);
} catch (cleanupError) {
// Ignore cleanup errors
}
throw new Error(`Failed to bind solution: ${e.message}`);
}
// Update issue status to planned
try {
Bash(`ccw issue update ${issueId} --status planned`);
} catch (e) {
throw new Error(`Failed to update issue status: ${e.message}`);
}
}
```
### Step 2.7: Summary
```javascript
console.log(`
## Done
**Issue**: ${issueId}
**Solution**: ${flags.supplement ? existingSolution.id : solutionId}
**Tasks**: ${solution.tasks.length}
**Status**: planned
### Next Steps:
- \`/issue:queue\` → Form execution queue
- \`ccw issue status ${issueId}\` → View issue details
- \`ccw issue solution ${flags.supplement ? existingSolution.id : solutionId}\` → View solution
`);
```
## Error Handling
| Error | Code | Resolution |
|-------|------|------------|
| Source not found | E001 | Check path exists |
| Invalid source format | E002 | Verify file contains valid plan structure |
| Issue not found | E003 | Check issue ID or omit --issue to create new |
| Solution already bound | E004 | Use --supplement to add tasks |
| AI extraction failed | E005 | Check markdown structure, try simpler format |
| No tasks extracted | E006 | Source must contain at least 1 task |
## Post-Phase Update
After conversion completion:
- Issue created/updated with `status: planned` and `bound_solution_id` set
- Solution persisted in `.workflow/issues/solutions/{issue-id}.jsonl`
- Report: issue ID, solution ID, task count, mode (new/supplement)
- Recommend next step: Form execution queue via Phase 4 or `Skill(skill="issue-resolve", args="--source queue")`


@@ -1,393 +0,0 @@
# Phase 3: From Brainstorm
> Source: `commands/issue/from-brainstorm.md`
## Overview
Bridge command that converts **brainstorm-with-file** session output into executable **issue + solution** for parallel-dev-cycle consumption.
**Core workflow**: Load Session → Select Idea → Convert to Issue → Generate Solution → Bind & Ready
**Input sources**:
- **synthesis.json** - Main brainstorm results with top_ideas
- **perspectives.json** - Multi-CLI perspectives (creative/pragmatic/systematic)
- **.brainstorming/** - Synthesis artifacts (clarifications, enhancements from role analyses)
**Output**:
- **Issue** (ISS-YYYYMMDD-NNN) - Full context with clarifications
- **Solution** (SOL-{issue-id}-{uid}) - Structured tasks for parallel-dev-cycle
## Prerequisites
- Brainstorm session ID or path (e.g., `SESSION="BS-rate-limiting-2025-01-28"`)
- `synthesis.json` must exist in session directory
- `ccw issue` CLI available
## Auto Mode
When `--yes` or `-y`: Auto-select highest-scored idea, skip confirmations, create issue directly.
## Arguments
| Argument | Required | Type | Default | Description |
|----------|----------|------|---------|-------------|
| SESSION | Yes | String | - | Session ID or path to `.workflow/.brainstorm/BS-xxx` |
| --idea | No | Integer | - | Pre-select idea by index (0-based) |
| --auto | No | Flag | false | Auto-select highest-scored idea |
| -y, --yes | No | Flag | false | Skip all confirmations |
## Data Structures
### Issue Schema (Output)
```typescript
interface Issue {
id: string; // ISS-YYYYMMDD-NNN
title: string; // From idea.title
status: 'planned'; // Auto-set after solution binding
priority: number; // 1-5 (derived from idea.score)
context: string; // Full description with clarifications
source: 'brainstorm';
labels: string[]; // ['brainstorm', perspective, feasibility]
// Structured fields
expected_behavior: string; // From key_strengths
actual_behavior: string; // From main_challenges
affected_components: string[]; // Extracted from description
_brainstorm_metadata: {
session_id: string;
idea_score: number;
novelty: number;
feasibility: string;
clarifications_count: number;
};
}
```
### Solution Schema (Output)
```typescript
interface Solution {
id: string; // SOL-{issue-id}-{4-char-uid}
description: string; // idea.title
approach: string; // idea.description
tasks: Task[]; // Generated from idea.next_steps
analysis: {
risk: 'low' | 'medium' | 'high';
impact: 'low' | 'medium' | 'high';
complexity: 'low' | 'medium' | 'high';
};
is_bound: boolean; // true
created_at: string;
bound_at: string;
}
interface Task {
id: string; // T1, T2, T3...
title: string; // Actionable task name
scope: string; // design|implementation|testing|documentation
action: string; // Implement|Design|Research|Test|Document
description: string;
implementation: string[]; // Step-by-step guide
convergence: {
criteria: string[]; // What defines success
verification: string[]; // How to verify
};
priority: string; // "critical"|"high"|"medium"|"low"
depends_on: string[]; // Task dependencies
}
```
## Execution Steps
### Step 3.1: Session Loading
```
Phase 1: Session Loading
├─ Validate session path
├─ Load synthesis.json (required)
├─ Load perspectives.json (optional - multi-CLI insights)
├─ Load .brainstorming/** (optional - synthesis artifacts)
└─ Validate top_ideas array exists
```
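The loading flow above can be sketched as pure helpers. These names (`resolveSessionDir`, `requireTopIdeas`) are illustrative assumptions, not part of the actual CLI; real file reads go through the `Read`/`Bash` tools:

```javascript
// Sketch of session resolution and validation (hypothetical helpers)
function resolveSessionDir(session) {
  // Accept a bare session ID ("BS-xxx") or an explicit path
  return /[\\/]/.test(session) ? session : `.workflow/.brainstorm/${session}`;
}

function requireTopIdeas(synthesis) {
  // synthesis.json is required; top_ideas must be a non-empty array
  if (!synthesis || !Array.isArray(synthesis.top_ideas) || synthesis.top_ideas.length === 0) {
    throw new Error('synthesis.json missing top_ideas - complete the brainstorm workflow first');
  }
  return synthesis.top_ideas;
}
```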
### Step 3.2: Idea Selection
```
Phase 2: Idea Selection
├─ Auto mode: Select highest scored idea
├─ Pre-selected: Use --idea=N index
└─ Interactive: Display table, ask user to select
```
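The three selection rules can be sketched as one function (a hypothetical helper; the interactive branch returns `null` and is handled by `AskUserQuestion` in the orchestrator):

```javascript
// Sketch of idea selection: explicit index beats auto mode beats interactive
function selectIdea(topIdeas, flags) {
  if (Number.isInteger(flags.idea)) {
    if (flags.idea < 0 || flags.idea >= topIdeas.length) {
      throw new Error(`--idea index out of range (0 to ${topIdeas.length - 1})`);
    }
    return topIdeas[flags.idea];
  }
  if (flags.auto || flags.yes) {
    // Auto mode: pick the highest-scored idea
    return topIdeas.reduce((best, idea) => (idea.score > best.score ? idea : best));
  }
  return null; // interactive selection handled elsewhere
}
```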
### Step 3.3: Enrich Issue Context
```
Phase 3: Enrich Issue Context
├─ Base: idea.description + key_strengths + main_challenges
├─ Add: Relevant clarifications (Requirements/Architecture/Feasibility)
├─ Add: Multi-perspective insights (creative/pragmatic/systematic)
└─ Add: Session metadata (session_id, completion date, clarification count)
```
### Step 3.4: Create Issue
```
Phase 4: Create Issue
├─ Generate issue data with enriched context
├─ Calculate priority from idea.score (0-10 → 1-5)
├─ Create via: ccw issue create (heredoc for JSON)
└─ Returns: ISS-YYYYMMDD-NNN
```
### Step 3.5: Generate Solution Tasks
```
Phase 5: Generate Solution Tasks
├─ T1: Research & Validate (if main_challenges exist)
├─ T2: Design & Specification (if key_strengths exist)
├─ T3+: Implementation tasks (from idea.next_steps)
└─ Each task includes: implementation steps + convergence criteria
```
### Step 3.6: Bind Solution
```
Phase 6: Bind Solution
├─ Write solution to .workflow/issues/solutions/{issue-id}.jsonl
├─ Bind via: ccw issue bind {issue-id} {solution-id}
├─ Update issue status to 'planned'
└─ Returns: SOL-{issue-id}-{uid}
```
### Step 3.7: Next Steps
```
Phase 7: Next Steps
└─ Offer: Form queue | Convert another idea | View details | Done
```
## Context Enrichment Logic
### Base Context (Always Included)
- **Description**: `idea.description`
- **Why This Idea**: `idea.key_strengths[]`
- **Challenges to Address**: `idea.main_challenges[]`
- **Implementation Steps**: `idea.next_steps[]`
### Enhanced Context (If Available)
**From Synthesis Artifacts** (`.brainstorming/*/analysis*.md`):
- Extract clarifications matching categories: Requirements, Architecture, Feasibility
- Format: `**{Category}** ({role}): {question} → {answer}`
- Limit: Top 3 most relevant
**From Perspectives** (`perspectives.json`):
- **Creative**: First insight from `perspectives.creative.insights[0]`
- **Pragmatic**: First blocker from `perspectives.pragmatic.blockers[0]`
- **Systematic**: First pattern from `perspectives.systematic.patterns[0]`
**Session Metadata**:
- Session ID, Topic, Completion Date
- Clarifications count (if synthesis artifacts loaded)
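The enrichment rules above can be sketched as an assembler. The function name and input shapes are assumptions for illustration; field names follow the schemas in this document:

```javascript
// Hypothetical context assembler: base fields always, enhanced fields when available
function buildIssueContext(idea, clarifications = [], perspectives = {}) {
  const parts = [idea.description];
  if (idea.key_strengths?.length)
    parts.push('**Why This Idea**\n' + idea.key_strengths.map(s => `- ${s}`).join('\n'));
  if (idea.main_challenges?.length)
    parts.push('**Challenges to Address**\n' + idea.main_challenges.map(c => `- ${c}`).join('\n'));
  // Limit: top 3 most relevant clarifications
  for (const c of clarifications.slice(0, 3))
    parts.push(`**${c.category}** (${c.role}): ${c.question} → ${c.answer}`);
  if (perspectives.pragmatic?.blockers?.[0])
    parts.push(`**Pragmatic blocker**: ${perspectives.pragmatic.blockers[0]}`);
  return parts.join('\n\n');
}
```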
## Task Generation Strategy
### Task 1: Research & Validation
**Trigger**: `idea.main_challenges.length > 0`
- **Title**: "Research & Validate Approach"
- **Scope**: design
- **Action**: Research
- **Implementation**: Investigate blockers, review similar implementations, validate with team
- **Acceptance**: Blockers documented, feasibility assessed, approach validated
### Task 2: Design & Specification
**Trigger**: `idea.key_strengths.length > 0`
- **Title**: "Design & Create Specification"
- **Scope**: design
- **Action**: Design
- **Implementation**: Create design doc, define success criteria, plan phases
- **Acceptance**: Design complete, metrics defined, plan outlined
### Task 3+: Implementation Tasks
**Trigger**: `idea.next_steps[]`
- **Title**: From `next_steps[i]` (max 60 chars)
- **Scope**: Inferred from keywords (test→testing, api→backend, ui→frontend)
- **Action**: Detected from verbs (implement, create, update, fix, test, document)
- **Implementation**: Execute step + follow design + write tests
- **Acceptance**: Step implemented + tests passing + code reviewed
### Fallback Task
**Trigger**: No tasks generated from above
- **Title**: `idea.title`
- **Scope**: implementation
- **Action**: Implement
- **Generic implementation + convergence criteria**
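The keyword-based inference for Task 3+ can be sketched as follows (the keyword and verb sets are assumptions drawn from the examples above, not an exhaustive list):

```javascript
// Sketch: infer scope from keywords, action from the leading verb
function inferScope(step) {
  const s = step.toLowerCase();
  if (s.includes('test')) return 'testing';
  if (s.includes('api')) return 'backend';
  if (s.includes('ui')) return 'frontend';
  return 'implementation';
}

function detectAction(step) {
  const verbs = ['Implement', 'Create', 'Update', 'Fix', 'Test', 'Document'];
  const first = step.trim().split(/\s+/)[0].toLowerCase();
  return verbs.find(v => v.toLowerCase() === first) || 'Implement';
}
```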
## Priority Calculation
### Issue Priority (1-5)
```
idea.score: 0-10
priority = max(1, min(5, ceil((10 - score) / 2)))
Examples (applying the formula):
  score 8-10 → priority 1 (critical)
  score 6-7  → priority 2 (high)
  score 4-5  → priority 3 (medium)
  score 2-3  → priority 4 (low)
  score 0-1  → priority 5 (lowest)
```
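In JavaScript the mapping above is one expression (a sketch; assumes score is in the 0-10 range):

```javascript
// Map an idea score (0-10, higher is better) to issue priority (1-5, lower is more urgent)
function issuePriority(score) {
  return Math.max(1, Math.min(5, Math.ceil((10 - score) / 2)));
}
```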
### Task Priority (1-5)
- Research task: 1 (highest)
- Design task: 2
- Implementation tasks: 3 by default, decrement for later tasks
- Testing/documentation: 4-5
### Complexity Analysis
```
risk: main_challenges.length > 2 ? 'high' : 'medium'
impact: score >= 8 ? 'high' : score >= 6 ? 'medium' : 'low'
complexity: main_challenges > 3 OR tasks > 5 ? 'high'
            : tasks > 3 ? 'medium' : 'low'
```
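A minimal sketch of these heuristics (hypothetical helper; thresholds exactly as documented above):

```javascript
// Derive the solution's analysis block from idea score, challenges, and task count
function analyzeSolution(idea, taskCount) {
  const challenges = idea.main_challenges?.length ?? 0;
  return {
    risk: challenges > 2 ? 'high' : 'medium',
    impact: idea.score >= 8 ? 'high' : idea.score >= 6 ? 'medium' : 'low',
    complexity: challenges > 3 || taskCount > 5 ? 'high'
              : taskCount > 3 ? 'medium' : 'low'
  };
}
```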
## CLI Integration
### Issue Creation
```bash
# Uses heredoc to avoid shell escaping
ccw issue create << 'EOF'
{
"title": "...",
"context": "...",
"priority": 3,
"source": "brainstorm",
"labels": ["brainstorm", "creative", "feasibility-high"],
...
}
EOF
```
### Solution Binding
```bash
# Append solution to JSONL file
echo '{"id":"SOL-xxx","tasks":[...]}' >> .workflow/issues/solutions/{issue-id}.jsonl
# Bind to issue
ccw issue bind {issue-id} {solution-id}
# Update status
ccw issue update {issue-id} --status planned
```
## Error Handling
| Error | Message | Resolution |
|-------|---------|------------|
| Session not found | synthesis.json missing | Check session ID, list available sessions |
| No ideas | top_ideas array empty | Complete brainstorm workflow first |
| Invalid idea index | Index out of range | Check valid range 0 to N-1 |
| Issue creation failed | ccw issue create error | Verify CLI endpoint working |
| Solution binding failed | Bind error | Check issue exists, retry |
## Examples
### Interactive Mode
```bash
Skill(skill="issue-resolve", args="--source brainstorm SESSION=\"BS-rate-limiting-2025-01-28\"")
# Output:
# | # | Title | Score | Feasibility |
# |---|-------|-------|-------------|
# | 0 | Token Bucket Algorithm | 8.5 | High |
# | 1 | Sliding Window Counter | 7.2 | Medium |
# | 2 | Fixed Window | 6.1 | High |
# User selects: #0
# Result:
# ✓ Created issue: ISS-20250128-001
# ✓ Created solution: SOL-ISS-20250128-001-ab3d
# ✓ Bound solution to issue
# → Next: /issue:queue
```
### Auto Mode
```bash
Skill(skill="issue-resolve", args="--source brainstorm SESSION=\"BS-caching-2025-01-28\" --auto")
# Result:
# Auto-selected: Redis Cache Layer (Score: 9.2/10)
# ✓ Created issue: ISS-20250128-002
# ✓ Solution with 4 tasks
# → Status: planned
```
## Integration Flow
```
brainstorm-with-file
├─ synthesis.json
├─ perspectives.json
└─ .brainstorming/** (optional)
Phase 3: From Brainstorm ◄─── This phase
├─ ISS-YYYYMMDD-NNN (enriched issue)
└─ SOL-{issue-id}-{uid} (structured solution)
Phase 4: Form Queue (or Skill(skill="issue-resolve", args="--source queue"))
/issue:execute
RA → EP → CD → VAS
```
## Session Files Reference
### Input Files
```
.workflow/.brainstorm/BS-{slug}-{date}/
├── synthesis.json # REQUIRED - Top ideas with scores
├── perspectives.json # OPTIONAL - Multi-CLI insights
├── brainstorm.md # Reference only
└── .brainstorming/ # OPTIONAL - Synthesis artifacts
├── system-architect/
│ └── analysis.md # Contains clarifications + enhancements
├── api-designer/
│ └── analysis.md
└── ...
```
### Output Files
```
.workflow/issues/
├── solutions/
│ └── ISS-YYYYMMDD-001.jsonl # Created solution (JSONL)
└── (managed by ccw issue CLI)
```
## Post-Phase Update
After brainstorm conversion:
- Issue created with `status: planned`, enriched context from brainstorm session
- Solution bound with structured tasks derived from idea.next_steps
- Report: issue ID, solution ID, task count, idea score
- Recommend next step: Form execution queue via Phase 4 or `Skill(skill="issue-resolve", args="--source queue")`


@@ -1,389 +0,0 @@
# Phase 4: Form Execution Queue
> Source: `commands/issue/queue.md`
## Overview
Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves **inter-solution** conflicts, and creates an ordered execution queue at **solution level**.
**Design Principle**: Queue items are **solutions**, not individual tasks. Each executor receives a complete solution with all its tasks.
## Prerequisites
- Issues with `status: planned` and `bound_solution_id` exist
- Solutions written in `.workflow/issues/solutions/{issue-id}.jsonl`
- `ccw issue` CLI available
## Auto Mode
When `--yes` or `-y`: Auto-confirm queue formation, use recommended conflict resolutions.
## Core Capabilities
- **Agent-driven**: issue-queue-agent handles all ordering logic
- **Solution-level granularity**: Queue items are solutions, not tasks
- **Conflict clarification**: High-severity conflicts prompt user decision
- Semantic priority calculation per solution (0.0-1.0)
- Parallel/Sequential group assignment for solutions
## Core Guidelines
**⚠️ Data Access Principle**: Issues and queue files can grow very large. To avoid context overflow:
| Operation | Correct | Incorrect |
|-----------|---------|-----------|
| List issues (brief) | `ccw issue list --status planned --brief` | `Read('issues.jsonl')` |
| **Batch solutions (NEW)** | `ccw issue solutions --status planned --brief` | Loop `ccw issue solution <id>` |
| List queue (brief) | `ccw issue queue --brief` | `Read('queues/*.json')` |
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
| Get next item | `ccw issue next --json` | `Read('queues/*.json')` |
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
| Sync from queue | `ccw issue update --from-queue` | Direct file edit |
| Read solution (single) | `ccw issue solution <id> --brief` | `Read('solutions/*.jsonl')` |
**Output Options**:
- `--brief`: JSON with minimal fields (id, status, counts)
- `--json`: Full JSON (agent use only)
**Orchestration vs Execution**:
- **Command (orchestrator)**: Use `--brief` for minimal context
- **Agent (executor)**: Fetch full details → `ccw issue status <id> --json`
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `queues/*.json` directly.
## Flags
| Flag | Description | Default |
|------|-------------|---------|
| `--queues <n>` | Number of parallel queues | 1 |
| `--issue <id>` | Form queue for specific issue only | All planned |
| `--append <id>` | Append issue to active queue (don't create new) | - |
| `--force` | Skip active queue check, always create new queue | false |
## CLI Subcommands Reference
```bash
ccw issue queue list List all queues with status
ccw issue queue add <issue-id> Add issue to queue (interactive if active queue exists)
ccw issue queue add <issue-id> -f Add to new queue without prompt (force)
ccw issue queue merge <src> --queue <target> Merge source queue into target queue
ccw issue queue switch <queue-id> Switch active queue
ccw issue queue archive Archive current queue
ccw issue queue delete <queue-id> Delete queue from history
```
## Execution Steps
### Step 4.1: Solution Loading & Distribution
**Data Loading:**
- Use `ccw issue solutions --status planned --brief` to get all planned issues with solutions in **one call**
- Returns: Array of `{ issue_id, solution_id, is_bound, task_count, files_touched[], priority }`
- If no bound solutions found → display message, suggest running plan/convert/brainstorm first
**Build Solution Objects:**
```javascript
// Single CLI call replaces N individual queries
const result = Bash(`ccw issue solutions --status planned --brief`).trim();
const solutions = result ? JSON.parse(result) : [];
if (solutions.length === 0) {
console.log('No bound solutions found. Run /issue:plan first.');
return;
}
// solutions already in correct format:
// { issue_id, solution_id, is_bound, task_count, files_touched[], priority }
```
**Multi-Queue Distribution** (if `--queues > 1`):
- Use `files_touched` from brief output for partitioning
- Group solutions with overlapping files into same queue
**Output:** Array of solution objects (or N arrays if multi-queue)
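The distribution step can be sketched greedily. This is an illustrative assumption, not the agent's actual algorithm; a production version would use union-find so transitively-overlapping solutions can never land in different queues:

```javascript
// Greedy sketch of file-overlap partitioning (assumes each solution carries files_touched[])
function partitionByFileOverlap(solutions, numQueues) {
  const queues = Array.from({ length: numQueues }, () => ({ files: new Set(), items: [] }));
  for (const sol of solutions) {
    // Prefer a queue that already owns one of this solution's files
    let target = queues.find(q => sol.files_touched.some(f => q.files.has(f)));
    // Otherwise balance load: pick the smallest queue
    if (!target) target = queues.reduce((a, b) => (a.items.length <= b.items.length ? a : b));
    target.items.push(sol);
    sol.files_touched.forEach(f => target.files.add(f));
  }
  return queues.map(q => q.items);
}
```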
### Step 4.2: Agent-Driven Queue Formation
**Generate Queue IDs** (command layer, pass to agent):
```javascript
const timestamp = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
const numQueues = args.queues || 1;
const queueIds = numQueues === 1
? [`QUE-${timestamp}`]
: Array.from({length: numQueues}, (_, i) => `QUE-${timestamp}-${i + 1}`);
```
**Agent Prompt** (same for each queue, with assigned solutions):
```
## Order Solutions into Execution Queue
**Queue ID**: ${queueId}
**Solutions**: ${solutions.length} from ${issues.length} issues
**Project Root**: ${cwd}
**Queue Index**: ${queueIndex} of ${numQueues}
### Input
${JSON.stringify(solutions)}
// Each object: { issue_id, solution_id, task_count, files_touched[], priority }
### Workflow
Step 1: Build dependency graph from solutions (nodes=solutions, edges=file conflicts via files_touched)
Step 2: Use Gemini CLI for conflict analysis (5 types: file, API, data, dependency, architecture)
Step 3: For high-severity conflicts without clear resolution → add to `clarifications`
Step 4: Calculate semantic priority (base from issue priority + task_count boost)
Step 5: Assign execution groups: P* (parallel, no overlaps) / S* (sequential, shared files)
Step 6: Write queue JSON + update index
### Output Requirements
**Write files** (exactly 2):
- `.workflow/issues/queues/${queueId}.json` - Full queue with solutions, conflicts, groups
- `.workflow/issues/queues/index.json` - Update with new queue entry
**Return JSON**:
\`\`\`json
{
"queue_id": "${queueId}",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{"id": "P1", "type": "parallel", "count": N}],
"issues_queued": ["ISS-xxx"],
"clarifications": [{"conflict_id": "CFT-1", "question": "...", "options": [...]}]
}
\`\`\`
### Rules
- Solution granularity (NOT individual tasks)
- Queue Item ID format: S-1, S-2, S-3, ...
- Use provided Queue ID (do NOT generate new)
- `clarifications` only present if high-severity unresolved conflicts exist
- Use `files_touched` from input (already extracted by orchestrator)
### Done Criteria
- [ ] Queue JSON written with all solutions ordered
- [ ] Index updated with active_queue_id
- [ ] No circular dependencies
- [ ] Parallel groups have no file overlaps
- [ ] Return JSON matches required shape
```
**Launch Agents** (parallel if multi-queue):
```javascript
const numQueues = args.queues || 1;
if (numQueues === 1) {
// Single queue: single agent call
const result = Task(
subagent_type="issue-queue-agent",
prompt=buildPrompt(queueIds[0], solutions),
description=`Order ${solutions.length} solutions`
);
} else {
// Multi-queue: parallel agent calls (single message with N Task calls)
const agentPromises = solutionGroups.map((group, i) =>
Task(
subagent_type="issue-queue-agent",
prompt=buildPrompt(queueIds[i], group, i + 1, numQueues),
description=`Queue ${i + 1}/${numQueues}: ${group.length} solutions`
)
);
// All agents launched in parallel via single message with multiple Task tool calls
}
```
**Multi-Queue Index Update:**
- First queue sets `active_queue_id`
- All queues added to `queues` array with `queue_group` field linking them
### Step 4.3: Conflict Clarification
**Collect Agent Results** (multi-queue):
```javascript
// Collect clarifications from all agents
const allClarifications = results.flatMap((r, i) =>
(r.clarifications || []).map(c => ({ ...c, queue_id: queueIds[i], agent_id: agentIds[i] }))
);
```
**Check Agent Return:**
- Parse agent result JSON (or all results if multi-queue)
- If any `clarifications` array exists and non-empty → user decision required
**Clarification Flow:**
```javascript
if (allClarifications.length > 0) {
for (const clarification of allClarifications) {
// Present to user via AskUserQuestion
const answer = AskUserQuestion({
questions: [{
question: `[${clarification.queue_id}] ${clarification.question}`,
header: clarification.conflict_id,
options: clarification.options,
multiSelect: false
}]
});
// Resume respective agent with user decision
Task(
subagent_type="issue-queue-agent",
resume=clarification.agent_id,
prompt=`Conflict ${clarification.conflict_id} resolved: ${answer.selected}`
);
}
}
```
### Step 4.4: Status Update & Summary
**Status Update** (MUST use CLI command, NOT direct file operations):
```bash
# Option 1: Batch update from queue (recommended)
ccw issue update --from-queue [queue-id] --json
ccw issue update --from-queue --json # Use active queue
ccw issue update --from-queue QUE-xxx --json # Use specific queue
# Option 2: Individual issue update
ccw issue update <issue-id> --status queued
```
**⚠️ IMPORTANT**: Do NOT directly modify `issues.jsonl`. Always use CLI command to ensure proper validation and history tracking.
**Output** (JSON):
```json
{
"success": true,
"queue_id": "QUE-xxx",
"queued": ["ISS-001", "ISS-002"],
"queued_count": 2,
"unplanned": ["ISS-003"],
"unplanned_count": 1
}
```
**Behavior:**
- Updates issues in queue to `status: 'queued'` (skips already queued/executing/completed)
- Identifies planned issues with `bound_solution_id` NOT in queue → `unplanned` array
- Optional `queue-id`: defaults to active queue if omitted
**Summary Output:**
- Display queue ID, solution count, task count
- Show unplanned issues (planned but NOT in queue)
- Show next step: `/issue:execute`
### Step 4.5: Active Queue Check & Decision
**After agent completes, check for active queue:**
```bash
ccw issue queue list --brief
```
**Decision:**
- If `active_queue_id` is null → `ccw issue queue switch <new-queue-id>` (activate new queue)
- If active queue exists → Use **AskUserQuestion** to prompt user
**AskUserQuestion:**
```javascript
AskUserQuestion({
questions: [{
question: "Active queue exists. How would you like to proceed?",
header: "Queue Action",
options: [
{ label: "Merge into existing queue", description: "Add new items to active queue, delete new queue" },
{ label: "Use new queue", description: "Switch to new queue, keep existing in history" },
{ label: "Cancel", description: "Delete new queue, keep existing active" }
],
multiSelect: false
}]
})
```
**Action Commands:**
| User Choice | Commands |
|-------------|----------|
| **Merge into existing** | `ccw issue queue merge <new-queue-id> --queue <active-queue-id>` then `ccw issue queue delete <new-queue-id>` |
| **Use new queue** | `ccw issue queue switch <new-queue-id>` |
| **Cancel** | `ccw issue queue delete <new-queue-id>` |
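The table above can be sketched as a dispatcher (a hypothetical helper; only the CLI subcommands already documented in this phase are used):

```javascript
// Map the user's AskUserQuestion choice to the ccw commands from the table above
function queueActionCommands(choice, newQueueId, activeQueueId) {
  switch (choice) {
    case 'Merge into existing queue':
      return [
        `ccw issue queue merge ${newQueueId} --queue ${activeQueueId}`,
        `ccw issue queue delete ${newQueueId}`
      ];
    case 'Use new queue':
      return [`ccw issue queue switch ${newQueueId}`];
    default: // Cancel
      return [`ccw issue queue delete ${newQueueId}`];
  }
}
```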
## Storage Structure (Queue History)
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── queues/ # Queue history directory
│ ├── index.json # Queue index (active + history)
│ ├── {queue-id}.json # Individual queue files
│ └── ...
└── solutions/
├── {issue-id}.jsonl # Solutions for issue
└── ...
```
### Queue Index Schema
```json
{
"active_queue_id": "QUE-20251227-143000",
"active_queue_group": "QGR-20251227-143000",
"queues": [
{
"id": "QUE-20251227-143000-1",
"queue_group": "QGR-20251227-143000",
"queue_index": 1,
"total_queues": 3,
"status": "active",
"issue_ids": ["ISS-xxx", "ISS-yyy"],
"total_solutions": 3,
"completed_solutions": 1,
"created_at": "2025-12-27T14:30:00Z"
}
]
}
```
**Multi-Queue Fields:**
- `queue_group`: Links queues created in same batch (format: `QGR-{timestamp}`)
- `queue_index`: Position in group (1-based)
- `total_queues`: Total queues in group
- `active_queue_group`: Current active group (for multi-queue execution)
**Note**: Queue file schema is produced by `issue-queue-agent`. See agent documentation for details.
## Error Handling
| Error | Resolution |
|-------|------------|
| No bound solutions | Display message, suggest phases 1-3 (plan/convert/brainstorm) |
| Circular dependency | List cycles, abort queue formation |
| High-severity conflict | Return `clarifications`, prompt user decision |
| User cancels clarification | Abort queue formation |
| **index.json not updated** | Auto-fix: Set active_queue_id to new queue |
| **Queue file missing solutions** | Abort with error, agent must regenerate |
| **User cancels queue add** | Display message, return without changes |
| **Merge with empty source** | Skip merge, display warning |
| **All items duplicate** | Skip merge, display "All items already exist" |
## Quality Checklist
Before completing, verify:
- [ ] All planned issues with `bound_solution_id` are included
- [ ] Queue JSON written to `queues/{queue-id}.json` (N files if multi-queue)
- [ ] Index updated in `queues/index.json` with `active_queue_id`
- [ ] Multi-queue: All queues share same `queue_group`
- [ ] No circular dependencies in solution DAG
- [ ] All conflicts resolved (auto or via user clarification)
- [ ] Parallel groups have no file overlaps
- [ ] Cross-queue: No file overlaps between queues
- [ ] Issue statuses updated to `queued`
## Post-Phase Update
After queue formation:
- All planned issues updated to `status: queued`
- Queue files written and index updated
- Report: queue ID(s), solution count, task count, execution groups
- Recommend next step: `/issue:execute` to begin execution


@@ -1,162 +0,0 @@
---
name: project-analyze
description: Multi-phase iterative project analysis with Mermaid diagrams. Generates architecture reports, design reports, method analysis reports. Use when analyzing codebases, understanding project structure, reviewing architecture, exploring design patterns, or documenting system components. Triggers on "analyze project", "architecture report", "design analysis", "code structure", "system overview".
allowed-tools: Task, AskUserQuestion, Read, Bash, Glob, Grep, Write
---
# Project Analysis Skill
Generate comprehensive project analysis reports through multi-phase iterative workflow.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Context-Optimized Architecture │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Requirements → analysis-config.json │
│ ↓ │
│ Phase 2: Exploration → initial exploration, decide scope │
│ ↓ │
│ Phase 3: Parallel Agents → sections/section-*.md (write MD directly) │
│ ↓ returns brief JSON │
│ Phase 3.5: Consolidation → consolidation-summary.md │
│ Agent ↓ returns quality score + issue list │
│ ↓ │
│ Phase 4: Assembly → merged MD + quality appendix │
│ ↓ │
│ Phase 5: Refinement → final report │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Key Design Principles
1. **Agents write MD directly**: avoids the context overhead of a JSON → MD conversion step
2. **Brief returns**: agents return only a path + summary, never the full content
3. **Consolidation agent**: a dedicated agent detects cross-section issues and scores quality
4. **Merge by reference**: Phase 4 merges by reading files, not by passing content through context
5. **Paragraph-style prose**: no bullet-list dumps; progressive structure with objective, academic phrasing
## Execution Flow
```
┌─────────────────────────────────────────────────────────────────┐
│ Phase 1: Requirements Discovery │
│ → Read: phases/01-requirements-discovery.md │
│ → Collect: report type, depth level, scope, focus areas │
│ → Output: analysis-config.json │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2: Project Exploration │
│ → Read: phases/02-project-exploration.md │
│ → Launch: parallel exploration agents │
│ → Output: exploration context for Phase 3 │
├─────────────────────────────────────────────────────────────────┤
│ Phase 3: Deep Analysis (Parallel Agents) │
│ → Read: phases/03-deep-analysis.md │
│ → Reference: specs/quality-standards.md │
│ → Each Agent: analyze code → write sections/section-*.md directly │
│ → Return: {"status", "output_file", "summary", "cross_notes"} │
├─────────────────────────────────────────────────────────────────┤
│ Phase 3.5: Consolidation (New!) │
│ → Read: phases/03.5-consolidation.md │
│ → Input: brief agent returns + cross_module_notes │
│ → Analyze: consistency / completeness / cross-links / quality │
│ → Output: consolidation-summary.md │
│ → Return: {"quality_score", "issues", "stats"} │
├─────────────────────────────────────────────────────────────────┤
│ Phase 4: Report Generation │
│ → Read: phases/04-report-generation.md │
│ → Check: if errors exist, prompt the user to resolve them │
│ → Merge: Executive Summary + sections/*.md + quality appendix │
│ → Output: {TYPE}-REPORT.md │
├─────────────────────────────────────────────────────────────────┤
│ Phase 5: Iterative Refinement │
│ → Read: phases/05-iterative-refinement.md │
│ → Reference: specs/quality-standards.md │
│ → Loop: find issues → ask user → fix → re-check │
└─────────────────────────────────────────────────────────────────┘
```
## Report Types
| Type | Output | Agents | Focus |
|------|--------|--------|-------|
| `architecture` | ARCHITECTURE-REPORT.md | 5 | System structure, modules, dependencies |
| `design` | DESIGN-REPORT.md | 4 | Patterns, classes, interfaces |
| `methods` | METHODS-REPORT.md | 4 | Algorithms, critical paths, APIs |
| `comprehensive` | COMPREHENSIVE-REPORT.md | All | All above combined |
## Agent Configuration by Report Type
### Architecture Report
| Agent | Output File | Section |
|-------|-------------|---------|
| overview | section-overview.md | System Overview |
| layers | section-layers.md | Layer Analysis |
| dependencies | section-dependencies.md | Module Dependencies |
| dataflow | section-dataflow.md | Data Flow |
| entrypoints | section-entrypoints.md | Entry Points |
### Design Report
| Agent | Output File | Section |
|-------|-------------|---------|
| patterns | section-patterns.md | Design Patterns |
| classes | section-classes.md | Class Relationships |
| interfaces | section-interfaces.md | Interface Contracts |
| state | section-state.md | State Management |
### Methods Report
| Agent | Output File | Section |
|-------|-------------|---------|
| algorithms | section-algorithms.md | Core Algorithms |
| paths | section-paths.md | Critical Code Paths |
| apis | section-apis.md | Public API Reference |
| logic | section-logic.md | Complex Logic |
## Directory Setup
```javascript
// Generate a timestamped directory name
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const dir = `.workflow/.scratchpad/analyze-${timestamp}`;
// Windows (cmd)
Bash(`mkdir "${dir}\\sections"`);
Bash(`mkdir "${dir}\\iterations"`);
// Unix/macOS
// Bash(`mkdir -p "${dir}/sections" "${dir}/iterations"`);
```
## Output Structure
```
.workflow/.scratchpad/analyze-{timestamp}/
├── analysis-config.json # Phase 1
├── sections/                    # Phase 3 (written directly by agents)
│ ├── section-overview.md
│ ├── section-layers.md
│ ├── section-dependencies.md
│ └── ...
├── consolidation-summary.md # Phase 3.5
├── {TYPE}-REPORT.md # Final Output
└── iterations/ # Phase 5
├── v1.md
└── v2.md
```
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md) | User interaction, config collection |
| [phases/02-project-exploration.md](phases/02-project-exploration.md) | Initial exploration |
| [phases/03-deep-analysis.md](phases/03-deep-analysis.md) | Parallel agent analysis |
| [phases/03.5-consolidation.md](phases/03.5-consolidation.md) | Cross-section consolidation |
| [phases/04-report-generation.md](phases/04-report-generation.md) | Report assembly |
| [phases/05-iterative-refinement.md](phases/05-iterative-refinement.md) | Quality refinement |
| [specs/quality-standards.md](specs/quality-standards.md) | Quality gates, standards |
| [specs/writing-style.md](specs/writing-style.md) | Paragraph-style academic writing guidelines |
| [../_shared/mermaid-utils.md](../_shared/mermaid-utils.md) | Shared Mermaid utilities |


@@ -1,79 +0,0 @@
# Phase 1: Requirements Discovery
Collect user requirements before analysis begins.
## Execution
### Step 1: Report Type Selection
```javascript
AskUserQuestion({
questions: [{
question: "What type of project analysis report would you like?",
header: "Report Type",
multiSelect: false,
options: [
{label: "Architecture (Recommended)", description: "System structure, module relationships, layer analysis, dependency graph"},
{label: "Design", description: "Design patterns, class relationships, component interactions, abstraction analysis"},
{label: "Methods", description: "Key algorithms, critical code paths, core function explanations with examples"},
{label: "Comprehensive", description: "All above combined into a complete project analysis"}
]
}]
})
```
### Step 2: Depth Level Selection
```javascript
AskUserQuestion({
questions: [{
question: "What depth level do you need?",
header: "Depth",
multiSelect: false,
options: [
{label: "Overview", description: "High-level understanding, suitable for onboarding"},
{label: "Detailed", description: "In-depth analysis with code examples"},
{label: "Deep-Dive", description: "Exhaustive analysis with implementation details"}
]
}]
})
```
### Step 3: Scope Definition
```javascript
AskUserQuestion({
questions: [{
question: "What scope should the analysis cover?",
header: "Scope",
multiSelect: false,
options: [
{label: "Full Project", description: "Analyze entire codebase"},
{label: "Specific Module", description: "Focus on a specific module or directory"},
{label: "Custom Path", description: "Specify custom path pattern"}
]
}]
})
```
## Focus Areas Mapping
| Report Type | Focus Areas |
|-------------|-------------|
| Architecture | Layer Structure, Module Dependencies, Entry Points, Data Flow |
| Design | Design Patterns, Class Relationships, Interface Contracts, State Management |
| Methods | Core Algorithms, Critical Paths, Public APIs, Complex Logic |
| Comprehensive | All above combined |
## Output
Save configuration to `analysis-config.json`:
```json
{
"type": "architecture|design|methods|comprehensive",
"depth": "overview|detailed|deep-dive",
"scope": "**/*|src/**/*|custom",
"focus_areas": ["..."]
}
```
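A minimal validation sketch for this config, using the enum values from the schema above (the helper name is an assumption, not part of the skill):

```javascript
// Enum values taken from the analysis-config.json schema above.
const VALID_TYPES = ['architecture', 'design', 'methods', 'comprehensive'];
const VALID_DEPTHS = ['overview', 'detailed', 'deep-dive'];

// Return a list of human-readable problems; an empty list means the config is valid.
function validateConfig(config) {
  const errors = [];
  if (!VALID_TYPES.includes(config.type)) errors.push(`invalid type: ${config.type}`);
  if (!VALID_DEPTHS.includes(config.depth)) errors.push(`invalid depth: ${config.depth}`);
  if (typeof config.scope !== 'string' || config.scope.length === 0) {
    errors.push('scope must be a non-empty string');
  }
  if (!Array.isArray(config.focus_areas)) errors.push('focus_areas must be an array');
  return errors;
}
```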


@@ -1,176 +0,0 @@
# Phase 2: Project Exploration
Launch parallel exploration agents based on report type and task context.
## Execution
### Step 1: Intelligent Angle Selection
```javascript
// Angle presets based on report type (adapted from lite-plan.md)
const ANGLE_PRESETS = {
architecture: ['layer-structure', 'module-dependencies', 'entry-points', 'data-flow'],
design: ['design-patterns', 'class-relationships', 'interface-contracts', 'state-management'],
methods: ['core-algorithms', 'critical-paths', 'public-apis', 'complex-logic'],
comprehensive: ['architecture', 'patterns', 'dependencies', 'integration-points']
};
// Depth-based angle count
const angleCount = {
overview: 2,
detailed: 3,
'deep-dive': 4
};
function selectAngles(reportType, depth) {
const preset = ANGLE_PRESETS[reportType] || ANGLE_PRESETS.comprehensive;
const count = angleCount[depth] || 3;
return preset.slice(0, count);
}
const selectedAngles = selectAngles(config.type, config.depth);
console.log(`
## Exploration Plan
Report Type: ${config.type}
Depth: ${config.depth}
Selected Angles: ${selectedAngles.join(', ')}
Launching ${selectedAngles.length} parallel explorations...
`);
```
### Step 2: Launch Parallel Agents (Direct Output)
**⚠️ CRITICAL**: Agents write output files directly. No aggregation needed.
```javascript
// Launch agents with pre-assigned angles
const explorationTasks = selectedAngles.map((angle, index) =>
Task({
subagent_type: "cli-explore-agent",
run_in_background: false, // ⚠️ MANDATORY: Must wait for results
description: `Explore: ${angle}`,
prompt: `
## Exploration Objective
Execute **${angle}** exploration for ${config.type} project analysis report.
## Assigned Context
- **Exploration Angle**: ${angle}
- **Report Type**: ${config.type}
- **Depth**: ${config.depth}
- **Scope**: ${config.scope}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json
## MANDATORY FIRST STEPS (executed by the agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{relevant_keyword}" --type ts (locate relevant files)
3. Analyze project from ${angle} perspective
## Exploration Strategy (${angle} focus)
**Step 1: Structural Scan** (Bash)
- get_modules_by_depth.sh → identify modules related to ${angle}
- find/rg → locate files relevant to ${angle} aspect
- Analyze imports/dependencies from ${angle} perspective
**Step 2: Semantic Analysis** (Gemini/Qwen CLI)
- How does existing code handle ${angle} concerns?
- What patterns are used for ${angle}?
- Identify key architectural decisions related to ${angle}
**Step 3: Write Output Directly**
- Consolidate ${angle} findings into JSON
- Write to output file path specified above
## Expected Output Schema
**File**: ${sessionFolder}/exploration-${angle}.json
\`\`\`json
{
"angle": "${angle}",
"findings": {
"structure": [
{ "component": "...", "type": "module|layer|service", "description": "..." }
],
"patterns": [
{ "name": "...", "usage": "...", "files": ["path1", "path2"] }
],
"relationships": [
{ "from": "...", "to": "...", "type": "depends|imports|calls", "strength": "high|medium|low" }
],
"key_files": [
{ "path": "src/file.ts", "relevance": 0.85, "rationale": "Core ${angle} logic" }
]
},
"insights": [
{ "observation": "...", "impact": "high|medium|low", "recommendation": "..." }
],
"_metadata": {
"exploration_angle": "${angle}",
"exploration_index": ${index + 1},
"report_type": "${config.type}",
"timestamp": "ISO8601"
}
}
\`\`\`
## Success Criteria
- [ ] get_modules_by_depth.sh executed
- [ ] At least 3 relevant files identified with ${angle} rationale
- [ ] Patterns are actionable (code examples, not generic advice)
- [ ] Relationships include concrete file references
- [ ] JSON output written to ${sessionFolder}/exploration-${angle}.json
- [ ] Return: 2-3 sentence summary of ${angle} findings
`
})
);
// Execute all exploration tasks in parallel
```
## Output
Session folder structure after exploration:
```
${sessionFolder}/
├── exploration-{angle1}.json # Agent 1 direct output
├── exploration-{angle2}.json # Agent 2 direct output
├── exploration-{angle3}.json # Agent 3 direct output (if applicable)
└── exploration-{angle4}.json # Agent 4 direct output (if applicable)
```
## Downstream Usage (Phase 3 Analysis Input)
Subsequent analysis phases MUST read exploration outputs as input:
```javascript
// Discover exploration files by known angle pattern
const explorationData = {};
selectedAngles.forEach(angle => {
const filePath = `${sessionFolder}/exploration-${angle}.json`;
explorationData[angle] = JSON.parse(Read(filePath));
});
// Pass to analysis agent
Task({
subagent_type: "analysis-agent",
prompt: `
## Analysis Input
### Exploration Data by Angle
${Object.entries(explorationData).map(([angle, data]) => `
#### ${angle}
${JSON.stringify(data, null, 2)}
`).join('\n')}
## Analysis Task
Synthesize findings from all exploration angles...
`
});
```


@@ -1,854 +0,0 @@
# Phase 3: Deep Analysis
Parallel agents write the design-report sections and return brief summaries.
> **Quality reference**: [../specs/quality-standards.md](../specs/quality-standards.md)
> **Writing style**: [../specs/writing-style.md](../specs/writing-style.md)
## Exploration → Agent Auto-Assignment
Analysis agents are assigned automatically based on the exploration file names produced in Phase 2.
### Mapping Rules
```javascript
// Exploration 角度 → Agent 映射(基于文件名识别,不读取内容)
const EXPLORATION_TO_AGENT = {
// Architecture Report 角度
'layer-structure': 'layers',
'module-dependencies': 'dependencies',
'entry-points': 'entrypoints',
'data-flow': 'dataflow',
// Design report angles
'design-patterns': 'patterns',
'class-relationships': 'classes',
'interface-contracts': 'interfaces',
'state-management': 'state',
// Methods report angles
'core-algorithms': 'algorithms',
'critical-paths': 'paths',
'public-apis': 'apis',
'complex-logic': 'logic',
// Comprehensive angles
'architecture': 'overview',
'patterns': 'patterns',
'dependencies': 'dependencies',
'integration-points': 'entrypoints'
};
// Extract the angle from a file name
function extractAngle(filename) {
// exploration-layer-structure.json → layer-structure
const match = filename.match(/exploration-(.+)\.json$/);
return match ? match[1] : null;
}
// Assign an agent
function assignAgent(explorationFile) {
const angle = extractAngle(path.basename(explorationFile));
return EXPLORATION_TO_AGENT[angle] || null;
}
// Agent configs (used by buildAgentPrompt)
const AGENT_CONFIGS = {
overview: {
role: 'Chief System Architect',
task: 'From the codebase as a whole, write the "Overall Architecture" chapter, surfacing the core value proposition and top-level technical decisions',
focus: 'Domain boundaries and positioning; architectural paradigm; core technical decisions; top-level module partitioning',
constraint: 'Do not enumerate the directory tree; emphasize design intent; include at least one Mermaid architecture diagram'
},
layers: {
role: 'Senior Software Designer',
task: 'Analyze the logical layering of the system and write the "Logical View and Layered Architecture" chapter',
focus: 'Responsibility allocation; data flow and constraints; boundary isolation strategy; exception propagation',
constraint: 'Do not list concrete file names; focus on inter-layer contracts and the art of isolation'
},
dependencies: {
role: 'Integration Architecture Expert',
task: 'Examine external connections and internal coupling and write the "Dependency Management and Ecosystem Integration" chapter',
focus: 'External integration topology; core dependency analysis; dependency injection and inversion of control; supply-chain security',
constraint: 'Never simply list dependency manifests; analyze the integration strategy and risk-control model'
},
dataflow: {
role: 'Data Architect',
task: 'Trace how data moves through the system and write the "Data Flow and State Management" chapter',
focus: 'Data entry and exit points; transformation pipelines; persistence strategy; consistency guarantees',
constraint: 'Focus on the data lifecycle and shape evolution; do not enumerate database schemas'
},
entrypoints: {
role: 'System Boundary Analyst',
task: 'Identify entry-point design and key paths and write the "System Entry Points and Call Chains" chapter',
focus: 'Entry types and responsibilities; request-processing pipeline; key business paths; exception and boundary handling',
constraint: 'Focus on entry-point design philosophy; do not enumerate every endpoint'
},
patterns: {
role: 'Core Development Standards Author',
task: 'Surface reuse mechanisms and standardized practices in the code and write the "Design Patterns and Engineering Conventions" chapter',
focus: 'Architecture-level patterns; communication and concurrency patterns; cross-cutting concerns; abstraction and reuse strategy',
constraint: 'Avoid textbook definitions; explain each pattern in the context of this project'
},
classes: {
role: 'Domain Model Designer',
task: 'Analyze the type system and domain model and write the "Type System and Domain Modeling" chapter',
focus: 'Domain model design; inheritance vs. composition; responsibility allocation; type safety and constraints',
constraint: 'Focus on modeling ideas; use a UML class diagram for the core relationships'
},
interfaces: {
role: 'Contract Design Expert',
task: 'Analyze interface design and abstraction levels and write the "Interface Contracts and Abstraction Design" chapter',
focus: 'Abstraction-level design; contract/implementation separation; extension-point design; version evolution strategy',
constraint: 'Focus on interface design philosophy; do not enumerate method signatures'
},
state: {
role: 'State Management Architect',
task: 'Analyze state-management mechanisms and write the "State Management and Lifecycle" chapter',
focus: 'State model design; state lifecycle; concurrency and consistency; recovery and fault tolerance',
constraint: 'Focus on state-management design decisions; do not list concrete variable names'
},
algorithms: {
role: 'Algorithm Architect',
task: 'Analyze the core algorithm design and write the "Core Algorithms and Computation Model" chapter',
focus: 'Algorithm selection and trade-offs; computation model design; performance and scalability; correctness guarantees',
constraint: 'Focus on algorithmic ideas; use flowcharts for complex logic'
},
paths: {
role: 'Performance Architect',
task: 'Analyze critical execution paths and write the "Critical Paths and Performance Design" chapter',
focus: 'Key business paths; performance-sensitive areas; bottleneck identification and mitigation; degradation and circuit breaking',
constraint: 'Focus on the strategic rationale behind path design; do not list every execution step'
},
apis: {
role: 'API Design Standards Expert',
task: 'Analyze the external API design conventions and write the "API Design and Conventions" chapter',
focus: 'API design style; naming and structure conventions; versioning strategy; error-handling conventions',
constraint: 'Focus on conventions and consistency; do not enumerate every API endpoint'
},
logic: {
role: 'Business Logic Architect',
task: 'Analyze business-logic modeling and write the "Business Logic and Rules Engine" chapter',
focus: 'Business-rule modeling; decision-point design; edge-case handling; process orchestration',
constraint: 'Focus on how business logic is organized; do not explain code line by line'
}
};
```
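The mapping helpers can be sanity-checked in isolation. The snippet below repeats `extractAngle` and a trimmed-down mapping so it runs standalone:

```javascript
// Trimmed copy of the file-name mapping for a self-contained check.
const EXPLORATION_TO_AGENT = {
  'layer-structure': 'layers',
  'design-patterns': 'patterns'
};

// exploration-layer-structure.json → layer-structure
function extractAngle(filename) {
  const match = filename.match(/^exploration-(.+)\.json$/);
  return match ? match[1] : null;
}

// Map a file name to its analysis agent, or null if unmapped.
function assignAgent(filename) {
  const angle = extractAngle(filename);
  return angle ? EXPLORATION_TO_AGENT[angle] || null : null;
}
```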
### Auto-Discovery and Assignment Flow
```javascript
// 1. Discover all exploration files (file names only)
const explorationFiles = Bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
.split('\n')
.filter(f => f.trim());
// 2. Auto-assign agents by file name
const agentAssignments = explorationFiles.map(file => {
const angle = extractAngle(path.basename(file));
const agentName = EXPLORATION_TO_AGENT[angle];
return {
exploration_file: file,
angle: angle,
agent: agentName,
output_file: `section-${agentName}.md`
};
}).filter(a => a.agent); // drop angles that have no mapping
console.log(`
## Agent Auto-Assignment
Found ${explorationFiles.length} exploration files:
${agentAssignments.map(a => `- ${a.angle} → ${a.agent} agent`).join('\n')}
`);
```
---
## Agent Preconditions
**Each agent receives an exploration file path and reads the content itself.**
```javascript
// The agent prompt includes the file path
// Order of operations after the agent starts:
// 1. Read the exploration file (context input)
// 2. Read the spec files
// 3. Run the analysis task
```
Spec file paths (relative to the skill root):
- `specs/quality-standards.md` - quality standards and checklists
- `specs/writing-style.md` - paragraph-style writing guidelines
---
## Shared Writing Guidelines (all agents)
```
[STYLE]
- **Language**: write rigorous, professional technical Chinese; only technical terms (e.g. Singleton, Middleware, ORM) stay in English.
- **Narrative voice**: a fully objective, third-person ("god's-eye") perspective. Never use "we", "developers", "users", "you", or "I". Subjects should be "the system", "the module", "the design", "the architecture", or "this layer".
- **Paragraph structure**:
- Do not use bullet lists as the primary narrative form; weave points into coherent paragraphs.
- Follow a "claim - evidence - conclusion" logical structure.
- Use logical connectives ("therefore", "however", "given that", "consequently") to expose the chain of design reasoning.
- **Depth**:
- Abstraction: describe "what" is done and "why", not "how it is written".
- Methodology: emphasize the application of design patterns and architectural principles (SOLID, high cohesion / low coupling).
- Non-code: do not quote code unless defining a key interface; file references appear only as parenthetical source notes (see: path/to/file).
```
## Agent Configuration
### Architecture Report Agents
| Agent | Output File | Focus |
|-------|-------------|-------|
| overview | section-overview.md | Top-level architecture, technical decisions, design philosophy |
| layers | section-layers.md | Logical layering, responsibility boundaries, isolation strategy |
| dependencies | section-dependencies.md | Dependency governance, integration topology, risk control |
| dataflow | section-dataflow.md | Data flow, transformation mechanisms, consistency guarantees |
| entrypoints | section-entrypoints.md | Entry-point design, call chains, exception propagation |
### Design Report Agents
| Agent | Output File | Focus |
|-------|-------------|-------|
| patterns | section-patterns.md | Architectural patterns, communication mechanisms, cross-cutting concerns |
| classes | section-classes.md | Type system, inheritance strategy, responsibility partitioning |
| interfaces | section-interfaces.md | Contract design, abstraction levels, extension mechanisms |
| state | section-state.md | State model, lifecycle, concurrency control |
### Methods Report Agents
| Agent | Output File | Focus |
|-------|-------------|-------|
| algorithms | section-algorithms.md | Core algorithmic ideas, complexity trade-offs, optimization strategy |
| paths | section-paths.md | Critical-path design, performance-sensitive points, bottleneck analysis |
| apis | section-apis.md | API design conventions, versioning strategy, compatibility |
| logic | section-logic.md | Business-logic modeling, decision mechanisms, edge-case handling |
---
## Agent Return Format
```typescript
interface AgentReturn {
status: "completed" | "partial" | "failed";
output_file: string;
summary: string; // at most 50 characters
cross_module_notes: string[]; // cross-module findings
stats: { diagrams: number; };
}
```
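One plausible way the orchestrator can fold these returns into the inputs Phase 3.5 expects (summaries plus flattened cross-module notes); the function name is illustrative:

```javascript
// Parse raw agent return strings and split out the pieces downstream
// phases care about: all summaries, the failed subset, and flattened notes.
function collectReturns(rawResults) {
  const summaries = rawResults.map(r => JSON.parse(r));
  const failed = summaries.filter(s => s.status === 'failed');
  const crossNotes = summaries.flatMap(s => s.cross_module_notes || []);
  return { summaries, failed, crossNotes };
}
```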
---
## Agent Prompts
### Overview Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] Chief System Architect
[TASK]
From the codebase as a whole, write the "Overall Architecture" chapter of the System Architecture Design Report. Look past the surface of the code to surface the system's core value proposition and top-level technical decisions.
Output: ${outDir}/sections/section-overview.md
[STYLE]
- Rigorous, professional technical writing in Chinese; technical terms stay in English
- Fully objective third-person voice; never "we" or "developers"
- Paragraph-style narration with a claim-evidence-conclusion structure
- Use logical connectives to expose the chain of design reasoning
- Describe "what" and "why", not "how it is written"
- Do not quote code directly; cite files only as source notes
[FOCUS]
- Domain boundary and positioning: what core business problem does the system solve, and where does it sit in the larger technical ecosystem?
- Architectural paradigm: which architectural style is used (layered, hexagonal, microservices, event-driven, etc.), and what is the fundamental reason for the choice?
- Core technical decisions: the rationale behind the key technology choices, and how they support the non-functional requirements (performance, scalability, maintainability)
- Top-level module partitioning: which logical units make up the highest level of the system, and how do they collaborate?
[CONSTRAINT]
- Do not enumerate the directory tree
- Emphasize "design intent" over "existing features"
- Include at least one Mermaid architecture diagram
[RETURN JSON]
{"status":"completed","output_file":"section-overview.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{"diagrams":1}}
`
})
```
### Layers Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] Senior Software Designer
[TASK]
Analyze the logical layering of the system and write the "Logical View and Layered Architecture" chapter of the System Architecture Design Report, showing how the system isolates concerns through layering.
Output: ${outDir}/sections/section-layers.md
[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice; subjects are "the system", "this layer", "the design"
- Paragraph-style narration; no bullet lists as the main body
- Emphasize methodology and the application of architectural principles
[FOCUS]
- Responsibility allocation: which logical layers exist, and what are each layer's core responsibilities, inputs, and outputs?
- Data flow and constraints: how does data move between layers? Is there a strict one-way dependency rule?
- Boundary isolation: how are layers decoupled (interface abstraction, DTO mapping, dependency injection)? How are lower-level details kept from leaking upward?
- Exception flow: how are exceptions propagated and transformed across the layers?
[CONSTRAINT]
- Do not list concrete file names
- Focus on "inter-layer contracts" and "the art of isolation"
[RETURN JSON]
{"status":"completed","output_file":"section-layers.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```
### Dependencies Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] Integration Architecture Expert
[TASK]
Examine the system's external connections and internal coupling and write the "Dependency Management and Ecosystem Integration" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-dependencies.md
[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration with coherent logic
[FOCUS]
- External integration topology: how does the system interact with the outside world (third-party APIs, databases, middleware)? What adapter or anti-corruption layers isolate external change?
- Core dependency analysis: distinguish "core business dependencies" from "infrastructure dependencies". How dependent is the system on key frameworks? Is there lock-in risk?
- Dependency injection and inversion of control: how are internal modules assembled? Is the dependency-inversion principle applied to support testability?
- Supply-chain security and governance: for a complex dependency tree, what strategy manages versions and compatibility?
[CONSTRAINT]
- Never simply list the contents of dependency manifests
- Analyze the underlying "integration strategy" and "risk-control model"
[RETURN JSON]
{"status":"completed","output_file":"section-dependencies.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```
### Patterns Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] Core Development Standards Author
[TASK]
Surface the reuse mechanisms and standardized practices in the code and write the "Design Patterns and Engineering Conventions" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-patterns.md
[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration grounded in project context
[FOCUS]
- Architecture-level patterns: identify the patterns used broadly in the system (CQRS, Event Sourcing, Repository Pattern, Unit of Work). What specific problem did introducing each pattern solve?
- Communication and concurrency patterns: analyze inter-component communication (sync/async, observer, publish-subscribe) and the concurrency-control strategy
- Cross-cutting concerns: how does the system handle logging, authorization, caching, and transaction management uniformly (AOP, middleware pipelines, decorators)?
- Abstraction and reuse strategy: analyze the design intent of base classes, generics, and utilities. How does abstraction reduce duplication and improve consistency?
[CONSTRAINT]
- Avoid textbook definitions of the patterns; explain each one's role in the context of this project
- Focus on "general mechanisms that solve classes of problems"
[RETURN JSON]
{"status":"completed","output_file":"section-patterns.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```
### DataFlow Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] Data Architect
[TASK]
Trace the system's data movement and write the "Data Flow and State Management" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-dataflow.md
[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration
[FOCUS]
- Data entry and exit: where does data enter the system and where does it end up? What validation and transformation happens at the boundary?
- Transformation pipeline: how does data change shape across layers and modules? How are the responsibilities of DTO, Entity, and VO objects divided?
- Persistence strategy: how is data storage designed? Which ORM strategy or data-access pattern is used?
- Consistency guarantees: how are transaction boundaries handled? How is consistency preserved in distributed scenarios?
[CONSTRAINT]
- Focus on the data "lifecycle" and "shape evolution"
- Do not enumerate database schemas
[RETURN JSON]
{"status":"completed","output_file":"section-dataflow.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```
### EntryPoints Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] System Boundary Analyst
[TASK]
Identify the system's entry-point design and key paths and write the "System Entry Points and Call Chains" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-entrypoints.md
[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration
[FOCUS]
- Entry types and responsibilities: which entry types does the system expose (REST API, CLI, message-queue consumers, scheduled jobs)? What is each designed for?
- Request-processing pipeline: what pipeline does a request pass through from entry to core logic? How are middleware and interceptors arranged?
- Key business paths: what are the call chains of the most important business flows, and what design considerations shape their key nodes?
- Exception and boundary handling: how does the system handle exceptions uniformly, and how are they propagated and transformed?
[CONSTRAINT]
- Focus on the "design philosophy of the entry points", not an API inventory
- Do not enumerate every endpoint
[RETURN JSON]
{"status":"completed","output_file":"section-entrypoints.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```
### Classes Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] Domain Model Designer
[TASK]
Analyze the system's type system and domain model and write the "Type System and Domain Modeling" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-classes.md
[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration
[FOCUS]
- Domain model design: what are the core domain concepts, and how are their relationships modeled (aggregates, entities, value objects)?
- Inheritance vs. composition: which does the system favor? What is the design intent behind the base classes and interfaces?
- Responsibility allocation: what principles guide how class responsibilities are divided? Is the single-responsibility principle observed?
- Type safety and constraints: how does the system use the type system to express business constraints and invariants?
[CONSTRAINT]
- Focus on "modeling ideas" rather than attribute lists
- Use a UML class diagram to illustrate the core relationships
[RETURN JSON]
{"status":"completed","output_file":"section-classes.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```
### Interfaces Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] Contract Design Expert
[TASK]
Analyze the system's interface design and abstraction levels and write the "Interface Contracts and Abstraction Design" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-interfaces.md
[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration
[FOCUS]
- Abstraction-level design: which core interfaces and abstract classes are defined, and what are their design intent and responsibility boundaries?
- Contract/implementation separation: how do interfaces isolate contracts from implementations? How is polymorphism applied?
- Extension points: which extension points are reserved, and how can behavior be extended without modifying core code?
- Version evolution: how do interfaces support evolution, and how is backward compatibility preserved?
[CONSTRAINT]
- Focus on the "design philosophy of the interfaces"
- Do not enumerate interface method signatures
[RETURN JSON]
{"status":"completed","output_file":"section-interfaces.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```
### State Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] State Management Architect
[TASK]
Analyze the system's state-management mechanisms and write the "State Management and Lifecycle" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-state.md
[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration
[FOCUS]
- State model design: which kinds of state does the system manage (session, application, domain)? Where is each stored and what is its scope?
- State lifecycle: how is state created, updated, and destroyed, and what mechanism manages the lifecycle?
- Concurrency and consistency: how is state kept consistent across threads or instances? Which concurrency-control strategy is used?
- Recovery and fault tolerance: how does the system handle lost or corrupted state? Is there a recovery mechanism?
[CONSTRAINT]
- Focus on "state-management design decisions"
- Do not list concrete variable names
[RETURN JSON]
{"status":"completed","output_file":"section-state.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```
### Algorithms Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] Algorithm Architect
[TASK]
Analyze the system's core algorithm design and write the "Core Algorithms and Computation Model" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-algorithms.md
[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration
[FOCUS]
- Algorithm selection and trade-offs: which key algorithms implement the core business logic, and what factors drove their selection (time complexity, space complexity, maintainability)?
- Computation model: how is complex computation decomposed and organized? Are patterns such as pipelines or map-reduce used?
- Performance and scalability: how does the algorithm design account for performance and scale? Are there optimizations for large data volumes?
- Correctness guarantees: how is the correctness of the key algorithms assured? Are edge cases given special handling?
[CONSTRAINT]
- Focus on "algorithmic ideas" rather than implementation code
- Use flowcharts to illustrate complex logic
[RETURN JSON]
{"status":"completed","output_file":"section-algorithms.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```
### Paths Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] Performance Architect
[TASK]
Analyze the system's critical execution paths and write the "Critical Paths and Performance Design" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-paths.md
[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration
[FOCUS]
- Key business paths: which execution paths matter most, and what design goals and constraints shape them?
- Performance-sensitive areas: which stages are performance-sensitive, and which optimization strategies (caching, async, batching) are applied?
- Bottleneck identification and mitigation: where are the potential bottlenecks, and does the design leave room to scale?
- Degradation and circuit breaking: under high load or failure, how does the system protect the critical paths?
[CONSTRAINT]
- Focus on the "strategic rationale behind path design"
- Do not list every code execution step
[RETURN JSON]
{"status":"completed","output_file":"section-paths.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```
### APIs Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] API Design Standards Expert
[TASK]
Analyze the system's external API design conventions and write the "API Design and Conventions" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-apis.md
[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration
[FOCUS]
- API style: which API style does the system use (RESTful, GraphQL, RPC), and why was it chosen?
- Naming and structure: what conventions govern API naming, path structure, and parameter design? Is consistency enforced?
- Versioning strategy: how do the APIs support evolution, and what is the backward-compatibility policy?
- Error-handling conventions: how are API error responses designed, and how is the error-code system organized?
[CONSTRAINT]
- Focus on "design conventions and consistency"
- Do not enumerate every API endpoint
[RETURN JSON]
{"status":"completed","output_file":"section-apis.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```
### Logic Agent
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
[SPEC]
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
[ROLE] Business Logic Architect
[TASK]
Analyze the system's business-logic modeling and write the "Business Logic and Rules Engine" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-logic.md
[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration
[FOCUS]
- Business-rule modeling: how are the core business rules expressed and organized? Is a rules engine or strategy pattern used?
- Decision-point design: what are the key decision points, and how is the decision logic encapsulated and tested?
- Edge-case handling: how are boundary conditions and exceptional situations handled? Are defensive-programming measures in place?
- Process orchestration: how are complex business flows orchestrated? Is a workflow engine or state machine used?
[CONSTRAINT]
- Focus on "how the business logic is organized"
- Do not explain the code line by line
[RETURN JSON]
{"status":"completed","output_file":"section-logic.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```
---
## Execution Flow
```javascript
// 1. Discover exploration files and auto-assign agents
const explorationFiles = Bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
.split('\n')
.filter(f => f.trim());
const agentAssignments = explorationFiles.map(file => {
const angle = extractAngle(path.basename(file));
const agentName = EXPLORATION_TO_AGENT[angle];
return { exploration_file: file, angle, agent: agentName };
}).filter(a => a.agent);
// 2. Prepare directories
Bash(`mkdir "${outputDir}\\sections"`);
// 3. Launch all agents in parallel (passing exploration file paths)
const results = await Promise.all(
agentAssignments.map(assignment =>
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Analyze: ${assignment.agent}`,
prompt: buildAgentPrompt(assignment, config, outputDir)
})
)
);
// 4. Collect the brief returns
const summaries = results.map(r => JSON.parse(r));
// 5. Hand off to the Phase 3.5 consolidation agent
return { summaries, cross_notes: summaries.flatMap(s => s.cross_module_notes) };
```
### Building the Agent Prompt
```javascript
function buildAgentPrompt(assignment, config, outputDir) {
const agentConfig = AGENT_CONFIGS[assignment.agent];
return `
[CONTEXT]
**Exploration file**: ${assignment.exploration_file}
Read this file first; use the ${assignment.angle} exploration results as analysis context.
[SPEC]
Read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
[ROLE] ${agentConfig.role}
[TASK]
${agentConfig.task}
Output: ${outputDir}/sections/section-${assignment.agent}.md
[STYLE]
- Rigorous, professional technical writing in Chinese; technical terms stay in English
- Fully objective third-person voice; never "we" or "developers"
- Paragraph-style narration with a claim-evidence-conclusion structure
- Use logical connectives to expose the chain of design reasoning
[FOCUS]
${agentConfig.focus}
[CONSTRAINT]
${agentConfig.constraint}
[RETURN JSON]
{"status":"completed","output_file":"section-${assignment.agent}.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`;
}
```
## Output
Each agent writes `sections/section-xxx.md` and returns a brief JSON for the Phase 3.5 consolidation.


@@ -1,233 +0,0 @@
# Phase 3.5: Consolidation Agent
Consolidates the output of every analysis agent, produces a cross-section synthesis, and supplies the content for the Phase 4 index report.
> **Writing guidelines**: [../specs/writing-style.md](../specs/writing-style.md)
## Execution Requirements
**Mandatory**: after all Phase 3 analysis agents complete, the orchestrator **must** invoke this consolidation agent.
**Trigger conditions**
- All Phase 3 agents have returned results (status: completed/partial/failed)
- The `sections/section-*.md` files have been generated
**Input sources**
- `agent_summaries`: the JSON returned by each Phase 3 agent (status, output_file, summary, cross_module_notes)
- `cross_module_notes`: the array of cross-module notes extracted from the agent returns
**When to invoke**
```javascript
// After Phase 3 completes, the orchestrator runs:
const phase3Results = await runPhase3Agents(); // run all analysis agents in parallel
const agentSummaries = phase3Results.map(r => JSON.parse(r));
const crossNotes = agentSummaries.flatMap(s => s.cross_module_notes || []);
// The Phase 3.5 consolidation agent MUST then be invoked
await runPhase35Consolidation(agentSummaries, crossNotes);
```
## Core Responsibilities
1. **Cross-section synthesis**: produce `synthesis` (the report overview)
2. **Section summary extraction**: produce `section_summaries` (index table content)
3. **Quality check**: identify issues and assign scores
4. **Recommendation rollup**: produce `recommendations` (priority-ordered)
## Input
```typescript
interface ConsolidationInput {
output_dir: string;
config: AnalysisConfig;
agent_summaries: AgentReturn[];
cross_module_notes: string[];
}
```
## Agent Invocation
The orchestrator invokes the consolidation agent with:
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
prompt: `
## Spec Preamble
Read the spec files first:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow the quality standards and the paragraph-style writing requirements.
## Task
As the consolidation agent, read every section file, perform cross-section analysis, and produce the consolidation report and index content.
## Input
- Section files: ${outputDir}/sections/section-*.md
- Agent summaries: ${JSON.stringify(agent_summaries)}
- Cross-module notes: ${JSON.stringify(cross_module_notes)}
- Report type: ${config.type}
## Core Deliverables
### 1. Synthesis
Read all sections and describe the project as a whole in 2-3 paragraphs:
- Paragraph 1: project positioning and core architectural characteristics
- Paragraph 2: key design decisions and technology choices
- Paragraph 3: overall quality assessment and notable traits
### 2. Section summaries (section_summaries)
Extract a one-sentence core finding for each section, for the index table.
### 3. Architectural insights (cross_analysis)
Describe how the sections relate to one another, e.g.:
- How inter-module dependencies surface across sections
- How design decisions cut across multiple layers
- Potential consistency gaps or conflicts
### 4. Recommendations
Organize the recommendations from every section by priority, in paragraph form.
## Quality Check Dimensions
### Consistency
- Terminology: is the same concept always given the same name?
- Code references: is the file:line format correct?
### Completeness
- Section coverage: are all required sections present?
- Depth: does each section reach the ${config.depth} level?
### Quality
- Mermaid syntax: do the diagrams render?
- Paragraph-style writing: does the prose follow the writing guidelines (no bullet dumps)?
## Output File
Write to: ${outputDir}/consolidation-summary.md
### File Format
\`\`\`markdown
# Consolidation Report
## Synthesis
[2-3 paragraphs describing the project as a whole, written in paragraph style]
## Section Summaries
| Section | File | Core Finding |
|------|------|----------|
| System Overview | section-overview.md | one-sentence description |
| Layer Analysis | section-layers.md | one-sentence description |
| ... | ... | ... |
## Architectural Insights
[cross-section analysis, in paragraph form]
## Recommendations
[priority-ordered recommendations, in paragraph form]
---
## Quality Assessment
### Scores
| Dimension | Score | Notes |
|------|------|------|
| Completeness | 85% | ... |
| Consistency | 90% | ... |
| Depth | 95% | ... |
| Readability | 88% | ... |
| Overall | 89% | ... |
### Issues Found
#### Errors
| ID | Type | Location | Description |
|----|------|------|------|
| E001 | ... | ... | ... |
#### Warnings
| ID | Type | Location | Description |
|----|------|------|------|
| W001 | ... | ... | ... |
#### Info
| ID | Type | Location | Description |
|----|------|------|------|
| I001 | ... | ... | ... |
### Stats
- Sections: X
- Diagrams: X
- Total words: X
\`\`\`
## Return Format (JSON)
{
"status": "completed",
"output_file": "consolidation-summary.md",
// Required by the Phase 4 index report
"synthesis": "2-3 paragraph synthesis text",
"cross_analysis": "cross-section analysis text",
"recommendations": "priority-ordered recommendations text",
"section_summaries": [
{"file": "section-overview.md", "title": "System Overview", "summary": "one-sentence core finding"},
{"file": "section-layers.md", "title": "Layer Analysis", "summary": "one-sentence core finding"}
],
// Quality info
"quality_score": {
"completeness": 85,
"consistency": 90,
"depth": 95,
"readability": 88,
"overall": 89
},
"issues": {
"errors": [...],
"warnings": [...],
"info": [...]
},
"stats": {
"total_sections": 5,
"total_diagrams": 8,
"total_words": 3500
}
}
`
})
```
## Issue Severity
| Severity | Prefix | Meaning | Handling |
|----------|------|------|----------|
| Error | E | Blocks report generation | Must fix |
| Warning | W | Degrades report quality | Should fix |
| Info | I | Improvement opportunity | Optional fix |
## Issue Types
| Type | Description |
|------|------|
| missing | Missing section |
| inconsistency | Inconsistent terminology or description |
| invalid_ref | Invalid code reference |
| syntax | Mermaid syntax error |
| shallow | Content lacks depth |
| list_style | Violates the paragraph-style writing guidelines |
## Output
- **File**: `consolidation-summary.md` (the full consolidation report)
- **Return**: JSON containing every field Phase 4 needs


@@ -1,217 +0,0 @@
# Phase 4: Report Generation
Generate an index-style report that references section files via markdown links.
> **Spec reference**: [../specs/quality-standards.md](../specs/quality-standards.md)
## Design Principles
1. **Reference, don't embed**: the main report links to sections instead of copying their content
2. **Index + synthesis**: the main report provides navigation plus high-level analysis
3. **No duplication**: the synthesis comes from consolidation and is never regenerated
4. **Independently readable**: each section file can be read on its own
## Input
```typescript
interface ReportInput {
output_dir: string;
config: AnalysisConfig;
consolidation: {
quality_score: QualityScore;
issues: { errors: Issue[], warnings: Issue[], info: Issue[] };
stats: Stats;
synthesis: string; // synthesis produced by the consolidation agent
section_summaries: Array<{file: string, summary: string}>;
};
}
```
## Execution Flow
```javascript
// 1. Quality gate check
if (consolidation.issues.errors.length > 0) {
const response = await AskUserQuestion({
questions: [{
question: `发现 ${consolidation.issues.errors.length} 个严重问题,如何处理?`,
header: "质量检查",
multiSelect: false,
options: [
{label: "查看并修复", description: "显示问题列表,手动修复后重试"},
{label: "忽略继续", description: "跳过问题检查,继续装配"},
{label: "终止", description: "停止报告生成"}
]
}]
});
if (response === "查看并修复") {
return { action: "fix_required", errors: consolidation.issues.errors };
}
if (response === "终止") {
return { action: "abort" };
}
}
// 2. Generate the index report (without reading section contents)
const report = generateIndexReport(config, consolidation);
// 3. Write the final file
const fileName = `${config.type.toUpperCase()}-REPORT.md`;
Write(`${outputDir}/${fileName}`, report);
```
## Report Template
### Common Structure
```markdown
# {报告标题}
> 生成日期:{date}
> 分析范围:{scope}
> 分析深度:{depth}
> 质量评分:{overall}%
---
## 报告综述
{consolidation.synthesis - 来自汇总 Agent 的跨章节综合分析}
---
## 章节索引
| 章节 | 核心发现 | 详情 |
|------|----------|------|
{section_summaries 生成的表格行}
---
## 架构洞察
{从 consolidation 提取的跨模块关联分析}
---
## 建议与展望
{consolidation.recommendations - 优先级排序的综合建议}
---
**附录**
- [质量报告](./consolidation-summary.md)
- [章节文件目录](./sections/)
```
### Report Title Mapping
| Type | Title |
|------|-------|
| architecture | 项目架构设计报告 |
| design | 项目设计模式报告 |
| methods | 项目核心方法报告 |
| comprehensive | 项目综合分析报告 |
## Generator Function
```javascript
function generateIndexReport(config, consolidation) {
const titles = {
architecture: "项目架构设计报告",
design: "项目设计模式报告",
methods: "项目核心方法报告",
comprehensive: "项目综合分析报告"
};
const date = new Date().toLocaleDateString('zh-CN');
// Section index table
const sectionTable = consolidation.section_summaries
.map(s => `| ${s.title} | ${s.summary} | [查看详情](./sections/${s.file}) |`)
.join('\n');
return `# ${titles[config.type]}
> 生成日期:${date}
> 分析范围:${config.scope}
> 分析深度:${config.depth}
> 质量评分:${consolidation.quality_score.overall}%
---
## 报告综述
${consolidation.synthesis}
---
## 章节索引
| 章节 | 核心发现 | 详情 |
|------|----------|------|
${sectionTable}
---
## 架构洞察
${consolidation.cross_analysis || '详见各章节分析。'}
---
## 建议与展望
${consolidation.recommendations || '详见质量报告中的改进建议。'}
---
**附录**
- [质量报告](./consolidation-summary.md)
- [章节文件目录](./sections/)
`;
}
```
## Output Structure
```
.workflow/.scratchpad/analyze-{timestamp}/
├── sections/                  # Independent sections (Phase 3 output)
│   ├── section-overview.md
│   ├── section-layers.md
│   └── ...
├── consolidation-summary.md   # Quality report (Phase 3.5 output)
└── {TYPE}-REPORT.md           # Index report (this phase's output)
```
## Coordination with Phase 3.5
The Phase 3.5 consolidation agent must provide:
```typescript
interface ConsolidationOutput {
// ... existing fields
synthesis: string; // cross-section synthesis (2-3 paragraphs)
cross_analysis: string; // architecture-level correlation insights
recommendations: string; // priority-ordered recommendations
section_summaries: Array<{
file: string; // file name
title: string; // section title
summary: string; // one-sentence key finding
}>;
}
```
## Key Changes
| Old Design | New Design |
|------------|------------|
| Read and concatenate section contents | Link references; content is never read |
| Regenerate the Executive Summary | Use consolidation.synthesis directly |
| Embed the quality score table | Link to consolidation-summary.md |
| Main report contains everything | Main report is index + synthesis only |


@@ -1,124 +0,0 @@
# Phase 5: Iterative Refinement
Discovery-driven refinement based on analysis findings.
## Execution
### Step 1: Extract Discoveries
```javascript
function extractDiscoveries(deepAnalysis) {
return {
ambiguities: deepAnalysis.findings.filter(f => f.confidence < 0.7),
complexityHotspots: deepAnalysis.findings.filter(f => f.complexity === 'high'),
patternDeviations: deepAnalysis.patterns.filter(p => p.consistency < 0.8),
unclearDependencies: deepAnalysis.dependencies.filter(d => d.type === 'implicit'),
potentialIssues: deepAnalysis.recommendations.filter(r => r.priority === 'investigate'),
depthOpportunities: deepAnalysis.sections.filter(s => s.has_more_detail)
};
}
const discoveries = extractDiscoveries(deepAnalysis);
```
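To make the extraction concrete, here is a runnable sketch: `extractDiscoveries` is repeated from Step 1 so the snippet stands alone, and the sample analysis object is hypothetical, with field names mirroring what the filters expect.

```javascript
// Repeated from Step 1 so this sketch runs standalone.
function extractDiscoveries(deepAnalysis) {
  return {
    ambiguities: deepAnalysis.findings.filter(f => f.confidence < 0.7),
    complexityHotspots: deepAnalysis.findings.filter(f => f.complexity === 'high'),
    patternDeviations: deepAnalysis.patterns.filter(p => p.consistency < 0.8),
    unclearDependencies: deepAnalysis.dependencies.filter(d => d.type === 'implicit'),
    potentialIssues: deepAnalysis.recommendations.filter(r => r.priority === 'investigate'),
    depthOpportunities: deepAnalysis.sections.filter(s => s.has_more_detail)
  };
}

// Hypothetical deep-analysis output for illustration only.
const sampleAnalysis = {
  findings: [
    { area: "auth flow", confidence: 0.6, complexity: "high", interpretations: [] },
    { area: "cache layer", confidence: 0.9, complexity: "low" }
  ],
  patterns: [{ name: "repository", consistency: 0.7 }],
  dependencies: [{ from: "a", to: "b", type: "implicit" }],
  recommendations: [{ text: "check locking", priority: "investigate" }],
  sections: [{ title: "overview", has_more_detail: true }]
};

const discoveries = extractDiscoveries(sampleAnalysis);
// Each bucket now holds only the items that passed its threshold.
```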
### Step 2: Build Dynamic Questions
Questions emerge from discoveries, NOT predetermined:
```javascript
function buildDynamicQuestions(discoveries, config) {
const questions = [];
if (discoveries.ambiguities.length > 0) {
questions.push({
question: `Analysis found ambiguity in "${discoveries.ambiguities[0].area}". Which interpretation is correct?`,
header: "Clarify",
options: discoveries.ambiguities[0].interpretations
});
}
if (discoveries.complexityHotspots.length > 0) {
questions.push({
question: `These areas have high complexity. Which would you like explained?`,
header: "Deep-Dive",
multiSelect: true,
options: discoveries.complexityHotspots.slice(0, 4).map(h => ({
label: h.name,
description: h.summary
}))
});
}
if (discoveries.patternDeviations.length > 0) {
questions.push({
question: `Found pattern deviations. Should these be highlighted in the report?`,
header: "Patterns",
options: [
{label: "Yes, include analysis", description: "Add section explaining deviations"},
{label: "No, skip", description: "Omit from report"}
]
});
}
// Always include action question
questions.push({
question: "How would you like to proceed?",
header: "Action",
options: [
{label: "Continue refining", description: "Address more discoveries"},
{label: "Finalize report", description: "Generate final output"},
{label: "Change scope", description: "Modify analysis scope"}
]
});
return questions.slice(0, 4); // Max 4 questions
}
```
### Step 3: Apply Refinements
```javascript
if (userAction === "Continue refining") {
// Apply selected refinements
for (const selection of userSelections) {
applyRefinement(selection, deepAnalysis, report);
}
// Save iteration
Write(`${outputDir}/iterations/iteration-${iterationCount}.json`, {
timestamp: new Date().toISOString(),
discoveries: discoveries,
selections: userSelections,
changes: appliedChanges
});
// Loop back to Step 1 with the updated analysis
iterationCount++;
// (pseudocode: in real code, wrap Steps 1-3 in a while loop and continue)
}
if (userAction === "Finalize report") {
// Exit the refinement loop and proceed to Step 4: Finalize Report
}
```
### Step 4: Finalize Report
```javascript
// Add iteration history to report metadata
const finalReport = {
...report,
metadata: {
iterations: iterationCount,
refinements_applied: allRefinements,
final_discoveries: discoveries
}
};
Write(`${outputDir}/${config.type.toUpperCase()}-REPORT.md`, finalReport);
```
## Output
Updated report with refinements, saved iterations to `iterations/` folder.


@@ -1,115 +0,0 @@
# Quality Standards
Quality gates and requirements for project analysis reports.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| Phase 4 | Check report structure before assembly | Report Requirements |
| Phase 5 | Validate before each iteration | Quality Gates |
| Phase 5 | Handle failures during refinement | Error Handling |
---
## Report Requirements
**Use in Phase 4**: Ensure report includes all required elements.
| Requirement | Check | How to Fix |
|-------------|-------|------------|
| Executive Summary | 3-5 key takeaways | Extract from analysis findings |
| Visual diagrams | Valid Mermaid syntax | Use `../_shared/mermaid-utils.md` |
| Code references | `file:line` format | Link to actual source locations |
| Recommendations | Actionable, specific | Derive from analysis insights |
| Consistent depth | Match user's depth level | Adjust detail per config.depth |
---
## Quality Gates
**Use in Phase 5**: Run these checks before asking user questions.
```javascript
function runQualityGates(report, config, diagrams) {
const gates = [
{
name: "focus_areas_covered",
check: () => config.focus_areas.every(area =>
report.toLowerCase().includes(area.toLowerCase())
),
fix: "Re-analyze missing focus areas"
},
{
name: "diagrams_valid",
check: () => diagrams.every(d => d.valid),
fix: "Regenerate failed diagrams with mermaid-utils"
},
{
name: "code_refs_accurate",
check: () => extractCodeRefs(report).every(ref => fileExists(ref)),
fix: "Update invalid file references"
},
{
name: "no_placeholders",
check: () => !report.includes('[TODO]') && !report.includes('[PLACEHOLDER]'),
fix: "Fill in all placeholder content"
},
{
name: "recommendations_specific",
check: () => !report.includes('consider') || report.includes('specifically'),
fix: "Make recommendations project-specific"
}
];
const results = gates.map(g => ({...g, passed: g.check()}));
const allPassed = results.every(r => r.passed);
return { allPassed, results };
}
```
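`runQualityGates` calls `extractCodeRefs` and `fileExists`, neither of which is defined in this spec. Below is a minimal sketch of the first helper, under the assumption that code references use the `path/to/file.ext:line` format required above; `fileExists` is left to the host's file tools.

```javascript
// Sketch of the undefined extractCodeRefs helper.
// Assumes references look like `src/app.ts:42`, optionally wrapped in backticks.
function extractCodeRefs(report) {
  // Matches path-like tokens with an extension, followed by :<digits>.
  const pattern = /`?([\w./\\-]+\.\w+):(\d+)`?/g;
  const refs = [];
  let m;
  while ((m = pattern.exec(report)) !== null) {
    refs.push({ file: m[1], line: Number(m[2]) });
  }
  return refs;
}
```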
**Integration with Phase 5**:
```javascript
// In 05-iterative-refinement.md
const { allPassed, results } = runQualityGates(report, config, diagrams);
if (allPassed) {
// All gates passed → ask user to confirm or finalize
} else {
// Gates failed → include failed gates in discovery questions
const failedGates = results.filter(r => !r.passed);
discoveries.qualityIssues = failedGates;
}
```
---
## Error Handling
**Use when**: Encountering errors during any phase.
| Error | Detection | Recovery |
|-------|-----------|----------|
| CLI timeout | Bash exits with timeout | Reduce scope via `config.scope`, retry |
| Exploration failure | Agent returns error | Fall back to `Read` + `Grep` directly |
| User abandons | User selects "cancel" | Save to `iterations/`, allow resume |
| Invalid scope path | Path doesn't exist | `AskUserQuestion` to correct path |
| Diagram validation fails | `validateMermaidSyntax` returns issues | Regenerate with stricter escaping |
**Recovery Flow**:
```javascript
try {
await executePhase(phase);
} catch (error) {
const recovery = ERROR_HANDLERS[error.type];
if (recovery) {
await recovery.action(error, config);
// Retry phase or continue
} else {
// Save progress and ask user
Write(`${outputDir}/error-state.json`, { phase, error, config });
AskUserQuestion({ question: "遇到错误,如何处理?", ... });
}
}
```


@@ -1,152 +0,0 @@
# Writing Style Guide
## Core Principle
**Paragraph-based narration that builds progressively; no bullet-list dumps.**
## Prohibited Patterns
```markdown
<!-- Prohibited: bullet-list dump -->
### 模块列表
- 用户模块:处理用户相关功能
- 订单模块:处理订单相关功能
- 支付模块:处理支付相关功能
### 依赖关系
| 模块 | 依赖 | 说明 |
|------|------|------|
| A | B | xxx |
```
## Recommended Patterns
```markdown
<!-- Recommended: paragraph-style narration -->
### 模块架构设计
系统采用分层模块化架构,核心业务逻辑围绕用户、订单、支付三大领域展开。
用户模块作为系统的入口层,承担身份认证与权限管理职责,为下游模块提供
统一的用户上下文。订单模块位于业务核心层,依赖用户模块获取会话信息,
并协调支付模块完成交易闭环。
值得注意的是,支付模块采用策略模式实现多渠道支付,通过接口抽象与
具体支付网关解耦。这一设计使得新增支付渠道时,仅需实现相应策略类,
无需修改核心订单逻辑,体现了开闭原则的应用。
从依赖方向分析,系统呈现清晰的单向依赖:表现层依赖业务层,业务层
依赖数据层,未发现循环依赖。这一架构特征确保了模块的独立可测试性,
同时为后续微服务拆分奠定了基础。
```
## Writing Strategies
### Strategy 1: Shift the Subject
Move the subject from the developer's viewpoint to the system/code itself:
| Avoid | Prefer |
|------|------|
| 我们设计了... | 系统采用... |
| 开发者实现了... | 该模块通过... |
| 代码中使用了... | 架构设计体现了... |
### Strategy 2: Logical Connectives
Use connectives to keep paragraphs progressing:
- **Continuation**: 此外、进一步、在此基础上
- **Contrast**: 然而、值得注意的是、不同于
- **Causation**: 因此、这一设计使得、由此可见
- **Summary**: 综上所述、从整体来看、概言之
### Strategy 3: Depth of Explanation
Every technical point must cover:
1. **What**: objectively describe the technical implementation
2. **Why**: explain the design intent and trade-offs
3. **Impact**: state the effect and value for the system
```markdown
<!-- Example -->
系统采用依赖注入模式管理组件生命周期(是什么)。这一选择源于
对可测试性和松耦合的追求(为什么)。通过将依赖关系外置于
配置层,各模块可独立进行单元测试,同时为运行时替换实现
提供了可能(影响)。
```
## Section Templates
### Architecture Overview (paragraph style)
```markdown
## 系统架构概述
{项目名称}采用{架构模式}架构,整体设计围绕{核心理念}展开。
从宏观视角审视,系统可划分为{N}个主要层次,各层职责明确,
边界清晰。
{表现层/入口层}作为系统与外部交互的唯一入口,承担请求解析、
参数校验、响应封装等职责。该层通过{框架/技术}实现,遵循
{设计原则},确保接口的一致性与可维护性。
{业务层}是系统的核心所在,封装了全部业务逻辑。该层采用
{模式/策略}组织代码,将复杂业务拆解为{N}个领域模块。
值得注意的是,{关键设计决策}体现了对{质量属性}的重视。
{数据层}负责持久化与数据访问,通过{技术/框架}实现。
该层与业务层通过{接口/抽象}解耦,使得数据源的替换
不影响上层逻辑,体现了依赖倒置原则的应用。
```
### Design Pattern Analysis (paragraph style)
```markdown
## 设计模式应用
代码库中可识别出{模式1}、{模式2}等设计模式的应用,
这些模式的选择与系统的{核心需求}密切相关。
{模式1}主要应用于{场景/模块}。具体实现位于
`{文件路径}`,通过{实现方式}达成{目标}。
这一模式的引入有效解决了{问题},使得{效果}。
在{另一场景}中,系统采用{模式2}应对{挑战}。
不同于{模式1}的{特点}{模式2}更侧重于{关注点}。
`{文件路径}`的实现可以看出,设计者通过
{具体实现}实现了{目标}。
综合来看,模式的选择体现了对{原则}的遵循,
为系统的{质量属性}提供了有力支撑。
```
### Algorithm Flow Analysis (paragraph style)
```markdown
## 核心算法设计
{算法名称}是系统处理{业务场景}的核心逻辑,
其实现位于`{文件路径}`
从算法流程来看,整体可分为{N}个阶段。首先,
{第一阶段描述},这一步骤的目的在于{目的}。
随后,算法进入{第二阶段},通过{方法}实现{目标}。
最终,{结果处理}完成整个处理流程。
在复杂度方面,该算法的时间复杂度为{O(x)}
空间复杂度为{O(y)}。这一复杂度特征源于
{原因},在{数据规模}场景下表现良好。
值得关注的是,{算法名称}采用了{优化策略}
相较于朴素实现,{具体优化点}。这一设计决策
使得{性能提升/效果}。
```
## Quality Checklist
- [ ] No bullet-list dumps (no `-` lists or `|` tables as the primary content)
- [ ] Complete paragraphs (3-5 sentences each, logically closed)
- [ ] Logical progression (connectives link the sentences)
- [ ] Objective voice (no subjective subjects such as "我们" or "开发者")
- [ ] Depth of explanation (covers what/why/impact)
- [ ] Code references (key points cite file paths)
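Two of the checklist items can be roughly machine-checked. The sketch below is a hypothetical lint pass, not part of the skill; the 50% list-line threshold is an assumption:

```javascript
// Sketch: rough lint for list-dominated content and subjective subjects.
function lintParagraphStyle(markdown) {
  const lines = markdown.split("\n").filter(l => l.trim().length > 0);
  // List markers (-, *, 1.) and table rows (|) count as list-style lines.
  const listLines = lines.filter(
    l => /^\s*([-*]|\d+\.)\s/.test(l) || l.trim().startsWith("|")
  );
  const listRatio = lines.length ? listLines.length / lines.length : 0;
  const subjective = (markdown.match(/我们|开发者/g) || []).length;
  return {
    listDominated: listRatio > 0.5, // assumed threshold
    subjectiveMentions: subjective
  };
}
```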


@@ -1,184 +0,0 @@
---
name: software-manual
description: Generate interactive TiddlyWiki-style HTML software manuals with screenshots, API docs, and multi-level code examples. Use when creating user guides, software documentation, or API references. Triggers on "software manual", "user guide", "generate manual", "create docs".
allowed-tools: Task, AskUserQuestion, Read, Bash, Glob, Grep, Write, mcp__chrome__*
---
# Software Manual Skill
Generate comprehensive, interactive software manuals in TiddlyWiki-style single-file HTML format.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Context-Optimized Architecture │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Requirements → manual-config.json │
│ ↓ │
│ Phase 2: Exploration → exploration-*.json │
│ ↓ │
│ Phase 3: Parallel Agents → sections/section-*.md │
│ ↓ (6 Agents) │
│ Phase 3.5: Consolidation → consolidation-summary.md │
│ ↓ │
│ Phase 4: Screenshot → screenshots/*.png │
│ Capture (via Chrome MCP) │
│ ↓ │
│ Phase 5: HTML Assembly → {name}-使用手册.html │
│ ↓ │
│ Phase 6: Refinement → iterations/ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Key Design Principles
1. **Main agent orchestrates, sub-agents execute**: all heavy computation is delegated to `universal-executor` sub-agents
2. **Brief Returns**: agents return path + summary, not full content (avoid context overflow)
3. **System Agents**: use `cli-explore-agent` (exploration) and `universal-executor` (execution)
4. **Bundled mature libraries**: marked.js (MD parsing) + highlight.js (syntax highlighting), no CDN dependency
5. **Single-File HTML**: TiddlyWiki-style interactive document with embedded resources
6. **Dynamic tags**: navigation tags generated automatically from the actual sections
## Execution Flow
```
┌─────────────────────────────────────────────────────────────────┐
│ Phase 1: Requirements Discovery (main agent)                    │
│   → AskUserQuestion: collect software type, target users, scope │
│   → Output: manual-config.json                                  │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2: Project Exploration (cli-explore-agent × N)            │
│   → Parallel exploration: architecture, ui-routes,              │
│     api-endpoints, config                                       │
│   → Output: exploration-*.json                                  │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2.5: API Extraction (extract_apis.py)                     │
│   → Automatic extraction: FastAPI/TypeDoc/pdoc                  │
│   → Output: api-docs/{backend,frontend,modules}/*.md            │
├─────────────────────────────────────────────────────────────────┤
│ Phase 3: Parallel Analysis (universal-executor × 6)             │
│   → 6 sub-agents in parallel: overview, ui-guide, api-docs,     │
│     config, troubleshooting, code-examples                      │
│   → Output: sections/section-*.md                               │
├─────────────────────────────────────────────────────────────────┤
│ Phase 3.5: Consolidation (universal-executor)                   │
│   → Quality checks: consistency, cross-refs, screenshot markers │
│   → Output: consolidation-summary.md, screenshots-list.json     │
├─────────────────────────────────────────────────────────────────┤
│ Phase 4: Screenshot Capture (universal-executor + Chrome MCP)   │
│   → Batch screenshots via mcp__chrome__screenshot               │
│   → Output: screenshots/*.png + manifest.json                   │
├─────────────────────────────────────────────────────────────────┤
│ Phase 5: HTML Assembly (universal-executor)                     │
│   → Assemble HTML: MD→tiddlers, embed CSS/JS/images             │
│   → Output: {name}-使用手册.html                                │
├─────────────────────────────────────────────────────────────────┤
│ Phase 6: Iterative Refinement (main agent)                      │
│   → Preview + user feedback + iterative fixes                   │
│   → Output: iterations/v*.html                                  │
└─────────────────────────────────────────────────────────────────┘
```
## Agent Configuration
| Agent | Role | Output File | Focus Areas |
|-------|------|-------------|-------------|
| overview | Product Manager | section-overview.md | Product intro, features, quick start |
| ui-guide | UX Expert | section-ui-guide.md | UI operations, step-by-step guides |
| api-docs | API Architect | section-api-reference.md | REST API, Frontend API |
| config | DevOps Engineer | section-configuration.md | Env vars, deployment, settings |
| troubleshooting | Support Engineer | section-troubleshooting.md | FAQs, error codes, solutions |
| code-examples | Developer Advocate | section-examples.md | Beginner/Intermediate/Advanced examples |
## Agent Return Format
```typescript
interface ManualAgentReturn {
status: "completed" | "partial" | "failed";
output_file: string;
summary: string; // Max 50 chars
screenshots_needed: Array<{
id: string; // e.g., "ss-login-form"
url: string; // Relative or absolute URL
description: string; // "Login form interface"
selector?: string; // CSS selector for partial screenshot
wait_for?: string; // Element to wait for
}>;
cross_references: string[]; // Other sections referenced
difficulty_level: "beginner" | "intermediate" | "advanced";
}
```
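The orchestrator can guard against malformed agent replies by validating them against this interface at runtime. A minimal sketch, mirroring the fields above; the guard itself is an assumption, not mandated by the skill:

```javascript
// Sketch: runtime check for the ManualAgentReturn shape.
function isManualAgentReturn(x) {
  return (
    ["completed", "partial", "failed"].includes(x.status) &&
    typeof x.output_file === "string" &&
    typeof x.summary === "string" &&
    x.summary.length <= 50 && // "Max 50 chars" from the interface
    Array.isArray(x.screenshots_needed) &&
    x.screenshots_needed.every(s => s.id && s.url && s.description) &&
    Array.isArray(x.cross_references) &&
    ["beginner", "intermediate", "advanced"].includes(x.difficulty_level)
  );
}
```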
## HTML Features (TiddlyWiki-style)
1. **Search**: Full-text search with result highlighting
2. **Collapse/Expand**: Per-section collapsible content
3. **Tag Navigation**: Filter by category tags
4. **Theme Toggle**: Light/Dark mode with localStorage persistence
5. **Single File**: All CSS/JS/images embedded as Base64
6. **Offline**: Works without internet connection
7. **Print-friendly**: Optimized print stylesheet
## Directory Setup
```javascript
// Generate timestamp directory name
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const dir = `.workflow/.scratchpad/manual-${timestamp}`;
// Windows
Bash(`mkdir "${dir}\\sections" && mkdir "${dir}\\screenshots" && mkdir "${dir}\\api-docs" && mkdir "${dir}\\iterations"`);
```
## Output Structure
```
.workflow/.scratchpad/manual-{timestamp}/
├── manual-config.json # Phase 1
├── exploration/ # Phase 2
│ ├── exploration-architecture.json
│ ├── exploration-ui-routes.json
│ └── exploration-api-endpoints.json
├── sections/ # Phase 3
│ ├── section-overview.md
│ ├── section-ui-guide.md
│ ├── section-api-reference.md
│ ├── section-configuration.md
│ ├── section-troubleshooting.md
│ └── section-examples.md
├── consolidation-summary.md # Phase 3.5
├── api-docs/ # API documentation
│ ├── frontend/ # TypeDoc output
│ └── backend/ # Swagger/OpenAPI output
├── screenshots/ # Phase 4
│ ├── ss-*.png
│ └── screenshots-manifest.json
├── iterations/ # Phase 6
│ ├── v1.html
│ └── v2.html
└── {软件名}-使用手册.html # Final Output
```
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md) | Collect user configuration |
| [phases/02-project-exploration.md](phases/02-project-exploration.md) | Project type detection |
| [phases/02.5-api-extraction.md](phases/02.5-api-extraction.md) | Automatic API extraction |
| [phases/03-parallel-analysis.md](phases/03-parallel-analysis.md) | 6-agent parallel analysis |
| [phases/03.5-consolidation.md](phases/03.5-consolidation.md) | Consolidation and quality checks |
| [phases/04-screenshot-capture.md](phases/04-screenshot-capture.md) | Chrome MCP screenshots |
| [phases/05-html-assembly.md](phases/05-html-assembly.md) | HTML assembly |
| [phases/06-iterative-refinement.md](phases/06-iterative-refinement.md) | Iterative refinement |
| [specs/quality-standards.md](specs/quality-standards.md) | Quality standards |
| [specs/writing-style.md](specs/writing-style.md) | Writing style |
| [templates/tiddlywiki-shell.html](templates/tiddlywiki-shell.html) | HTML template |
| [templates/css/wiki-base.css](templates/css/wiki-base.css) | Base styles |
| [templates/css/wiki-dark.css](templates/css/wiki-dark.css) | Dark theme |
| [scripts/bundle-libraries.md](scripts/bundle-libraries.md) | Library bundling |
| [scripts/api-extractor.md](scripts/api-extractor.md) | API extraction notes |
| [scripts/extract_apis.py](scripts/extract_apis.py) | API extraction script |
| [scripts/screenshot-helper.md](scripts/screenshot-helper.md) | Screenshot helper |


@@ -1,162 +0,0 @@
# Phase 1: Requirements Discovery
Collect user requirements and generate configuration for the manual generation process.
## Objective
Gather essential information about the software project to customize the manual generation:
- Software type and characteristics
- Target user audience
- Documentation scope and depth
- Special requirements
## Execution Steps
### Step 1: Software Information Collection
Use `AskUserQuestion` to collect:
```javascript
AskUserQuestion({
questions: [
{
question: "What type of software is this project?",
header: "Software Type",
options: [
{ label: "Web Application", description: "Frontend + Backend web app with UI" },
{ label: "CLI Tool", description: "Command-line interface tool" },
{ label: "SDK/Library", description: "Developer library or SDK" },
{ label: "Desktop App", description: "Desktop application (Electron, etc.)" }
],
multiSelect: false
},
{
question: "Who is the target audience for this manual?",
header: "Target Users",
options: [
{ label: "End Users", description: "Non-technical users who use the product" },
{ label: "Developers", description: "Developers integrating or extending the product" },
{ label: "Administrators", description: "System admins deploying and maintaining" },
{ label: "All Audiences", description: "Mixed audience with different sections" }
],
multiSelect: false
},
{
question: "What documentation scope do you need?",
header: "Doc Scope",
options: [
{ label: "Quick Start", description: "Essential getting started guide only" },
{ label: "User Guide", description: "Complete user-facing documentation" },
{ label: "API Reference", description: "Focus on API documentation" },
{ label: "Comprehensive", description: "Full documentation including all sections" }
],
multiSelect: false
},
{
question: "What difficulty levels should code examples cover?",
header: "Example Levels",
options: [
{ label: "Beginner Only", description: "Simple, basic examples" },
{ label: "Beginner + Intermediate", description: "Basic to moderate complexity" },
{ label: "All Levels", description: "Beginner, Intermediate, and Advanced" }
],
multiSelect: false
}
]
});
```
### Step 2: Auto-Detection (Supplement)
Automatically detect project characteristics:
```javascript
// Detect from package.json (Read returns text, so parse it first)
const packageJson = JSON.parse(Read('package.json'));
const softwareName = packageJson.name;
const version = packageJson.version;
const description = packageJson.description;
// Detect tech stack
const hasReact = packageJson.dependencies?.react;
const hasVue = packageJson.dependencies?.vue;
const hasExpress = packageJson.dependencies?.express;
const hasNestJS = packageJson.dependencies?.['@nestjs/core'];
// Detect CLI
const hasBin = !!packageJson.bin;
// Detect UI
const hasPages = Glob('src/pages/**/*').length > 0 || Glob('pages/**/*').length > 0;
const hasRoutes = Glob('**/routes.*').length > 0;
```
### Step 3: Generate Configuration
Create `manual-config.json`:
```json
{
"software": {
"name": "{{detected or user input}}",
"version": "{{from package.json}}",
"description": "{{from package.json}}",
"type": "{{web|cli|sdk|desktop}}"
},
"target_audience": "{{end_users|developers|admins|all}}",
"doc_scope": "{{quick_start|user_guide|api_reference|comprehensive}}",
"example_levels": ["beginner", "intermediate", "advanced"],
"tech_stack": {
"frontend": "{{react|vue|angular|vanilla}}",
"backend": "{{express|nestjs|fastify|none}}",
"language": "{{typescript|javascript}}",
"ui_framework": "{{tailwind|mui|antd|none}}"
},
"features": {
"has_ui": true,
"has_api": true,
"has_cli": false,
"has_config": true
},
"agents_to_run": [
"overview",
"ui-guide",
"api-docs",
"config",
"troubleshooting",
"code-examples"
],
"screenshot_config": {
"enabled": true,
"dev_command": "npm run dev",
"dev_url": "http://localhost:3000",
"wait_timeout": 5000
},
"output": {
"filename": "{{name}}-使用手册.html",
"theme": "light",
"language": "zh-CN"
},
"timestamp": "{{ISO8601}}"
}
```
## Agent Selection Logic
Based on `doc_scope`, select agents to run:
| Scope | Agents |
|-------|--------|
| quick_start | overview |
| user_guide | overview, ui-guide, config, troubleshooting |
| api_reference | overview, api-docs, code-examples |
| comprehensive | ALL 6 agents |
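The scope-to-agents mapping above can be sketched as a simple lookup; falling back to the comprehensive set for an unknown scope is an assumption, not part of the spec:

```javascript
// Scope → agents mapping, mirroring the table above.
const SCOPE_AGENTS = {
  quick_start: ["overview"],
  user_guide: ["overview", "ui-guide", "config", "troubleshooting"],
  api_reference: ["overview", "api-docs", "code-examples"],
  comprehensive: [
    "overview", "ui-guide", "api-docs",
    "config", "troubleshooting", "code-examples"
  ]
};

function selectAgents(docScope) {
  // Assumed fallback: unknown scopes get the full agent set.
  return SCOPE_AGENTS[docScope] ?? SCOPE_AGENTS.comprehensive;
}
```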
## Output
- **File**: `manual-config.json`
- **Location**: `.workflow/.scratchpad/manual-{timestamp}/`
## Next Phase
Proceed to [Phase 2: Project Exploration](02-project-exploration.md) with the generated configuration.


@@ -1,101 +0,0 @@
# Phase 2: Project Exploration
Use `cli-explore-agent` to explore the project structure and produce the structured data the documentation needs.
## Exploration Angles
```javascript
const EXPLORATION_ANGLES = {
web: ['architecture', 'ui-routes', 'api-endpoints', 'config'],
cli: ['architecture', 'commands', 'config'],
sdk: ['architecture', 'public-api', 'types', 'config'],
desktop: ['architecture', 'ui-screens', 'config']
};
```
## Execution Flow
```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
const angles = EXPLORATION_ANGLES[config.software.type];
// Explore in parallel
const tasks = angles.map(angle => Task({
subagent_type: 'cli-explore-agent',
run_in_background: false,
prompt: buildExplorationPrompt(angle, config, workDir)
}));
const results = await Promise.all(tasks);
```
## Exploration Configs
```javascript
const EXPLORATION_CONFIGS = {
architecture: {
task: '分析项目模块结构、入口点、依赖关系',
patterns: ['src/*/', 'package.json', 'tsconfig.json'],
output: 'exploration-architecture.json'
},
'ui-routes': {
task: '提取 UI 路由、页面组件、导航结构',
patterns: ['src/pages/**', 'src/views/**', 'app/**/page.*', 'src/router/**'],
output: 'exploration-ui-routes.json'
},
'api-endpoints': {
task: '提取 REST API 端点、请求/响应类型',
patterns: ['src/**/*.controller.*', 'src/routes/**', 'openapi.*', 'swagger.*'],
output: 'exploration-api-endpoints.json'
},
config: {
task: '提取环境变量、配置文件选项',
patterns: ['.env.example', 'config/**', 'docker-compose.yml'],
output: 'exploration-config.json'
},
commands: {
task: '提取 CLI 命令、选项、示例',
patterns: ['src/cli*', 'bin/*', 'src/commands/**'],
output: 'exploration-commands.json'
}
};
```
## Prompt Construction
```javascript
function buildExplorationPrompt(angle, config, workDir) {
const cfg = EXPLORATION_CONFIGS[angle];
return `
[TASK]
${cfg.task}
[SCOPE]
项目类型: ${config.software.type}
扫描模式: deep-scan
文件模式: ${cfg.patterns.join(', ')}
[OUTPUT]
文件: ${workDir}/exploration/${cfg.output}
格式: JSON (schema-compliant)
[RETURN]
简要说明发现的内容数量和关键发现
`;
}
```
## Output Structure
```
exploration/
├── exploration-architecture.json   # module structure
├── exploration-ui-routes.json      # UI routes
├── exploration-api-endpoints.json  # API endpoints
├── exploration-config.json         # config options
└── exploration-commands.json       # CLI commands (if CLI)
```
## Next Phase
→ [Phase 3: Parallel Analysis](03-parallel-analysis.md)


@@ -1,161 +0,0 @@
# Phase 2.5: API Extraction
After project exploration and before parallel analysis, automatically extract API documentation.
## Core Principle
**Extract with mature tooling and keep the output format compatible with the wiki templates.**
## Execution Flow
```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
// Check the project path configuration
const apiSources = config.api_sources || detectApiSources(config.project_path);
// Run the API extraction
Bash({
command: `python .claude/skills/software-manual/scripts/extract_apis.py -o "${workDir}" -p ${apiSources.join(' ')}`
});
// Validate the output
const apiDocsDir = `${workDir}/api-docs`;
const extractedFiles = Glob(`${apiDocsDir}/**/*.{json,md}`);
console.log(`Extracted ${extractedFiles.length} API documentation files`);
```
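The flow above calls `detectApiSources`, which is not defined in this spec. A minimal sketch follows, under the assumption that sources are classified by marker files matching the detection table below; it takes a `{ root: [files] }` map so it stays pure and testable, whereas the real version would `Glob` the filesystem:

```javascript
// Hypothetical sketch of detectApiSources: classify source roots by marker files.
function detectApiSources(rootsToFiles) {
  const sources = [];
  for (const [root, files] of Object.entries(rootsToFiles)) {
    if (files.includes("app/main.py")) {
      sources.push({ path: root, type: "fastapi" });
    } else if (files.includes("package.json")) {
      sources.push({ path: root, type: "typescript" });
    } else if (files.includes("pyproject.toml") || files.includes("setup.py")) {
      sources.push({ path: root, type: "python" });
    }
  }
  return sources;
}
```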
## Supported Project Types
| Type | Detection | Extraction Tool | Output Format |
|------|-----------|-----------------|---------------|
| FastAPI | `app/main.py` + FastAPI import | OpenAPI JSON | `openapi.json` + `API_SUMMARY.md` |
| Next.js | `package.json` + next | TypeDoc | `*.md` (Markdown) |
| Python Module | `__init__.py` + setup.py/pyproject.toml | pdoc | `*.md` (Markdown) |
| Express | `package.json` + express | swagger-jsdoc | `openapi.json` |
| NestJS | `package.json` + @nestjs | @nestjs/swagger | `openapi.json` |
## Output Format Spec
### Markdown Compatibility Requirements
Ensure the emitted Markdown is compatible with the wiki CSS styles:
```markdown
# API Reference → <h1> (wiki-base.css)
## Endpoints → <h2>
| Method | Path | Summary | → <table> blue header row
|--------|------|---------|
| `GET` | `/api/...` | ... | → <code> red highlight
### GET /api/users → <h3>
\`\`\`json → <pre><code> dark background
{
"id": 1,
"name": "example"
}
\`\`\`
- Parameter: `id` (required) → <ul><li> + <code>
```
### Format Validation Checks
```javascript
function validateApiDocsFormat(apiDocsDir) {
const issues = [];
const mdFiles = Glob(`${apiDocsDir}/**/*.md`);
for (const file of mdFiles) {
const content = Read(file);
// Check table formatting
if (content.includes('|') && !content.match(/\|.*\|.*\|/)) {
issues.push(`${file}: incomplete table formatting`);
}
// Check code-block language annotations
const codeBlocks = content.match(/```(\w*)\n/g) || [];
const unlabeled = codeBlocks.filter(b => b === '```\n');
if (unlabeled.length > 0) {
issues.push(`${file}: ${unlabeled.length} code blocks missing a language annotation`);
}
// Check heading levels
if (!content.match(/^# /m)) {
issues.push(`${file}: missing a top-level heading`);
}
}
return issues;
}
```
## Project Configuration Example
Configure API sources in `manual-config.json`:
```json
{
"software": {
"name": "Hydro Generator Workbench",
"type": "web"
},
"api_sources": {
"backend": {
"path": "D:/dongdiankaifa9/backend",
"type": "fastapi",
"entry": "app.main:app"
},
"frontend": {
"path": "D:/dongdiankaifa9/frontend",
"type": "typescript",
"entries": ["lib", "hooks", "components"]
},
"hydro_generator_module": {
"path": "D:/dongdiankaifa9/hydro_generator_module",
"type": "python"
},
"multiphysics_network": {
"path": "D:/dongdiankaifa9/multiphysics_network",
"type": "python"
}
}
}
```
## Output Structure
```
{workDir}/api-docs/
├── backend/
│   ├── openapi.json           # OpenAPI 3.0 spec
│   └── API_SUMMARY.md         # Markdown summary (wiki-compatible)
├── frontend/
│   ├── modules.md             # TypeDoc module docs
│   ├── classes/               # class docs
│   └── functions/             # function docs
├── hydro_generator/
│   ├── assembler.md           # pdoc module docs
│   ├── blueprint.md
│   └── builders/
└── multiphysics/
    ├── analysis_domain.md
    ├── builders.md
    └── compilers.md
```
## Quality Gates
- [ ] Every configured API source has been extracted
- [ ] Markdown format is compatible with the wiki CSS
- [ ] Tables render correctly (blue header row)
- [ ] Code blocks carry language annotations
- [ ] No empty or corrupt files
## Next Phase
→ [Phase 3: Parallel Analysis](03-parallel-analysis.md)


@@ -1,183 +0,0 @@
# Phase 3: Parallel Analysis
Use `universal-executor` to generate the 6 documentation sections in parallel.
## Agent Configuration
```javascript
const AGENT_CONFIGS = {
overview: {
role: 'Product Manager',
output: 'section-overview.md',
task: '撰写产品概览、核心功能、快速入门指南',
focus: '产品定位、目标用户、5步快速入门、系统要求',
input: ['exploration-architecture.json', 'README.md', 'package.json'],
tag: 'getting-started'
},
'interface-guide': {
role: 'Product Designer',
output: 'section-interface.md',
task: '撰写界面或交互指南Web 截图、CLI 命令交互、桌面应用操作)',
focus: '视觉布局、交互流程、命令行参数、输入/输出示例',
input: ['exploration-ui-routes.json', 'src/**', 'pages/**', 'views/**', 'components/**', 'src/commands/**'],
tag: 'interface',
screenshot_rules: `
根据项目类型标注交互点:
[Web] <!-- SCREENSHOT: id="ss-{功能}" url="{路由}" selector="{CSS选择器}" description="{描述}" -->
[CLI] 使用代码块展示命令交互:
\`\`\`bash
$ command --flag value
Expected output here
\`\`\`
[Desktop] <!-- SCREENSHOT: id="ss-{功能}" description="{描述}" -->
`
},
'api-reference': {
role: 'Technical Architect',
output: 'section-reference.md',
task: '撰写接口参考文档REST API / 函数库 / CLI 命令)',
focus: '函数签名、端点定义、参数说明、返回值、错误代码',
pre_extract: 'python .claude/skills/software-manual/scripts/extract_apis.py -o ${workDir}',
input: [
'${workDir}/api-docs/backend/openapi.json', // FastAPI OpenAPI
'${workDir}/api-docs/backend/API_SUMMARY.md', // Backend summary
'${workDir}/api-docs/frontend/**/*.md', // TypeDoc output
'${workDir}/api-docs/hydro_generator/**/*.md', // Python module
'${workDir}/api-docs/multiphysics/**/*.md' // Python module
],
tag: 'api'
},
config: {
role: 'DevOps Engineer',
output: 'section-configuration.md',
task: '撰写配置指南,涵盖环境变量、配置文件、部署设置',
focus: '环境变量表格、配置文件格式、部署选项、安全设置',
input: ['exploration-config.json', '.env.example', 'config/**', '*.config.*'],
tag: 'config'
},
troubleshooting: {
role: 'Support Engineer',
output: 'section-troubleshooting.md',
task: '撰写故障排查指南涵盖常见问题、错误码、FAQ',
focus: '常见问题与解决方案、错误码参考、FAQ、获取帮助',
input: ['docs/troubleshooting.md', 'src/**/errors.*', 'src/**/exceptions.*', 'TROUBLESHOOTING.md'],
tag: 'troubleshooting'
},
'code-examples': {
role: 'Developer Advocate',
output: 'section-examples.md',
task: '撰写多难度级别代码示例入门40%/进阶40%/高级20%',
focus: '完整可运行代码、分步解释、预期输出、最佳实践',
input: ['examples/**', 'tests/**', 'demo/**', 'samples/**'],
tag: 'examples'
}
};
```
## Execution Flow
```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
// 1. Pre-extract API docs (when pre_extract is configured)
for (const [name, cfg] of Object.entries(AGENT_CONFIGS)) {
if (cfg.pre_extract) {
const cmd = cfg.pre_extract.replace(/\$\{workDir\}/g, workDir);
console.log(`[Pre-extract] ${name}: ${cmd}`);
Bash({ command: cmd });
}
}
// 2. Launch the 6 universal-executor agents in parallel
const tasks = Object.entries(AGENT_CONFIGS).map(([name, cfg]) =>
Task({
subagent_type: 'universal-executor',
run_in_background: false,
prompt: buildAgentPrompt(name, cfg, config, workDir)
})
);
const results = await Promise.all(tasks);
```
## Prompt Construction
```javascript
function buildAgentPrompt(name, cfg, config, workDir) {
const screenshotSection = cfg.screenshot_rules
? `\n[SCREENSHOT RULES]\n${cfg.screenshot_rules}`
: '';
return `
[ROLE] ${cfg.role}
[PROJECT CONTEXT]
Project type: ${config.software.type} (web/cli/sdk/desktop)
Language: ${config.software.language || 'auto-detect'}
Name: ${config.software.name}
[TASK]
${cfg.task}
Output: ${workDir}/sections/${cfg.output}
[INPUT]
- Config: ${workDir}/manual-config.json
- Exploration results: ${workDir}/exploration/
- Scan paths: ${cfg.input.join(', ')}
[CONTENT REQUIREMENTS]
- Heading levels: # ## ### (3 levels max)
- Code blocks: \`\`\`language ... \`\`\` (language tag required)
- Tables: | col1 | col2 | format
- Lists: ordered 1. 2. 3. / unordered - - -
- Inline code: \`code\`
- Links: [text](url)
${screenshotSection}
[FOCUS]
${cfg.focus}
[OUTPUT FORMAT]
Markdown file containing:
- Clear section structure
- Concrete code examples
- Parameter/configuration tables
- Common use-case explanations
[RETURN JSON]
{
"status": "completed",
"output_file": "sections/${cfg.output}",
"summary": "<50字>",
"tag": "${cfg.tag}",
"screenshots_needed": []
}
`;
}
```
## Result Collection
```javascript
const agentResults = results.map(r => JSON.parse(r));
const allScreenshots = agentResults.flatMap(r => r.screenshots_needed);
Write(`${workDir}/agent-results.json`, JSON.stringify({
results: agentResults,
screenshots_needed: allScreenshots,
timestamp: new Date().toISOString()
}, null, 2));
```
## Quality Checks
- [ ] Markdown syntax is valid
- [ ] No placeholder text remains
- [ ] Code blocks are tagged with a language
- [ ] Screenshot markers are well-formed
- [ ] Cross-references resolve
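These checks can be spot-checked mechanically before handing results back to the orchestrator; a minimal sketch, assuming the marker and fence conventions described in this phase (the placeholder keywords are an assumption):

```python
import re

def check_section(md: str) -> list[str]:
    """Return a list of quality issues found in one section's markdown."""
    issues = []
    in_block = False
    for line in md.splitlines():
        stripped = line.strip()
        if stripped.startswith("```"):
            # An opening fence that is exactly ``` has no language tag.
            if not in_block and stripped == "```":
                issues.append("code block missing language tag")
            in_block = not in_block
    # Screenshot markers must carry an id attribute.
    for m in re.finditer(r'<!--\s*SCREENSHOT:([^>]*)-->', md):
        if 'id="' not in m.group(1):
            issues.append("screenshot marker missing id")
    # Leftover placeholder text.
    if "TODO" in md or "TBD" in md:
        issues.append("placeholder text present")
    return issues
```

An empty list means the section passes; anything else is fed back to the writing agent.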
## Next Phase
→ [Phase 3.5: Consolidation](03.5-consolidation.md)


@@ -1,82 +0,0 @@
# Phase 3.5: Consolidation
Delegate quality checks to a `universal-executor` subagent to avoid exhausting the main agent's memory.
## Core Principle
**The main agent orchestrates; subagents do the heavy lifting.**
## Execution Flow
```javascript
const agentResults = JSON.parse(Read(`${workDir}/agent-results.json`));
// Delegate the consolidation checks to universal-executor
const result = Task({
subagent_type: 'universal-executor',
run_in_background: false,
prompt: buildConsolidationPrompt(workDir)
});
const consolidationResult = JSON.parse(result);
```
## Prompt Construction
```javascript
function buildConsolidationPrompt(workDir) {
return `
[ROLE] Quality Analyst
[TASK]
Check all sections for consistency and completeness
[INPUT]
- Section files: ${workDir}/sections/section-*.md
- Agent results: ${workDir}/agent-results.json
[CHECKS]
1. Markdown syntax validity
2. Screenshot marker format (<!-- SCREENSHOT: id="..." -->)
3. Cross-reference validity
4. Terminology consistency
5. Code block language tags
[OUTPUT]
1. Write ${workDir}/consolidation-summary.md
2. Write ${workDir}/screenshots-list.json (screenshot inventory)
[RETURN JSON]
{
"status": "completed",
"sections_checked": <n>,
"screenshots_found": <n>,
"issues": { "errors": <n>, "warnings": <n> },
"quality_score": <0-100>
}
`;
}
```
## Agent Responsibilities
1. **Read sections** → check each section-*.md in turn
2. **Extract screenshots** → collect all screenshot markers
3. **Validate references** → check cross-reference validity
4. **Assess quality** → compute an overall score
5. **Emit report** → consolidation-summary.md
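The screenshot-extraction step amounts to a regex scan over the section files; a sketch, assuming the attribute order shown in the marker format above:

```python
import re

# Named groups mirror the marker attributes; url/selector/description are optional.
MARKER = re.compile(
    r'<!--\s*SCREENSHOT:\s*id="(?P<id>[^"]+)"'
    r'(?:\s+url="(?P<url>[^"]*)")?'
    r'(?:\s+selector="(?P<selector>[^"]*)")?'
    r'(?:\s+description="(?P<description>[^"]*)")?'
    r'\s*-->'
)

def extract_markers(md: str) -> list[dict]:
    """Collect every screenshot marker in a section as a dict."""
    return [m.groupdict() for m in MARKER.finditer(md)]
```

The collected dicts are what ends up in `screenshots-list.json` for Phase 4.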
## Output
- `consolidation-summary.md` - quality report
- `screenshots-list.json` - screenshot inventory (consumed by Phase 4)
## Quality Gate
- [ ] No errors
- [ ] Overall score >= 60%
- [ ] Cross-references valid
## Next Phase
→ [Phase 4: Screenshot Capture](04-screenshot-capture.md)


@@ -1,89 +0,0 @@
# Phase 4: Screenshot Capture
Delegate Chrome MCP screenshot capture to a `universal-executor` subagent.
## Core Principle
**The main agent orchestrates; the subagent handles screenshot capture.**
## Execution Flow
```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
const screenshotsList = JSON.parse(Read(`${workDir}/screenshots-list.json`));
// Delegate screenshot capture to universal-executor
const result = Task({
subagent_type: 'universal-executor',
run_in_background: false,
prompt: buildScreenshotPrompt(config, screenshotsList, workDir)
});
const captureResult = JSON.parse(result);
```
## Prompt Construction
```javascript
function buildScreenshotPrompt(config, screenshotsList, workDir) {
return `
[ROLE] Screenshot Capturer
[TASK]
Capture screenshots in batch via Chrome MCP
[INPUT]
- Config: ${workDir}/manual-config.json
- Screenshot list: ${workDir}/screenshots-list.json
[STEPS]
1. Check Chrome MCP availability (mcp__chrome__*)
2. Start the dev server: ${config.screenshot_config?.dev_command || 'npm run dev'}
3. Wait for the server to become ready: ${config.screenshot_config?.dev_url || 'http://localhost:3000'}
4. Walk the screenshot list, calling mcp__chrome__screenshot for each entry
5. Save screenshots to ${workDir}/screenshots/
6. Generate the manifest: ${workDir}/screenshots/screenshots-manifest.json
7. Stop the dev server
[MCP CALLS]
- mcp__chrome__screenshot({ url, selector?, viewport })
- Save as PNG files
[FALLBACK]
If Chrome MCP is unavailable, generate a manual capture guide: MANUAL_CAPTURE.md
[RETURN JSON]
{
"status": "completed|skipped",
"captured": <n>,
"failed": <n>,
"manifest_file": "screenshots-manifest.json"
}
`;
}
```
## Agent Responsibilities
1. **Check MCP** → Chrome MCP availability
2. **Start server** → dev server
3. **Batch capture** → call mcp__chrome__screenshot
4. **Save files** → screenshots/*.png
5. **Generate manifest** → screenshots-manifest.json
## Output
- `screenshots/*.png` - screenshot files
- `screenshots/screenshots-manifest.json` - manifest
- `screenshots/MANUAL_CAPTURE.md` - manual capture guide (fallback)
## Quality Gate
- [ ] High-priority screenshots captured
- [ ] Consistent dimensions (1280×800)
- [ ] No blank screenshots
- [ ] Manifest complete
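The gate can be enforced against the manifest before moving on; a minimal sketch (the manifest entry shape — `file`, `width`, `height`, `bytes` — is an assumption, not a documented contract):

```python
def check_manifest(entries: list[dict],
                   expected: tuple[int, int] = (1280, 800)) -> list[str]:
    """Flag manifest entries that violate the Phase 4 quality gate."""
    problems = []
    for e in entries:
        # Every screenshot should match the agreed viewport size.
        if (e.get("width"), e.get("height")) != expected:
            problems.append(f'{e.get("file")}: unexpected size')
        # A zero-byte file usually means the capture silently failed.
        if e.get("bytes", 0) == 0:
            problems.append(f'{e.get("file")}: empty screenshot')
    return problems
```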
## Next Phase
→ [Phase 5: HTML Assembly](05-html-assembly.md)


@@ -1,132 +0,0 @@
# Phase 5: HTML Assembly
Delegate final HTML generation to a `universal-executor` subagent to avoid exhausting the main agent's memory.
## Core Principle
**The main agent orchestrates; subagents do the heavy lifting.**
## Execution Flow
```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
// Delegate HTML assembly to universal-executor
const result = Task({
subagent_type: 'universal-executor',
run_in_background: false,
prompt: buildAssemblyPrompt(config, workDir)
});
const buildResult = JSON.parse(result);
```
## Prompt Construction
```javascript
function buildAssemblyPrompt(config, workDir) {
return `
[ROLE] HTML Assembler
[TASK]
Generate a TiddlyWiki-style interactive HTML manual (use mature libraries; no external CDN dependencies)
[INPUT]
- Template: .claude/skills/software-manual/templates/tiddlywiki-shell.html
- CSS: .claude/skills/software-manual/templates/css/wiki-base.css, wiki-dark.css
- Config: ${workDir}/manual-config.json
- Sections: ${workDir}/sections/section-*.md
- Agent results: ${workDir}/agent-results.json (includes tag info)
- Screenshots: ${workDir}/screenshots/
[LIBRARIES TO EMBED]
1. marked.js (v14+) - Markdown to HTML
- Fetch the contents from https://unpkg.com/marked/marked.min.js and inline them
2. highlight.js (v11+) - code syntax highlighting
- Core + common language packs (js, ts, python, bash, json, yaml, html, css)
- Use the github-dark theme
[STEPS]
1. Read the HTML template and CSS
2. Inline the marked.js and highlight.js code
3. Read agent-results.json and extract each section's tag
4. Dynamically generate {{TAG_BUTTONS_HTML}} (based on the tags actually used)
5. Read each section-*.md and convert it to HTML with marked
6. Add data-language attributes and syntax highlighting to code blocks
7. Process <!-- SCREENSHOT: id="..." --> markers, embedding Base64 images
8. Generate the table of contents and search index
9. Assemble the final HTML and write ${workDir}/${config.software.name}-使用手册.html
[CONTENT FORMATTING]
- Code blocks: dark background + language label + syntax highlighting
- Tables: blue header + borders + hover effect
- Inline code: red highlight
- Lists: enhanced ordered/unordered styles
- Left navigation: fixed sidebar + TOC
[RETURN JSON]
{
"status": "completed",
"output_file": "${config.software.name}-使用手册.html",
"file_size": "<size>",
"sections_count": <n>,
"tags_generated": [],
"screenshots_embedded": <n>
}
`;
}
```
## Agent Responsibilities
1. **Read templates** → HTML + CSS
2. **Convert sections** → Markdown → HTML tiddlers
3. **Embed screenshots** → Base64 encoding
4. **Generate index** → search data
5. **Assemble output** → single-file HTML
## Markdown Conversion Rules
Implemented inside the agent:
```
# H1 → <h1>
## H2 → <h2>
### H3 → <h3>
```code``` → <pre><code>
**bold** → <strong>
*italic* → <em>
[text](url) → <a href>
- item → <li>
<!-- SCREENSHOT: id="xxx" --> → <figure><img src="data:..."></figure>
```
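The last rule — turning a screenshot marker into a `<figure>` — is the one step a stock Markdown converter does not cover, so it runs as a pre-pass over the raw text; a sketch (the id-to-Base64 lookup table is an assumption):

```python
import re

def embed_markers(md: str, images: dict[str, str]) -> str:
    """Replace <!-- SCREENSHOT: id="..." --> markers with inline <figure> tags."""
    def repl(m: re.Match) -> str:
        sid = m.group(1)
        data = images.get(sid)
        if data is None:
            # Keep a visible placeholder rather than dropping the marker silently.
            return f'<figure><figcaption>{sid} (missing)</figcaption></figure>'
        return f'<figure><img src="data:image/png;base64,{data}" alt="{sid}"></figure>'
    return re.sub(r'<!--\s*SCREENSHOT:\s*id="([^"]+)"[^>]*-->', repl, md)
```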
## Tiddler 结构
```html
<article class="tiddler" id="tiddler-{name}" data-tags="..." data-difficulty="...">
<header class="tiddler-header">
<h2><button class="collapse-toggle"></button> {title}</h2>
<div class="tiddler-meta">{badges}</div>
</header>
<div class="tiddler-content">{html}</div>
</article>
```
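Generating that structure is mechanical string assembly per section; a sketch (the toggle glyph and difficulty values are assumptions — the template's CSS and JS key off the class names and `data-*` attributes shown above):

```python
def build_tiddler(name: str, title: str, html: str,
                  tags: list[str], difficulty: str = "beginner") -> str:
    """Assemble one tiddler <article> matching the structure above."""
    return (
        f'<article class="tiddler" id="tiddler-{name}" '
        f'data-tags="{" ".join(tags)}" data-difficulty="{difficulty}">\n'
        f'  <header class="tiddler-header">\n'
        f'    <h2><button class="collapse-toggle">▼</button> {title}</h2>\n'
        f'    <div class="tiddler-meta"></div>\n'
        f'  </header>\n'
        f'  <div class="tiddler-content">{html}</div>\n'
        f'</article>'
    )
```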
## Output
- `{软件名}-使用手册.html` - final HTML
- `build-report.json` - build report
## Quality Gate
- [ ] HTML renders correctly
- [ ] Search works
- [ ] Collapse/expand works
- [ ] Theme toggle persists
- [ ] Screenshots display correctly
- [ ] File size < 10MB
## Next Phase
→ [Phase 6: Iterative Refinement](06-iterative-refinement.md)


@@ -1,259 +0,0 @@
# Phase 6: Iterative Refinement
Preview, collect feedback, and iterate until quality meets standards.
## Objective
- Preview generated HTML in browser
- Collect user feedback
- Address issues iteratively
- Finalize documentation
## Execution Steps
### Step 1: Preview HTML
```javascript
const buildReport = JSON.parse(Read(`${workDir}/build-report.json`));
const outputFile = `${workDir}/${buildReport.output}`;
// Open in default browser for preview
Bash({ command: `start "${outputFile}"` }); // Windows
// Bash({ command: `open "${outputFile}"` }); // macOS
// Report to user
console.log(`
📖 Manual Preview
File: ${buildReport.output}
Size: ${buildReport.size_human}
Sections: ${buildReport.sections}
Screenshots: ${buildReport.screenshots}
Please review the manual in your browser.
`);
```
### Step 2: Collect Feedback
```javascript
const feedback = await AskUserQuestion({
questions: [
{
question: "How does the manual look overall?",
header: "Overall",
options: [
{ label: "Looks great!", description: "Ready to finalize" },
{ label: "Minor issues", description: "Small tweaks needed" },
{ label: "Major issues", description: "Significant changes required" },
{ label: "Missing content", description: "Need to add more sections" }
],
multiSelect: false
},
{
question: "Which aspects need improvement? (Select all that apply)",
header: "Improvements",
options: [
{ label: "Content accuracy", description: "Fix incorrect information" },
{ label: "More examples", description: "Add more code examples" },
{ label: "Better screenshots", description: "Retake or add screenshots" },
{ label: "Styling/Layout", description: "Improve visual appearance" }
],
multiSelect: true
}
]
});
```
### Step 3: Address Feedback
Based on feedback, take appropriate action:
#### Minor Issues
```javascript
if (feedback.overall === "Minor issues") {
// Prompt for specific changes
const details = await AskUserQuestion({
questions: [{
question: "What specific changes are needed?",
header: "Details",
options: [
{ label: "Typo fixes", description: "Fix spelling/grammar" },
{ label: "Reorder sections", description: "Change section order" },
{ label: "Update content", description: "Modify existing text" },
{ label: "Custom changes", description: "I'll describe the changes" }
],
multiSelect: true
}]
});
// Apply changes based on user input
applyMinorChanges(details);
}
```
#### Major Issues
```javascript
if (feedback.overall === "Major issues") {
// Return to relevant phase
console.log(`
Major issues require returning to an earlier phase:
- Content issues → Phase 3 (Parallel Analysis)
- Screenshot issues → Phase 4 (Screenshot Capture)
- Structure issues → Phase 2 (Project Exploration)
Which phase should we return to?
`);
const phase = await selectPhase();
return { action: 'restart', from_phase: phase };
}
```
#### Missing Content
```javascript
if (feedback.overall === "Missing content") {
// Identify missing sections
const missing = await AskUserQuestion({
questions: [{
question: "What content is missing?",
header: "Missing",
options: [
{ label: "API endpoints", description: "More API documentation" },
{ label: "UI features", description: "Additional UI guides" },
{ label: "Examples", description: "More code examples" },
{ label: "Troubleshooting", description: "More FAQ items" }
],
multiSelect: true
}]
});
// Run additional agent(s) for missing content
await runSupplementaryAgents(missing);
}
```
### Step 4: Save Iteration
```javascript
// Save current version before changes
const iterationNum = getNextIterationNumber(workDir);
const iterationDir = `${workDir}/iterations`;
// Copy current version
Bash({ command: `copy "${outputFile}" "${iterationDir}\\v${iterationNum}.html"` });
// Log iteration
const iterationLog = {
version: iterationNum,
timestamp: new Date().toISOString(),
feedback: feedback,
changes: appliedChanges
};
Write(`${iterationDir}/iteration-${iterationNum}.json`, JSON.stringify(iterationLog, null, 2));
```
### Step 5: Regenerate if Needed
```javascript
if (changesApplied) {
// Re-run HTML assembly with updated sections
await runPhase('05-html-assembly');
// Open updated preview
Bash({ command: `start "${outputFile}"` });
}
```
### Step 6: Finalize
When user approves:
```javascript
if (feedback.overall === "Looks great!") {
// Final quality check
const finalReport = {
...buildReport,
iterations: iterationNum,
finalized_at: new Date().toISOString(),
quality_score: calculateFinalQuality()
};
Write(`${workDir}/final-report.json`, JSON.stringify(finalReport, null, 2));
// Suggest final location
console.log(`
✅ Manual Finalized!
Output: ${buildReport.output}
Size: ${buildReport.size_human}
Quality: ${finalReport.quality_score}%
Iterations: ${iterationNum}
Suggested actions:
1. Copy to project root: copy "${outputFile}" "docs/"
2. Add to version control
3. Publish to documentation site
`);
return { status: 'completed', output: outputFile };
}
```
## Iteration History
Each iteration is logged:
```
iterations/
├── v1.html # First version
├── iteration-1.json # Feedback and changes
├── v2.html # After first iteration
├── iteration-2.json # Feedback and changes
└── ...
```
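`getNextIterationNumber` just scans that directory for the highest existing version; a sketch of the equivalent logic in Python (the `vN.html` naming is taken from the tree above):

```python
import re
from pathlib import Path

def next_iteration_number(iterations_dir: str) -> int:
    """Return 1 + the highest existing vN.html version, or 1 if none exist."""
    versions = []
    for p in Path(iterations_dir).glob("v*.html"):
        m = re.fullmatch(r"v(\d+)\.html", p.name)
        if m:
            versions.append(int(m.group(1)))
    return max(versions, default=0) + 1
```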
## Quality Metrics
Track improvement across iterations:
```javascript
const qualityMetrics = {
content_completeness: 0, // All sections present
screenshot_coverage: 0, // Screenshots for all UI
example_diversity: 0, // Different difficulty levels
search_accuracy: 0, // Search returns relevant results
user_satisfaction: 0 // Based on feedback
};
```
## Exit Conditions
The refinement phase ends when:
1. User explicitly approves ("Looks great!")
2. Maximum iterations reached (configurable, default: 5)
3. Quality score exceeds threshold (default: 90%)
## Output
- **Final HTML**: `{软件名}-使用手册.html`
- **Final Report**: `final-report.json`
- **Iteration History**: `iterations/`
## Completion
When finalized, the skill is complete. Final output location:
```
.workflow/.scratchpad/manual-{timestamp}/
├── {软件名}-使用手册.html ← Final deliverable
├── final-report.json
└── iterations/
```
Consider copying to a permanent location like `docs/` or project root.


@@ -1,245 +0,0 @@
# API Documentation Extraction Script
Automatically extracts API documentation based on project type; supports FastAPI, Next.js, and Python modules.
## Supported Tech Stacks
| Type | Stack | Tool | Output Format |
|------|--------|------|----------|
| Backend | FastAPI | widdershins | Markdown |
| Frontend | Next.js/TypeScript | TypeDoc | Markdown |
| Python Module | Python | pdoc | Markdown/HTML |
## Usage
### 1. FastAPI Backend (OpenAPI)
```bash
# Extract the OpenAPI JSON
cd D:/dongdiankaifa9/backend
python -c "
from app.main import app
import json
print(json.dumps(app.openapi(), indent=2))
" > api-docs/openapi.json
# Convert to Markdown (using widdershins)
npx widdershins api-docs/openapi.json -o api-docs/API_REFERENCE.md --language_tabs 'python:Python' 'javascript:JavaScript' 'bash:cURL'
```
**Alternative (no need to start the server)**:
```python
# scripts/extract_fastapi_openapi.py
import sys
sys.path.insert(0, 'D:/dongdiankaifa9/backend')
from app.main import app
import json
openapi_schema = app.openapi()
with open('api-docs/openapi.json', 'w', encoding='utf-8') as f:
json.dump(openapi_schema, f, indent=2, ensure_ascii=False)
print(f"Extracted {len(openapi_schema.get('paths', {}))} endpoints")
```
### 2. Next.js Frontend (TypeDoc)
```bash
cd D:/dongdiankaifa9/frontend
# Install TypeDoc
npm install --save-dev typedoc typedoc-plugin-markdown
# Generate docs
npx typedoc --plugin typedoc-plugin-markdown \
--out api-docs \
--entryPoints "./lib" "./hooks" "./components" \
--entryPointStrategy expand \
--exclude "**/node_modules/**" \
--exclude "**/*.test.*" \
--readme none
```
**typedoc.json configuration**:
```json
{
"$schema": "https://typedoc.org/schema.json",
"entryPoints": ["./lib", "./hooks", "./components"],
"entryPointStrategy": "expand",
"out": "api-docs",
"plugin": ["typedoc-plugin-markdown"],
"exclude": ["**/node_modules/**", "**/*.test.*", "**/*.spec.*"],
"excludePrivate": true,
"excludeInternal": true,
"readme": "none",
"name": "Frontend API Reference"
}
```
### 3. Python Module (pdoc)
```bash
# Install pdoc
pip install pdoc
# hydro_generator_module
cd D:/dongdiankaifa9
pdoc hydro_generator_module \
--output-dir api-docs/hydro_generator \
--format markdown \
--no-show-source
# multiphysics_network
pdoc multiphysics_network \
--output-dir api-docs/multiphysics \
--format markdown \
--no-show-source
```
**Alternative: Sphinx (more powerful)**:
```bash
# Install Sphinx
pip install sphinx sphinx-markdown-builder
# Generate API docs
sphinx-apidoc -o docs/source hydro_generator_module
cd docs && make markdown
```
## Integration Script
```python
#!/usr/bin/env python3
# scripts/extract_all_apis.py
import subprocess
import sys
from pathlib import Path
PROJECTS = {
'backend': {
'path': 'D:/dongdiankaifa9/backend',
'type': 'fastapi',
'output': 'api-docs/backend'
},
'frontend': {
'path': 'D:/dongdiankaifa9/frontend',
'type': 'typescript',
'output': 'api-docs/frontend'
},
'hydro_generator_module': {
'path': 'D:/dongdiankaifa9/hydro_generator_module',
'type': 'python',
'output': 'api-docs/hydro_generator'
},
'multiphysics_network': {
'path': 'D:/dongdiankaifa9/multiphysics_network',
'type': 'python',
'output': 'api-docs/multiphysics'
}
}
def extract_fastapi(config):
"""提取 FastAPI OpenAPI 文档"""
path = Path(config['path'])
sys.path.insert(0, str(path))
try:
from app.main import app
import json
output_dir = Path(config['output'])
output_dir.mkdir(parents=True, exist_ok=True)
# Export the OpenAPI JSON
with open(output_dir / 'openapi.json', 'w', encoding='utf-8') as f:
json.dump(app.openapi(), f, indent=2, ensure_ascii=False)
print(f"✓ FastAPI: {len(app.openapi().get('paths', {}))} endpoints")
return True
except Exception as e:
print(f"✗ FastAPI error: {e}")
return False
def extract_typescript(config):
"""提取 TypeScript 文档"""
try:
subprocess.run([
'npx', 'typedoc',
'--plugin', 'typedoc-plugin-markdown',
'--out', config['output'],
'--entryPoints', './lib', './hooks',
'--entryPointStrategy', 'expand'
], cwd=config['path'], check=True)
print(f"✓ TypeDoc: {config['path']}")
return True
except Exception as e:
print(f"✗ TypeDoc error: {e}")
return False
def extract_python(config):
"""提取 Python 模块文档"""
try:
module_name = Path(config['path']).name
subprocess.run([
'pdoc', module_name,
'--output-dir', config['output'],
'--format', 'markdown'
], cwd=Path(config['path']).parent, check=True)
print(f"✓ pdoc: {module_name}")
return True
except Exception as e:
print(f"✗ pdoc error: {e}")
return False
EXTRACTORS = {
'fastapi': extract_fastapi,
'typescript': extract_typescript,
'python': extract_python
}
if __name__ == '__main__':
for name, config in PROJECTS.items():
print(f"\n[{name}]")
extractor = EXTRACTORS.get(config['type'])
if extractor:
extractor(config)
```
## Phase 3 Integration
Add to the `api-reference` agent prompt:
```
[PRE-EXTRACTION]
Run the API extraction script to obtain structured docs:
- python scripts/extract_all_apis.py
[INPUT FILES]
- api-docs/backend/openapi.json (FastAPI endpoints)
- api-docs/frontend/*.md (TypeDoc output)
- api-docs/hydro_generator/*.md (pdoc output)
- api-docs/multiphysics/*.md (pdoc output)
```
## Output Structure
```
api-docs/
├── backend/
│ ├── openapi.json # Raw OpenAPI spec
│ └── API_REFERENCE.md # Converted Markdown
├── frontend/
│ ├── modules.md
│ ├── functions.md
│ └── classes/
├── hydro_generator/
│ ├── assembler.md
│ ├── blueprint.md
│ └── builders/
└── multiphysics/
├── analysis_domain.md
├── builders.md
└── compilers.md
```


@@ -1,584 +0,0 @@
#!/usr/bin/env python3
"""
Docsify-Style HTML Manual Assembly Script Template
Generates interactive single-file documentation with hierarchical navigation
Usage:
1. Copy this script to your manual output directory
2. Customize MANUAL_META and NAV_STRUCTURE
3. Run: python assemble_docsify.py
"""
import json
import base64
import re
from pathlib import Path
from typing import Dict, List, Any
# Try to import markdown library
try:
import markdown
from markdown.extensions.codehilite import CodeHiliteExtension
from markdown.extensions.fenced_code import FencedCodeExtension
from markdown.extensions.tables import TableExtension
from markdown.extensions.toc import TocExtension
HAS_MARKDOWN = True
except ImportError:
HAS_MARKDOWN = False
print("Warning: markdown library not found. Install with: pip install markdown pygments")
# ============================================================
# CONFIGURATION - Customize these for your project
# ============================================================
# Paths - Update these paths for your environment
BASE_DIR = Path(__file__).parent
SECTIONS_DIR = BASE_DIR / "sections"
SCREENSHOTS_DIR = BASE_DIR / "screenshots"
# Template paths - Point to skill templates directory
SKILL_DIR = Path(__file__).parent.parent # Adjust based on where script is placed
TEMPLATE_FILE = SKILL_DIR / "templates" / "docsify-shell.html"
CSS_BASE_FILE = SKILL_DIR / "templates" / "css" / "docsify-base.css"
# Manual metadata - Customize for your software
MANUAL_META = {
"title": "Your Software",
"subtitle": "使用手册",
"version": "v1.0.0",
"timestamp": "2025-01-01",
"language": "zh-CN",
"logo_icon": "Y" # First letter or emoji
}
# Output file
OUTPUT_FILE = BASE_DIR / f"{MANUAL_META['title']}{MANUAL_META['subtitle']}.html"
# Hierarchical navigation structure
# Customize groups and items based on your sections
NAV_STRUCTURE = [
{
"type": "group",
"title": "入门指南",
"icon": "📚",
"expanded": True,
"items": [
{"id": "overview", "title": "产品概述", "file": "section-overview.md"},
]
},
{
"type": "group",
"title": "使用教程",
"icon": "🎯",
"expanded": False,
"items": [
{"id": "ui-guide", "title": "UI操作指南", "file": "section-ui-guide.md"},
]
},
{
"type": "group",
"title": "API参考",
"icon": "🔧",
"expanded": False,
"items": [
{"id": "api-reference", "title": "API文档", "file": "section-api-reference.md"},
]
},
{
"type": "group",
"title": "配置与部署",
"icon": "⚙️",
"expanded": False,
"items": [
{"id": "configuration", "title": "配置指南", "file": "section-configuration.md"},
]
},
{
"type": "group",
"title": "帮助与支持",
"icon": "💡",
"expanded": False,
"items": [
{"id": "troubleshooting", "title": "故障排除", "file": "section-troubleshooting.md"},
{"id": "examples", "title": "代码示例", "file": "section-examples.md"},
]
}
]
# Screenshot ID to filename mapping - Customize for your screenshots
SCREENSHOT_MAPPING = {
# "截图ID": "filename.png",
}
# ============================================================
# CORE FUNCTIONS - Generally don't need to modify
# ============================================================
# Global cache for embedded images
_embedded_images = {}
def read_file(filepath: Path) -> str:
"""Read file content with UTF-8 encoding"""
return filepath.read_text(encoding='utf-8')
# ============================================================
# MERMAID VALIDATION
# ============================================================
# Valid Mermaid diagram types
MERMAID_DIAGRAM_TYPES = [
'graph', 'flowchart', 'sequenceDiagram', 'classDiagram',
'stateDiagram', 'stateDiagram-v2', 'erDiagram', 'journey',
'gantt', 'pie', 'quadrantChart', 'requirementDiagram',
'gitGraph', 'mindmap', 'timeline', 'zenuml', 'sankey-beta',
'xychart-beta', 'block-beta'
]
# Common Mermaid syntax patterns
MERMAID_PATTERNS = {
'graph': r'^graph\s+(TB|BT|LR|RL|TD)\s*$',
'flowchart': r'^flowchart\s+(TB|BT|LR|RL|TD)\s*$',
'sequenceDiagram': r'^sequenceDiagram\s*$',
'classDiagram': r'^classDiagram\s*$',
'stateDiagram': r'^stateDiagram(-v2)?\s*$',
'erDiagram': r'^erDiagram\s*$',
'gantt': r'^gantt\s*$',
'pie': r'^pie\s*(showData|title\s+.*)?\s*$',
'journey': r'^journey\s*$',
}
class MermaidBlock:
"""Represents a mermaid code block found in markdown"""
def __init__(self, content: str, file: str, line_num: int, indented: bool = False):
self.content = content
self.file = file
self.line_num = line_num
self.indented = indented
self.errors: List[str] = []
self.warnings: List[str] = []
self.diagram_type: str | None = None
def __repr__(self):
return f"MermaidBlock({self.diagram_type}, {self.file}:{self.line_num})"
def extract_mermaid_blocks(markdown_text: str, filename: str) -> List[MermaidBlock]:
"""Extract all mermaid code blocks from markdown text"""
blocks = []
# More flexible pattern - matches opening fence with optional indent,
# then captures content until closing fence (with any indent)
pattern = r'^(\s*)(```|~~~)mermaid\s*\n(.*?)\n\s*\2\s*$'
for match in re.finditer(pattern, markdown_text, re.MULTILINE | re.DOTALL):
indent = match.group(1)
content = match.group(3)
# Calculate line number
line_num = markdown_text[:match.start()].count('\n') + 1
indented = len(indent) > 0
block = MermaidBlock(
content=content,
file=filename,
line_num=line_num,
indented=indented
)
blocks.append(block)
return blocks
def validate_mermaid_block(block: MermaidBlock) -> bool:
"""Validate a mermaid block and populate errors/warnings"""
content = block.content.strip()
lines = content.split('\n')
if not lines:
block.errors.append("Empty mermaid block")
return False
first_line = lines[0].strip()
# Detect diagram type
for dtype in MERMAID_DIAGRAM_TYPES:
if first_line.startswith(dtype):
block.diagram_type = dtype
break
if not block.diagram_type:
block.errors.append(f"Unknown diagram type: '{first_line[:30]}...'")
block.errors.append(f"Valid types: {', '.join(MERMAID_DIAGRAM_TYPES[:8])}...")
return False
# Check for balanced brackets/braces
brackets = {'[': ']', '{': '}', '(': ')'}
stack = []
for i, char in enumerate(content):
if char in brackets:
stack.append((char, i))
elif char in brackets.values():
if not stack:
block.errors.append(f"Unmatched closing bracket '{char}' at position {i}")
else:
open_char, _ = stack.pop()
if brackets[open_char] != char:
block.errors.append(f"Mismatched brackets: '{open_char}' and '{char}'")
if stack:
for open_char, pos in stack:
block.warnings.append(f"Unclosed bracket '{open_char}' at position {pos}")
# Check for common graph/flowchart issues
if block.diagram_type in ['graph', 'flowchart']:
# Check direction specifier
if not re.match(r'^(graph|flowchart)\s+(TB|BT|LR|RL|TD)', first_line):
block.warnings.append("Missing or invalid direction (TB/BT/LR/RL/TD)")
# Check for arrow syntax
arrow_count = content.count('-->') + content.count('---') + content.count('-.->') + content.count('==>')
if arrow_count == 0 and len(lines) > 1:
block.warnings.append("No arrows found - graph may be incomplete")
# Check for sequenceDiagram issues
if block.diagram_type == 'sequenceDiagram':
if '->' not in content and '->>' not in content:
block.warnings.append("No message arrows found in sequence diagram")
# Indentation warning
if block.indented:
block.warnings.append("Indented code block - may not render in some markdown parsers")
return len(block.errors) == 0
def validate_all_mermaid(nav_structure: List[Dict], sections_dir: Path) -> Dict[str, Any]:
"""Validate all mermaid blocks in all section files"""
report = {
'total_blocks': 0,
'valid_blocks': 0,
'error_blocks': 0,
'warning_blocks': 0,
'blocks': [],
'by_file': {},
'by_type': {}
}
for group in nav_structure:
for item in group.get("items", []):
section_file = item.get("file")
if not section_file:
continue
filepath = sections_dir / section_file
if not filepath.exists():
continue
content = read_file(filepath)
blocks = extract_mermaid_blocks(content, section_file)
file_report = {'blocks': [], 'errors': 0, 'warnings': 0}
for block in blocks:
report['total_blocks'] += 1
is_valid = validate_mermaid_block(block)
if is_valid:
report['valid_blocks'] += 1
else:
report['error_blocks'] += 1
file_report['errors'] += 1
if block.warnings:
report['warning_blocks'] += 1
file_report['warnings'] += len(block.warnings)
# Track by diagram type
if block.diagram_type:
if block.diagram_type not in report['by_type']:
report['by_type'][block.diagram_type] = 0
report['by_type'][block.diagram_type] += 1
report['blocks'].append(block)
file_report['blocks'].append(block)
if blocks:
report['by_file'][section_file] = file_report
return report
def print_mermaid_report(report: Dict[str, Any]) -> None:
"""Print mermaid validation report"""
print("\n" + "=" * 60)
print("MERMAID DIAGRAM VALIDATION REPORT")
print("=" * 60)
print(f"\nSummary:")
print(f" Total blocks: {report['total_blocks']}")
print(f" Valid: {report['valid_blocks']}")
print(f" With errors: {report['error_blocks']}")
print(f" With warnings: {report['warning_blocks']}")
if report['by_type']:
print(f"\nDiagram Types:")
for dtype, count in sorted(report['by_type'].items()):
print(f" {dtype}: {count}")
# Print errors and warnings
has_issues = False
for block in report['blocks']:
if block.errors or block.warnings:
if not has_issues:
print(f"\nIssues Found:")
has_issues = True
print(f"\n [{block.file}:{block.line_num}] {block.diagram_type or 'unknown'}")
for error in block.errors:
print(f" [ERROR] {error}")
for warning in block.warnings:
print(f" [WARN] {warning}")
if not has_issues:
print(f"\n No issues found!")
print("=" * 60 + "\n")
def convert_md_to_html(markdown_text: str) -> str:
"""Convert Markdown to HTML with syntax highlighting"""
if not HAS_MARKDOWN:
# Fallback: just escape HTML and wrap in pre
escaped = markdown_text.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;')
return f'<pre>{escaped}</pre>'
md = markdown.Markdown(
extensions=[
FencedCodeExtension(),
TableExtension(),
TocExtension(toc_depth=3),
CodeHiliteExtension(
css_class='highlight',
linenums=False,
guess_lang=True,
use_pygments=True
),
],
output_format='html5'
)
html = md.convert(markdown_text)
md.reset()
return html
def embed_screenshot_base64(screenshot_id: str) -> str:
"""Embed screenshot as base64, using cache to avoid duplicates"""
global _embedded_images
filename = SCREENSHOT_MAPPING.get(screenshot_id)
if not filename:
return f'<div class="screenshot-placeholder">📷 {screenshot_id}</div>'
filepath = SCREENSHOTS_DIR / filename
if not filepath.exists():
return f'<div class="screenshot-placeholder">📷 {screenshot_id}</div>'
# Check cache
if filename not in _embedded_images:
try:
with open(filepath, 'rb') as f:
image_data = base64.b64encode(f.read()).decode('utf-8')
ext = filepath.suffix[1:].lower()
_embedded_images[filename] = f"data:image/{ext};base64,{image_data}"
except Exception as e:
return f'<div class="screenshot-placeholder">📷 {screenshot_id} (failed to load)</div>'
return f'''<figure class="screenshot">
<img src="{_embedded_images[filename]}" alt="{screenshot_id}" loading="lazy" />
<figcaption>{screenshot_id}</figcaption>
</figure>'''
def process_markdown_screenshots(markdown_text: str) -> str:
"""Replace [[screenshot:xxx]] placeholders with embedded images"""
pattern = r'\[\[screenshot:(.*?)\]\]'
def replacer(match):
screenshot_id = match.group(1)
return embed_screenshot_base64(screenshot_id)
return re.sub(pattern, replacer, markdown_text)
def generate_sidebar_nav_html(nav_structure: List[Dict]) -> str:
"""Generate hierarchical sidebar navigation HTML"""
html_parts = []
for group in nav_structure:
if group["type"] == "group":
expanded_class = "expanded" if group.get("expanded", False) else ""
html_parts.append(f'''
<div class="nav-group {expanded_class}">
<div class="nav-group-header">
<button class="nav-group-toggle" aria-expanded="{str(group.get('expanded', False)).lower()}">
<svg viewBox="0 0 24 24"><path d="M8.59 16.59L13.17 12 8.59 7.41 10 6l6 6-6 6z" fill="currentColor"/></svg>
</button>
<span class="nav-group-title">{group.get('icon', '')} {group['title']}</span>
</div>
<div class="nav-group-items">''')
for item in group.get("items", []):
html_parts.append(f'''
<a class="nav-item" href="#/{item['id']}" data-section="{item['id']}">{item['title']}</a>''')
html_parts.append('''
</div>
</div>''')
return '\n'.join(html_parts)
def generate_sections_html(nav_structure: List[Dict]) -> str:
"""Generate content sections HTML"""
sections_html = []
for group in nav_structure:
for item in group.get("items", []):
section_id = item["id"]
section_title = item["title"]
section_file = item.get("file")
if not section_file:
continue
filepath = SECTIONS_DIR / section_file
if not filepath.exists():
print(f"Warning: Section file not found: {filepath}")
continue
# Read and convert markdown
markdown_content = read_file(filepath)
markdown_content = process_markdown_screenshots(markdown_content)
html_content = convert_md_to_html(markdown_content)
sections_html.append(f'''
<section class="content-section" id="section-{section_id}" data-title="{section_title}">
{html_content}
</section>''')
return '\n'.join(sections_html)
def generate_search_index(nav_structure: List[Dict]) -> str:
"""Generate search index JSON"""
search_index = {}
for group in nav_structure:
for item in group.get("items", []):
section_id = item["id"]
section_file = item.get("file")
if not section_file:
continue
filepath = SECTIONS_DIR / section_file
if filepath.exists():
content = read_file(filepath)
clean_content = re.sub(r'[#*`\[\]()]', '', content)
clean_content = re.sub(r'\s+', ' ', clean_content)[:1500]
search_index[section_id] = {
"title": item["title"],
"body": clean_content,
"group": group["title"]
}
return json.dumps(search_index, ensure_ascii=False, indent=2)
def generate_nav_structure_json(nav_structure: List[Dict]) -> str:
"""Generate navigation structure JSON for client-side"""
return json.dumps(nav_structure, ensure_ascii=False, indent=2)
def assemble_manual(validate_mermaid: bool = True):
"""Main assembly function
Args:
validate_mermaid: Whether to validate mermaid diagrams (default: True)
"""
global _embedded_images
_embedded_images = {}
full_title = f"{MANUAL_META['title']} {MANUAL_META['subtitle']}"
print(f"Assembling Docsify-style manual: {full_title}")
# Verify template exists
if not TEMPLATE_FILE.exists():
print(f"Error: Template not found at {TEMPLATE_FILE}")
print("Please update TEMPLATE_FILE path in this script.")
return None, 0
if not CSS_BASE_FILE.exists():
print(f"Error: CSS not found at {CSS_BASE_FILE}")
print("Please update CSS_BASE_FILE path in this script.")
return None, 0
# Validate Mermaid diagrams
mermaid_report = None
if validate_mermaid:
print("\nValidating Mermaid diagrams...")
mermaid_report = validate_all_mermaid(NAV_STRUCTURE, SECTIONS_DIR)
print_mermaid_report(mermaid_report)
# Warn if there are errors (but continue)
if mermaid_report['error_blocks'] > 0:
print(f"[WARN] {mermaid_report['error_blocks']} mermaid block(s) have errors!")
print(" These diagrams may not render correctly.")
# Read template and CSS
template_html = read_file(TEMPLATE_FILE)
css_content = read_file(CSS_BASE_FILE)
# Generate components
sidebar_nav_html = generate_sidebar_nav_html(NAV_STRUCTURE)
sections_html = generate_sections_html(NAV_STRUCTURE)
search_index_json = generate_search_index(NAV_STRUCTURE)
nav_structure_json = generate_nav_structure_json(NAV_STRUCTURE)
# Replace placeholders
output_html = template_html
output_html = output_html.replace('{{SOFTWARE_NAME}}', full_title)
output_html = output_html.replace('{{VERSION}}', MANUAL_META['version'])
output_html = output_html.replace('{{TIMESTAMP}}', MANUAL_META['timestamp'])
output_html = output_html.replace('{{LOGO_ICON}}', MANUAL_META['logo_icon'])
output_html = output_html.replace('{{EMBEDDED_CSS}}', css_content)
output_html = output_html.replace('{{SIDEBAR_NAV_HTML}}', sidebar_nav_html)
output_html = output_html.replace('{{SECTIONS_HTML}}', sections_html)
output_html = output_html.replace('{{SEARCH_INDEX_JSON}}', search_index_json)
output_html = output_html.replace('{{NAV_STRUCTURE_JSON}}', nav_structure_json)
# Write output file
OUTPUT_FILE.write_text(output_html, encoding='utf-8')
file_size = OUTPUT_FILE.stat().st_size
file_size_mb = file_size / (1024 * 1024)
section_count = sum(len(g.get("items", [])) for g in NAV_STRUCTURE)
print("[OK] Docsify-style manual generated successfully!")
print(f" Output: {OUTPUT_FILE}")
print(f" Size: {file_size_mb:.2f} MB ({file_size:,} bytes)")
print(f" Navigation Groups: {len(NAV_STRUCTURE)}")
print(f" Sections: {section_count}")
return str(OUTPUT_FILE), file_size
if __name__ == "__main__":
output_path, size = assemble_manual()


@@ -1,85 +0,0 @@
# Library Bundling Notes
## Dependencies
The HTML assembly phase embeds the following mature libraries (no CDN dependencies):
### 1. marked.js - Markdown Parsing
```bash
# Fetch the latest version
curl -o templates/libs/marked.min.js https://unpkg.com/marked/marked.min.js
```
### 2. highlight.js - Code Syntax Highlighting
```bash
# Fetch the core bundle plus common language packs
curl -o templates/libs/highlight.min.js https://unpkg.com/@highlightjs/cdn-assets/highlight.min.js
# Fetch the github-dark theme
curl -o templates/libs/github-dark.min.css https://unpkg.com/@highlightjs/cdn-assets/styles/github-dark.min.css
```
## Embedding Approach
The Phase 5 agent should:
1. Read `templates/libs/*.js` and `*.css`
2. Embed their contents into the HTML inside `<script>` and `<style>` tags
3. Initialize after `DOMContentLoaded`:
```javascript
// Initialize marked
marked.setOptions({
highlight: function(code, lang) {
if (lang && hljs.getLanguage(lang)) {
return hljs.highlight(code, { language: lang }).value;
}
return hljs.highlightAuto(code).value;
},
breaks: true,
gfm: true
});
// Apply highlighting
document.querySelectorAll('pre code').forEach(block => {
hljs.highlightElement(block);
});
```
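The embedding step itself can be sketched in Python as a pure string transform. This is a minimal illustration: `{{EMBEDDED_JS}}` is a hypothetical placeholder name (the assembler script uses `{{EMBEDDED_CSS}}`; the JS token in your template may differ):

```python
def inline_assets(template_html: str, js_sources: list, css_sources: list) -> str:
    """Wrap each library's source in an inline tag and splice it into the template."""
    scripts = "\n".join(f"<script>{src}</script>" for src in js_sources)
    styles = "\n".join(f"<style>{src}</style>" for src in css_sources)
    return (template_html
            .replace("{{EMBEDDED_JS}}", scripts)
            .replace("{{EMBEDDED_CSS}}", styles))

# Demo with inline strings standing in for the downloaded library files
html = inline_assets("<head>{{EMBEDDED_CSS}}{{EMBEDDED_JS}}</head>",
                     ["console.log('marked loaded');"],
                     ["body { margin: 0; }"])
```

In the real pipeline the `js_sources`/`css_sources` entries would be read from `templates/libs/` with `Path.read_text()`.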
## Fallback
If the external libraries cannot be fetched, fall back to the built-in simplified Markdown converter:
```javascript
function simpleMarkdown(md) {
return md
.replace(/^### (.+)$/gm, '<h3>$1</h3>')
.replace(/^## (.+)$/gm, '<h2>$1</h2>')
.replace(/^# (.+)$/gm, '<h1>$1</h1>')
.replace(/```(\w+)?\n([\s\S]*?)```/g, (m, lang, code) =>
`<pre data-language="${lang || ''}"><code class="language-${lang || ''}">${escapeHtml(code)}</code></pre>`)
.replace(/`([^`]+)`/g, '<code>$1</code>')
.replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>')
.replace(/\*(.+?)\*/g, '<em>$1</em>')
.replace(/\[([^\]]+)\]\(([^)]+)\)/g, '<a href="$2">$1</a>')
.replace(/^\|(.+)\|$/gm, processTableRow)
.replace(/^- (.+)$/gm, '<li>$1</li>')
.replace(/^\d+\. (.+)$/gm, '<li>$1</li>');
}
```
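A Python analog of the same regex-driven fallback, trimmed to three rule kinds for illustration (headings, inline code with HTML escaping, bold); the rule set is deliberately incomplete and mirrors the JavaScript version's ordering:

```python
import re
import html

def simple_markdown(md: str) -> str:
    """Minimal regex-based Markdown-to-HTML fallback (headings, inline code, bold)."""
    out = re.sub(r'^### (.+)$', r'<h3>\1</h3>', md, flags=re.M)
    out = re.sub(r'^## (.+)$', r'<h2>\1</h2>', out, flags=re.M)
    out = re.sub(r'^# (.+)$', r'<h1>\1</h1>', out, flags=re.M)
    # Escape code spans so raw < and > do not leak into the HTML
    out = re.sub(r'`([^`]+)`', lambda m: f'<code>{html.escape(m.group(1))}</code>', out)
    out = re.sub(r'\*\*(.+?)\*\*', r'<strong>\1</strong>', out)
    return out

rendered = simple_markdown("# Title\nUse `a<b` **now**")
```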
## File Structure
```
templates/
├── libs/
│ ├── marked.min.js # Markdown parser
│ ├── highlight.min.js # Syntax highlighting
│ └── github-dark.min.css # Code theme
├── tiddlywiki-shell.html
└── css/
├── wiki-base.css
└── wiki-dark.css
```


@@ -1,270 +0,0 @@
#!/usr/bin/env python3
"""
API documentation extraction script
Supports FastAPI, TypeScript, and Python modules
"""
import subprocess
import sys
import json
from pathlib import Path
from typing import Dict, Any, Optional
# Project configuration
PROJECTS = {
'backend': {
'path': Path('D:/dongdiankaifa9/backend'),
'type': 'fastapi',
'entry': 'app.main:app',
'output': 'api-docs/backend'
},
'frontend': {
'path': Path('D:/dongdiankaifa9/frontend'),
'type': 'typescript',
'entries': ['lib', 'hooks', 'components'],
'output': 'api-docs/frontend'
},
'hydro_generator_module': {
'path': Path('D:/dongdiankaifa9/hydro_generator_module'),
'type': 'python',
'output': 'api-docs/hydro_generator'
},
'multiphysics_network': {
'path': Path('D:/dongdiankaifa9/multiphysics_network'),
'type': 'python',
'output': 'api-docs/multiphysics'
}
}
def extract_fastapi(name: str, config: Dict[str, Any], output_base: Path) -> bool:
"""提取 FastAPI OpenAPI 文档"""
path = config['path']
output_dir = output_base / config['output']
output_dir.mkdir(parents=True, exist_ok=True)
    # Add the project path to sys.path
if str(path) not in sys.path:
sys.path.insert(0, str(path))
try:
        # Dynamically import the app
from app.main import app
        # Fetch the OpenAPI schema
openapi_schema = app.openapi()
# 保存 JSON
json_path = output_dir / 'openapi.json'
with open(json_path, 'w', encoding='utf-8') as f:
json.dump(openapi_schema, f, indent=2, ensure_ascii=False)
        # Generate a Markdown summary
md_path = output_dir / 'API_SUMMARY.md'
generate_api_markdown(openapi_schema, md_path)
endpoints = len(openapi_schema.get('paths', {}))
print(f" ✓ Extracted {endpoints} endpoints → {output_dir}")
return True
except ImportError as e:
print(f" ✗ Import error: {e}")
return False
except Exception as e:
print(f" ✗ Error: {e}")
return False
def generate_api_markdown(schema: Dict, output_path: Path):
"""从 OpenAPI schema 生成 Markdown"""
lines = [
f"# {schema.get('info', {}).get('title', 'API Reference')}",
"",
f"Version: {schema.get('info', {}).get('version', '1.0.0')}",
"",
"## Endpoints",
"",
"| Method | Path | Summary |",
"|--------|------|---------|"
]
for path, methods in schema.get('paths', {}).items():
for method, details in methods.items():
if method in ('get', 'post', 'put', 'delete', 'patch'):
summary = details.get('summary', details.get('operationId', '-'))
lines.append(f"| `{method.upper()}` | `{path}` | {summary} |")
lines.extend([
"",
"## Schemas",
""
])
for name, schema_def in schema.get('components', {}).get('schemas', {}).items():
lines.append(f"### {name}")
lines.append("")
if 'properties' in schema_def:
lines.append("| Property | Type | Required |")
lines.append("|----------|------|----------|")
required = schema_def.get('required', [])
for prop, prop_def in schema_def['properties'].items():
prop_type = prop_def.get('type', prop_def.get('$ref', 'any'))
                is_required = '✓' if prop in required else '✗'
lines.append(f"| `{prop}` | {prop_type} | {is_required} |")
lines.append("")
with open(output_path, 'w', encoding='utf-8') as f:
f.write('\n'.join(lines))
def extract_typescript(name: str, config: Dict[str, Any], output_base: Path) -> bool:
"""提取 TypeScript 文档 (TypeDoc)"""
path = config['path']
output_dir = output_base / config['output']
    # Check whether TypeDoc is installed
try:
result = subprocess.run(
['npx', 'typedoc', '--version'],
cwd=path,
capture_output=True,
text=True
)
if result.returncode != 0:
print(f" ⚠ TypeDoc not installed, installing...")
subprocess.run(
['npm', 'install', '--save-dev', 'typedoc', 'typedoc-plugin-markdown'],
cwd=path,
check=True
)
except FileNotFoundError:
print(f" ✗ npm/npx not found")
return False
    # Run TypeDoc
try:
entries = config.get('entries', ['lib'])
cmd = [
'npx', 'typedoc',
'--plugin', 'typedoc-plugin-markdown',
'--out', str(output_dir),
'--entryPointStrategy', 'expand',
'--exclude', '**/node_modules/**',
'--exclude', '**/*.test.*',
'--readme', 'none'
]
for entry in entries:
entry_path = path / entry
if entry_path.exists():
cmd.extend(['--entryPoints', str(entry_path)])
result = subprocess.run(cmd, cwd=path, capture_output=True, text=True)
if result.returncode == 0:
print(f" ✓ TypeDoc generated → {output_dir}")
return True
else:
print(f" ✗ TypeDoc error: {result.stderr[:200]}")
return False
except Exception as e:
print(f" ✗ Error: {e}")
return False
def extract_python_module(name: str, config: Dict[str, Any], output_base: Path) -> bool:
"""提取 Python 模块文档 (pdoc)"""
path = config['path']
output_dir = output_base / config['output']
module_name = path.name
    # Check for pdoc
try:
subprocess.run(['pdoc', '--version'], capture_output=True, check=True)
except (FileNotFoundError, subprocess.CalledProcessError):
print(f" ⚠ pdoc not installed, installing...")
subprocess.run([sys.executable, '-m', 'pip', 'install', 'pdoc'], check=True)
    # Run pdoc
try:
result = subprocess.run(
[
'pdoc', module_name,
'--output-dir', str(output_dir),
'--format', 'markdown'
],
cwd=path.parent,
capture_output=True,
text=True
)
if result.returncode == 0:
            # Count the generated files
md_files = list(output_dir.glob('**/*.md'))
print(f" ✓ pdoc generated {len(md_files)} files → {output_dir}")
return True
else:
print(f" ✗ pdoc error: {result.stderr[:200]}")
return False
except Exception as e:
print(f" ✗ Error: {e}")
return False
EXTRACTORS = {
'fastapi': extract_fastapi,
'typescript': extract_typescript,
'python': extract_python_module
}
def main(output_base: Optional[str] = None, projects: Optional[list] = None):
"""主入口"""
base = Path(output_base) if output_base else Path.cwd()
print("=" * 50)
print("API Documentation Extraction")
print("=" * 50)
results = {}
for name, config in PROJECTS.items():
if projects and name not in projects:
continue
print(f"\n[{name}] ({config['type']})")
if not config['path'].exists():
print(f" ✗ Path not found: {config['path']}")
results[name] = False
continue
extractor = EXTRACTORS.get(config['type'])
if extractor:
results[name] = extractor(name, config, base)
else:
print(f" ✗ Unknown type: {config['type']}")
results[name] = False
    # Summary
print("\n" + "=" * 50)
print("Summary")
print("=" * 50)
success = sum(1 for v in results.values() if v)
print(f"Success: {success}/{len(results)}")
return all(results.values())
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser(description='Extract API documentation')
parser.add_argument('--output', '-o', default='.', help='Output base directory')
parser.add_argument('--projects', '-p', nargs='+', help='Specific projects to extract')
args = parser.parse_args()
success = main(args.output, args.projects)
sys.exit(0 if success else 1)


@@ -1,447 +0,0 @@
# Screenshot Helper
Guide for capturing screenshots using Chrome MCP.
## Overview
This script helps capture screenshots of web interfaces for the software manual using Chrome MCP or fallback methods.
## Chrome MCP Prerequisites
### Check MCP Availability
```javascript
async function checkChromeMCPAvailability() {
try {
// Attempt to get Chrome version via MCP
const version = await mcp__chrome__getVersion();
return {
available: true,
browser: version.browser,
version: version.version
};
} catch (error) {
return {
available: false,
error: error.message
};
}
}
```
### MCP Configuration
Expected Claude configuration for Chrome MCP:
```json
{
"mcpServers": {
"chrome": {
"command": "npx",
"args": ["@anthropic-ai/mcp-chrome"],
"env": {
"CHROME_PATH": "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe"
}
}
}
}
```
## Screenshot Workflow
### Step 1: Prepare Environment
```javascript
async function prepareScreenshotEnvironment(workDir, config) {
const screenshotDir = `${workDir}/screenshots`;
// Create directory
Bash({ command: `mkdir -p "${screenshotDir}"` });
// Check Chrome MCP
const chromeMCP = await checkChromeMCPAvailability();
if (!chromeMCP.available) {
console.log('Chrome MCP not available. Will generate manual guide.');
return { mode: 'manual' };
}
// Start development server if needed
if (config.screenshot_config?.dev_command) {
const server = await startDevServer(config);
return { mode: 'auto', server, screenshotDir };
}
return { mode: 'auto', screenshotDir };
}
```
### Step 2: Start Development Server
```javascript
async function startDevServer(config) {
const devCommand = config.screenshot_config.dev_command;
const devUrl = config.screenshot_config.dev_url;
// Start server in background
const server = Bash({
command: devCommand,
run_in_background: true
});
console.log(`Starting dev server: ${devCommand}`);
// Wait for server to be ready
const ready = await waitForServer(devUrl, 30000);
if (!ready) {
throw new Error(`Server at ${devUrl} did not start in time`);
}
console.log(`Dev server ready at ${devUrl}`);
return server;
}
async function waitForServer(url, timeout = 30000) {
const start = Date.now();
while (Date.now() - start < timeout) {
try {
const response = await fetch(url, { method: 'HEAD' });
if (response.ok) return true;
} catch (e) {
// Server not ready
}
await sleep(1000);
}
return false;
}
```
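The readiness loop in `waitForServer` is a general polling pattern. A hedged Python sketch of the same idea, with the probe, clock, and sleep injected so it runs (and can be tested) without a real server:

```python
import time

def wait_until(probe, timeout=30.0, interval=1.0, clock=time.monotonic, sleep=time.sleep):
    """Poll `probe` until it returns a truthy value or `timeout` seconds elapse."""
    deadline = clock() + timeout
    while clock() < deadline:
        try:
            if probe():
                return True
        except OSError:
            pass  # transient failure: server not accepting connections yet
        sleep(interval)
    return False

# Deterministic demo: the probe succeeds on its third call.
attempts = []
ok = wait_until(lambda: attempts.append(1) or len(attempts) >= 3,
                timeout=10, interval=0, sleep=lambda s: None)
```

In production use, `probe` would issue the HTTP `HEAD` request shown above and return `response.ok`.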
### Step 3: Capture Screenshots
```javascript
async function captureScreenshots(screenshots, config, workDir) {
const results = {
captured: [],
failed: []
};
const devUrl = config.screenshot_config.dev_url;
const screenshotDir = `${workDir}/screenshots`;
for (const ss of screenshots) {
try {
// Build full URL
const fullUrl = new URL(ss.url, devUrl).href;
console.log(`Capturing: ${ss.id} (${fullUrl})`);
// Configure capture options
const options = {
url: fullUrl,
viewport: { width: 1280, height: 800 },
fullPage: ss.fullPage || false
};
// Wait for specific element if specified
if (ss.wait_for) {
options.waitFor = ss.wait_for;
}
// Capture specific element if selector provided
if (ss.selector) {
options.selector = ss.selector;
}
// Add delay for animations
await sleep(500);
// Capture via Chrome MCP
const result = await mcp__chrome__screenshot(options);
// Save as PNG
const filename = `${ss.id}.png`;
      Write(`${screenshotDir}/${filename}`, result.data, { encoding: 'base64' });
results.captured.push({
id: ss.id,
file: filename,
url: ss.url,
description: ss.description,
size: result.data.length
});
} catch (error) {
console.error(`Failed to capture ${ss.id}:`, error.message);
results.failed.push({
id: ss.id,
url: ss.url,
error: error.message
});
}
}
return results;
}
```
### Step 4: Generate Manifest
```javascript
function generateScreenshotManifest(results, workDir) {
const manifest = {
generated: new Date().toISOString(),
total: results.captured.length + results.failed.length,
captured: results.captured.length,
failed: results.failed.length,
screenshots: results.captured,
failures: results.failed
};
Write(`${workDir}/screenshots/screenshots-manifest.json`,
JSON.stringify(manifest, null, 2));
return manifest;
}
```
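The manifest layout can be mirrored in Python; a small deterministic sketch (the fixed timestamp is only for illustration — the real code stamps the current time):

```python
import json

def build_manifest(captured, failed, generated="2026-01-01T00:00:00Z"):
    """Summarize capture results in the manifest layout used above."""
    return {
        "generated": generated,
        "total": len(captured) + len(failed),
        "captured": len(captured),
        "failed": len(failed),
        "screenshots": captured,
        "failures": failed,
    }

manifest = build_manifest([{"id": "login", "file": "login.png"}], [])
payload = json.dumps(manifest, indent=2)  # what would be written to disk
```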
### Step 5: Cleanup
```javascript
async function cleanupScreenshotEnvironment(env) {
if (env.server) {
console.log('Stopping dev server...');
KillShell({ shell_id: env.server.task_id });
}
}
```
## Main Runner
```javascript
async function runScreenshotCapture(workDir, screenshots) {
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
// Prepare environment
const env = await prepareScreenshotEnvironment(workDir, config);
if (env.mode === 'manual') {
// Generate manual capture guide
generateManualCaptureGuide(screenshots, workDir);
return { success: false, mode: 'manual' };
}
try {
// Capture screenshots
const results = await captureScreenshots(screenshots, config, workDir);
// Generate manifest
const manifest = generateScreenshotManifest(results, workDir);
// Generate manual guide for failed captures
if (results.failed.length > 0) {
generateManualCaptureGuide(results.failed, workDir);
}
return {
success: true,
captured: results.captured.length,
failed: results.failed.length,
manifest
};
} finally {
// Cleanup
await cleanupScreenshotEnvironment(env);
}
}
```
## Manual Capture Fallback
When Chrome MCP is unavailable:
```javascript
function generateManualCaptureGuide(screenshots, workDir) {
const guide = `
# Manual Screenshot Capture Guide
Chrome MCP is not available. Please capture screenshots manually.
## Prerequisites
1. Start your development server
2. Open a browser
3. Use a screenshot tool (Snipping Tool, Screenshot, etc.)
## Screenshots Required
${screenshots.map((ss, i) => `
### ${i + 1}. ${ss.id}
- **URL**: ${ss.url}
- **Description**: ${ss.description}
- **Save as**: \`screenshots/${ss.id}.png\`
${ss.selector ? `- **Capture area**: \`${ss.selector}\` element only` : '- **Type**: Full page or viewport'}
${ss.wait_for ? `- **Wait for**: \`${ss.wait_for}\` to be visible` : ''}
**Steps:**
1. Navigate to ${ss.url}
${ss.wait_for ? `2. Wait for ${ss.wait_for} to appear` : ''}
${ss.selector ? `${ss.wait_for ? 3 : 2}. Capture only the ${ss.selector} area` : `${ss.wait_for ? 3 : 2}. Capture the full viewport`}
${ss.wait_for ? 4 : 3}. Save as \`${ss.id}.png\`
`).join('\n')}
## After Capturing
1. Place all PNG files in the \`screenshots/\` directory
2. Ensure filenames match exactly (case-sensitive)
3. Run Phase 5 (HTML Assembly) to continue
## Screenshot Specifications
- **Format**: PNG
- **Width**: 1280px recommended
- **Quality**: High
- **Annotations**: None (add in post-processing if needed)
`;
Write(`${workDir}/screenshots/MANUAL_CAPTURE.md`, guide);
}
```
## Advanced Options
### Viewport Sizes
```javascript
const viewportPresets = {
desktop: { width: 1280, height: 800 },
tablet: { width: 768, height: 1024 },
mobile: { width: 375, height: 667 },
wide: { width: 1920, height: 1080 }
};
async function captureResponsive(ss, config, workDir) {
const results = [];
for (const [name, viewport] of Object.entries(viewportPresets)) {
const result = await mcp__chrome__screenshot({
url: ss.url,
viewport
});
const filename = `${ss.id}-${name}.png`;
    Write(`${workDir}/screenshots/${filename}`, result.data, { encoding: 'base64' });
results.push({ viewport: name, file: filename });
}
return results;
}
```
### Before/After Comparisons
```javascript
async function captureInteraction(ss, config, workDir) {
const baseUrl = config.screenshot_config.dev_url;
const fullUrl = new URL(ss.url, baseUrl).href;
// Capture before state
const before = await mcp__chrome__screenshot({
url: fullUrl,
viewport: { width: 1280, height: 800 }
});
Write(`${workDir}/screenshots/${ss.id}-before.png`, before.data, { encoding: 'base64' });
// Perform interaction (click, type, etc.)
if (ss.interaction) {
await mcp__chrome__click({ selector: ss.interaction.click });
await sleep(500);
}
// Capture after state
const after = await mcp__chrome__screenshot({
url: fullUrl,
viewport: { width: 1280, height: 800 }
});
Write(`${workDir}/screenshots/${ss.id}-after.png`, after.data, { encoding: 'base64' });
return {
before: `${ss.id}-before.png`,
after: `${ss.id}-after.png`
};
}
```
### Screenshot Annotation
```javascript
function generateAnnotationGuide(screenshots, workDir) {
const guide = `
# Screenshot Annotation Guide
For screenshots requiring callouts or highlights:
## Tools
- macOS: Preview, Skitch
- Windows: Snipping Tool, ShareX
- Cross-platform: Greenshot, Lightshot
## Annotation Guidelines
1. **Callouts**: Use numbered circles (1, 2, 3)
2. **Highlights**: Use semi-transparent rectangles
3. **Arrows**: Point from text to element
4. **Text**: Use sans-serif font, 12-14pt
## Color Scheme
- Primary: #0d6efd (blue)
- Secondary: #6c757d (gray)
- Success: #198754 (green)
- Warning: #ffc107 (yellow)
- Danger: #dc3545 (red)
## Screenshots Needing Annotation
${screenshots.filter(s => s.annotate).map(ss => `
- **${ss.id}**: ${ss.description}
- Highlight: ${ss.annotate.highlight || 'N/A'}
- Callouts: ${ss.annotate.callouts?.join(', ') || 'N/A'}
`).join('\n')}
`;
Write(`${workDir}/screenshots/ANNOTATION_GUIDE.md`, guide);
}
```
## Troubleshooting
### Chrome MCP Not Found
1. Check Claude MCP configuration
2. Verify Chrome is installed
3. Check CHROME_PATH environment variable
### Screenshots Are Blank
1. Increase wait time before capture
2. Check if page requires authentication
3. Verify URL is correct
### Elements Not Visible
1. Scroll element into view
2. Expand collapsed sections
3. Wait for animations to complete
### Server Not Starting
1. Check if port is already in use
2. Verify dev command is correct
3. Check for startup errors in logs


@@ -1,419 +0,0 @@
# Swagger/OpenAPI Runner
Guide for generating backend API documentation from OpenAPI/Swagger specifications.
## Overview
This script extracts and converts OpenAPI/Swagger specifications to Markdown format for inclusion in the software manual.
## Detection Strategy
### Check for Existing Specification
```javascript
async function detectOpenAPISpec() {
// Check for existing spec files
const specPatterns = [
'openapi.json',
'openapi.yaml',
'openapi.yml',
'swagger.json',
'swagger.yaml',
'swagger.yml',
'**/openapi*.json',
'**/swagger*.json'
];
for (const pattern of specPatterns) {
const files = Glob(pattern);
if (files.length > 0) {
return { found: true, type: 'file', path: files[0] };
}
}
// Check for swagger-jsdoc in dependencies
const packageJson = JSON.parse(Read('package.json'));
if (packageJson.dependencies?.['swagger-jsdoc'] ||
packageJson.devDependencies?.['swagger-jsdoc']) {
return { found: true, type: 'jsdoc' };
}
// Check for NestJS Swagger
if (packageJson.dependencies?.['@nestjs/swagger']) {
return { found: true, type: 'nestjs' };
}
  // No spec file or generator found; caller may probe a runtime endpoint
return { found: false, suggestion: 'runtime' };
}
```
## Extraction Methods
### Method A: From Existing Spec File
```javascript
async function extractFromFile(specPath, workDir) {
const outputDir = `${workDir}/api-docs/backend`;
Bash({ command: `mkdir -p "${outputDir}"` });
// Copy spec to output
Bash({ command: `cp "${specPath}" "${outputDir}/openapi.json"` });
// Convert to Markdown using widdershins
const result = Bash({
command: `npx widdershins "${specPath}" -o "${outputDir}/api-reference.md" --language_tabs 'javascript:JavaScript' 'python:Python' 'bash:cURL'`,
timeout: 60000
});
return { success: result.exitCode === 0, outputDir };
}
```
### Method B: From swagger-jsdoc
```javascript
async function extractFromJsDoc(workDir) {
  const outputDir = `${workDir}/api-docs/backend`;
  Bash({ command: `mkdir -p "${outputDir}"` });
// Look for swagger definition file
const defFiles = Glob('**/swagger*.js').concat(Glob('**/openapi*.js'));
if (defFiles.length === 0) {
return { success: false, error: 'No swagger definition found' };
}
// Generate spec
const result = Bash({
command: `npx swagger-jsdoc -d "${defFiles[0]}" -o "${outputDir}/openapi.json"`,
timeout: 60000
});
if (result.exitCode !== 0) {
return { success: false, error: result.stderr };
}
// Convert to Markdown
Bash({
command: `npx widdershins "${outputDir}/openapi.json" -o "${outputDir}/api-reference.md" --language_tabs 'javascript:JavaScript' 'bash:cURL'`
});
return { success: true, outputDir };
}
```
### Method C: From NestJS Swagger
```javascript
async function extractFromNestJS(workDir) {
  const outputDir = `${workDir}/api-docs/backend`;
  Bash({ command: `mkdir -p "${outputDir}"` });
// NestJS typically exposes /api-docs-json at runtime
// We need to start the server temporarily
// Start server in background
const server = Bash({
command: 'npm run start:dev',
run_in_background: true,
timeout: 30000
});
// Wait for server to be ready
await waitForServer('http://localhost:3000', 30000);
// Fetch OpenAPI spec
const spec = await fetch('http://localhost:3000/api-docs-json');
const specJson = await spec.json();
// Save spec
Write(`${outputDir}/openapi.json`, JSON.stringify(specJson, null, 2));
// Stop server
KillShell({ shell_id: server.task_id });
// Convert to Markdown
Bash({
command: `npx widdershins "${outputDir}/openapi.json" -o "${outputDir}/api-reference.md" --language_tabs 'javascript:JavaScript' 'bash:cURL'`
});
return { success: true, outputDir };
}
```
### Method D: From Runtime Endpoint
```javascript
async function extractFromRuntime(workDir, serverUrl = 'http://localhost:3000') {
  const outputDir = `${workDir}/api-docs/backend`;
  Bash({ command: `mkdir -p "${outputDir}"` });
// Common OpenAPI endpoint paths
const endpointPaths = [
'/api-docs-json',
'/swagger.json',
'/openapi.json',
'/docs/json',
'/api/v1/docs.json'
];
let specJson = null;
for (const path of endpointPaths) {
try {
const response = await fetch(`${serverUrl}${path}`);
if (response.ok) {
specJson = await response.json();
break;
}
} catch (e) {
continue;
}
}
if (!specJson) {
return { success: false, error: 'Could not fetch OpenAPI spec from server' };
}
// Save and convert
Write(`${outputDir}/openapi.json`, JSON.stringify(specJson, null, 2));
Bash({
command: `npx widdershins "${outputDir}/openapi.json" -o "${outputDir}/api-reference.md"`
});
return { success: true, outputDir };
}
```
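The candidate-path probing loop can be sketched in Python; the fetcher is injected so the sketch runs without a live server, and the path list mirrors the one above:

```python
ENDPOINT_PATHS = ['/api-docs-json', '/swagger.json', '/openapi.json', '/docs/json']

def find_spec(fetch, base_url, paths=ENDPOINT_PATHS):
    """Return the first OpenAPI document any candidate path yields, else None."""
    for path in paths:
        try:
            spec = fetch(base_url + path)
        except Exception:
            continue  # path not served; try the next candidate
        if isinstance(spec, dict) and ('openapi' in spec or 'swagger' in spec):
            return spec
    return None

# Demo with an in-memory "server" that only answers one path
fake_server = {'http://x/openapi.json': {'openapi': '3.0.0', 'paths': {}}}
spec = find_spec(lambda url: fake_server[url], 'http://x')
```

In real use, `fetch` would wrap `urllib.request.urlopen` plus `json.load`.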
## Installation
### Required Tools
```bash
# For OpenAPI to Markdown conversion
npm install -g widdershins
# Or as dev dependency
npm install --save-dev widdershins
# For generating from JSDoc comments
npm install --save-dev swagger-jsdoc
```
## Configuration
### widdershins Options
```bash
npx widdershins openapi.json \
-o api-reference.md \
--language_tabs 'javascript:JavaScript' 'python:Python' 'bash:cURL' \
--summary \
--omitHeader \
--resolve \
--expandBody
```
| Option | Description |
|--------|-------------|
| `--language_tabs` | Code example languages |
| `--summary` | Use summary as operation heading |
| `--omitHeader` | Don't include title header |
| `--resolve` | Resolve $ref references |
| `--expandBody` | Show full request body |
### swagger-jsdoc Definition
Example `swagger-def.js`:
```javascript
module.exports = {
definition: {
openapi: '3.0.0',
info: {
title: 'MyApp API',
version: '1.0.0',
description: 'API documentation for MyApp'
},
servers: [
{ url: 'http://localhost:3000/api/v1' }
]
},
apis: ['./src/routes/*.js', './src/controllers/*.js']
};
```
## Output Format
### Generated Markdown Structure
````markdown
# MyApp API
## Overview
Base URL: `http://localhost:3000/api/v1`
## Authentication
This API uses Bearer token authentication.
---
## Projects
### List Projects
`GET /projects`
Returns a list of all projects.
**Parameters**
| Name | In | Type | Required | Description |
|------|-----|------|----------|-------------|
| status | query | string | false | Filter by status |
| page | query | integer | false | Page number |
**Responses**
| Status | Description |
|--------|-------------|
| 200 | Successful response |
| 401 | Unauthorized |
**Example Request**
```javascript
fetch('/api/v1/projects?status=active')
  .then(res => res.json())
  .then(data => console.log(data));
```
**Example Response**
```json
{
  "data": [
    { "id": "1", "name": "Project 1" }
  ],
  "pagination": {
    "page": 1,
    "total": 10
  }
}
```
````
## Integration
### Main Runner
```javascript
async function runSwaggerExtraction(workDir) {
const detection = await detectOpenAPISpec();
if (!detection.found) {
console.log('No OpenAPI spec detected. Skipping backend API docs.');
return { success: false, skipped: true };
}
let result;
switch (detection.type) {
case 'file':
result = await extractFromFile(detection.path, workDir);
break;
case 'jsdoc':
result = await extractFromJsDoc(workDir);
break;
case 'nestjs':
result = await extractFromNestJS(workDir);
break;
default:
result = await extractFromRuntime(workDir);
}
if (result.success) {
// Post-process the Markdown
await postProcessApiDocs(result.outputDir);
}
return result;
}
async function postProcessApiDocs(outputDir) {
const mdFile = `${outputDir}/api-reference.md`;
let content = Read(mdFile);
// Remove widdershins header
content = content.replace(/^---[\s\S]*?---\n/, '');
// Add custom styling hints
content = content.replace(/^(#{1,3} .+)$/gm, '$1\n');
Write(mdFile, content);
}
```
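The front-matter strip in `postProcessApiDocs` translates directly to Python; a minimal sketch of the same regex (assuming, as the JavaScript comment does, that widdershins emits a leading `---`-delimited header):

```python
import re

def strip_front_matter(markdown: str) -> str:
    """Remove a leading YAML front-matter block (--- ... ---)."""
    return re.sub(r'\A---\n.*?\n---\n', '', markdown, flags=re.S)

cleaned = strip_front_matter("---\ntitle: API\n---\n# MyApp API\n")
```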
## Troubleshooting
### Common Issues
#### "widdershins: command not found"
```bash
npm install -g widdershins
# Or use npx
npx widdershins openapi.json -o api.md
```
#### "Error parsing OpenAPI spec"
```bash
# Validate spec first
npx @redocly/cli lint openapi.json
# Fix common issues
npx @redocly/cli bundle openapi.json -o fixed.json
```
#### "Server not responding"
Ensure the development server is running and accessible:
```bash
# Check if server is running
curl http://localhost:3000/health
# Check OpenAPI endpoint
curl http://localhost:3000/api-docs-json
```
### Manual Fallback
If automatic extraction fails, document APIs manually:
1. List all route files: `Glob('**/routes/*.js')`
2. Extract route definitions using regex
3. Build documentation structure manually
```javascript
async function manualApiExtraction(workDir) {
const routeFiles = Glob('src/routes/*.js').concat(Glob('src/routes/*.ts'));
const endpoints = [];
for (const file of routeFiles) {
const content = Read(file);
const routes = content.matchAll(/router\.(get|post|put|delete|patch)\(['"]([^'"]+)['"]/g);
for (const match of routes) {
endpoints.push({
method: match[1].toUpperCase(),
path: match[2],
file: file
});
}
}
return endpoints;
}
```
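The same route scan can be written in Python, operating on source text directly. The regex mirrors the JavaScript version and only recognizes Express-style `router.<verb>('path', ...)` registrations:

```python
import re

ROUTE_RE = re.compile(r"""router\.(get|post|put|delete|patch)\(['"]([^'"]+)['"]""")

def extract_routes(source: str):
    """List (METHOD, path) pairs from Express-style route registrations."""
    return [(m.group(1).upper(), m.group(2)) for m in ROUTE_RE.finditer(source)]

src = """router.get('/projects', handler)
router.post("/projects", handler)"""
routes = extract_routes(src)
```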


@@ -1,357 +0,0 @@
# TypeDoc Runner
Guide for generating frontend API documentation using TypeDoc.
## Overview
TypeDoc generates API documentation from TypeScript source code by analyzing type annotations and JSDoc comments.
## Prerequisites
### Check TypeScript Project
```javascript
// Verify TypeScript is used
const packageJson = JSON.parse(Read('package.json'));
const hasTypeScript = packageJson.devDependencies?.typescript ||
packageJson.dependencies?.typescript;
if (!hasTypeScript) {
console.log('Not a TypeScript project. Skipping TypeDoc.');
return;
}
// Check for tsconfig.json
const hasTsConfig = Glob('tsconfig.json').length > 0;
```
## Installation
### Install TypeDoc
```bash
npm install --save-dev typedoc typedoc-plugin-markdown
```
### Optional Plugins
```bash
# For better Markdown output
npm install --save-dev typedoc-plugin-markdown
# For README inclusion
npm install --save-dev typedoc-plugin-rename-defaults
```
## Configuration
### typedoc.json
Create `typedoc.json` in project root:
```json
{
"entryPoints": ["./src/index.ts"],
"entryPointStrategy": "expand",
"out": ".workflow/.scratchpad/manual-{timestamp}/api-docs/frontend",
"plugin": ["typedoc-plugin-markdown"],
"exclude": [
"**/node_modules/**",
"**/*.test.ts",
"**/*.spec.ts",
"**/tests/**"
],
"excludePrivate": true,
"excludeProtected": true,
"excludeInternal": true,
"hideGenerator": true,
"readme": "none",
"categorizeByGroup": true,
"navigation": {
"includeCategories": true,
"includeGroups": true
}
}
```
### Alternative: CLI Options
```bash
npx typedoc \
--entryPoints src/index.ts \
--entryPointStrategy expand \
--out api-docs/frontend \
--plugin typedoc-plugin-markdown \
--exclude "**/node_modules/**" \
--exclude "**/*.test.ts" \
--excludePrivate \
--excludeProtected \
--readme none
```
## Execution
### Basic Run
```javascript
async function runTypeDoc(workDir) {
const outputDir = `${workDir}/api-docs/frontend`;
// Create output directory
Bash({ command: `mkdir -p "${outputDir}"` });
// Run TypeDoc
const result = Bash({
command: `npx typedoc --out "${outputDir}" --plugin typedoc-plugin-markdown src/`,
timeout: 120000 // 2 minutes
});
if (result.exitCode !== 0) {
console.error('TypeDoc failed:', result.stderr);
return { success: false, error: result.stderr };
}
// List generated files
const files = Glob(`${outputDir}/**/*.md`);
console.log(`Generated ${files.length} documentation files`);
return { success: true, files };
}
```
### With Custom Entry Points
```javascript
async function runTypeDocCustom(workDir, entryPoints) {
const outputDir = `${workDir}/api-docs/frontend`;
// Build entry points string
const entries = entryPoints.map(e => `--entryPoints "${e}"`).join(' ');
const result = Bash({
command: `npx typedoc ${entries} --out "${outputDir}" --plugin typedoc-plugin-markdown`,
timeout: 120000
});
return { success: result.exitCode === 0 };
}
// Example: Document specific files
await runTypeDocCustom(workDir, [
'src/api/index.ts',
'src/hooks/index.ts',
'src/utils/index.ts'
]);
```
## Output Structure
```
api-docs/frontend/
├── README.md # Index
├── modules.md # Module list
├── modules/
│ ├── api.md # API module
│ ├── hooks.md # Hooks module
│ └── utils.md # Utils module
├── classes/
│ ├── ApiClient.md # Class documentation
│ └── ...
├── interfaces/
│ ├── Config.md # Interface documentation
│ └── ...
└── functions/
├── formatDate.md # Function documentation
└── ...
```
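Downstream steps can group the generated paths by documentation kind using the directory layout above — a sketch, assuming paths follow that layout exactly:

```javascript
// Sketch: group TypeDoc-generated file paths by documentation kind,
// based on the parent directory (modules/, classes/, interfaces/, functions/).
function groupDocFiles(paths) {
  const groups = { modules: [], classes: [], interfaces: [], functions: [], other: [] };
  for (const p of paths) {
    // Parent directory name, e.g. "classes" in "api-docs/frontend/classes/ApiClient.md"
    const dir = p.split('/').slice(-2, -1)[0];
    if (groups[dir]) groups[dir].push(p);
    else groups.other.push(p);
  }
  return groups;
}
```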
## Integration with Manual
### Reading TypeDoc Output
```javascript
async function integrateTypeDocOutput(workDir) {
const apiDocsDir = `${workDir}/api-docs/frontend`;
const files = Glob(`${apiDocsDir}/**/*.md`);
// Build API reference content
let content = '## Frontend API Reference\n\n';
// Add modules
const modules = Glob(`${apiDocsDir}/modules/*.md`);
for (const mod of modules) {
const modContent = Read(mod);
content += `### ${extractTitle(modContent)}\n\n`;
content += summarizeModule(modContent);
}
// Add functions
const functions = Glob(`${apiDocsDir}/functions/*.md`);
content += '\n### Functions\n\n';
for (const fn of functions) {
const fnContent = Read(fn);
content += formatFunctionDoc(fnContent);
}
// Add hooks
  const hooks = Glob(`${apiDocsDir}/functions/use*.md`); // hooks are conventionally named useX
if (hooks.length > 0) {
content += '\n### Hooks\n\n';
for (const hook of hooks) {
const hookContent = Read(hook);
content += formatHookDoc(hookContent);
}
}
return content;
}
```
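The helpers referenced above (`extractTitle`, `summarizeModule`, `formatFunctionDoc`, `formatHookDoc`) are not defined here. A minimal sketch, assuming typedoc-plugin-markdown output with a single `# Title` heading followed by prose:

```javascript
// Sketch of the helper functions used above. Assumes each generated
// markdown file starts with one "# Title" heading followed by prose.

// First level-1 heading in the file, or "Untitled" if none is found.
function extractTitle(markdown) {
  const match = markdown.match(/^#\s+(.+)$/m);
  return match ? match[1].trim() : 'Untitled';
}

// First non-heading, non-empty paragraph as a short summary.
function summarizeModule(markdown) {
  const paragraphs = markdown.split(/\n{2,}/);
  const para = paragraphs.find(p => p.trim() && !p.trim().startsWith('#'));
  return para ? para.trim() + '\n\n' : '';
}

// Title plus summary, as a level-4 subsection.
function formatFunctionDoc(markdown) {
  return `#### ${extractTitle(markdown)}\n\n${summarizeModule(markdown)}`;
}

// Hooks use the same layout in this sketch.
const formatHookDoc = formatFunctionDoc;
```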
### Example Output Format
````markdown
## Frontend API Reference
### API Module
Functions for interacting with the backend API.
#### fetchProjects
```typescript
function fetchProjects(options?: FetchOptions): Promise<Project[]>
```
Fetches all projects for the current user.
**Parameters:**
| Name | Type | Description |
|------|------|-------------|
| options | FetchOptions | Optional fetch configuration |
**Returns:** Promise<Project[]>
### Hooks
#### useProjects
```typescript
function useProjects(options?: UseProjectsOptions): UseProjectsResult
```
React hook for managing project data.
**Parameters:**
| Name | Type | Description |
|------|------|-------------|
| options.status | string | Filter by project status |
| options.limit | number | Max projects to fetch |
**Returns:**
| Property | Type | Description |
|----------|------|-------------|
| projects | Project[] | Array of projects |
| loading | boolean | Loading state |
| error | Error \| null | Error if failed |
| refetch | () => void | Refresh data |
````
## Troubleshooting
### Common Issues
#### "Cannot find module 'typescript'"
```bash
npm install --save-dev typescript
```
#### "No entry points found"
Ensure entry points exist:
```bash
# Check entry points
ls src/index.ts
# Or use glob pattern
npx typedoc --entryPoints "src/**/*.ts"
```
#### "Unsupported TypeScript version"
```bash
# Check TypeDoc compatibility
npm info typedoc peerDependencies
# Install compatible version
npm install --save-dev typedoc@0.25.x
```
### Debugging
```bash
# Verbose output
npx typedoc --logLevel Verbose src/
# Fail the build on warnings
npx typedoc --treatWarningsAsErrors src/
```
## Best Practices
### Document Exports Only
```typescript
// Good: Public API documented
/**
* Fetches projects from the API.
* @param options - Fetch options
* @returns Promise resolving to projects
*/
export function fetchProjects(options?: FetchOptions): Promise<Project[]> {
// ...
}
// Internal: Not documented
function internalHelper() {
// ...
}
```
### Use JSDoc Comments
````typescript
/**
 * User hook for managing authentication state.
 *
 * @example
 * ```tsx
 * const { user, login, logout } = useAuth();
 * ```
 *
 * @returns Authentication state and methods
 */
export function useAuth(): AuthResult {
  // ...
}
````
### Define Types Properly
```typescript
/**
* Configuration for the API client.
*/
export interface ApiConfig {
/** API base URL */
baseUrl: string;
/** Request timeout in milliseconds */
timeout?: number;
/** Custom headers to include */
headers?: Record<string, string>;
}
```

# HTML Template Specification
Technical specification for the TiddlyWiki-style HTML output.
## Overview
The output is a single, self-contained HTML file with:
- All CSS embedded inline
- All JavaScript embedded inline
- All images embedded as Base64
- Full offline functionality
## File Structure
```html
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{SOFTWARE_NAME}} - User Manual</title>
<style>{{EMBEDDED_CSS}}</style>
</head>
<body class="wiki-container" data-theme="light">
<aside class="wiki-sidebar">...</aside>
<main class="wiki-content">...</main>
<button class="theme-toggle">...</button>
<script id="search-index" type="application/json">{{SEARCH_INDEX}}</script>
<script>{{EMBEDDED_JS}}</script>
</body>
</html>
```
## Placeholders
| Placeholder | Description | Source |
|-------------|-------------|--------|
| `{{SOFTWARE_NAME}}` | Software name | manual-config.json |
| `{{VERSION}}` | Version number | manual-config.json |
| `{{EMBEDDED_CSS}}` | All CSS content | wiki-base.css + wiki-dark.css |
| `{{TOC_HTML}}` | Table of contents | Generated from sections |
| `{{TIDDLERS_HTML}}` | All content blocks | Converted from Markdown |
| `{{SEARCH_INDEX_JSON}}` | Search data | Generated from content |
| `{{EMBEDDED_JS}}` | JavaScript code | Inline in template |
| `{{TIMESTAMP}}` | Generation timestamp | ISO 8601 format |
| `{{LOGO_BASE64}}` | Logo image | Project logo or generated |
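Template assembly is a straightforward placeholder substitution — a sketch, where unknown placeholders are deliberately left in place so missing values stay visible in the output:

```javascript
// Sketch: fill {{PLACEHOLDER}} slots in the HTML template from a values map.
// Unknown placeholders are left untouched so missing data is easy to spot.
function fillTemplate(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? String(values[key]) : match
  );
}
```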
## Component Specifications
### Sidebar (`.wiki-sidebar`)
```
Width: 280px (fixed)
Position: Fixed left
Height: 100vh
Components:
- Logo area (.wiki-logo)
- Search box (.wiki-search)
- Tag navigation (.wiki-tags)
- Table of contents (.wiki-toc)
```
### Main Content (`.wiki-content`)
```
Margin-left: 280px (sidebar width)
Max-width: 900px (content)
Components:
- Header bar (.content-header)
- Tiddler container (.tiddler-container)
- Footer (.wiki-footer)
```
### Tiddler (Content Block)
```html
<article class="tiddler"
id="tiddler-{{ID}}"
data-tags="{{TAGS}}"
data-difficulty="{{DIFFICULTY}}">
<header class="tiddler-header">
<h2 class="tiddler-title">
<button class="collapse-toggle"></button>
{{TITLE}}
</h2>
<div class="tiddler-meta">
<span class="difficulty-badge {{DIFFICULTY}}">{{DIFFICULTY_LABEL}}</span>
{{TAG_BADGES}}
</div>
</header>
<div class="tiddler-content">
{{CONTENT_HTML}}
</div>
</article>
```
### Search Index Format
```json
{
"tiddler-overview": {
"title": "Product Overview",
"body": "Plain text content for searching...",
"tags": ["getting-started", "overview"]
},
"tiddler-ui-guide": {
"title": "UI Guide",
"body": "Plain text content...",
"tags": ["ui-guide"]
}
}
```
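The embedded JavaScript searches this index in memory. A sketch of the lookup, assuming the index shape above: case-insensitive matching over title, body, and tags, capped at 10 results:

```javascript
// Sketch: in-memory search over the index above. Matches the query
// case-insensitively against title, body, and tags; returns at most 10 hits.
function searchIndex(index, query) {
  const q = query.toLowerCase();
  const results = [];
  for (const [id, entry] of Object.entries(index)) {
    const haystack =
      (entry.title + ' ' + entry.body + ' ' + entry.tags.join(' ')).toLowerCase();
    if (haystack.includes(q)) results.push({ id, title: entry.title });
    if (results.length === 10) break;
  }
  return results;
}
```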
## Interactive Features
### 1. Search
- Full-text search with result highlighting
- Searches title, body, and tags
- Shows up to 10 results
- Keyboard accessible (Enter to search, Esc to close)
### 2. Collapse/Expand
- Per-section toggle via button
- Expand All / Collapse All buttons
- State indicated by ▼ (expanded) or ▶ (collapsed)
- Smooth transition animation
### 3. Tag Filtering
- Tags: all, getting-started, ui-guide, api, config, troubleshooting, examples
- Single selection (radio behavior)
- "all" shows everything
- Hidden tiddlers via `display: none`
### 4. Theme Toggle
- Light/Dark mode switch
- Persists to localStorage (`wiki-theme`)
- Applies to entire document via `[data-theme="dark"]`
- Toggle button shows sun/moon icon
### 5. Responsive Design
```
Breakpoints:
- Desktop (> 1024px): Sidebar visible
- Tablet (768-1024px): Sidebar collapsible
- Mobile (< 768px): Sidebar hidden, hamburger menu
```
### 6. Print Support
- Hides sidebar, toggles, interactive elements
- Expands all collapsed sections
- Adjusts colors for print
- Page breaks between sections
## Accessibility
### Keyboard Navigation
- Tab through interactive elements
- Enter to activate buttons
- Escape to close search results
- Arrow keys in search results
### ARIA Attributes
```html
<input aria-label="Search">
<nav aria-label="Table of Contents">
<button aria-label="Toggle theme">
<div aria-live="polite"> <!-- For search results -->
```
### Color Contrast
- Text/background ratio ≥ 4.5:1
- Interactive elements clearly visible
- Focus indicators visible
## Performance
### Target Metrics
| Metric | Target |
|--------|--------|
| Total file size | < 10MB |
| Time to interactive | < 2s |
| Search latency | < 100ms |
### Optimization Strategies
1. **Lazy loading for images**: `loading="lazy"`
2. **Efficient search**: In-memory index, no external requests
3. **CSS containment**: Scope styles to components
4. **Minimal JavaScript**: Vanilla JS, no libraries
## CSS Variables
### Light Theme
```css
:root {
--bg-primary: #ffffff;
--bg-secondary: #f8f9fa;
--text-primary: #212529;
--text-secondary: #495057;
--accent-color: #0d6efd;
--border-color: #dee2e6;
}
```
### Dark Theme
```css
[data-theme="dark"] {
--bg-primary: #1a1a2e;
--bg-secondary: #16213e;
--text-primary: #eaeaea;
--text-secondary: #b8b8b8;
--accent-color: #4dabf7;
--border-color: #2d3748;
}
```
## Markdown to HTML Mapping
| Markdown | HTML |
|----------|------|
| `# Heading` | `<h1>` |
| `## Heading` | `<h2>` |
| `**bold**` | `<strong>` |
| `*italic*` | `<em>` |
| `` `code` `` | `<code>` |
| `[link](url)` | `<a href="url">` |
| `- item` | `<ul><li>` |
| `1. item` | `<ol><li>` |
| `` ```js `` | `<pre><code class="language-js">` |
| `> quote` | `<blockquote>` |
| `---` | `<hr>` |
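The inline rows of this mapping can be sketched as simple regex substitutions (block-level elements need a real parser; this only illustrates the mapping, and the replacement order matters — code spans first, then bold before italic):

```javascript
// Sketch: inline conversions from the table above (code, bold, italic, links).
// Code spans are converted first so markers inside them aren't re-processed
// by later rules in most simple cases; a production converter needs a parser.
function inlineMarkdownToHtml(text) {
  return text
    .replace(/`([^`]+)`/g, '<code>$1</code>')
    .replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>')
    .replace(/\*([^*]+)\*/g, '<em>$1</em>')
    .replace(/\[([^\]]+)\]\(([^)]+)\)/g, '<a href="$2">$1</a>');
}
```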
## Screenshot Embedding
### Marker Format
```markdown
<!-- SCREENSHOT: id="ss-login" url="/login" description="Login form" -->
```
### Embedded Format
```html
<figure class="screenshot">
<img src="data:image/png;base64,{{BASE64_DATA}}"
alt="Login form"
loading="lazy">
<figcaption>Login form</figcaption>
</figure>
```
### Placeholder (if missing)
```html
<div class="screenshot-placeholder">
[Screenshot: ss-login - Login form]
</div>
```
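Converting a marker into one of the two forms above can be sketched as follows, assuming an `images` map from screenshot id to Base64 data:

```javascript
// Sketch: turn a SCREENSHOT marker into an embedded <figure> when Base64
// data is available in the images map, or the placeholder <div> otherwise.
function renderScreenshot(marker, images) {
  const m = marker.match(/id="([^"]+)"[^>]*description="([^"]+)"/);
  if (!m) return '';
  const [, id, description] = m;
  if (images[id]) {
    return `<figure class="screenshot">\n` +
      `  <img src="data:image/png;base64,${images[id]}"\n` +
      `       alt="${description}"\n` +
      `       loading="lazy">\n` +
      `  <figcaption>${description}</figcaption>\n` +
      `</figure>`;
  }
  return `<div class="screenshot-placeholder">\n` +
    `  [Screenshot: ${id} - ${description}]\n` +
    `</div>`;
}
```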
## File Size Optimization
### CSS
- Minify before embedding
- Remove unused styles
- Combine duplicate rules
### JavaScript
- Minify before embedding
- Remove console.log statements
- Use IIFE for scoping
### Images
- Compress before Base64 encoding
- Use appropriate dimensions (max 1280px width)
- Consider WebP format if browser support is acceptable
## Validation
### HTML Validation
- W3C HTML5 compliance
- Proper nesting
- Required attributes present
### CSS Validation
- Valid property values
- No deprecated properties
- Vendor prefixes where needed
### JavaScript
- No syntax errors
- All functions defined
- Error handling for edge cases
## Testing Checklist
- [ ] Opens in Chrome/Firefox/Safari/Edge
- [ ] Search works correctly
- [ ] Collapse/expand works
- [ ] Tag filtering works
- [ ] Theme toggle works
- [ ] Print preview correct
- [ ] Keyboard navigation works
- [ ] Mobile responsive
- [ ] Offline functionality
- [ ] All links valid
- [ ] All images display
- [ ] No console errors

# Quality Standards
Quality gates and standards for software manual generation.
## Quality Dimensions
### 1. Completeness (25%)
All required sections present and adequately covered.
| Requirement | Weight | Criteria |
|-------------|--------|----------|
| Overview section | 5 | Product intro, features, quick start |
| UI Guide | 5 | All major screens documented |
| API Reference | 5 | All public APIs documented |
| Configuration | 4 | All config options explained |
| Troubleshooting | 3 | Common issues addressed |
| Examples | 3 | Multi-level examples provided |
**Scoring**:
- 100%: All sections present with adequate depth
- 80%: All sections present, some lacking depth
- 60%: Missing 1-2 sections
- 40%: Missing 3+ sections
- 0%: Critical sections missing (overview, UI guide)
### 2. Consistency (25%)
Terminology, style, and structure uniform across sections.
| Aspect | Check |
|--------|-------|
| Terminology | Same term for same concept throughout |
| Formatting | Consistent heading levels, code block styles |
| Tone | Consistent formality level |
| Cross-references | All internal links valid |
| Screenshot naming | Follow `ss-{feature}-{action}` pattern |
**Scoring**:
- 100%: Zero inconsistencies
- 80%: 1-3 minor inconsistencies
- 60%: 4-6 inconsistencies
- 40%: 7-10 inconsistencies
- 0%: Pervasive inconsistencies
### 3. Depth (25%)
Content provides sufficient detail for target audience.
| Level | Criteria |
|-------|----------|
| Shallow | Basic descriptions only |
| Standard | Descriptions + usage examples |
| Deep | Descriptions + examples + edge cases + best practices |
**Per-Section Depth Check**:
- [ ] Explains "what" (definition)
- [ ] Explains "why" (rationale)
- [ ] Explains "how" (procedure)
- [ ] Provides examples
- [ ] Covers edge cases
- [ ] Includes tips/best practices
**Scoring**:
- 100%: Deep coverage on all critical sections
- 80%: Standard coverage on all sections
- 60%: Shallow coverage on some sections
- 40%: Missing depth in critical areas
- 0%: Superficial throughout
### 4. Readability (25%)
Clear, user-friendly writing that's easy to follow.
| Metric | Target |
|--------|--------|
| Sentence length | Average < 20 words |
| Paragraph length | Average < 5 sentences |
| Heading hierarchy | Proper H1 > H2 > H3 nesting |
| Code blocks | Language specified |
| Lists | Used for 3+ items |
| Screenshots | Placed near relevant text |
**Structural Elements**:
- [ ] Clear section headers
- [ ] Numbered steps for procedures
- [ ] Bullet lists for options/features
- [ ] Tables for comparisons
- [ ] Code blocks with syntax highlighting
- [ ] Screenshots with captions
**Scoring**:
- 100%: All readability criteria met
- 80%: Minor structural issues
- 60%: Some sections hard to follow
- 40%: Significant readability problems
- 0%: Unclear, poorly structured
## Overall Quality Score
```
Overall = (Completeness × 0.25) + (Consistency × 0.25) +
(Depth × 0.25) + (Readability × 0.25)
```
**Quality Gates**:
| Gate | Threshold | Action |
|------|-----------|--------|
| Pass | ≥ 80% | Proceed to HTML generation |
| Review | 60-79% | Address warnings, proceed with caution |
| Fail | < 60% | Must address errors before continuing |
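The gate decision maps directly from the overall score — a sketch of that mapping:

```javascript
// Sketch: map an overall quality score to the gate decision in the table above.
function qualityGate(score) {
  if (score >= 80) return { gate: 'pass', action: 'Proceed to HTML generation' };
  if (score >= 60) return { gate: 'review', action: 'Address warnings, proceed with caution' };
  return { gate: 'fail', action: 'Must address errors before continuing' };
}
```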
## Issue Classification
### Errors (Must Fix)
- Missing required sections
- Invalid cross-references
- Broken screenshot markers
- Code blocks without language
- Incomplete procedures (missing steps)
### Warnings (Should Fix)
- Terminology inconsistencies
- Sections lacking depth
- Missing examples
- Long paragraphs (> 7 sentences)
- Screenshots missing captions
### Info (Nice to Have)
- Optimization suggestions
- Additional example opportunities
- Alternative explanations
- Enhancement ideas
## Quality Checklist
### Pre-Generation
- [ ] All agents completed successfully
- [ ] No errors in consolidation report
- [ ] Overall score ≥ 60%
### Post-Generation
- [ ] HTML renders correctly
- [ ] Search returns relevant results
- [ ] All screenshots display
- [ ] Theme toggle works
- [ ] Print preview looks good
### Final Review
- [ ] User previewed and approved
- [ ] File size reasonable (< 10MB)
- [ ] No console errors in browser
- [ ] Accessible (keyboard navigation works)
## Automated Checks
```javascript
function runQualityChecks(workDir) {
const results = {
completeness: checkCompleteness(workDir),
consistency: checkConsistency(workDir),
depth: checkDepth(workDir),
readability: checkReadability(workDir)
};
results.overall = (
results.completeness * 0.25 +
results.consistency * 0.25 +
results.depth * 0.25 +
results.readability * 0.25
);
return results;
}
function checkCompleteness(workDir) {
const requiredSections = [
'section-overview.md',
'section-ui-guide.md',
'section-api-reference.md',
'section-configuration.md',
'section-troubleshooting.md',
'section-examples.md'
];
const existing = Glob(`${workDir}/sections/section-*.md`);
const found = requiredSections.filter(s =>
existing.some(e => e.endsWith(s))
);
return (found.length / requiredSections.length) * 100;
}
function checkConsistency(workDir) {
// Check terminology, cross-references, naming conventions
const issues = [];
// ... implementation ...
return Math.max(0, 100 - issues.length * 10);
}
function checkDepth(workDir) {
// Check content length, examples, edge cases
const sections = Glob(`${workDir}/sections/section-*.md`);
let totalScore = 0;
for (const section of sections) {
const content = Read(section);
let sectionScore = 0;
if (content.length > 500) sectionScore += 20;
if (content.includes('```')) sectionScore += 20;
if (content.includes('Example')) sectionScore += 20;
if (content.match(/\d+\./g)?.length > 3) sectionScore += 20;
if (content.includes('Note:') || content.includes('Tip:')) sectionScore += 20;
totalScore += sectionScore;
}
  return sections.length ? totalScore / sections.length : 0;
}
function checkReadability(workDir) {
// Check structure, formatting, organization
const sections = Glob(`${workDir}/sections/section-*.md`);
let issues = 0;
for (const section of sections) {
const content = Read(section);
// Check heading hierarchy
if (!content.startsWith('# ')) issues++;
// Check code block languages
const codeBlocks = content.match(/```\w*/g);
if (codeBlocks?.some(b => b === '```')) issues++;
// Check paragraph length
const paragraphs = content.split('\n\n');
if (paragraphs.some(p => p.split('. ').length > 7)) issues++;
}
return Math.max(0, 100 - issues * 10);
}
```

# Writing Style Guide
User-friendly writing standards for software manuals.
## Core Principles
### 1. User-Centered
Write for the user, not the developer.
**Do**:
- "Click the **Save** button to save your changes"
- "Enter your email address in the login form"
**Don't**:
- "The onClick handler triggers the save mutation"
- "POST to /api/auth/login with email in body"
### 2. Action-Oriented
Focus on what users can **do**, not what the system does.
**Do**:
- "You can export your data as CSV"
- "To create a new project, click **New Project**"
**Don't**:
- "The system exports data in CSV format"
- "A new project is created when the button is clicked"
### 3. Clear and Direct
Use simple, straightforward language.
**Do**:
- "Select a file to upload"
- "The maximum file size is 10MB"
**Don't**:
- "Utilize the file selection interface to designate a file for uploading"
- "File size constraints limit uploads to 10 megabytes"
## Tone
### Friendly but Professional
- Conversational but not casual
- Helpful but not condescending
- Confident but not arrogant
**Examples**:
| Too Casual | Just Right | Too Formal |
|------------|------------|------------|
| "Yo, here's how..." | "Here's how to..." | "The following procedure describes..." |
| "Easy peasy!" | "That's all you need to do." | "The procedure has been completed." |
| "Don't worry about it" | "You don't need to change this" | "This parameter does not require modification" |
### Second Person
Address the user directly as "you".
**Do**: "You can customize your dashboard..."
**Don't**: "Users can customize their dashboards..."
## Structure
### Headings
Use clear, descriptive headings that tell users what they'll learn.
**Good Headings**:
- "Getting Started"
- "Creating Your First Project"
- "Configuring Email Notifications"
- "Troubleshooting Login Issues"
**Weak Headings**:
- "Overview"
- "Step 1"
- "Settings"
- "FAQ"
### Procedures
Number steps for sequential tasks.
```markdown
## Creating a New User
1. Navigate to **Settings** > **Users**.
2. Click the **Add User** button.
3. Enter the user's email address.
4. Select a role from the dropdown.
5. Click **Save**.
The new user will receive an invitation email.
```
### Features/Options
Use bullet lists for non-sequential items.
```markdown
## Export Options
You can export your data in several formats:
- **CSV**: Compatible with spreadsheets
- **JSON**: Best for developers
- **PDF**: Ideal for sharing reports
```
### Comparisons
Use tables for comparing options.
```markdown
## Plan Comparison
| Feature | Free | Pro | Enterprise |
|---------|------|-----|------------|
| Projects | 3 | Unlimited | Unlimited |
| Storage | 1GB | 10GB | 100GB |
| Support | Community | Email | Dedicated |
```
## Content Types
### Conceptual (What Is)
Explain what something is and why it matters.
```markdown
## What is a Workspace?
A workspace is a container for your projects and team members. Each workspace
has its own settings, billing, and permissions. You might create separate
workspaces for different clients or departments.
```
### Procedural (How To)
Step-by-step instructions for completing a task.
```markdown
## How to Create a Workspace
1. Click your profile icon in the top-right corner.
2. Select **Create Workspace**.
3. Enter a name for your workspace.
4. Choose a plan (you can upgrade later).
5. Click **Create**.
Your new workspace is ready to use.
```
### Reference (API/Config)
Detailed specifications and parameters.
```markdown
## Configuration Options
### `DATABASE_URL`
- **Type**: String (required)
- **Format**: `postgresql://user:password@host:port/database`
- **Example**: `postgresql://admin:secret@localhost:5432/myapp`
Database connection string for PostgreSQL.
```
## Formatting
### Bold
Use for:
- UI elements: Click **Save**
- First use of key terms: **Workspaces** contain projects
- Emphasis: **Never** share your API key
### Italic
Use for:
- Introducing new terms: A *workspace* is...
- Placeholders: Replace *your-api-key* with...
- Emphasis (sparingly): This is *really* important
### Code
Use for:
- Commands: Run `npm install`
- File paths: Edit `config/settings.json`
- Environment variables: Set `DATABASE_URL`
- API endpoints: POST `/api/users`
- Code references: The `handleSubmit` function
### Code Blocks
Always specify the language.
```javascript
// Example: Fetching user data
const response = await fetch('/api/user');
const user = await response.json();
```
### Notes and Warnings
Use for important callouts.
```markdown
> **Note**: This feature requires a Pro plan.
> **Warning**: Deleting a workspace cannot be undone.
> **Tip**: Use keyboard shortcuts to work faster.
```
## Screenshots
### When to Include
- First time showing a UI element
- Complex interfaces
- Before/after comparisons
- Error states
### Guidelines
- Capture just the relevant area
- Use consistent dimensions
- Highlight important elements
- Add descriptive captions
```markdown
<!-- SCREENSHOT: id="ss-dashboard" description="Main dashboard showing project list" -->
*The dashboard displays all your projects with their status.*
```
## Examples
### Good Section Example
```markdown
## Inviting Team Members
You can invite colleagues to collaborate on your projects.
### To invite a team member:
1. Open **Settings** > **Team**.
2. Click **Invite Member**.
3. Enter their email address.
4. Select their role:
- **Admin**: Full access to all settings
- **Editor**: Can edit projects
- **Viewer**: Read-only access
5. Click **Send Invite**.
The person will receive an email with a link to join your workspace.
> **Note**: You can have up to 5 team members on the Free plan.
<!-- SCREENSHOT: id="ss-invite-team" description="Team invitation dialog" -->
```
## Language Guidelines
### Avoid Jargon
| Technical | User-Friendly |
|-----------|---------------|
| Execute | Run |
| Terminate | Stop, End |
| Instantiate | Create |
| Invoke | Call, Use |
| Parameterize | Set, Configure |
| Persist | Save |
### Be Specific
| Vague | Specific |
|-------|----------|
| "Click the button" | "Click **Save**" |
| "Enter information" | "Enter your email address" |
| "An error occurred" | "Your password must be at least 8 characters" |
| "It takes a moment" | "This typically takes 2-3 seconds" |
### Use Active Voice
| Passive | Active |
|---------|--------|
| "The file is uploaded" | "Upload the file" |
| "Settings are saved" | "Click **Save** to keep your changes" |
| "Errors are displayed" | "The form shows any errors" |

/* ========================================
Docsify-Style Documentation CSS
Software Manual Skill - Modern Theme
======================================== */
/* ========== CSS Variables ========== */
:root {
/* Light Theme - Teal Accent */
--bg-color: #ffffff;
--bg-secondary: #f8fafc;
--bg-tertiary: #f1f5f9;
--text-color: #1e293b;
--text-secondary: #64748b;
--text-muted: #94a3b8;
--border-color: #e2e8f0;
--accent-color: #14b8a6;
--accent-hover: #0d9488;
--accent-light: rgba(20, 184, 166, 0.1);
--link-color: #14b8a6;
--sidebar-bg: #ffffff;
--sidebar-width: 280px;
--code-bg: #1e293b;
--code-color: #e2e8f0;
--shadow-sm: 0 1px 2px rgba(0,0,0,0.05);
--shadow-md: 0 4px 6px -1px rgba(0,0,0,0.1), 0 2px 4px -2px rgba(0,0,0,0.1);
--shadow-lg: 0 10px 15px -3px rgba(0,0,0,0.1), 0 4px 6px -4px rgba(0,0,0,0.1);
/* Callout Colors */
--tip-bg: rgba(20, 184, 166, 0.08);
--tip-border: #14b8a6;
--warning-bg: rgba(245, 158, 11, 0.08);
--warning-border: #f59e0b;
--danger-bg: rgba(239, 68, 68, 0.08);
--danger-border: #ef4444;
--info-bg: rgba(59, 130, 246, 0.08);
--info-border: #3b82f6;
/* Typography */
--font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Noto Sans SC', sans-serif;
--font-mono: 'JetBrains Mono', 'Fira Code', 'SF Mono', Monaco, Consolas, monospace;
--font-size-xs: 0.75rem;
--font-size-sm: 0.875rem;
--font-size-base: 1rem;
--font-size-lg: 1.125rem;
--line-height: 1.75;
/* Spacing */
--space-xs: 0.25rem;
--space-sm: 0.5rem;
--space-md: 1rem;
--space-lg: 1.5rem;
--space-xl: 2rem;
--space-2xl: 3rem;
/* Border Radius */
--radius-sm: 4px;
--radius-md: 8px;
--radius-lg: 12px;
/* Transitions */
--transition: 0.2s ease;
--transition-slow: 0.3s ease;
}
/* Dark Theme */
[data-theme="dark"] {
--bg-color: #0f172a;
--bg-secondary: #1e293b;
--bg-tertiary: #334155;
--text-color: #f1f5f9;
--text-secondary: #94a3b8;
--text-muted: #64748b;
--border-color: #334155;
--sidebar-bg: #1e293b;
--code-bg: #0f172a;
--code-color: #e2e8f0;
--tip-bg: rgba(20, 184, 166, 0.15);
--warning-bg: rgba(245, 158, 11, 0.15);
--danger-bg: rgba(239, 68, 68, 0.15);
--info-bg: rgba(59, 130, 246, 0.15);
}
/* ========== Reset ========== */
*, *::before, *::after {
box-sizing: border-box;
margin: 0;
padding: 0;
}
html, body {
height: 100%;
}
body {
font-family: var(--font-family);
font-size: var(--font-size-base);
line-height: var(--line-height);
color: var(--text-color);
background-color: var(--bg-color);
-webkit-font-smoothing: antialiased;
}
/* ========== Layout ========== */
.docsify-container {
display: flex;
min-height: 100vh;
}
/* ========== Sidebar ========== */
.sidebar {
position: fixed;
top: 0;
left: 0;
width: var(--sidebar-width);
height: 100vh;
background: var(--sidebar-bg);
border-right: 1px solid var(--border-color);
display: flex;
flex-direction: column;
z-index: 100;
transition: transform var(--transition);
}
.sidebar-header {
padding: var(--space-lg);
border-bottom: 1px solid var(--border-color);
}
.logo {
display: flex;
align-items: center;
gap: var(--space-sm);
}
.logo-icon {
width: 36px;
height: 36px;
display: flex;
align-items: center;
justify-content: center;
background: linear-gradient(135deg, var(--accent-color), #3eaf7c);
border-radius: 8px;
color: #fff;
font-weight: bold;
font-size: 1.25rem;
}
.logo-text h1 {
font-size: var(--font-size-base);
font-weight: 600;
color: var(--text-color);
margin: 0;
line-height: 1.2;
}
.logo-text .version {
font-size: var(--font-size-sm);
color: var(--text-muted);
}
/* ========== Search ========== */
.sidebar-search {
padding: var(--space-md);
position: relative;
}
.search-box {
position: relative;
display: flex;
align-items: center;
}
.search-icon {
position: absolute;
left: 10px;
color: var(--text-muted);
pointer-events: none;
}
.search-box input {
width: 100%;
padding: 10px 60px 10px 36px;
border: 1px solid var(--border-color);
border-radius: var(--radius-md);
font-size: var(--font-size-sm);
background: var(--bg-secondary);
color: var(--text-color);
transition: all var(--transition);
}
.search-box input:focus {
outline: none;
border-color: var(--accent-color);
box-shadow: 0 0 0 3px var(--accent-light);
background: var(--bg-color);
}
.search-box input::placeholder {
color: var(--text-muted);
}
/* Keyboard shortcut hint */
.search-box::after {
content: 'Ctrl K';
position: absolute;
right: 10px;
top: 50%;
transform: translateY(-50%);
font-size: var(--font-size-xs);
color: var(--text-muted);
background: var(--bg-color);
padding: 2px 6px;
border-radius: var(--radius-sm);
border: 1px solid var(--border-color);
font-family: var(--font-mono);
pointer-events: none;
}
.search-results {
position: absolute;
top: 100%;
left: var(--space-md);
right: var(--space-md);
background: var(--bg-color);
border: 1px solid var(--border-color);
border-radius: 8px;
box-shadow: var(--shadow-lg);
max-height: 400px;
overflow-y: auto;
opacity: 0;
visibility: hidden;
transform: translateY(-4px);
transition: all var(--transition);
z-index: 200;
}
.search-results.visible {
opacity: 1;
visibility: visible;
transform: translateY(0);
}
.search-result-item {
display: block;
padding: var(--space-sm) var(--space-md);
text-decoration: none;
color: var(--text-color);
border-bottom: 1px solid var(--border-color);
transition: background var(--transition);
}
.search-result-item:last-child {
border-bottom: none;
}
.search-result-item:hover {
background: var(--bg-secondary);
}
.result-title {
font-weight: 600;
font-size: var(--font-size-sm);
margin-bottom: 2px;
}
.result-excerpt {
font-size: 0.8rem;
color: var(--text-secondary);
line-height: 1.4;
}
.result-excerpt mark {
background: var(--accent-light);
color: var(--accent-color);
padding: 1px 4px;
border-radius: var(--radius-sm);
font-weight: 500;
}
.no-results {
padding: var(--space-md);
text-align: center;
color: var(--text-muted);
font-size: var(--font-size-sm);
}
/* ========== Sidebar Navigation ========== */
.sidebar-nav {
flex: 1;
overflow-y: auto;
padding: var(--space-md) 0;
}
.nav-group {
margin-bottom: var(--space-xs);
}
.nav-group-header {
display: flex;
align-items: center;
padding: var(--space-sm) var(--space-md);
cursor: pointer;
user-select: none;
transition: background var(--transition);
}
.nav-group-header:hover {
background: var(--bg-secondary);
}
.nav-group-toggle {
width: 20px;
height: 20px;
display: flex;
align-items: center;
justify-content: center;
margin-right: var(--space-xs);
background: none;
border: none;
color: var(--text-muted);
cursor: pointer;
transition: transform var(--transition);
}
.nav-group-toggle svg {
width: 12px;
height: 12px;
}
.nav-group.expanded .nav-group-toggle {
transform: rotate(90deg);
}
.nav-group-title {
font-size: var(--font-size-sm);
font-weight: 600;
color: var(--text-color);
}
.nav-group-items {
display: none;
padding-left: var(--space-lg);
}
.nav-group.expanded .nav-group-items {
display: block;
}
.nav-item {
display: block;
padding: 8px var(--space-md) 8px calc(var(--space-md) + 4px);
font-size: var(--font-size-sm);
color: var(--text-secondary);
text-decoration: none;
border-left: 2px solid transparent;
margin: 2px 8px 2px 0;
border-radius: 0 var(--radius-md) var(--radius-md) 0;
transition: all var(--transition);
cursor: pointer;
}
.nav-item:hover {
color: var(--text-color);
background: var(--bg-secondary);
}
.nav-item.active {
color: var(--accent-color);
border-left-color: var(--accent-color);
background: var(--accent-light);
font-weight: 500;
}
/* Top-level nav items (no group) */
.nav-item.top-level {
padding-left: var(--space-md);
border-left: none;
margin: 2px 8px;
border-radius: var(--radius-md);
}
.nav-item.top-level.active {
background: var(--accent-light);
}
/* ========== Main Content ========== */
.main-content {
flex: 1;
margin-left: var(--sidebar-width);
min-height: 100vh;
overflow-y: auto;
display: flex;
flex-direction: column;
}
.mobile-header {
display: none;
position: sticky;
top: 0;
padding: var(--space-sm) var(--space-md);
background: var(--bg-color);
border-bottom: 1px solid var(--border-color);
z-index: 50;
align-items: center;
gap: var(--space-sm);
}
.sidebar-toggle {
background: none;
border: none;
padding: var(--space-xs);
color: var(--text-color);
cursor: pointer;
border-radius: 4px;
transition: background var(--transition);
}
.sidebar-toggle:hover {
background: var(--bg-secondary);
}
.current-section {
flex: 1;
font-weight: 600;
font-size: var(--font-size-sm);
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.theme-toggle-mobile {
background: none;
border: none;
padding: var(--space-xs);
font-size: 1.25rem;
cursor: pointer;
}
/* ========== Content Sections ========== */
.content-wrapper {
flex: 1;
max-width: 860px;
margin: 0 auto;
padding: var(--space-2xl) var(--space-xl);
width: 100%;
}
.content-section {
display: none;
animation: fadeIn 0.3s ease;
}
.content-section.active {
display: block;
}
@keyframes fadeIn {
from { opacity: 0; transform: translateY(8px); }
to { opacity: 1; transform: translateY(0); }
}
/* ========== Content Typography ========== */
.content-section h1 {
font-size: 2rem;
font-weight: 700;
margin-bottom: var(--space-lg);
padding-bottom: var(--space-md);
border-bottom: 1px solid var(--border-color);
}
.content-section h2 {
font-size: 1.5rem;
font-weight: 600;
margin-top: var(--space-2xl);
margin-bottom: var(--space-md);
padding-bottom: var(--space-sm);
border-bottom: 1px solid var(--border-color);
}
.content-section h3 {
font-size: 1.25rem;
font-weight: 600;
margin-top: var(--space-xl);
margin-bottom: var(--space-sm);
}
.content-section h4 {
font-size: 1.1rem;
font-weight: 600;
margin-top: var(--space-lg);
margin-bottom: var(--space-sm);
}
.content-section p {
margin-bottom: var(--space-md);
}
.content-section a {
color: var(--link-color);
text-decoration: none;
}
.content-section a:hover {
text-decoration: underline;
}
/* Lists */
.content-section ul,
.content-section ol {
margin: var(--space-md) 0;
padding-left: var(--space-xl);
}
.content-section li {
margin-bottom: var(--space-sm);
}
.content-section li::marker {
color: var(--accent-color);
}
/* Inline Code */
.content-section code {
font-family: var(--font-mono);
font-size: 0.85em;
padding: 3px 8px;
background: var(--bg-tertiary);
color: var(--accent-color);
border-radius: var(--radius-sm);
font-weight: 500;
}
/* Code Blocks */
.code-block-wrapper {
position: relative;
margin: var(--space-lg) 0;
border-radius: var(--radius-lg);
overflow: hidden;
box-shadow: var(--shadow-md);
}
.content-section pre {
margin: 0;
padding: var(--space-lg);
padding-top: calc(var(--space-lg) + 40px);
background: var(--code-bg);
overflow-x: auto;
border-radius: var(--radius-lg);
}
.content-section pre code {
display: block;
padding: 0;
background: none;
color: var(--code-color);
font-size: var(--font-size-sm);
line-height: 1.7;
}
/* Code Block Header */
.code-block-wrapper::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 40px;
background: rgba(255,255,255,0.03);
border-bottom: 1px solid rgba(255,255,255,0.05);
}
/* Code Block Actions */
.code-block-actions {
position: absolute;
top: 8px;
right: 12px;
display: flex;
gap: 8px;
z-index: 10;
}
.copy-code-btn {
position: absolute;
top: 8px;
right: 12px;
padding: 6px 12px;
background: rgba(255,255,255,0.08);
border: 1px solid rgba(255,255,255,0.1);
border-radius: var(--radius-md);
color: var(--code-color);
cursor: pointer;
opacity: 0;
transition: all var(--transition);
display: flex;
align-items: center;
gap: 6px;
font-size: var(--font-size-xs);
font-family: var(--font-family);
}
.code-block-wrapper:hover .copy-code-btn {
opacity: 1;
}
.copy-code-btn:hover {
background: rgba(255,255,255,0.15);
border-color: rgba(255,255,255,0.2);
}
.copy-code-btn.copied {
background: var(--accent-color);
border-color: var(--accent-color);
color: #fff;
}
/* Code syntax colors */
.content-section pre .keyword { color: #c678dd; }
.content-section pre .string { color: #98c379; }
.content-section pre .number { color: #d19a66; }
.content-section pre .comment { color: #5c6370; font-style: italic; }
.content-section pre .function { color: #61afef; }
.content-section pre .operator { color: #56b6c2; }
/* Tables */
.content-section table {
width: 100%;
margin: var(--space-lg) 0;
border-collapse: collapse;
font-size: var(--font-size-sm);
border-radius: var(--radius-md);
overflow: hidden;
box-shadow: var(--shadow-sm);
}
.content-section th {
padding: var(--space-md);
background: var(--accent-color);
color: #fff;
font-weight: 600;
text-align: left;
font-size: var(--font-size-sm);
letter-spacing: 0.02em;
}
.content-section th:first-child {
border-top-left-radius: var(--radius-md);
}
.content-section th:last-child {
border-top-right-radius: var(--radius-md);
}
.content-section td {
padding: var(--space-sm) var(--space-md);
border-bottom: 1px solid var(--border-color);
vertical-align: top;
}
.content-section tbody tr:nth-child(even) {
background: var(--bg-secondary);
}
.content-section tbody tr:hover {
background: var(--accent-light);
}
.content-section tbody tr:last-child td {
border-bottom: none;
}
.content-section tbody tr:last-child td:first-child {
border-bottom-left-radius: var(--radius-md);
}
.content-section tbody tr:last-child td:last-child {
border-bottom-right-radius: var(--radius-md);
}
/* Blockquote / Callouts */
.content-section blockquote {
position: relative;
margin: var(--space-lg) 0;
padding: var(--space-md) var(--space-lg);
padding-left: calc(var(--space-lg) + 32px);
background: var(--tip-bg);
border: 1px solid var(--tip-border);
border-radius: var(--radius-lg);
}
.content-section blockquote::before {
content: '💡';
position: absolute;
left: var(--space-md);
top: var(--space-md);
font-size: 1.25rem;
line-height: 1;
}
.content-section blockquote p:last-child {
margin-bottom: 0;
}
.content-section blockquote p:first-child {
font-weight: 500;
color: var(--text-color);
}
/* Warning callout */
.content-section blockquote.warning {
background: var(--warning-bg);
border-color: var(--warning-border);
}
.content-section blockquote.warning::before {
content: '⚠️';
}
/* Danger callout */
.content-section blockquote.danger {
background: var(--danger-bg);
border-color: var(--danger-border);
}
.content-section blockquote.danger::before {
content: '🚨';
}
/* Info callout */
.content-section blockquote.info {
background: var(--info-bg);
border-color: var(--info-border);
}
.content-section blockquote.info::before {
    content: 'ℹ️';
}
/* Images */
.content-section img {
max-width: 100%;
height: auto;
border-radius: 8px;
box-shadow: var(--shadow-md);
margin: var(--space-md) 0;
}
.screenshot-placeholder {
padding: var(--space-xl);
background: var(--bg-secondary);
border: 2px dashed var(--border-color);
border-radius: 8px;
text-align: center;
color: var(--text-muted);
margin: var(--space-md) 0;
}
/* ========== Footer ========== */
.main-footer {
padding: var(--space-lg);
text-align: center;
color: var(--text-muted);
font-size: var(--font-size-sm);
border-top: 1px solid var(--border-color);
margin-top: auto;
}
/* ========== Theme Toggle (Desktop) ========== */
.theme-toggle {
position: fixed;
bottom: var(--space-lg);
right: var(--space-lg);
width: 44px;
height: 44px;
border-radius: 50%;
border: 1px solid var(--border-color);
background: var(--bg-color);
box-shadow: var(--shadow-md);
cursor: pointer;
font-size: 1.25rem;
z-index: 100;
transition: transform var(--transition);
}
.theme-toggle:hover {
transform: scale(1.1);
}
[data-theme="light"] .moon-icon { display: inline; }
[data-theme="light"] .sun-icon { display: none; }
[data-theme="dark"] .moon-icon { display: none; }
[data-theme="dark"] .sun-icon { display: inline; }
/* ========== Back to Top ========== */
.back-to-top {
position: fixed;
bottom: calc(var(--space-lg) + 56px);
right: var(--space-lg);
width: 40px;
height: 40px;
border-radius: 50%;
border: 1px solid var(--border-color);
background: var(--bg-color);
box-shadow: var(--shadow-md);
color: var(--text-secondary);
cursor: pointer;
opacity: 0;
visibility: hidden;
transition: all var(--transition);
z-index: 100;
display: flex;
align-items: center;
justify-content: center;
}
.back-to-top.visible {
opacity: 1;
visibility: visible;
}
.back-to-top:hover {
color: var(--accent-color);
border-color: var(--accent-color);
}
/* ========== Responsive ========== */
@media (max-width: 960px) {
.sidebar {
transform: translateX(-100%);
}
.sidebar.open {
transform: translateX(0);
box-shadow: var(--shadow-lg);
}
.main-content {
margin-left: 0;
}
.mobile-header {
display: flex;
}
.content-wrapper {
padding: var(--space-lg);
}
.theme-toggle {
display: none;
}
}
@media (max-width: 640px) {
.content-section h1 {
font-size: 1.5rem;
}
.content-section h2 {
font-size: 1.25rem;
}
.content-wrapper {
padding: var(--space-md);
}
}
/* ========== Print Styles ========== */
@media print {
.sidebar,
.mobile-header,
.theme-toggle,
.back-to-top,
.copy-code-btn {
display: none !important;
}
.main-content {
margin-left: 0;
}
.content-section {
display: block !important;
page-break-after: always;
}
.content-section pre {
background: #f5f5f5 !important;
color: #333 !important;
}
}
/* ========== Pygments Syntax Highlighting (One Dark Theme) ========== */
/* Generated for CodeHilite extension */
.highlight { background: #282c34; border-radius: 8px; padding: 1em; overflow-x: auto; margin: var(--space-md) 0; }
.highlight pre { margin: 0; background: transparent; padding: 0; }
.highlight code { background: transparent; border: none; padding: 0; color: #abb2bf; font-size: var(--font-size-sm); }
/* Pygments Token Colors - One Dark Theme */
.highlight .hll { background-color: #3e4451; }
.highlight .c { color: #5c6370; font-style: italic; } /* Comment */
.highlight .err { color: #e06c75; } /* Error */
.highlight .k { color: #c678dd; } /* Keyword */
.highlight .l { color: #98c379; } /* Literal */
.highlight .n { color: #abb2bf; } /* Name */
.highlight .o { color: #56b6c2; } /* Operator */
.highlight .p { color: #abb2bf; } /* Punctuation */
.highlight .ch { color: #5c6370; font-style: italic; } /* Comment.Hashbang */
.highlight .cm { color: #5c6370; font-style: italic; } /* Comment.Multiline */
.highlight .cp { color: #5c6370; font-style: italic; } /* Comment.Preproc */
.highlight .cpf { color: #5c6370; font-style: italic; } /* Comment.PreprocFile */
.highlight .c1 { color: #5c6370; font-style: italic; } /* Comment.Single */
.highlight .cs { color: #5c6370; font-style: italic; } /* Comment.Special */
.highlight .gd { color: #e06c75; } /* Generic.Deleted */
.highlight .ge { font-style: italic; } /* Generic.Emph */
.highlight .gh { color: #abb2bf; font-weight: bold; } /* Generic.Heading */
.highlight .gi { color: #98c379; } /* Generic.Inserted */
.highlight .go { color: #5c6370; } /* Generic.Output */
.highlight .gp { color: #5c6370; } /* Generic.Prompt */
.highlight .gs { font-weight: bold; } /* Generic.Strong */
.highlight .gu { color: #56b6c2; font-weight: bold; } /* Generic.Subheading */
.highlight .gt { color: #e06c75; } /* Generic.Traceback */
.highlight .kc { color: #c678dd; } /* Keyword.Constant */
.highlight .kd { color: #c678dd; } /* Keyword.Declaration */
.highlight .kn { color: #c678dd; } /* Keyword.Namespace */
.highlight .kp { color: #c678dd; } /* Keyword.Pseudo */
.highlight .kr { color: #c678dd; } /* Keyword.Reserved */
.highlight .kt { color: #e5c07b; } /* Keyword.Type */
.highlight .ld { color: #98c379; } /* Literal.Date */
.highlight .m { color: #d19a66; } /* Literal.Number */
.highlight .s { color: #98c379; } /* Literal.String */
.highlight .na { color: #d19a66; } /* Name.Attribute */
.highlight .nb { color: #e5c07b; } /* Name.Builtin */
.highlight .nc { color: #e5c07b; } /* Name.Class */
.highlight .no { color: #d19a66; } /* Name.Constant */
.highlight .nd { color: #e5c07b; } /* Name.Decorator */
.highlight .ni { color: #abb2bf; } /* Name.Entity */
.highlight .ne { color: #e06c75; } /* Name.Exception */
.highlight .nf { color: #61afef; } /* Name.Function */
.highlight .nl { color: #abb2bf; } /* Name.Label */
.highlight .nn { color: #e5c07b; } /* Name.Namespace */
.highlight .nx { color: #abb2bf; } /* Name.Other */
.highlight .py { color: #abb2bf; } /* Name.Property */
.highlight .nt { color: #e06c75; } /* Name.Tag */
.highlight .nv { color: #e06c75; } /* Name.Variable */
.highlight .ow { color: #56b6c2; } /* Operator.Word */
.highlight .w { color: #abb2bf; } /* Text.Whitespace */
.highlight .mb { color: #d19a66; } /* Literal.Number.Bin */
.highlight .mf { color: #d19a66; } /* Literal.Number.Float */
.highlight .mh { color: #d19a66; } /* Literal.Number.Hex */
.highlight .mi { color: #d19a66; } /* Literal.Number.Integer */
.highlight .mo { color: #d19a66; } /* Literal.Number.Oct */
.highlight .sa { color: #98c379; } /* Literal.String.Affix */
.highlight .sb { color: #98c379; } /* Literal.String.Backtick */
.highlight .sc { color: #98c379; } /* Literal.String.Char */
.highlight .dl { color: #98c379; } /* Literal.String.Delimiter */
.highlight .sd { color: #98c379; } /* Literal.String.Doc */
.highlight .s2 { color: #98c379; } /* Literal.String.Double */
.highlight .se { color: #d19a66; } /* Literal.String.Escape */
.highlight .sh { color: #98c379; } /* Literal.String.Heredoc */
.highlight .si { color: #98c379; } /* Literal.String.Interpol */
.highlight .sx { color: #98c379; } /* Literal.String.Other */
.highlight .sr { color: #56b6c2; } /* Literal.String.Regex */
.highlight .s1 { color: #98c379; } /* Literal.String.Single */
.highlight .ss { color: #56b6c2; } /* Literal.String.Symbol */
.highlight .bp { color: #e5c07b; } /* Name.Builtin.Pseudo */
.highlight .fm { color: #61afef; } /* Name.Function.Magic */
.highlight .vc { color: #e06c75; } /* Name.Variable.Class */
.highlight .vg { color: #e06c75; } /* Name.Variable.Global */
.highlight .vi { color: #e06c75; } /* Name.Variable.Instance */
.highlight .vm { color: #e06c75; } /* Name.Variable.Magic */
.highlight .il { color: #d19a66; } /* Literal.Number.Integer.Long */
/* Dark theme override for highlight */
[data-theme="dark"] .highlight {
background: #1e2128;
border: 1px solid #3d4450;
}

@@ -1,788 +0,0 @@
/* ========================================
TiddlyWiki-Style Base CSS
Software Manual Skill
======================================== */
/* ========== CSS Variables ========== */
:root {
/* Light Theme */
--bg-primary: #ffffff;
--bg-secondary: #f8f9fa;
--bg-tertiary: #e9ecef;
--text-primary: #212529;
--text-secondary: #495057;
--text-muted: #6c757d;
--border-color: #dee2e6;
--accent-color: #0d6efd;
--accent-hover: #0b5ed7;
--success-color: #198754;
--warning-color: #ffc107;
--danger-color: #dc3545;
--info-color: #0dcaf0;
/* Layout */
--sidebar-width: 280px;
--header-height: 60px;
--content-max-width: 900px;
--spacing-xs: 0.25rem;
--spacing-sm: 0.5rem;
--spacing-md: 1rem;
--spacing-lg: 1.5rem;
--spacing-xl: 2rem;
/* Typography */
--font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
--font-family-mono: 'SF Mono', Monaco, Consolas, 'Liberation Mono', 'Courier New', monospace;
--font-size-sm: 0.875rem;
--font-size-base: 1rem;
--font-size-lg: 1.125rem;
--font-size-xl: 1.25rem;
--line-height: 1.6;
/* Shadows */
--shadow-sm: 0 1px 2px rgba(0, 0, 0, 0.05);
--shadow-md: 0 4px 6px rgba(0, 0, 0, 0.1);
--shadow-lg: 0 10px 15px rgba(0, 0, 0, 0.1);
/* Transitions */
--transition-fast: 150ms ease;
--transition-base: 300ms ease;
}
/* ========== Reset & Base ========== */
*, *::before, *::after {
box-sizing: border-box;
margin: 0;
padding: 0;
}
html {
scroll-behavior: smooth;
}
body {
font-family: var(--font-family);
font-size: var(--font-size-base);
line-height: var(--line-height);
color: var(--text-primary);
background-color: var(--bg-secondary);
}
/* ========== Layout ========== */
.wiki-container {
display: flex;
min-height: 100vh;
}
/* ========== Sidebar ========== */
.wiki-sidebar {
position: fixed;
top: 0;
left: 0;
width: var(--sidebar-width);
height: 100vh;
background-color: var(--bg-primary);
border-right: 1px solid var(--border-color);
overflow-y: auto;
z-index: 100;
display: flex;
flex-direction: column;
transition: transform var(--transition-base);
}
/* Logo Area */
.wiki-logo {
padding: var(--spacing-lg);
text-align: center;
border-bottom: 1px solid var(--border-color);
}
.wiki-logo .logo-placeholder {
width: 60px;
height: 60px;
margin: 0 auto var(--spacing-sm);
background: linear-gradient(135deg, var(--accent-color), var(--info-color));
border-radius: 12px;
display: flex;
align-items: center;
justify-content: center;
color: white;
font-weight: bold;
font-size: var(--font-size-xl);
}
.wiki-logo h1 {
font-size: var(--font-size-lg);
font-weight: 600;
margin-bottom: var(--spacing-xs);
}
.wiki-logo .version {
font-size: var(--font-size-sm);
color: var(--text-muted);
}
/* Search */
.wiki-search {
padding: var(--spacing-md);
position: relative;
}
.wiki-search input {
width: 100%;
padding: var(--spacing-sm) var(--spacing-md);
border: 1px solid var(--border-color);
border-radius: 6px;
font-size: var(--font-size-sm);
background-color: var(--bg-secondary);
transition: border-color var(--transition-fast), box-shadow var(--transition-fast);
}
.wiki-search input:focus {
outline: none;
border-color: var(--accent-color);
box-shadow: 0 0 0 3px rgba(13, 110, 253, 0.15);
}
.search-results {
position: absolute;
top: 100%;
left: var(--spacing-md);
right: var(--spacing-md);
background: var(--bg-primary);
border: 1px solid var(--border-color);
border-radius: 6px;
box-shadow: var(--shadow-lg);
max-height: 400px;
overflow-y: auto;
z-index: 200;
}
.search-result-item {
display: block;
padding: var(--spacing-sm) var(--spacing-md);
text-decoration: none;
color: var(--text-primary);
border-bottom: 1px solid var(--border-color);
transition: background-color var(--transition-fast);
}
.search-result-item:last-child {
border-bottom: none;
}
.search-result-item:hover {
background-color: var(--bg-secondary);
}
.result-title {
font-weight: 600;
margin-bottom: var(--spacing-xs);
}
.result-excerpt {
font-size: var(--font-size-sm);
color: var(--text-secondary);
}
.result-excerpt mark {
background-color: var(--warning-color);
padding: 0 2px;
border-radius: 2px;
}
.no-results {
padding: var(--spacing-md);
text-align: center;
color: var(--text-muted);
}
/* Tags */
.wiki-tags {
padding: var(--spacing-md);
display: flex;
flex-wrap: wrap;
gap: var(--spacing-xs);
border-bottom: 1px solid var(--border-color);
}
.wiki-tags .tag {
padding: var(--spacing-xs) var(--spacing-sm);
font-size: var(--font-size-sm);
border: 1px solid var(--border-color);
border-radius: 20px;
background: var(--bg-secondary);
color: var(--text-secondary);
cursor: pointer;
transition: all var(--transition-fast);
}
.wiki-tags .tag:hover {
border-color: var(--accent-color);
color: var(--accent-color);
}
.wiki-tags .tag.active {
background-color: var(--accent-color);
border-color: var(--accent-color);
color: white;
}
/* Table of Contents */
.wiki-toc {
flex: 1;
padding: var(--spacing-md);
overflow-y: auto;
}
.wiki-toc h3 {
font-size: var(--font-size-sm);
text-transform: uppercase;
letter-spacing: 0.05em;
color: var(--text-muted);
margin-bottom: var(--spacing-md);
}
.wiki-toc ul {
list-style: none;
}
.wiki-toc li {
margin-bottom: var(--spacing-xs);
}
.wiki-toc a {
display: flex;
align-items: center;
justify-content: space-between;
padding: var(--spacing-sm);
color: var(--text-secondary);
text-decoration: none;
border-radius: 6px;
font-size: var(--font-size-sm);
transition: all var(--transition-fast);
}
.wiki-toc a:hover {
background-color: var(--bg-secondary);
color: var(--accent-color);
}
/* ========== Main Content ========== */
.wiki-content {
flex: 1;
margin-left: var(--sidebar-width);
min-height: 100vh;
display: flex;
flex-direction: column;
}
/* Header */
.content-header {
position: sticky;
top: 0;
background-color: var(--bg-primary);
border-bottom: 1px solid var(--border-color);
padding: var(--spacing-sm) var(--spacing-lg);
display: flex;
align-items: center;
justify-content: space-between;
z-index: 50;
}
.sidebar-toggle {
display: none;
flex-direction: column;
gap: 4px;
padding: var(--spacing-sm);
background: none;
border: none;
cursor: pointer;
}
.sidebar-toggle span {
display: block;
width: 20px;
height: 2px;
background-color: var(--text-primary);
transition: transform var(--transition-fast);
}
.header-actions {
display: flex;
gap: var(--spacing-sm);
}
.header-actions button {
padding: var(--spacing-xs) var(--spacing-sm);
font-size: var(--font-size-sm);
border: 1px solid var(--border-color);
border-radius: 4px;
background: var(--bg-primary);
color: var(--text-secondary);
cursor: pointer;
transition: all var(--transition-fast);
}
.header-actions button:hover {
border-color: var(--accent-color);
color: var(--accent-color);
}
/* Tiddler Container */
.tiddler-container {
flex: 1;
max-width: var(--content-max-width);
margin: 0 auto;
padding: var(--spacing-lg);
width: 100%;
}
/* ========== Tiddler (Content Block) ========== */
.tiddler {
background-color: var(--bg-primary);
border: 1px solid var(--border-color);
border-radius: 8px;
margin-bottom: var(--spacing-lg);
box-shadow: var(--shadow-sm);
transition: box-shadow var(--transition-fast);
}
.tiddler:hover {
box-shadow: var(--shadow-md);
}
.tiddler-header {
padding: var(--spacing-md) var(--spacing-lg);
border-bottom: 1px solid var(--border-color);
display: flex;
align-items: center;
justify-content: space-between;
flex-wrap: wrap;
gap: var(--spacing-sm);
}
.tiddler-title {
display: flex;
align-items: center;
gap: var(--spacing-sm);
font-size: var(--font-size-xl);
font-weight: 600;
margin: 0;
}
.collapse-toggle {
background: none;
border: none;
font-size: var(--font-size-sm);
color: var(--text-muted);
cursor: pointer;
padding: var(--spacing-xs);
transition: transform var(--transition-fast);
}
.tiddler.collapsed .collapse-toggle {
transform: rotate(-90deg);
}
.tiddler-meta {
display: flex;
gap: var(--spacing-sm);
flex-wrap: wrap;
}
.difficulty-badge {
padding: var(--spacing-xs) var(--spacing-sm);
font-size: 0.75rem;
font-weight: 500;
border-radius: 4px;
text-transform: uppercase;
}
.difficulty-badge.beginner {
background-color: #d1fae5;
color: #065f46;
}
.difficulty-badge.intermediate {
background-color: #fef3c7;
color: #92400e;
}
.difficulty-badge.advanced {
background-color: #fee2e2;
color: #991b1b;
}
.tag-badge {
padding: var(--spacing-xs) var(--spacing-sm);
font-size: 0.75rem;
background-color: var(--bg-tertiary);
color: var(--text-secondary);
border-radius: 4px;
}
.tiddler-content {
padding: var(--spacing-lg);
}
.tiddler.collapsed .tiddler-content {
display: none;
}
/* ========== Content Typography ========== */
.tiddler-content h1,
.tiddler-content h2,
.tiddler-content h3,
.tiddler-content h4 {
margin-top: var(--spacing-lg);
margin-bottom: var(--spacing-md);
font-weight: 600;
}
.tiddler-content h1 { font-size: 1.75rem; }
.tiddler-content h2 { font-size: 1.5rem; }
.tiddler-content h3 { font-size: 1.25rem; }
.tiddler-content h4 { font-size: 1.125rem; }
.tiddler-content p {
margin-bottom: var(--spacing-md);
}
/* Lists - Enhanced Styling */
.tiddler-content ul,
.tiddler-content ol {
margin: var(--spacing-md) 0;
padding-left: var(--spacing-xl);
}
.tiddler-content ul {
list-style: none;
}
.tiddler-content ul > li {
position: relative;
margin-bottom: var(--spacing-sm);
padding-left: 8px;
}
.tiddler-content ul > li::before {
content: "•";
position: absolute;
left: -16px;
color: var(--accent-color);
font-weight: bold;
}
.tiddler-content ol {
list-style: none;
counter-reset: item;
}
.tiddler-content ol > li {
position: relative;
margin-bottom: var(--spacing-sm);
padding-left: 8px;
counter-increment: item;
}
.tiddler-content ol > li::before {
content: counter(item) ".";
position: absolute;
left: -24px;
color: var(--accent-color);
font-weight: 600;
}
/* Nested lists */
.tiddler-content ul ul,
.tiddler-content ol ol,
.tiddler-content ul ol,
.tiddler-content ol ul {
margin: var(--spacing-xs) 0;
}
.tiddler-content ul ul > li::before {
content: "◦";
}
.tiddler-content ul ul ul > li::before {
content: "▪";
}
.tiddler-content a {
color: var(--accent-color);
text-decoration: none;
}
.tiddler-content a:hover {
text-decoration: underline;
}
/* Inline Code - Red Highlight */
.tiddler-content code {
font-family: var(--font-family-mono);
font-size: 0.875em;
padding: 2px 6px;
background-color: #fff5f5;
color: #c92a2a;
border-radius: 4px;
border: 1px solid #ffc9c9;
}
/* Code Blocks - Dark Background */
.tiddler-content pre {
position: relative;
margin: var(--spacing-md) 0;
padding: 0;
background-color: #1e2128;
border-radius: 8px;
overflow: hidden;
border: 1px solid #3d4450;
}
.tiddler-content pre::before {
content: attr(data-language);
display: block;
padding: 8px 16px;
background-color: #2d333b;
color: #8b949e;
font-size: 0.75rem;
font-family: var(--font-family);
text-transform: uppercase;
letter-spacing: 0.05em;
border-bottom: 1px solid #3d4450;
}
.tiddler-content pre code {
display: block;
padding: var(--spacing-md);
background: none;
color: #e6edf3;
font-size: var(--font-size-sm);
line-height: 1.6;
overflow-x: auto;
border: none;
}
.copy-code-btn {
position: absolute;
top: 6px;
right: 12px;
padding: 4px 10px;
font-size: 0.7rem;
background-color: #3d4450;
color: #8b949e;
border: 1px solid #4d5566;
border-radius: 4px;
cursor: pointer;
opacity: 0;
transition: all var(--transition-fast);
}
.copy-code-btn:hover {
background-color: #4d5566;
color: #e6edf3;
}
.tiddler-content pre:hover .copy-code-btn {
opacity: 1;
}
/* Tables - Blue Header Style */
.tiddler-content table {
width: 100%;
margin: var(--spacing-md) 0;
border-collapse: collapse;
border: 1px solid #dee2e6;
border-radius: 8px;
overflow: hidden;
}
.tiddler-content th {
padding: 12px 16px;
background: linear-gradient(135deg, #1971c2, #228be6);
color: white;
font-weight: 600;
text-align: left;
border: none;
border-bottom: 2px solid #1864ab;
}
.tiddler-content td {
padding: 10px 16px;
border: 1px solid #e9ecef;
text-align: left;
}
.tiddler-content tbody tr:nth-child(odd) {
background-color: #f8f9fa;
}
.tiddler-content tbody tr:nth-child(even) {
background-color: #ffffff;
}
.tiddler-content tbody tr:hover {
background-color: #e7f5ff;
}
/* Screenshots */
.screenshot {
margin: var(--spacing-lg) 0;
text-align: center;
}
.screenshot img {
max-width: 100%;
border: 1px solid var(--border-color);
border-radius: 8px;
box-shadow: var(--shadow-md);
}
.screenshot figcaption {
margin-top: var(--spacing-sm);
font-size: var(--font-size-sm);
color: var(--text-muted);
font-style: italic;
}
.screenshot-placeholder {
padding: var(--spacing-xl);
background-color: var(--bg-tertiary);
border: 2px dashed var(--border-color);
border-radius: 8px;
color: var(--text-muted);
text-align: center;
}
/* ========== Footer ========== */
.wiki-footer {
padding: var(--spacing-lg);
text-align: center;
color: var(--text-muted);
font-size: var(--font-size-sm);
border-top: 1px solid var(--border-color);
background-color: var(--bg-primary);
}
/* ========== Theme Toggle ========== */
.theme-toggle {
position: fixed;
bottom: var(--spacing-lg);
right: var(--spacing-lg);
width: 48px;
height: 48px;
border-radius: 50%;
border: none;
background-color: var(--bg-primary);
box-shadow: var(--shadow-lg);
cursor: pointer;
font-size: 1.5rem;
z-index: 100;
transition: transform var(--transition-fast);
}
.theme-toggle:hover {
transform: scale(1.1);
}
[data-theme="light"] .moon-icon { display: inline; }
[data-theme="light"] .sun-icon { display: none; }
[data-theme="dark"] .moon-icon { display: none; }
[data-theme="dark"] .sun-icon { display: inline; }
/* ========== Back to Top ========== */
.back-to-top {
position: fixed;
bottom: calc(var(--spacing-lg) + 60px);
right: var(--spacing-lg);
width: 40px;
height: 40px;
border-radius: 50%;
border: none;
background-color: var(--accent-color);
color: white;
font-size: 1.25rem;
cursor: pointer;
opacity: 0;
visibility: hidden;
transition: all var(--transition-fast);
z-index: 100;
}
.back-to-top.visible {
opacity: 1;
visibility: visible;
}
.back-to-top:hover {
background-color: var(--accent-hover);
}
/* ========== Responsive ========== */
@media (max-width: 1024px) {
.wiki-sidebar {
transform: translateX(-100%);
}
.wiki-sidebar.open {
transform: translateX(0);
}
.wiki-content {
margin-left: 0;
}
.sidebar-toggle {
display: flex;
}
}
@media (max-width: 640px) {
.tiddler-header {
flex-direction: column;
align-items: flex-start;
}
.header-actions {
display: none;
}
.wiki-tags {
overflow-x: auto;
flex-wrap: nowrap;
padding-bottom: var(--spacing-md);
}
}
/* ========== Print Styles ========== */
@media print {
.wiki-sidebar,
.theme-toggle,
.back-to-top,
.content-header,
.collapse-toggle,
.copy-code-btn {
display: none !important;
}
.wiki-content {
margin-left: 0;
}
.tiddler {
break-inside: avoid;
box-shadow: none;
border: 1px solid #ccc;
}
.tiddler.collapsed .tiddler-content {
display: block;
}
.tiddler-content pre {
background-color: #f5f5f5 !important;
color: #333 !important;
}
}

@@ -1,278 +0,0 @@
/* ========================================
TiddlyWiki-Style Dark Theme
Software Manual Skill
======================================== */
[data-theme="dark"] {
/* Dark Theme Colors */
--bg-primary: #1a1a2e;
--bg-secondary: #16213e;
--bg-tertiary: #0f3460;
--text-primary: #eaeaea;
--text-secondary: #b8b8b8;
--text-muted: #888888;
--border-color: #2d3748;
--accent-color: #4dabf7;
--accent-hover: #339af0;
--success-color: #51cf66;
--warning-color: #ffd43b;
--danger-color: #ff6b6b;
--info-color: #22b8cf;
/* Shadows */
--shadow-sm: 0 1px 2px rgba(0, 0, 0, 0.3);
--shadow-md: 0 4px 6px rgba(0, 0, 0, 0.4);
--shadow-lg: 0 10px 15px rgba(0, 0, 0, 0.5);
}
/* Dark theme specific overrides */
[data-theme="dark"] .wiki-logo .logo-placeholder {
background: linear-gradient(135deg, var(--accent-color), #6741d9);
}
[data-theme="dark"] .wiki-search input {
background-color: var(--bg-tertiary);
border-color: var(--border-color);
color: var(--text-primary);
}
[data-theme="dark"] .wiki-search input::placeholder {
color: var(--text-muted);
}
[data-theme="dark"] .search-results {
background-color: var(--bg-secondary);
border-color: var(--border-color);
}
[data-theme="dark"] .search-result-item {
border-color: var(--border-color);
}
[data-theme="dark"] .search-result-item:hover {
background-color: var(--bg-tertiary);
}
[data-theme="dark"] .result-excerpt mark {
background-color: rgba(255, 212, 59, 0.3);
color: var(--warning-color);
}
[data-theme="dark"] .wiki-tags .tag {
background-color: var(--bg-tertiary);
border-color: var(--border-color);
color: var(--text-secondary);
}
[data-theme="dark"] .wiki-tags .tag:hover {
border-color: var(--accent-color);
color: var(--accent-color);
}
[data-theme="dark"] .wiki-tags .tag.active {
background-color: var(--accent-color);
border-color: var(--accent-color);
color: #1a1a2e;
}
[data-theme="dark"] .wiki-toc a:hover {
background-color: var(--bg-tertiary);
}
[data-theme="dark"] .content-header {
background-color: var(--bg-primary);
border-color: var(--border-color);
}
[data-theme="dark"] .sidebar-toggle span {
background-color: var(--text-primary);
}
[data-theme="dark"] .header-actions button {
background-color: var(--bg-secondary);
border-color: var(--border-color);
color: var(--text-secondary);
}
[data-theme="dark"] .header-actions button:hover {
border-color: var(--accent-color);
color: var(--accent-color);
}
[data-theme="dark"] .tiddler {
background-color: var(--bg-primary);
border-color: var(--border-color);
}
[data-theme="dark"] .tiddler-header {
border-color: var(--border-color);
}
[data-theme="dark"] .difficulty-badge.beginner {
background-color: rgba(81, 207, 102, 0.2);
color: var(--success-color);
}
[data-theme="dark"] .difficulty-badge.intermediate {
background-color: rgba(255, 212, 59, 0.2);
color: var(--warning-color);
}
[data-theme="dark"] .difficulty-badge.advanced {
background-color: rgba(255, 107, 107, 0.2);
color: var(--danger-color);
}
[data-theme="dark"] .tag-badge {
background-color: var(--bg-tertiary);
color: var(--text-secondary);
}
[data-theme="dark"] .tiddler-content code {
background-color: var(--bg-tertiary);
color: var(--accent-color);
}
[data-theme="dark"] .tiddler-content pre {
background-color: #0d1117;
border: 1px solid var(--border-color);
}
[data-theme="dark"] .tiddler-content pre code {
color: #e6e6e6;
}
[data-theme="dark"] .copy-code-btn {
background-color: var(--bg-tertiary);
color: var(--text-secondary);
}
[data-theme="dark"] .tiddler-content th {
background-color: var(--bg-tertiary);
}
[data-theme="dark"] .tiddler-content tr:nth-child(even) {
background-color: var(--bg-secondary);
}
[data-theme="dark"] .tiddler-content th,
[data-theme="dark"] .tiddler-content td {
border-color: var(--border-color);
}
[data-theme="dark"] .screenshot img {
border-color: var(--border-color);
}
[data-theme="dark"] .screenshot-placeholder {
background-color: var(--bg-tertiary);
border-color: var(--border-color);
}
[data-theme="dark"] .wiki-footer {
background-color: var(--bg-primary);
border-color: var(--border-color);
}
[data-theme="dark"] .theme-toggle {
background-color: var(--bg-secondary);
color: var(--warning-color);
}
[data-theme="dark"] .back-to-top {
background-color: var(--accent-color);
}
[data-theme="dark"] .back-to-top:hover {
background-color: var(--accent-hover);
}
/* Scrollbar styling for dark theme */
[data-theme="dark"] ::-webkit-scrollbar {
width: 8px;
height: 8px;
}
[data-theme="dark"] ::-webkit-scrollbar-track {
background: var(--bg-secondary);
}
[data-theme="dark"] ::-webkit-scrollbar-thumb {
background: var(--bg-tertiary);
border-radius: 4px;
}
[data-theme="dark"] ::-webkit-scrollbar-thumb:hover {
background: var(--border-color);
}
/* Selection color */
[data-theme="dark"] ::selection {
background-color: rgba(77, 171, 247, 0.3);
color: var(--text-primary);
}
/* Focus styles for accessibility */
[data-theme="dark"] :focus {
outline-color: var(--accent-color);
}
[data-theme="dark"] .wiki-search input:focus {
border-color: var(--accent-color);
box-shadow: 0 0 0 3px rgba(77, 171, 247, 0.2);
}
/* Link colors */
[data-theme="dark"] .tiddler-content a {
color: var(--accent-color);
}
[data-theme="dark"] .tiddler-content a:hover {
color: var(--accent-hover);
}
/* Blockquote styling */
[data-theme="dark"] .tiddler-content blockquote {
border-left: 4px solid var(--accent-color);
background-color: var(--bg-tertiary);
padding: var(--spacing-md);
margin: var(--spacing-md) 0;
color: var(--text-secondary);
}
/* Horizontal rule */
[data-theme="dark"] .tiddler-content hr {
border: none;
border-top: 1px solid var(--border-color);
margin: var(--spacing-lg) 0;
}
/* Alert/Note boxes */
[data-theme="dark"] .note,
[data-theme="dark"] .warning,
[data-theme="dark"] .tip,
[data-theme="dark"] .danger {
padding: var(--spacing-md);
border-radius: 6px;
margin: var(--spacing-md) 0;
}
[data-theme="dark"] .note {
background-color: rgba(34, 184, 207, 0.1);
border-left: 4px solid var(--info-color);
}
[data-theme="dark"] .warning {
background-color: rgba(255, 212, 59, 0.1);
border-left: 4px solid var(--warning-color);
}
[data-theme="dark"] .tip {
background-color: rgba(81, 207, 102, 0.1);
border-left: 4px solid var(--success-color);
}
[data-theme="dark"] .danger {
background-color: rgba(255, 107, 107, 0.1);
border-left: 4px solid var(--danger-color);
}


@@ -1,466 +0,0 @@
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="{{SOFTWARE_NAME}} - Interactive Software Manual">
<meta name="generator" content="software-manual-skill">
<title>{{SOFTWARE_NAME}} v{{VERSION}} - User Manual</title>
<style>
{{EMBEDDED_CSS}}
</style>
</head>
<body class="docsify-container" data-theme="light">
<!-- Sidebar Navigation -->
<aside class="sidebar" id="sidebar">
<!-- Logo and Title -->
<div class="sidebar-header">
<div class="logo">
<span class="logo-icon">{{LOGO_ICON}}</span>
<div class="logo-text">
<h1>{{SOFTWARE_NAME}}</h1>
<span class="version">v{{VERSION}}</span>
</div>
</div>
</div>
<!-- Search Box -->
<div class="sidebar-search">
<div class="search-box">
<svg class="search-icon" viewBox="0 0 24 24" width="16" height="16">
<circle cx="11" cy="11" r="8" fill="none" stroke="currentColor" stroke-width="2"/>
<path d="M21 21l-4.35-4.35" fill="none" stroke="currentColor" stroke-width="2"/>
</svg>
<input type="text" id="searchInput" placeholder="搜索文档..." aria-label="Search">
</div>
<div id="searchResults" class="search-results"></div>
</div>
<!-- Hierarchical Navigation -->
<nav class="sidebar-nav" id="sidebarNav">
{{SIDEBAR_NAV_HTML}}
</nav>
</aside>
<!-- Main Content Area -->
<main class="main-content" id="mainContent">
<!-- Mobile Header -->
<header class="mobile-header">
<button class="sidebar-toggle" id="sidebarToggle" aria-label="Toggle sidebar">
<svg viewBox="0 0 24 24" width="24" height="24">
<path d="M3 18h18v-2H3v2zm0-5h18v-2H3v2zm0-7v2h18V6H3z" fill="currentColor"/>
</svg>
</button>
<span class="current-section" id="currentSection">{{SOFTWARE_NAME}}</span>
<button class="theme-toggle-mobile" id="themeToggleMobile" aria-label="Toggle theme">
<span class="sun-icon">&#9728;</span>
<span class="moon-icon">&#9790;</span>
</button>
</header>
<!-- Content Sections (only one visible at a time) -->
<div class="content-wrapper">
{{SECTIONS_HTML}}
</div>
<!-- Footer -->
<footer class="main-footer">
<p>Generated by <strong>software-manual-skill</strong> | Last updated: {{TIMESTAMP}}</p>
</footer>
</main>
<!-- Theme Toggle (Desktop) -->
<button class="theme-toggle" id="themeToggle" aria-label="Toggle theme">
<span class="sun-icon">&#9728;</span>
<span class="moon-icon">&#9790;</span>
</button>
<!-- Back to Top -->
<button class="back-to-top" id="backToTop" aria-label="Back to top">
<svg viewBox="0 0 24 24" width="20" height="20">
<path d="M7.41 15.41L12 10.83l4.59 4.58L18 14l-6-6-6 6z" fill="currentColor"/>
</svg>
</button>
<!-- Search Index Data -->
<script id="search-index" type="application/json">
{{SEARCH_INDEX_JSON}}
</script>
<!-- Navigation Structure Data -->
<script id="nav-structure" type="application/json">
{{NAV_STRUCTURE_JSON}}
</script>
<!-- Mermaid.js for diagram rendering -->
<script src="https://cdn.jsdelivr.net/npm/mermaid@10/dist/mermaid.min.js"></script>
<script>
mermaid.initialize({
startOnLoad: false,
theme: document.body.dataset.theme === 'dark' ? 'dark' : 'default',
securityLevel: 'loose'
});
</script>
<!-- Embedded JavaScript -->
<script>
(function() {
'use strict';
// ========== State Management ==========
let currentSectionId = null;
const sections = document.querySelectorAll('.content-section');
const navItems = document.querySelectorAll('.nav-item');
// ========== Section Navigation ==========
function showSection(sectionId) {
// Hide all sections
sections.forEach(s => s.classList.remove('active'));
// Show target section
const target = document.getElementById('section-' + sectionId);
if (target) {
target.classList.add('active');
currentSectionId = sectionId;
// Update URL hash
history.pushState(null, '', '#/' + sectionId);
// Update nav active state
navItems.forEach(item => {
item.classList.remove('active');
if (item.dataset.section === sectionId) {
item.classList.add('active');
// Expand parent groups
expandParentGroups(item);
}
});
// Update mobile header
const currentSectionEl = document.getElementById('currentSection');
if (currentSectionEl && target.dataset.title) {
currentSectionEl.textContent = target.dataset.title;
}
// Scroll to top
document.getElementById('mainContent').scrollTop = 0;
}
}
function expandParentGroups(item) {
let parent = item.parentElement;
while (parent) {
if (parent.classList.contains('nav-group')) {
parent.classList.add('expanded');
const toggle = parent.querySelector('.nav-group-toggle');
if (toggle) toggle.setAttribute('aria-expanded', 'true');
}
parent = parent.parentElement;
}
}
// ========== Navigation Click Handlers ==========
navItems.forEach(item => {
item.addEventListener('click', function(e) {
e.preventDefault();
const sectionId = this.dataset.section;
if (sectionId) {
showSection(sectionId);
// Close sidebar on mobile
document.getElementById('sidebar').classList.remove('open');
}
});
});
// ========== Navigation Group Toggle ==========
document.querySelectorAll('.nav-group-toggle').forEach(toggle => {
toggle.addEventListener('click', function(e) {
e.stopPropagation();
const group = this.closest('.nav-group');
group.classList.toggle('expanded');
this.setAttribute('aria-expanded', group.classList.contains('expanded'));
});
});
// ========== Search Functionality ==========
const indexData = JSON.parse(document.getElementById('search-index').textContent);
const searchInput = document.getElementById('searchInput');
const searchResults = document.getElementById('searchResults');
function searchDocs(query) {
if (!query || query.length < 2) return [];
const results = [];
const lowerQuery = query.toLowerCase();
for (const [id, content] of Object.entries(indexData)) {
let score = 0;
const titleLower = content.title.toLowerCase();
const bodyLower = content.body.toLowerCase();
if (titleLower.includes(lowerQuery)) score += 10;
if (bodyLower.includes(lowerQuery)) score += 5;
if (score > 0) {
results.push({
id,
title: content.title,
excerpt: getExcerpt(content.body, query),
score
});
}
}
return results.sort((a, b) => b.score - a.score).slice(0, 8);
}
function getExcerpt(text, query) {
const maxLength = 120;
const lowerText = text.toLowerCase();
const lowerQuery = query.toLowerCase();
const index = lowerText.indexOf(lowerQuery);
if (index === -1) {
return text.substring(0, maxLength) + (text.length > maxLength ? '...' : '');
}
const start = Math.max(0, index - 30);
const end = Math.min(text.length, index + query.length + 60);
let excerpt = text.substring(start, end);
if (start > 0) excerpt = '...' + excerpt;
if (end < text.length) excerpt += '...';
const regex = new RegExp('(' + query.replace(/[.*+?^${}()|[\]\\]/g, '\\$&') + ')', 'gi');
return excerpt.replace(regex, '<mark>$1</mark>');
}
searchInput.addEventListener('input', function() {
const query = this.value.trim();
const results = searchDocs(query);
if (results.length === 0) {
searchResults.innerHTML = query.length >= 2
? '<div class="no-results">未找到结果</div>'
: '';
searchResults.classList.toggle('visible', query.length >= 2);
return;
}
searchResults.innerHTML = results.map(r => `
<a href="#/${r.id}" class="search-result-item" data-section="${r.id}">
<div class="result-title">${r.title}</div>
<div class="result-excerpt">${r.excerpt}</div>
</a>
`).join('');
searchResults.classList.add('visible');
});
searchResults.addEventListener('click', function(e) {
const item = e.target.closest('.search-result-item');
if (item) {
e.preventDefault();
searchInput.value = '';
searchResults.innerHTML = '';
searchResults.classList.remove('visible');
showSection(item.dataset.section);
}
});
// Close search results when clicking outside
document.addEventListener('click', function(e) {
if (!e.target.closest('.sidebar-search')) {
searchResults.classList.remove('visible');
}
});
// ========== Theme Toggle ==========
function setTheme(theme) {
document.body.dataset.theme = theme;
localStorage.setItem('docs-theme', theme);
}
const savedTheme = localStorage.getItem('docs-theme') || 'light';
setTheme(savedTheme);
document.getElementById('themeToggle').addEventListener('click', function() {
setTheme(document.body.dataset.theme === 'dark' ? 'light' : 'dark');
});
document.getElementById('themeToggleMobile').addEventListener('click', function() {
setTheme(document.body.dataset.theme === 'dark' ? 'light' : 'dark');
});
// ========== Sidebar Toggle (Mobile) ==========
document.getElementById('sidebarToggle').addEventListener('click', function() {
document.getElementById('sidebar').classList.toggle('open');
});
// Close sidebar when clicking outside on mobile
document.addEventListener('click', function(e) {
const sidebar = document.getElementById('sidebar');
const toggle = document.getElementById('sidebarToggle');
if (!sidebar.contains(e.target) && !toggle.contains(e.target)) {
sidebar.classList.remove('open');
}
});
// ========== Back to Top ==========
const backToTop = document.getElementById('backToTop');
const mainContent = document.getElementById('mainContent');
mainContent.addEventListener('scroll', function() {
backToTop.classList.toggle('visible', this.scrollTop > 300);
});
backToTop.addEventListener('click', function() {
mainContent.scrollTo({ top: 0, behavior: 'smooth' });
});
// ========== Code Block Copy ==========
document.querySelectorAll('pre').forEach(pre => {
const wrapper = document.createElement('div');
wrapper.className = 'code-block-wrapper';
pre.parentNode.insertBefore(wrapper, pre);
wrapper.appendChild(pre);
const copyBtn = document.createElement('button');
copyBtn.className = 'copy-code-btn';
copyBtn.innerHTML = '<svg viewBox="0 0 24 24" width="16" height="16"><path d="M16 1H4c-1.1 0-2 .9-2 2v14h2V3h12V1zm3 4H8c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h11c1.1 0 2-.9 2-2V7c0-1.1-.9-2-2-2zm0 16H8V7h11v14z" fill="currentColor"/></svg>';
copyBtn.addEventListener('click', function() {
const code = pre.querySelector('code') || pre;
navigator.clipboard.writeText(code.textContent).then(() => {
copyBtn.innerHTML = '<svg viewBox="0 0 24 24" width="16" height="16"><path d="M9 16.17L4.83 12l-1.42 1.41L9 19 21 7l-1.41-1.41z" fill="currentColor"/></svg>';
setTimeout(() => {
copyBtn.innerHTML = '<svg viewBox="0 0 24 24" width="16" height="16"><path d="M16 1H4c-1.1 0-2 .9-2 2v14h2V3h12V1zm3 4H8c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h11c1.1 0 2-.9 2-2V7c0-1.1-.9-2-2-2zm0 16H8V7h11v14z" fill="currentColor"/></svg>';
}, 2000);
});
});
wrapper.appendChild(copyBtn);
});
// ========== Mermaid Diagram Rendering ==========
function renderMermaidDiagrams() {
// Find all mermaid code blocks and convert them to diagrams
document.querySelectorAll('pre code.language-mermaid, pre code.highlight-mermaid').forEach((codeBlock, index) => {
const pre = codeBlock.parentElement;
const wrapper = pre.parentElement;
const code = codeBlock.textContent;
// Create mermaid container
const mermaidDiv = document.createElement('div');
mermaidDiv.className = 'mermaid';
mermaidDiv.textContent = code;
// Replace code block with mermaid div
if (wrapper && wrapper.classList.contains('code-block-wrapper')) {
wrapper.parentElement.replaceChild(mermaidDiv, wrapper);
} else {
pre.parentElement.replaceChild(mermaidDiv, pre);
}
});
// Also handle codehilite blocks with mermaid
document.querySelectorAll('.highlight').forEach((block) => {
const code = block.querySelector('code, pre');
const mermaidPrefixes = ['graph ', 'sequenceDiagram', 'flowchart ', 'classDiagram', 'stateDiagram', 'erDiagram', 'gantt', 'pie', 'journey'];
if (code && mermaidPrefixes.some(prefix => code.textContent.trim().startsWith(prefix))) {
const mermaidDiv = document.createElement('div');
mermaidDiv.className = 'mermaid';
mermaidDiv.textContent = code.textContent;
block.parentElement.replaceChild(mermaidDiv, block);
}
});
// Render all mermaid diagrams
if (typeof mermaid !== 'undefined') {
mermaid.run();
}
}
// ========== Internal Anchor Links Handler ==========
// Handle clicks on internal anchor links (TOC links like #材料管理api)
document.addEventListener('click', function(e) {
const link = e.target.closest('a[href^="#"]');
if (!link) return;
const href = link.getAttribute('href');
// Skip section navigation links (handled by nav-item)
if (link.classList.contains('nav-item')) return;
// Skip search result links
if (link.classList.contains('search-result-item')) return;
// Check if it's an internal anchor (not a section link)
if (href && href.startsWith('#') && !href.startsWith('#/')) {
e.preventDefault();
const anchorId = href.substring(1);
const targetElement = document.getElementById(anchorId);
if (targetElement) {
// Scroll to the anchor within current section
targetElement.scrollIntoView({ behavior: 'smooth', block: 'start' });
// Update URL without triggering popstate
history.pushState(null, '', '#/' + currentSectionId + '/' + anchorId);
}
}
});
// ========== Hash Parser ==========
function parseHash(hash) {
// Handle formats: #/sectionId, #/sectionId/anchorId, #anchorId
if (!hash || hash === '#' || hash === '#/') return { section: null, anchor: null };
if (hash.startsWith('#/')) {
const parts = hash.substring(2).split('/');
return { section: parts[0] || null, anchor: parts[1] || null };
} else {
// Plain anchor like #材料管理api - stay on current section
return { section: null, anchor: hash.substring(1) };
}
}
// ========== Initial Load ==========
// Check URL hash or show first section
const initialHash = parseHash(window.location.hash);
if (initialHash.section && document.getElementById('section-' + initialHash.section)) {
showSection(initialHash.section);
// Scroll to anchor if present
if (initialHash.anchor) {
setTimeout(() => {
const anchor = document.getElementById(initialHash.anchor);
if (anchor) anchor.scrollIntoView({ behavior: 'smooth', block: 'start' });
}, 100);
}
} else if (sections.length > 0) {
const firstSection = sections[0].id.replace('section-', '');
showSection(firstSection);
}
// Render mermaid diagrams after initial load
setTimeout(renderMermaidDiagrams, 100);
// Handle browser back/forward
window.addEventListener('popstate', function() {
const parsed = parseHash(window.location.hash);
if (parsed.section) {
showSection(parsed.section);
if (parsed.anchor) {
setTimeout(() => {
const anchor = document.getElementById(parsed.anchor);
if (anchor) anchor.scrollIntoView({ behavior: 'smooth', block: 'start' });
}, 100);
}
}
});
})();
</script>
</body>
</html>


@@ -1,327 +0,0 @@
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="{{SOFTWARE_NAME}} - Interactive Software Manual">
<meta name="generator" content="software-manual-skill">
<title>{{SOFTWARE_NAME}} v{{VERSION}} - User Manual</title>
<style>
{{EMBEDDED_CSS}}
</style>
</head>
<body class="wiki-container" data-theme="light">
<!-- Sidebar Navigation -->
<aside class="wiki-sidebar">
<!-- Logo and Title -->
<div class="wiki-logo">
<div class="logo-placeholder">{{SOFTWARE_NAME}}</div>
<h1>{{SOFTWARE_NAME}}</h1>
<span class="version">v{{VERSION}}</span>
</div>
<!-- Search Box -->
<div class="wiki-search">
<input type="text" id="searchInput" placeholder="Search documentation..." aria-label="Search">
<div id="searchResults" class="search-results" aria-live="polite"></div>
</div>
<!-- Tag Navigation (Dynamic) -->
<nav class="wiki-tags" aria-label="Filter by category">
<button class="tag active" data-tag="all">All</button>
{{TAG_BUTTONS_HTML}}
</nav>
<!-- Table of Contents -->
{{TOC_HTML}}
</aside>
<!-- Main Content Area -->
<main class="wiki-content">
<!-- Header Bar -->
<header class="content-header">
<button class="sidebar-toggle" id="sidebarToggle" aria-label="Toggle sidebar">
<span></span>
<span></span>
<span></span>
</button>
<div class="header-actions">
<button class="expand-all" id="expandAll">Expand All</button>
<button class="collapse-all" id="collapseAll">Collapse All</button>
<button class="print-btn" id="printBtn">Print</button>
</div>
</header>
<!-- Tiddler Container -->
<div class="tiddler-container">
{{TIDDLERS_HTML}}
</div>
<!-- Footer -->
<footer class="wiki-footer">
<p>Generated by <strong>software-manual-skill</strong></p>
<p>Last updated: <time datetime="{{TIMESTAMP}}">{{TIMESTAMP}}</time></p>
</footer>
</main>
<!-- Theme Toggle Button -->
<button class="theme-toggle" id="themeToggle" aria-label="Toggle theme">
<span class="sun-icon">&#9728;</span>
<span class="moon-icon">&#9790;</span>
</button>
<!-- Back to Top Button -->
<button class="back-to-top" id="backToTop" aria-label="Back to top">&#8593;</button>
<!-- Search Index Data -->
<script id="search-index" type="application/json">
{{SEARCH_INDEX_JSON}}
</script>
<!-- Embedded JavaScript -->
<script>
(function() {
'use strict';
// ========== Search Functionality ==========
class WikiSearch {
constructor(indexData) {
this.index = indexData;
}
search(query) {
if (!query || query.length < 2) return [];
const results = [];
const lowerQuery = query.toLowerCase();
const queryWords = lowerQuery.split(/\s+/);
for (const [id, content] of Object.entries(this.index)) {
let score = 0;
// Title match (higher weight)
const titleLower = content.title.toLowerCase();
if (titleLower.includes(lowerQuery)) {
score += 10;
}
queryWords.forEach(word => {
if (titleLower.includes(word)) score += 3;
});
// Body match
const bodyLower = content.body.toLowerCase();
if (bodyLower.includes(lowerQuery)) {
score += 5;
}
queryWords.forEach(word => {
if (bodyLower.includes(word)) score += 1;
});
// Tag match
if (content.tags) {
content.tags.forEach(tag => {
if (tag.toLowerCase().includes(lowerQuery)) score += 4;
});
}
if (score > 0) {
results.push({
id,
title: content.title,
excerpt: this.highlight(content.body, query),
score
});
}
}
return results
.sort((a, b) => b.score - a.score)
.slice(0, 10);
}
highlight(text, query) {
const maxLength = 150;
const lowerText = text.toLowerCase();
const lowerQuery = query.toLowerCase();
const index = lowerText.indexOf(lowerQuery);
if (index === -1) {
return text.substring(0, maxLength) + (text.length > maxLength ? '...' : '');
}
const start = Math.max(0, index - 40);
const end = Math.min(text.length, index + query.length + 80);
let excerpt = text.substring(start, end);
if (start > 0) excerpt = '...' + excerpt;
if (end < text.length) excerpt += '...';
// Highlight matches
const regex = new RegExp('(' + query.replace(/[.*+?^${}()|[\]\\]/g, '\\$&') + ')', 'gi');
return excerpt.replace(regex, '<mark>$1</mark>');
}
}
// Initialize search
const indexData = JSON.parse(document.getElementById('search-index').textContent);
const search = new WikiSearch(indexData);
const searchInput = document.getElementById('searchInput');
const searchResults = document.getElementById('searchResults');
searchInput.addEventListener('input', function() {
const query = this.value.trim();
const results = search.search(query);
if (results.length === 0) {
searchResults.innerHTML = query.length >= 2
? '<div class="no-results">No results found</div>'
: '';
return;
}
searchResults.innerHTML = results.map(r => `
<a href="#${r.id}" class="search-result-item" data-tiddler="${r.id}">
<div class="result-title">${r.title}</div>
<div class="result-excerpt">${r.excerpt}</div>
</a>
`).join('');
});
// Clear search on result click
searchResults.addEventListener('click', function(e) {
const item = e.target.closest('.search-result-item');
if (item) {
searchInput.value = '';
searchResults.innerHTML = '';
// Expand target tiddler
const tiddlerId = item.dataset.tiddler;
const tiddler = document.getElementById(tiddlerId);
if (tiddler) {
tiddler.classList.remove('collapsed');
const toggle = tiddler.querySelector('.collapse-toggle');
if (toggle) toggle.textContent = '▼';
}
}
});
// ========== Collapse/Expand ==========
document.querySelectorAll('.collapse-toggle').forEach(btn => {
btn.addEventListener('click', function() {
const tiddler = this.closest('.tiddler');
tiddler.classList.toggle('collapsed');
this.textContent = tiddler.classList.contains('collapsed') ? '▶' : '▼';
});
});
// Expand/Collapse All
document.getElementById('expandAll').addEventListener('click', function() {
document.querySelectorAll('.tiddler').forEach(t => {
t.classList.remove('collapsed');
const toggle = t.querySelector('.collapse-toggle');
if (toggle) toggle.textContent = '▼';
});
});
document.getElementById('collapseAll').addEventListener('click', function() {
document.querySelectorAll('.tiddler').forEach(t => {
t.classList.add('collapsed');
const toggle = t.querySelector('.collapse-toggle');
if (toggle) toggle.textContent = '▶';
});
});
// ========== Tag Filtering ==========
document.querySelectorAll('.wiki-tags .tag').forEach(tag => {
tag.addEventListener('click', function() {
const filter = this.dataset.tag;
// Update active state
document.querySelectorAll('.wiki-tags .tag').forEach(t => t.classList.remove('active'));
this.classList.add('active');
// Filter tiddlers
document.querySelectorAll('.tiddler').forEach(tiddler => {
if (filter === 'all') {
tiddler.style.display = '';
} else {
const tags = tiddler.dataset.tags || '';
tiddler.style.display = tags.includes(filter) ? '' : 'none';
}
});
});
});
// ========== Theme Toggle ==========
const themeToggle = document.getElementById('themeToggle');
const savedTheme = localStorage.getItem('wiki-theme');
if (savedTheme) {
document.body.dataset.theme = savedTheme;
}
themeToggle.addEventListener('click', function() {
const isDark = document.body.dataset.theme === 'dark';
document.body.dataset.theme = isDark ? 'light' : 'dark';
localStorage.setItem('wiki-theme', document.body.dataset.theme);
});
// ========== Sidebar Toggle (Mobile) ==========
document.getElementById('sidebarToggle').addEventListener('click', function() {
document.querySelector('.wiki-sidebar').classList.toggle('open');
});
// ========== Back to Top ==========
const backToTop = document.getElementById('backToTop');
window.addEventListener('scroll', function() {
backToTop.classList.toggle('visible', window.scrollY > 300);
});
backToTop.addEventListener('click', function() {
window.scrollTo({ top: 0, behavior: 'smooth' });
});
// ========== Print ==========
document.getElementById('printBtn').addEventListener('click', function() {
window.print();
});
// ========== TOC Navigation ==========
document.querySelectorAll('.wiki-toc a').forEach(link => {
link.addEventListener('click', function(e) {
const tiddlerId = this.getAttribute('href').substring(1);
const tiddler = document.getElementById(tiddlerId);
if (tiddler) {
// Expand if collapsed
tiddler.classList.remove('collapsed');
const toggle = tiddler.querySelector('.collapse-toggle');
if (toggle) toggle.textContent = '▼';
// Close sidebar on mobile
document.querySelector('.wiki-sidebar').classList.remove('open');
}
});
});
// ========== Code Block Copy ==========
document.querySelectorAll('pre').forEach(pre => {
const copyBtn = document.createElement('button');
copyBtn.className = 'copy-code-btn';
copyBtn.textContent = 'Copy';
copyBtn.addEventListener('click', function() {
const code = pre.querySelector('code');
navigator.clipboard.writeText(code.textContent).then(() => {
copyBtn.textContent = 'Copied!';
setTimeout(() => copyBtn.textContent = 'Copy', 2000);
});
});
pre.appendChild(copyBtn);
});
})();
</script>
</body>
</html>


@@ -1,322 +0,0 @@
---
name: team-skill-designer
description: Design and generate unified team skills with role-based routing. All team members invoke ONE skill, SKILL.md routes to role-specific execution via --role arg. Triggers on "design team skill", "create team skill", "team skill designer".
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep
---
# Team Skill Designer
Meta-skill for creating unified team skills where all team members invoke ONE skill with role-based routing. Generates a complete skill package with SKILL.md as role router and `roles/` folder for per-role execution detail.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Team Skill Designer (this meta-skill) │
│ → Collect requirements → Analyze patterns → Generate skill pkg │
└───────────────┬─────────────────────────────────────────────────┘
┌───────────┼───────────┬───────────┬───────────┐
↓ ↓ ↓ ↓ ↓
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Phase 1 │ │ Phase 2 │ │ Phase 3 │ │ Phase 4 │ │ Phase 5 │
│ Require │ │ Pattern │ │ Skill │ │ Integ │ │ Valid │
│ Collect │ │ Analyze │ │ Gen │ │ Verify │ │ │
└─────────┘ └─────────┘ └─────────┘ └─────────┘ └─────────┘
↓ ↓ ↓ ↓ ↓
team- patterns SKILL.md + report validated
config.json .json roles/*.md .json skill pkg
```
## Key Innovation: Unified Skill + Role Router
**Before (command approach)**:
```
.claude/commands/team/
├── coordinate.md → /team:coordinate
├── plan.md → /team:plan
├── execute.md → /team:execute
├── test.md → /team:test
└── review.md → /team:review
```
→ 5 separate command files, 5 separate skill paths
**After (unified skill approach)**:
```
.claude/skills/team-{name}/
├── SKILL.md → Skill(skill="team-{name}", args="--role=xxx")
├── roles/
│ ├── coordinator/
│ │ ├── role.md # Orchestrator
│ │ └── commands/ # Modular command files
│ ├── planner/
│ │ ├── role.md
│ │ └── commands/
│ ├── executor/
│ │ ├── role.md
│ │ └── commands/
│ ├── tester/
│ │ ├── role.md
│ │ └── commands/
│ └── reviewer/
│ ├── role.md
│ └── commands/
└── specs/
└── team-config.json
```
→ 1 skill entry point, --role arg routes to per-role execution
**Coordinator spawns teammates with**:
```javascript
Task({
prompt: `...invoke Skill(skill="team-{name}", args="--role=planner") to run planning...`
})
```
## Target Output Structure
```
.claude/skills/team-{name}/
├── SKILL.md # Role router + shared infrastructure
│ ├─ Frontmatter
│ ├─ Architecture Overview (role routing diagram)
│ ├─ Command Architecture (folder structure explanation)
│ ├─ Role Router (parse --role → Read roles/{role}/role.md → execute)
│ ├─ Shared Infrastructure (message bus, task lifecycle)
│ ├─ Coordinator Spawn Template
│ └─ Error Handling
├── roles/ # Role-specific execution detail (folder-based)
│ ├── coordinator/
│ │ ├── role.md # Orchestrator (Phase 1/5 inline, Phase 2-4 delegate)
│ │ └── commands/
│ │ ├── dispatch.md # Task chain creation
│ │ └── monitor.md # Progress monitoring
│ ├── {role-1}/
│ │ ├── role.md # Worker orchestrator
│ │ └── commands/
│ │ └── *.md # Role-specific command files
│ └── {role-2}/
│ ├── role.md
│ └── commands/
│ └── *.md
└── specs/ # [Optional] Team-specific config
└── team-config.json
```
## Core Design Patterns
### Pattern 1: Role Router (Unified Entry Point)
SKILL.md parses `$ARGUMENTS` to extract `--role`:
```
Input: Skill(skill="team-{name}", args="--role=planner")
↓ Parse --role=planner
↓ Read roles/planner/role.md
↓ Execute planner-specific 5-phase logic
```
No --role → error (role is required, set by coordinator spawn).
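The parse step above can be sketched as follows (a minimal illustration only — the function name, regex, and error message are assumptions, not part of the skill spec):
```javascript
// Sketch of the role-router parse step: extract --role from the args string,
// fail fast when it is missing (the coordinator spawn is expected to set it).
function parseRole(args) {
  const match = /--role=([\w-]+)/.exec(args || '');
  if (!match) {
    throw new Error('--role is required (set by coordinator spawn)');
  }
  return match[1]; // e.g. "planner" → next step reads roles/planner/role.md
}
```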
### Pattern 2: Shared Infrastructure in SKILL.md
SKILL.md defines ONCE, all roles inherit:
- Message bus pattern (team_msg + CLI fallback)
- Task lifecycle (TaskList → TaskGet → TaskUpdate)
- Team name and session directory conventions
- Error handling and escalation rules
### Pattern 3: Role Files = Full Execution Detail
Each `roles/{role}/role.md` contains:
- Toolbox section (available commands, subagent capabilities, CLI capabilities)
- Role-specific 5-phase implementation (Phase 1/5 inline, Phase 2-4 delegate or inline)
- Per-role message types
- Per-role task prefix
- Complete code (no `Ref:` back to SKILL.md)
- Command files in `commands/` for complex phases (subagent delegation, CLI fan-out)
### Pattern 4: Batch Role Generation
Phase 1 collects ALL roles at once (not one at a time):
- Team name + all role definitions in one pass
- Coordinator is always generated
- Worker roles collected as a batch
### Pattern 5: Self-Contained Specs
Design pattern specs are included locally in `specs/`:
```
specs/team-design-patterns.md # Infrastructure patterns (9) + collaboration index
specs/collaboration-patterns.md # 10 collaboration patterns with convergence control
specs/quality-standards.md # Quality criteria (incl. command file standards)
```
---
## Mandatory Prerequisites
> **Do NOT skip**: Read these before any execution.
### Specification Documents (Required Reading)
| Document | Purpose | When |
|----------|---------|------|
| [specs/team-design-patterns.md](specs/team-design-patterns.md) | Infrastructure patterns (9) + collaboration index | **Must read** |
| [specs/collaboration-patterns.md](specs/collaboration-patterns.md) | 10 collaboration patterns with convergence control | **Must read** |
| [specs/quality-standards.md](specs/quality-standards.md) | Quality criteria | Must read before generation |
### Template Files (Must read before generation)
| Document | Purpose |
|----------|---------|
| [templates/skill-router-template.md](templates/skill-router-template.md) | Generated SKILL.md template with role router + command architecture |
| [templates/role-template.md](templates/role-template.md) | Generated role file template with Toolbox + command delegation |
| [templates/role-command-template.md](templates/role-command-template.md) | Command file template with 7 pre-built patterns |
### Existing Reference
| Document | Purpose |
|----------|---------|
| `.claude/commands/team/coordinate.md` | Coordinator spawn patterns |
| `.claude/commands/team/plan.md` | Planner role reference |
| `.claude/commands/team/execute.md` | Executor role reference |
| `.claude/commands/team/test.md` | Tester role reference |
| `.claude/commands/team/review.md` | Reviewer role reference |
---
## Execution Flow
```
Phase 0: Specification Study (MANDATORY)
-> Read: specs/team-design-patterns.md
-> Read: specs/collaboration-patterns.md
-> Read: templates/skill-router-template.md + templates/role-template.md
-> Read: 1-2 existing team commands for reference
-> Output: Internalized requirements (in-memory)
Phase 1: Requirements Collection
-> Ref: phases/01-requirements-collection.md
- Collect team name and ALL role definitions (batch)
- For each role: name, responsibility, task prefix, capabilities
- Pipeline definition (task chain order)
- Output: team-config.json (team-level + per-role config)
Phase 2: Pattern Analysis
-> Ref: phases/02-pattern-analysis.md
- Per-role: find most similar existing command
- Per-role: select infrastructure + collaboration patterns
- Per-role: map 5-phase structure
- Output: pattern-analysis.json
Phase 3: Skill Package Generation
-> Ref: phases/03-skill-generation.md
- Generate SKILL.md (role router + command architecture + shared infrastructure)
- Generate roles/{name}/role.md (per-role orchestrator with Toolbox)
- Generate roles/{name}/commands/*.md (modular command files)
- Generate specs/team-config.json
- Output: .claude/skills/team-{name}/ complete package
Phase 4: Integration Verification
-> Ref: phases/04-integration-verification.md
- Verify role router references match role files
- Verify task prefixes are unique across roles
- Verify message type compatibility
- Output: integration-report.json
Phase 5: Validation
-> Ref: phases/05-validation.md
- Structural completeness per role file
- Pattern compliance per role file
- Quality scoring and delivery
- Output: validation-report.json + delivered skill package
```
**Phase Reference Documents** (read on-demand):
| Phase | Document | Purpose |
|-------|----------|---------|
| 1 | [phases/01-requirements-collection.md](phases/01-requirements-collection.md) | Batch collect team + all role definitions |
| 2 | [phases/02-pattern-analysis.md](phases/02-pattern-analysis.md) | Per-role pattern matching and phase mapping |
| 3 | [phases/03-skill-generation.md](phases/03-skill-generation.md) | Generate unified skill package |
| 4 | [phases/04-integration-verification.md](phases/04-integration-verification.md) | Verify internal consistency |
| 5 | [phases/05-validation.md](phases/05-validation.md) | Quality gate and delivery |
## Directory Setup
```javascript
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const workDir = `.workflow/.scratchpad/team-skill-${timestamp}`;
Bash(`mkdir -p "${workDir}"`);
```
## Output Structure
```
.workflow/.scratchpad/team-skill-{timestamp}/
├── team-config.json # Phase 1 output (team + all roles)
├── pattern-analysis.json # Phase 2 output (per-role patterns)
├── integration-report.json # Phase 4 output
├── validation-report.json # Phase 5 output
└── preview/ # Phase 3 output (preview before delivery)
├── SKILL.md
├── roles/
│ ├── coordinator/
│ │ ├── role.md
│ │ └── commands/
│ │ ├── dispatch.md
│ │ └── monitor.md
│ └── {role-N}/
│ ├── role.md
│ └── commands/
│ └── *.md
└── specs/
└── team-config.json
Final delivery:
.claude/skills/team-{name}/
├── SKILL.md
├── roles/
│ ├── coordinator/
│ │ ├── role.md
│ │ └── commands/
│ └── {role-N}/
│ ├── role.md
│ └── commands/
└── specs/
└── team-config.json
```
## Comparison: Command Designer vs Skill Designer
| Aspect | team-command-designer | team-skill-designer |
|--------|----------------------|---------------------|
| Output | N separate .md command files | 1 skill package (SKILL.md + roles/) |
| Entry point | N skill paths (/team:xxx) | 1 skill path + --role arg |
| Shared infra | Duplicated in each command | Defined once in SKILL.md |
| Role isolation | Complete (separate files) | Complete (roles/ directory) |
| Coordinator spawn | `Skill(skill="team:plan")` | `Skill(skill="team-{name}", args="--role=planner")` |
| Role generation | One role at a time | All roles in batch |
| Template | command-template.md | skill-router-template.md + role-template.md |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Specs not found | Fall back to inline pattern knowledge |
| Role name conflicts | AskUserQuestion for rename |
| Task prefix conflicts | Suggest alternative prefix |
| Template variable unresolved | FAIL with specific variable name |
| Quality score < 60% | Re-run Phase 3 with additional context |
## Debugging
| Issue | Solution |
|-------|----------|
| Generated SKILL.md missing router | Check templates/skill-router-template.md |
| Role file missing message bus | Check templates/role-template.md |
| Command file not found | Check templates/role-command-template.md |
| Role folder structure wrong | Verify roles/{name}/role.md + commands/ layout |
| Integration check fails | Review phases/04-integration-verification.md |
| Quality score below threshold | Review specs/quality-standards.md |

# Phase 1: Requirements Collection (Task-Driven Inference)
Analyze task requirements, infer appropriate roles, and generate team configuration.
## Objective
- Determine team name and display name
- **Analyze task description → infer needed roles** (coordinator always included)
- For each role: name, responsibility, task prefix, capabilities
- Build pipeline from inferred roles
- Generate team-config.json
## Input
- User request (`$ARGUMENTS` or interactive input)
- Specification: `specs/team-design-patterns.md` (read in Phase 0)
## Execution Steps
### Step 1: Team Name + Task Description
```javascript
const teamInfo = await AskUserQuestion({
questions: [
{
question: "What is the team name? (lowercase; used as the skill folder name: .claude/skills/team-{name}/)",
header: "Team Name",
multiSelect: false,
options: [
{ label: "Custom", description: "Enter a custom team name" },
{ label: "dev", description: "General development team" },
{ label: "spec", description: "Specification document team" },
{ label: "security", description: "Security audit team" }
]
},
{
question: "What is this team's core task? (Describe the target scenario; the required roles will be inferred automatically)",
header: "Task Desc",
multiSelect: false,
options: [
{ label: "Custom", description: "Enter a concrete task description, e.g. implement a new feature and ensure quality" },
{ label: "Full-stack development", description: "Requirements analysis → planning → coding → testing → review" },
{ label: "Code review & refactoring", description: "Code analysis → issue discovery → refactoring → verification" },
{ label: "Documentation", description: "Research → discussion → drafting → review" }
]
}
]
})
```
### Step 2: Role Inference (Task-Driven)
```javascript
// The coordinator is always present
const roles = [{
name: "coordinator",
responsibility_type: "Orchestration",
task_prefix: null, // coordinator creates tasks, doesn't receive them
description: "Pipeline orchestration, team lifecycle, cross-stage coordination"
}]
// Role signal matrix: infer roles from intent signals in the task description
const taskDesc = teamInfo["Task Desc"]
const ROLE_SIGNALS = {
planner: {
signals: /规划|计划|设计|架构|plan|design|architect|分析需求|探索|explore/i,
role: { name: "planner", responsibility_type: "Orchestration", task_prefix: "PLAN", description: "Code exploration and implementation planning" }
},
executor: {
signals: /实现|开发|编码|编写|创建|构建|implement|develop|build|code|create|重构|refactor|迁移|migrate/i,
role: { name: "executor", responsibility_type: "Code generation", task_prefix: "IMPL", description: "Code implementation following approved plan" }
},
tester: {
signals: /测试|验证|质量|test|verify|validate|QA|回归|regression|修复|fix|bug/i,
role: { name: "tester", responsibility_type: "Validation", task_prefix: "TEST", description: "Test execution and fix cycles" }
},
reviewer: {
signals: /审查|审核|review|audit|检查|inspect|代码质量|code quality/i,
role: { name: "reviewer", responsibility_type: "Read-only analysis", task_prefix: "REVIEW", description: "Multi-dimensional code review" }
},
analyst: {
signals: /调研|研究|分析|research|analyze|探索|investigate|诊断|diagnose/i,
role: { name: "analyst", responsibility_type: "Orchestration", task_prefix: "RESEARCH", description: "Codebase exploration and context collection" }
},
writer: {
signals: /文档|撰写|编写文档|document|write doc|生成报告|report/i,
role: { name: "writer", responsibility_type: "Code generation", task_prefix: "DRAFT", description: "Document drafting following templates" }
},
debugger: {
signals: /debug|调试|排查|定位问题|根因|root cause|故障|troubleshoot/i,
role: { name: "debugger", responsibility_type: "Orchestration", task_prefix: "DEBUG", description: "Bug diagnosis and root cause analysis" }
},
security: {
signals: /安全|漏洞|security|vulnerability|渗透|penetration|OWASP|合规|compliance/i,
role: { name: "security", responsibility_type: "Read-only analysis", task_prefix: "SEC", description: "Security analysis and vulnerability assessment" }
}
}
// Infer roles: signal matching + implicit role completion
const inferredRoles = []
for (const [key, entry] of Object.entries(ROLE_SIGNALS)) {
if (entry.signals.test(taskDesc)) {
inferredRoles.push(entry.role)
}
}
// Implicit role completion rules:
// - executor requires planner (planning precedes coding)
// - executor requires tester (verification follows coding)
// - debugger requires tester (debugging requires verified fixes)
// - writer requires reviewer (documents need proofreading)
const hasRole = name => inferredRoles.some(r => r.name === name)
if (hasRole('executor') && !hasRole('planner')) {
inferredRoles.unshift(ROLE_SIGNALS.planner.role)
}
if (hasRole('executor') && !hasRole('tester')) {
inferredRoles.push(ROLE_SIGNALS.tester.role)
}
if (hasRole('debugger') && !hasRole('tester')) {
inferredRoles.push(ROLE_SIGNALS.tester.role)
}
if (hasRole('writer') && !hasRole('reviewer')) {
inferredRoles.push(ROLE_SIGNALS.reviewer.role)
}
// Guarantee at least 2 worker roles
if (inferredRoles.length < 2) {
// Fallback: standard plan → implement → test → review
inferredRoles.length = 0
inferredRoles.push(
ROLE_SIGNALS.planner.role,
ROLE_SIGNALS.executor.role,
ROLE_SIGNALS.tester.role,
ROLE_SIGNALS.reviewer.role
)
}
// Deduplicate and append to the overall role list
const seen = new Set()
for (const role of inferredRoles) {
if (!seen.has(role.name)) {
seen.add(role.name)
roles.push(role)
}
}
// Infer the pipeline type label (used later in Step 5)
const pipelineType = inferredRoles.some(r => r.name === 'writer') ? 'Document'
: inferredRoles.some(r => r.name === 'debugger') ? 'Debug'
: 'Standard'
```
### Step 3: Role Confirmation (Interactive)
```javascript
// Present the inferred roles so the user can confirm or adjust
const workerRoles = roles.filter(r => r.name !== 'coordinator')
const rolesSummary = workerRoles
.map(r => `${r.name} (${r.responsibility_type}, ${r.task_prefix})`)
.join('\n')
const confirmation = await AskUserQuestion({
questions: [
{
question: `Based on the task description, the following roles were inferred:\n${rolesSummary}\n\nAny adjustments?`,
header: "Confirm",
multiSelect: false,
options: [
{ label: "Confirm (Recommended)", description: "Use the inferred role combination" },
{ label: "Add role", description: "Add roles on top of the inferred set" },
{ label: "Remove role", description: "Remove roles that are not needed" },
{ label: "Re-describe", description: "Re-enter the task description and re-run inference" }
]
}
]
})
if (confirmation["Confirm"].includes("Add role")) {
const newRole = await AskUserQuestion({
questions: [
{
question: "New role name? (lowercase)",
header: "Role Name",
multiSelect: false,
options: [
{ label: "Custom", description: "Enter a custom role name" },
{ label: "deployer", description: "Deployment and release management" },
{ label: "documenter", description: "Documentation generation" },
{ label: "monitor", description: "Monitoring and alerting" }
]
},
{
question: "Role responsibility type?",
header: "Type",
multiSelect: false,
options: [
{ label: "Read-only analysis", description: "Analyze/review/report (no file changes)" },
{ label: "Code generation", description: "Write/modify code files" },
{ label: "Orchestration", description: "Coordinate subtasks and agents" },
{ label: "Validation", description: "Test/verify/audit" }
]
}
]
})
// Add to roles array
}
```
### Step 4: Capability Selection (Per Role)
```javascript
// For each worker role, determine capabilities
for (const role of roles.filter(r => r.name !== 'coordinator')) {
// Infer capabilities from responsibility type
const baseTools = ["SendMessage(*)", "TaskUpdate(*)", "TaskList(*)", "TaskGet(*)", "TodoWrite(*)", "Read(*)", "Bash(*)", "Glob(*)", "Grep(*)"]
if (role.responsibility_type === "Code generation") {
role.allowed_tools = [...baseTools, "Write(*)", "Edit(*)", "Task(*)"]
role.adaptive_routing = true
} else if (role.responsibility_type === "Orchestration") {
role.allowed_tools = [...baseTools, "Write(*)", "Task(*)"]
role.adaptive_routing = true
} else if (role.responsibility_type === "Validation") {
role.allowed_tools = [...baseTools, "Write(*)", "Edit(*)", "Task(*)"]
role.adaptive_routing = false
} else {
// Read-only analysis
role.allowed_tools = [...baseTools, "Task(*)"]
role.adaptive_routing = false
}
// Infer message types
const roleMsgTypes = {
"Read-only analysis": [
{ type: `${role.name}_result`, trigger: "Analysis complete" },
{ type: "error", trigger: "Blocking error" }
],
"Code generation": [
{ type: `${role.name}_complete`, trigger: "Generation complete" },
{ type: `${role.name}_progress`, trigger: "Batch progress" },
{ type: "error", trigger: "Blocking error" }
],
"Orchestration": [
{ type: `${role.name}_ready`, trigger: "Results ready" },
{ type: `${role.name}_progress`, trigger: "Progress update" },
{ type: "error", trigger: "Blocking error" }
],
"Validation": [
{ type: `${role.name}_result`, trigger: "Validation complete" },
{ type: "fix_required", trigger: "Critical issues found" },
{ type: "error", trigger: "Blocking error" }
]
}
role.message_types = roleMsgTypes[role.responsibility_type] || []
}
// Coordinator special config
roles[0].allowed_tools = [
"TeamCreate(*)", "TeamDelete(*)", "SendMessage(*)",
"TaskCreate(*)", "TaskUpdate(*)", "TaskList(*)", "TaskGet(*)",
"Task(*)", "AskUserQuestion(*)", "TodoWrite(*)",
"Read(*)", "Bash(*)", "Glob(*)", "Grep(*)"
]
roles[0].message_types = [
{ type: "plan_approved", trigger: "Plan approved" },
{ type: "plan_revision", trigger: "Revision requested" },
{ type: "task_unblocked", trigger: "Task unblocked" },
{ type: "shutdown", trigger: "Team shutdown" },
{ type: "error", trigger: "Coordination error" }
]
```
### Step 4b: Toolbox Inference (Per Role)
```javascript
// Infer commands, subagents, and CLI tools based on responsibility type
const toolboxMap = {
"Read-only analysis": {
commands: ["review", "analyze"],
subagents: [],
cli_tools: [
{ tool: "gemini", mode: "analysis", purpose: "Multi-perspective code analysis" },
{ tool: "codex", mode: "review", purpose: "Git-aware code review" }
],
phase_commands: { phase2: null, phase3: "analyze", phase4: null }
},
"Code generation": {
commands: ["implement", "validate"],
subagents: [
{ type: "code-developer", purpose: "Complex implementation delegation" }
],
cli_tools: [],
phase_commands: { phase2: null, phase3: "implement", phase4: "validate" }
},
"Orchestration": {
commands: ["explore", "plan"],
subagents: [
{ type: "cli-explore-agent", purpose: "Multi-angle codebase exploration" },
{ type: "cli-lite-planning-agent", purpose: "Structured planning" }
],
cli_tools: [
{ tool: "gemini", mode: "analysis", purpose: "Architecture analysis" }
],
phase_commands: { phase2: "explore", phase3: null, phase4: null }
},
"Validation": {
commands: ["validate"],
subagents: [
{ type: "code-developer", purpose: "Test-fix iteration" }
],
cli_tools: [],
phase_commands: { phase2: null, phase3: "validate", phase4: null }
}
}
for (const role of roles.filter(r => r.name !== 'coordinator')) {
const toolbox = toolboxMap[role.responsibility_type] || { commands: [], subagents: [], cli_tools: [], phase_commands: {} }
role.commands = toolbox.commands
role.subagents = toolbox.subagents
role.cli_tools = toolbox.cli_tools
role.phase_commands = toolbox.phase_commands
}
// Coordinator always gets dispatch + monitor
roles[0].commands = ["dispatch", "monitor"]
roles[0].subagents = []
roles[0].cli_tools = []
roles[0].phase_commands = { phase2: null, phase3: "dispatch", phase4: "monitor" }
```
### Step 5: Pipeline Definition (Dynamic)
```javascript
// Build the pipeline dynamically from the inferred roles
// Ordering weight: analysis/exploration < planning < implementation < validation/review
const PHASE_ORDER = {
analyst: 1, debugger: 1, security: 1,
planner: 2,
executor: 3, writer: 3,
tester: 4, reviewer: 4
}
function buildPipeline(roles) {
const workers = roles
.filter(r => r.name !== 'coordinator')
.sort((a, b) => (PHASE_ORDER[a.name] || 3) - (PHASE_ORDER[b.name] || 3))
// Group roles by phase
const phaseGroups = {}
for (const r of workers) {
const order = PHASE_ORDER[r.name] || 3
if (!phaseGroups[order]) phaseGroups[order] = []
phaseGroups[order].push(r)
}
// Build the dependency chain: each phase is blocked by all roles of the previous phase
const stages = []
const sortedPhases = Object.keys(phaseGroups).map(Number).sort((a, b) => a - b)
let prevPrefixes = []
for (const phase of sortedPhases) {
const group = phaseGroups[phase]
for (const r of group) {
stages.push({
name: r.task_prefix,
role: r.name,
blockedBy: [...prevPrefixes]
})
}
prevPrefixes = group.map(r => r.task_prefix)
}
// Render the pipeline diagram
const diagramParts = sortedPhases.map(phase => {
const group = phaseGroups[phase]
if (group.length === 1) return `[${group[0].task_prefix}: ${group[0].name}]`
return `[${group.map(r => `${r.task_prefix}`).join(' + ')}: ${group.map(r => r.name).join('/')}]`
})
const diagram = `Requirement → ${diagramParts.join(' → ')} → Report`
return { stages, diagram }
}
const pipeline = buildPipeline(roles)
```
### Step 6: Generate Configuration
```javascript
const teamName = teamInfo["Team Name"] === "Custom"
? teamInfo["Team Name_other"]
: teamInfo["Team Name"]
const config = {
team_name: teamName,
team_display_name: teamName.charAt(0).toUpperCase() + teamName.slice(1),
skill_name: `team-${teamName}`,
skill_path: `.claude/skills/team-${teamName}/`,
pipeline_type: pipelineType,
pipeline: pipeline,
roles: roles.map(r => ({
...r,
display_name: `${teamName} ${r.name}`,
name_upper: r.name.toUpperCase()
})),
worker_roles: roles.filter(r => r.name !== 'coordinator').map(r => ({
...r,
display_name: `${teamName} ${r.name}`,
name_upper: r.name.toUpperCase()
})),
all_roles_tools_union: [...new Set(roles.flatMap(r => r.allowed_tools))].join(', '),
role_list: roles.map(r => r.name).join(', ')
}
Write(`${workDir}/team-config.json`, JSON.stringify(config, null, 2))
```
## Output
- **File**: `team-config.json`
- **Format**: JSON
- **Location**: `{workDir}/team-config.json`
## Quality Checklist
- [ ] Team name is lowercase, valid as folder/skill name
- [ ] Coordinator is always included
- [ ] At least 2 worker roles defined
- [ ] Task prefixes are UPPERCASE and unique across roles
- [ ] Pipeline stages reference valid roles
- [ ] All roles have message types defined
- [ ] Allowed tools include minimum set per role
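The checklist above can be sketched as a validator over the Step 6 config shape; `validateConfig` is an illustrative helper for this phase, not part of the generated skill:

```javascript
// Hedged sketch of a team-config validator covering the Phase 1 checklist.
// Returns a list of violations; an empty list means the checklist passes.
function validateConfig(config) {
  const errors = [];
  if (!/^[a-z][a-z0-9-]*$/.test(config.team_name)) errors.push("team name must be lowercase");
  if (!config.roles.some(r => r.name === "coordinator")) errors.push("coordinator missing");
  const workers = config.roles.filter(r => r.name !== "coordinator");
  if (workers.length < 2) errors.push("need at least 2 worker roles");
  const prefixes = workers.map(r => r.task_prefix);
  if (new Set(prefixes).size !== prefixes.length) errors.push("task prefixes must be unique");
  if (prefixes.some(p => p !== p.toUpperCase())) errors.push("task prefixes must be UPPERCASE");
  return errors;
}
```

Running this before writing `team-config.json` catches prefix collisions early, before Phase 4's integration check.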
## Next Phase
-> [Phase 2: Pattern Analysis](02-pattern-analysis.md)

# Phase 2: Pattern Analysis
Analyze applicable patterns for each role in the team.
## Objective
- Per-role: find most similar existing command
- Per-role: select infrastructure + collaboration patterns
- Per-role: map 5-phase structure to role responsibilities
- Generate pattern-analysis.json
## Input
- Dependency: `team-config.json` (Phase 1)
- Specification: `specs/team-design-patterns.md` (read in Phase 0)
## Execution Steps
### Step 1: Load Configuration
```javascript
const config = JSON.parse(Read(`${workDir}/team-config.json`))
```
### Step 2: Per-Role Similarity Mapping
```javascript
const similarityMap = {
"Read-only analysis": {
primary: "review", secondary: "plan",
reason: "Both analyze code and report findings with severity classification"
},
"Code generation": {
primary: "execute", secondary: "test",
reason: "Both write/modify code and self-validate"
},
"Orchestration": {
primary: "plan", secondary: "coordinate",
reason: "Both coordinate sub-tasks and produce structured output"
},
"Validation": {
primary: "test", secondary: "review",
reason: "Both validate quality with structured criteria"
}
}
const roleAnalysis = config.worker_roles.map(role => {
const similarity = similarityMap[role.responsibility_type]
return {
role_name: role.name,
similar_to: similarity,
reference_command: `.claude/commands/team/${similarity.primary}.md`
}
})
```
### Step 3: Per-Role Phase Mapping
```javascript
const phaseMapping = {
"Read-only analysis": {
phase2: "Context Loading",
phase3: "Analysis Execution",
phase4: "Finding Summary"
},
"Code generation": {
phase2: "Task & Plan Loading",
phase3: "Code Implementation",
phase4: "Self-Validation"
},
"Orchestration": {
phase2: "Context & Complexity Assessment",
phase3: "Orchestrated Execution",
phase4: "Result Aggregation"
},
"Validation": {
phase2: "Environment Detection",
phase3: "Execution & Fix Cycle",
phase4: "Result Analysis"
}
}
roleAnalysis.forEach(ra => {
const role = config.worker_roles.find(r => r.name === ra.role_name)
ra.phase_structure = {
phase1: "Task Discovery",
...phaseMapping[role.responsibility_type],
phase5: "Report to Coordinator"
}
})
```
### Step 4: Per-Role Infrastructure Patterns
```javascript
roleAnalysis.forEach(ra => {
const role = config.worker_roles.find(r => r.name === ra.role_name)
// Core patterns (mandatory for all)
ra.core_patterns = [
"pattern-1-message-bus",
"pattern-2-yaml-front-matter", // Adapted: no YAML in skill role files
"pattern-3-task-lifecycle",
"pattern-4-five-phase",
"pattern-6-coordinator-spawn",
"pattern-7-error-handling"
]
// Conditional patterns
ra.conditional_patterns = []
if (role.adaptive_routing) {
ra.conditional_patterns.push("pattern-5-complexity-adaptive")
}
if (role.responsibility_type === "Code generation" || role.responsibility_type === "Orchestration") {
ra.conditional_patterns.push("pattern-8-session-files")
}
})
```
### Step 4b: Command-to-Phase Mapping
```javascript
// Map commands to phases and determine extraction criteria
roleAnalysis.forEach(ra => {
const role = config.worker_roles.find(r => r.name === ra.role_name)
ra.command_mapping = {
commands: role.commands || [],
phase_commands: role.phase_commands || {},
extraction_reasons: []
}
// Determine extraction reasons per command
for (const cmd of (role.commands || [])) {
const reasons = []
if ((role.subagents || []).length > 0) reasons.push("subagent-delegation")
if ((role.cli_tools || []).length > 0) reasons.push("cli-fan-out")
if (role.adaptive_routing) reasons.push("complexity-adaptive")
ra.command_mapping.extraction_reasons.push({ command: cmd, reasons })
}
// Pattern 9 selection
ra.uses_pattern_9 = (role.subagents || []).length > 0 || (role.cli_tools || []).length > 0
})
```
### Step 5: Collaboration Pattern Selection
```javascript
// Team-level collaboration patterns
function selectTeamPatterns(config) {
const patterns = ['CP-1'] // Linear Pipeline is always base
const hasValidation = config.worker_roles.some(r =>
r.responsibility_type === 'Validation' || r.responsibility_type === 'Read-only analysis'
)
if (hasValidation) patterns.push('CP-2') // Review-Fix Cycle
const hasOrchestration = config.worker_roles.some(r =>
r.responsibility_type === 'Orchestration'
)
if (hasOrchestration) patterns.push('CP-3') // Fan-out/Fan-in
if (config.worker_roles.length >= 4) patterns.push('CP-6') // Incremental Delivery
patterns.push('CP-5') // Escalation Chain (always available)
patterns.push('CP-10') // Post-Mortem (always at team level)
return [...new Set(patterns)]
}
const collaborationPatterns = selectTeamPatterns(config)
// Convergence defaults
const convergenceConfig = collaborationPatterns.map(cp => {
const defaults = {
'CP-1': { max_iterations: 1, success_gate: 'all_stages_completed' },
'CP-2': { max_iterations: 5, success_gate: 'verdict_approve_or_conditional' },
'CP-3': { max_iterations: 1, success_gate: 'quorum_100_percent' },
'CP-5': { max_iterations: null, success_gate: 'issue_resolved_at_any_level' },
'CP-6': { max_iterations: 3, success_gate: 'all_increments_validated' },
'CP-10': { max_iterations: 1, success_gate: 'report_generated' }
}
return { pattern: cp, convergence: defaults[cp] || {} }
})
```
### Step 6: Read Reference Commands
```javascript
// Read the most referenced commands for extraction
const referencedCommands = [...new Set(roleAnalysis.map(ra => ra.similar_to.primary))]
const referenceContent = {}
for (const cmdName of referencedCommands) {
try {
referenceContent[cmdName] = Read(`.claude/commands/team/${cmdName}.md`)
} catch {
referenceContent[cmdName] = null
}
}
```
### Step 7: Generate Analysis Document
```javascript
const analysis = {
team_name: config.team_name,
role_count: config.roles.length,
worker_count: config.worker_roles.length,
role_analysis: roleAnalysis,
collaboration_patterns: collaborationPatterns,
convergence_config: convergenceConfig,
referenced_commands: referencedCommands,
pipeline: config.pipeline,
// Skill-specific patterns
skill_patterns: {
role_router: "Parse --role from $ARGUMENTS → dispatch to roles/{role}/role.md",
shared_infrastructure: "Message bus + task lifecycle defined once in SKILL.md",
progressive_loading: "Only read roles/{role}/role.md when that role executes"
},
command_architecture: {
enabled: true,
role_commands: roleAnalysis.map(ra => ({
role: ra.role_name,
commands: ra.command_mapping?.commands || [],
phase_commands: ra.command_mapping?.phase_commands || {},
uses_pattern_9: ra.uses_pattern_9 || false
}))
}
}
Write(`${workDir}/pattern-analysis.json`, JSON.stringify(analysis, null, 2))
```
## Output
- **File**: `pattern-analysis.json`
- **Format**: JSON
- **Location**: `{workDir}/pattern-analysis.json`
## Quality Checklist
- [ ] Every worker role has similarity mapping
- [ ] Every worker role has 5-phase structure
- [ ] Infrastructure patterns include all mandatory patterns
- [ ] Collaboration patterns selected at team level
- [ ] Referenced commands are readable
- [ ] Skill-specific patterns documented
## Next Phase
-> [Phase 3: Skill Package Generation](03-skill-generation.md)

# Phase 3: Skill Package Generation
Generate the unified team skill package: SKILL.md (role router) + roles/{name}/role.md (per-role orchestrator) + roles/{name}/commands/*.md (command modules).
## Objective
- Generate SKILL.md with role router and shared infrastructure
- Generate roles/coordinator/role.md + commands/
- Generate roles/{worker-role}/role.md + commands/ for each worker role
- Generate specs/team-config.json
- All files written to preview directory first
## Input
- Dependency: `team-config.json` (Phase 1), `pattern-analysis.json` (Phase 2)
- Templates: `templates/skill-router-template.md`, `templates/role-template.md`, `templates/role-command-template.md`
- Reference: existing team commands (read in Phase 0)
## Execution Steps
### Step 1: Load Inputs
```javascript
const config = JSON.parse(Read(`${workDir}/team-config.json`))
const analysis = JSON.parse(Read(`${workDir}/pattern-analysis.json`))
const routerTemplate = Read(`${skillDir}/templates/skill-router-template.md`)
const roleTemplate = Read(`${skillDir}/templates/role-template.md`)
const commandTemplate = Read(`${skillDir}/templates/role-command-template.md`)
// Create preview directory with folder-based role structure
const previewDir = `${workDir}/preview`
const roleDirs = config.roles.map(r => `"${previewDir}/roles/${r.name}/commands"`).join(' ')
Bash(`mkdir -p ${roleDirs} "${previewDir}/specs"`)
```
### Step 2: Generate SKILL.md (Role Router)
This is the unified entry point. All roles invoke this skill with `--role=xxx`.
```javascript
const rolesTable = config.roles.map(r =>
`| \`${r.name}\` | ${r.task_prefix || 'N/A'} | ${r.description} | [roles/${r.name}/role.md](roles/${r.name}/role.md) |`
).join('\n')
const roleDispatchEntries = config.roles.map(r =>
` "${r.name}": { file: "roles/${r.name}/role.md", prefix: "${r.task_prefix || 'N/A'}" }`
).join(',\n')
const messageBusTable = config.worker_roles.map(r =>
`| ${r.name} | ${r.message_types.map(mt => '\`' + mt.type + '\`').join(', ')} |`
).join('\n')
const spawnBlocks = config.worker_roles.map(r => `
// ${r.display_name}
Task({
subagent_type: "general-purpose",
team_name: teamName,
name: "${r.name}",
prompt: \`You are the ${r.name_upper} of team "\${teamName}".
When you receive a ${r.task_prefix}-* task, invoke Skill(skill="${config.skill_name}", args="--role=${r.name}") to execute it.
Current requirement: \${taskDescription}
Constraints: \${constraints}
## Message Bus (mandatory)
Before every SendMessage, first log via mcp__ccw-tools__team_msg:
mcp__ccw-tools__team_msg({ operation: "log", team: "\${teamName}", from: "${r.name}", to: "coordinator", type: "<type>", summary: "<summary>" })
Workflow:
1. TaskList → find the ${r.task_prefix}-* task
2. Execute via Skill(skill="${config.skill_name}", args="--role=${r.name}")
3. team_msg log + SendMessage the result to coordinator
4. TaskUpdate completed → check for the next task\`
})`).join('\n')
const skillMd = `---
name: ${config.skill_name}
description: Unified team skill for ${config.team_name} team. All roles invoke this skill with --role arg. Triggers on "team ${config.team_name}".
allowed-tools: ${config.all_roles_tools_union}
---
# Team ${config.team_display_name}
Unified team skill. All team members invoke this skill with \`--role=xxx\` for role-specific execution.
## Architecture Overview
\`\`\`
┌───────────────────────────────────────────┐
│ Skill(skill="${config.skill_name}") │
│ args="--role=xxx" │
└───────────────┬───────────────────────────┘
│ Role Router
┌───────────┼${'───────────┬'.repeat(Math.min(config.roles.length - 1, 3))}
${config.roles.map(() => `┌──────────┐ `).join('').trimEnd()}
${config.roles.map(r => `│${r.name.slice(0, 10).padEnd(10)}│ `).join('').trimEnd()}
${config.roles.map(() => `│ roles/   │ `).join('').trimEnd()}
${config.roles.map(() => `└──────────┘ `).join('').trimEnd()}
\`\`\`
## Role Router
### Input Parsing
Parse \`$ARGUMENTS\` to extract \`--role\`:
\`\`\`javascript
const args = "$ARGUMENTS"
const roleMatch = args.match(/--role[=\\s]+(\\w+)/)
if (!roleMatch) {
throw new Error("Missing --role argument. Available roles: ${config.role_list}")
}
const role = roleMatch[1]
const teamName = "${config.team_name}"
\`\`\`
### Role Dispatch
\`\`\`javascript
const VALID_ROLES = {
${roleDispatchEntries}
}
if (!VALID_ROLES[role]) {
throw new Error(\\\`Unknown role: \\\${role}. Available: \\\${Object.keys(VALID_ROLES).join(', ')}\\\`)
}
// Read and execute role-specific logic
Read(VALID_ROLES[role].file)
// → Execute the 5-phase process defined in that file
\`\`\`
### Available Roles
| Role | Task Prefix | Responsibility | Role File |
|------|-------------|----------------|-----------|
${rolesTable}
## Shared Infrastructure
### Team Configuration
\`\`\`javascript
const TEAM_CONFIG = {
name: "${config.team_name}",
sessionDir: ".workflow/.team-plan/${config.team_name}/",
msgDir: ".workflow/.team-msg/${config.team_name}/"
}
\`\`\`
### Message Bus (All Roles)
Before every SendMessage, the role **must** first call \`mcp__ccw-tools__team_msg\`:
\`\`\`javascript
mcp__ccw-tools__team_msg({
operation: "log",
team: "${config.team_name}",
from: role,
to: "coordinator",
type: "<type>",
summary: "<summary>"
})
\`\`\`
**Message types by role**:
| Role | Types |
|------|-------|
${messageBusTable}
### CLI Fallback
\`\`\`javascript
Bash(\\\`ccw team log --team "${config.team_name}" --from "\\\${role}" --to "coordinator" --type "<type>" --summary "<summary>" --json\\\`)
\`\`\`
### Task Lifecycle (All Roles)
\`\`\`javascript
// Phase 1: Discovery
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith(\\\`\\\${VALID_ROLES[role].prefix}-\\\`) &&
t.owner === role &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
// Phase 2-4: Role-specific (see roles/{role}.md)
// Phase 5: Report + Loop
TaskUpdate({ taskId: task.id, status: 'completed' })
\`\`\`
## Pipeline
\`\`\`
${config.pipeline.diagram}
\`\`\`
## Coordinator Spawn Template
\`\`\`javascript
TeamCreate({ team_name: "${config.team_name}" })
${spawnBlocks}
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Error with usage hint |
| Role file not found | Error with expected path |
`
Write(`${previewDir}/SKILL.md`, skillMd)
```
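The `--role` extraction regex used by the router accepts both `--role=name` and `--role name`. A minimal standalone sketch of that parsing logic, runnable outside the team runtime:

```javascript
// Standalone check of the router's --role extraction regex.
const parseRole = (args) => {
  const m = args.match(/--role[=\s]+(\w+)/)
  return m ? m[1] : null
}

console.log(parseRole("--role=planner"))  // "planner"
console.log(parseRole("--role executor")) // "executor"
console.log(parseRole("no role flag"))    // null
```

If the regex returns `null`, the router should raise the "Missing --role argument" error shown above.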
### Step 3: Generate Coordinator Role File
```javascript
const taskChainCode = config.pipeline.stages.map((stage, i) => {
const blockedByIds = stage.blockedBy.map(dep => {
const depIdx = config.pipeline.stages.findIndex(s => s.name === dep)
return `\${task${depIdx}Id}`
})
return `TaskCreate({ subject: "${stage.name}-001: ${stage.role} work", description: \`\${taskDescription}\`, activeForm: "${stage.name} in progress" })
TaskUpdate({ taskId: task${i}Id, owner: "${stage.role}"${blockedByIds.length > 0 ? `, addBlockedBy: [${blockedByIds.join(', ')}]` : ''} })`
}).join('\n\n')
const coordinationHandlers = config.worker_roles.map(r => {
const resultType = r.message_types.find(mt => !mt.type.includes('error') && !mt.type.includes('progress'))
return `| ${r.name_upper}: ${resultType?.trigger || 'work complete'} | team_msg log → TaskUpdate ${r.task_prefix} completed → check next |`
}).join('\n')
const coordinatorMd = `# Role: coordinator
Team coordinator. Orchestrates pipeline: requirement clarification → team creation → task chain → dispatch → monitoring → reporting.
## Role Identity
- **Name**: \`coordinator\`
- **Task Prefix**: N/A (creates tasks, doesn't receive them)
- **Responsibility**: Orchestration
- **Communication**: SendMessage to all teammates
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| \`plan_approved\` | coordinator → planner | Plan approved |
| \`plan_revision\` | coordinator → planner | Revision requested |
| \`task_unblocked\` | coordinator → worker | Task dependency met |
| \`shutdown\` | coordinator → all | Team shutdown |
| \`error\` | coordinator → user | Coordination error |
## Execution
### Phase 1: Requirement Clarification
Parse \`$ARGUMENTS\` for task description. Use AskUserQuestion for:
- MVP scope (minimal / full / comprehensive)
- Key constraints (backward compatible / follow patterns / test coverage)
Simple tasks can skip clarification.
### Phase 2: Create Team + Spawn Teammates
\`\`\`javascript
TeamCreate({ team_name: "${config.team_name}" })
${spawnBlocks}
\`\`\`
### Phase 3: Create Task Chain
\`\`\`javascript
${taskChainCode}
\`\`\`
### Phase 4: Coordination Loop
Receive teammate messages, dispatch based on content.
**Before each decision**: \`team_msg list\` to check recent messages.
**After each decision**: \`team_msg log\` to record.
| Received Message | Action |
|-----------------|--------|
${coordinationHandlers}
| Worker: error | Assess severity → retry or escalate to user |
| All tasks completed | → Phase 5 |
### Phase 5: Report + Persist
Summarize changes, test results, review findings.
\`\`\`javascript
AskUserQuestion({
questions: [{
question: "Current requirement is complete. Next step:",
header: "Next",
multiSelect: false,
options: [
{ label: "New requirement", description: "Submit a new requirement to the current team" },
{ label: "Close team", description: "Shut down all teammates and clean up" }
]
}]
})
// New requirement → back to Phase 1
// Close team → shutdown → TeamDelete()
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Teammate unresponsive | Send follow-up, 2x → respawn |
| Plan rejected 3+ times | Coordinator self-plans |
| Test stuck >5 iterations | Escalate to user |
| Review finds critical | Create fix task for executor |
`
Write(`${previewDir}/roles/coordinator/role.md`, coordinatorMd)
// Generate coordinator command files
const coordinatorCommands = config.roles[0].commands || ["dispatch", "monitor"]
for (const cmd of coordinatorCommands) {
// Read pre-built command pattern from template
const cmdTemplate = commandTemplate // templates/role-command-template.md
// Extract matching pre-built pattern section and customize for this team
const cmdContent = generateCommandFile(cmd, "coordinator", config)
Write(`${previewDir}/roles/coordinator/commands/${cmd}.md`, cmdContent)
}
```
### Step 4: Generate Worker Role Files (Folder Structure)
For each worker role, generate a complete role file with 5-phase execution.
```javascript
for (const role of config.worker_roles) {
const ra = analysis.role_analysis.find(r => r.role_name === role.name)
// Phase 2 content based on responsibility type
const phase2Content = {
"Read-only analysis": `\`\`\`javascript
// Load plan for criteria reference
const planPathMatch = task.description.match(/\\.workflow\\/\\.team-plan\\/[^\\s]+\\/plan\\.json/)
let plan = null
if (planPathMatch) {
try { plan = JSON.parse(Read(planPathMatch[0])) } catch {}
}
// Get changed files
const changedFiles = Bash(\`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached\`)
.split('\\n').filter(Boolean)
// Read file contents for analysis
const fileContents = {}
for (const file of changedFiles.slice(0, 20)) {
try { fileContents[file] = Read(file) } catch {}
}
\`\`\``,
"Code generation": `\`\`\`javascript
// Extract plan path from task description
const planPathMatch = task.description.match(/\\.workflow\\/\\.team-plan\\/[^\\s]+\\/plan\\.json/)
if (!planPathMatch) {
mcp__ccw-tools__team_msg({ operation: "log", team: "${config.team_name}", from: "${role.name}", to: "coordinator", type: "error", summary: "plan.json path invalid" })
SendMessage({ type: "message", recipient: "coordinator", content: \`Cannot find plan.json in \${task.subject}\`, summary: "Plan not found" })
return
}
const plan = JSON.parse(Read(planPathMatch[0]))
const planTasks = plan.task_ids.map(id =>
JSON.parse(Read(\`\${planPathMatch[0].replace('plan.json', '')}.task/\${id}.json\`))
)
\`\`\``,
"Orchestration": `\`\`\`javascript
function assessComplexity(desc) {
let score = 0
if (/refactor|architect|restructure|module|system/.test(desc)) score += 2
if (/multiple|across|cross/.test(desc)) score += 2
if (/integrate|api|database/.test(desc)) score += 1
if (/security|performance/.test(desc)) score += 1
return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}
const complexity = assessComplexity(task.description)
\`\`\``,
"Validation": `\`\`\`javascript
// Detect changed files for validation scope
const changedFiles = Bash(\`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached\`)
.split('\\n').filter(Boolean)
\`\`\``
}
// Phase 3 content based on responsibility type
const phase3Content = {
"Read-only analysis": `\`\`\`javascript
// Core analysis logic
// Reference: .claude/commands/team/${ra.similar_to.primary}.md Phase 3
// Analyze each file
for (const [file, content] of Object.entries(fileContents)) {
// Domain-specific analysis
}
\`\`\``,
"Code generation": `\`\`\`javascript
// Reference: .claude/commands/team/${ra.similar_to.primary}.md Phase 3
${role.adaptive_routing ? `// Complexity-adaptive execution
if (planTasks.length <= 2) {
// Direct file editing
for (const pt of planTasks) {
for (const f of (pt.files || [])) {
const content = Read(f.path)
Edit({ file_path: f.path, old_string: "...", new_string: "..." })
}
}
} else {
// Delegate to code-developer sub-agent
Task({
subagent_type: "code-developer",
run_in_background: false,
description: \`Implement \${planTasks.length} tasks\`,
prompt: \`## Goal
\${plan.summary}
## Tasks
\${planTasks.map(t => \`### \${t.title}\\n\${t.description}\`).join('\\n\\n')}
Complete each task according to its convergence criteria.\`
})
}` : `// Direct execution
for (const pt of planTasks) {
for (const f of (pt.files || [])) {
const content = Read(f.path)
Edit({ file_path: f.path, old_string: "...", new_string: "..." })
}
}`}
\`\`\``,
"Orchestration": `\`\`\`javascript
// Reference: .claude/commands/team/${ra.similar_to.primary}.md Phase 3
${role.adaptive_routing ? `if (complexity === 'Low') {
// Direct execution with mcp__ace-tool__search_context + Grep/Glob
} else {
// Launch sub-agents for complex work
Task({
subagent_type: "universal-executor",
run_in_background: false,
description: "${role.name} orchestration",
prompt: \`Execute ${role.name} work for: \${task.description}\`
})
}` : `// Direct orchestration`}
\`\`\``,
"Validation": `\`\`\`javascript
// Reference: .claude/commands/team/${ra.similar_to.primary}.md Phase 3
let iteration = 0
const MAX_ITERATIONS = 5
while (iteration < MAX_ITERATIONS) {
// Run validation
const result = Bash(\`npm test 2>&1 || true\`)
const passed = !result.includes('FAIL')
if (passed) break
// Attempt fix
iteration++
if (iteration < MAX_ITERATIONS) {
// Auto-fix or delegate
}
}
\`\`\``
}
// Phase 4 content
const phase4Content = {
"Read-only analysis": `\`\`\`javascript
// Classify findings by severity
const findings = { critical: [], high: [], medium: [], low: [] }
// ... populate findings from Phase 3 analysis
\`\`\``,
"Code generation": `\`\`\`javascript
// Self-validation
const syntaxResult = Bash(\`tsc --noEmit 2>&1 || true\`)
const hasSyntaxErrors = syntaxResult.includes('error TS')
if (hasSyntaxErrors) {
// Attempt auto-fix
}
\`\`\``,
"Orchestration": `\`\`\`javascript
// Aggregate results from sub-agents
const aggregated = {
// Merge findings, results, outputs
}
\`\`\``,
"Validation": `\`\`\`javascript
// Analyze results
const resultSummary = {
iterations: iteration,
passed: iteration < MAX_ITERATIONS,
// Coverage, pass rate, etc.
}
\`\`\``
}
const msgTypesTable = role.message_types.map(mt =>
`| \`${mt.type}\` | ${role.name} → coordinator | ${mt.trigger} |`
).join('\n')
const primaryMsgType = role.message_types.find(mt => !mt.type.includes('error') && !mt.type.includes('progress'))?.type || `${role.name}_complete`
const roleMd = `# Role: ${role.name}
${role.description}
## Role Identity
- **Name**: \`${role.name}\`
- **Task Prefix**: \`${role.task_prefix}-*\`
- **Responsibility**: ${role.responsibility_type}
- **Communication**: SendMessage to coordinator only
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
${msgTypesTable}
## Execution (5-Phase)
### Phase 1: Task Discovery
\`\`\`javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith('${role.task_prefix}-') &&
t.owner === '${role.name}' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
\`\`\`
### Phase 2: ${ra.phase_structure.phase2}
${phase2Content[role.responsibility_type]}
### Phase 3: ${ra.phase_structure.phase3}
${phase3Content[role.responsibility_type]}
### Phase 4: ${ra.phase_structure.phase4}
${phase4Content[role.responsibility_type]}
### Phase 5: Report to Coordinator
\`\`\`javascript
mcp__ccw-tools__team_msg({
operation: "log",
team: "${config.team_name}",
from: "${role.name}",
to: "coordinator",
type: "${primaryMsgType}",
summary: \`${role.task_prefix} complete: \${task.subject}\`
})
SendMessage({
type: "message",
recipient: "coordinator",
content: \`## ${role.display_name} Results
**Task**: \${task.subject}
**Status**: \${resultStatus}
### Summary
\${resultSummary}
### Details
\${resultDetails}\`,
summary: \`${role.task_prefix} complete\`
})
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task
const nextTasks = TaskList().filter(t =>
t.subject.startsWith('${role.task_prefix}-') &&
t.owner === '${role.name}' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
// Continue with next task → back to Phase 1
}
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No ${role.task_prefix}-* tasks available | Idle, wait for coordinator |
| Context/Plan file not found | Notify coordinator |
${role.adaptive_routing ? '| Sub-agent failure | Retry once, fallback to direct |\n' : ''}| Critical issue beyond scope | SendMessage fix_required |
| Unexpected error | Log via team_msg, report |
`
Write(`${previewDir}/roles/${role.name}/role.md`, roleMd)
// Generate command files for this role
const roleCommands = role.commands || []
for (const cmd of roleCommands) {
const cmdContent = generateCommandFile(cmd, role.name, config)
Write(`${previewDir}/roles/${role.name}/commands/${cmd}.md`, cmdContent)
}
}
// Helper: Generate command file from pre-built patterns
function generateCommandFile(cmdName, roleName, config) {
// 7 pre-built command patterns (from templates/role-command-template.md)
const prebuiltPatterns = {
"explore": {
description: "Multi-angle codebase exploration using parallel cli-explore-agent instances.",
delegation: "Subagent Fan-out",
agentType: "cli-explore-agent",
phase: 2
},
"analyze": {
description: "Multi-perspective code analysis using parallel ccw cli calls.",
delegation: "CLI Fan-out",
cliTool: "gemini",
phase: 3
},
"implement": {
description: "Code implementation via code-developer subagent delegation with batch routing.",
delegation: "Sequential Delegation",
agentType: "code-developer",
phase: 3
},
"validate": {
description: "Iterative test-fix cycle with max iteration control.",
delegation: "Sequential Delegation",
agentType: "code-developer",
phase: 3
},
"review": {
description: "4-dimensional code review with optional codex review integration.",
delegation: "CLI Fan-out",
cliTool: "gemini",
phase: 3
},
"dispatch": {
description: "Task chain creation with dependency management for coordinator.",
delegation: "Direct",
phase: 3
},
"monitor": {
description: "Message bus polling and coordination loop for coordinator.",
delegation: "Direct",
phase: 4
}
}
const pattern = prebuiltPatterns[cmdName]
if (!pattern) {
// Custom command: generate from template skeleton
return `# Command: ${cmdName}\n\n> Custom command for ${roleName}\n\n## When to Use\n\n- Custom trigger conditions\n\n## Strategy\n\n### Delegation Mode\n\n**Mode**: TBD\n\n## Execution Steps\n\n### Step 1: Context Preparation\n\n### Step 2: Execute Strategy\n\n### Step 3: Result Processing\n\n## Output Format\n\n## Error Handling\n\n| Scenario | Resolution |\n|----------|------------|\n| Agent/CLI failure | Retry once, then fallback to inline execution |\n`
}
// Read full pattern from template file and customize
// The template contains all 7 patterns with complete implementation
// Extract and customize the matching pattern section
const cmdContent = `# Command: ${cmdName}
> ${pattern.description}
## When to Use
- Phase ${pattern.phase} of ${roleName} role in team "${config.team_name}"
- See templates/role-command-template.md for full pattern specification
## Strategy
### Delegation Mode
**Mode**: ${pattern.delegation}
${pattern.agentType ? `**Agent Type**: \`${pattern.agentType}\`` : ''}
${pattern.cliTool ? `**CLI Tool**: \`${pattern.cliTool}\`\n**CLI Mode**: \`analysis\`` : ''}
## Execution Steps
### Step 1: Context Preparation
\`\`\`javascript
// Load task context
const task = TaskGet({ taskId: currentTaskId })
\`\`\`
### Step 2: Execute Strategy
\`\`\`javascript
// See templates/role-command-template.md → "${cmdName}" pattern for full implementation
\`\`\`
### Step 3: Result Processing
\`\`\`javascript
// Aggregate and format results
\`\`\`
## Output Format
\`\`\`
## ${cmdName.charAt(0).toUpperCase() + cmdName.slice(1)} Results
### Summary
### Details
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Agent/CLI failure | Retry once, then fallback to inline execution |
| Timeout (>5 min) | Report partial results, notify coordinator |
| No results | Report empty, suggest alternative approach |
`
return cmdContent
}
```
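The complexity heuristic embedded in the Orchestration template above can be exercised on its own; the keyword weights and thresholds here mirror the generated code:

```javascript
// Standalone run of the complexity-scoring heuristic from the Orchestration template.
function assessComplexity(desc) {
  let score = 0
  if (/refactor|architect|restructure|module|system/.test(desc)) score += 2
  if (/multiple|across|cross/.test(desc)) score += 2
  if (/integrate|api|database/.test(desc)) score += 1
  if (/security|performance/.test(desc)) score += 1
  return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}

console.log(assessComplexity('refactor auth module across services')) // 'High'
console.log(assessComplexity('update api endpoint'))                  // 'Low'
```

'High' routes work to sub-agents; 'Low' keeps execution inline, as shown in the adaptive-routing branches above.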
### Step 5: Generate specs/team-config.json
```javascript
Write(`${previewDir}/specs/team-config.json`, JSON.stringify({
team_name: config.team_name,
skill_name: config.skill_name,
pipeline_type: config.pipeline_type,
pipeline: config.pipeline,
roles: config.roles.map(r => ({
name: r.name,
task_prefix: r.task_prefix,
responsibility_type: r.responsibility_type,
description: r.description
})),
collaboration_patterns: analysis.collaboration_patterns,
generated_at: new Date().toISOString()
}, null, 2))
```
## Output
- **Directory**: `{workDir}/preview/`
- **Files**:
- `preview/SKILL.md` - Role router + shared infrastructure + command architecture
- `preview/roles/coordinator/role.md` - Coordinator orchestrator
- `preview/roles/coordinator/commands/*.md` - Coordinator command files (dispatch, monitor)
- `preview/roles/{role}/role.md` - Per-worker role orchestrator
- `preview/roles/{role}/commands/*.md` - Per-worker command files
- `preview/specs/team-config.json` - Team configuration
## Quality Checklist
- [ ] SKILL.md contains role router with all roles (dispatch to `roles/{name}/role.md`)
- [ ] SKILL.md contains command architecture section
- [ ] SKILL.md contains shared infrastructure (message bus, task lifecycle)
- [ ] SKILL.md contains coordinator spawn template
- [ ] Every role has a folder in roles/ with role.md
- [ ] Every role.md has 5-phase execution (Phase 1/5 inline, Phase 2-4 delegate or inline)
- [ ] Every role.md has Toolbox section (commands, subagents, cli_tools)
- [ ] Every role.md has message types table
- [ ] Every role.md has error handling
- [ ] Command files exist for each entry in role.md Toolbox
- [ ] Command files are self-contained (Strategy, Execution Steps, Error Handling)
- [ ] No cross-command references between command files
- [ ] team-config.json is valid JSON
## Next Phase
-> [Phase 4: Integration Verification](04-integration-verification.md)


@@ -1,216 +0,0 @@
# Phase 4: Integration Verification
Verify the generated skill package is internally consistent.
## Objective
- Verify SKILL.md role router references match actual role files
- Verify task prefixes are unique across all roles
- Verify message types are consistent
- Verify coordinator spawn template uses correct skill invocation
- Generate integration-report.json
## Input
- Dependency: `{workDir}/preview/` directory (Phase 3)
- Reference: `team-config.json` (Phase 1)
## Execution Steps
### Step 1: Load Generated Files
```javascript
const config = JSON.parse(Read(`${workDir}/team-config.json`))
const previewDir = `${workDir}/preview`
const skillMd = Read(`${previewDir}/SKILL.md`)
const roleFiles = {}
for (const role of config.roles) {
try {
roleFiles[role.name] = Read(`${previewDir}/roles/${role.name}/role.md`)
} catch {
roleFiles[role.name] = null
}
}
```
### Step 2: Role Router Consistency
```javascript
const routerChecks = config.roles.map(role => {
const hasRouterEntry = skillMd.includes(`"${role.name}"`)
const hasRoleFile = roleFiles[role.name] !== null
const hasRoleLink = skillMd.includes(`roles/${role.name}/role.md`)
return {
role: role.name,
router_entry: hasRouterEntry,
file_exists: hasRoleFile,
link_valid: hasRoleLink,
status: (hasRouterEntry && hasRoleFile && hasRoleLink) ? 'PASS' : 'FAIL'
}
})
```
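A minimal in-memory run of the same consistency check, with a deliberately unregistered role (the `skillMd` fragment is hypothetical):

```javascript
// Hypothetical SKILL.md fragment: "planner" is registered, "executor" is not.
const skillMd = 'VALID_ROLES = { "planner": { file: "roles/planner/role.md" } }'
const roles = [{ name: "planner" }, { name: "executor" }]

const routerChecks = roles.map(role => {
  const hasRouterEntry = skillMd.includes(`"${role.name}"`)
  const hasRoleLink = skillMd.includes(`roles/${role.name}/role.md`)
  return { role: role.name, status: (hasRouterEntry && hasRoleLink) ? "PASS" : "FAIL" }
})

console.log(routerChecks.map(c => `${c.role}:${c.status}`).join(" ")) // planner:PASS executor:FAIL
```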
### Step 3: Task Prefix Uniqueness
```javascript
const prefixes = config.worker_roles.map(r => r.task_prefix)
const uniquePrefixes = [...new Set(prefixes)]
const prefixCheck = {
prefixes: prefixes,
unique: uniquePrefixes,
duplicates: prefixes.filter((p, i) => prefixes.indexOf(p) !== i),
status: prefixes.length === uniquePrefixes.length ? 'PASS' : 'FAIL'
}
```
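The duplicate-detection logic from this step, run on hypothetical prefixes with one deliberate collision:

```javascript
// Prefix uniqueness check: "IMPL" appears twice, so the gate fails.
const prefixes = ["PLAN", "IMPL", "IMPL", "TEST"]
const uniquePrefixes = [...new Set(prefixes)]
const duplicates = prefixes.filter((p, i) => prefixes.indexOf(p) !== i)
const status = prefixes.length === uniquePrefixes.length ? "PASS" : "FAIL"

console.log(duplicates, status) // [ 'IMPL' ] FAIL
```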
### Step 4: Message Type Consistency
```javascript
const msgChecks = config.worker_roles.map(role => {
const roleFile = roleFiles[role.name] || ''
const typesInConfig = role.message_types.map(mt => mt.type)
const typesInFile = typesInConfig.filter(t => roleFile.includes(t))
return {
role: role.name,
configured: typesInConfig,
present_in_file: typesInFile,
missing: typesInConfig.filter(t => !typesInFile.includes(t)),
status: typesInFile.length === typesInConfig.length ? 'PASS' : 'WARN'
}
})
```
### Step 5: Spawn Template Verification
```javascript
const spawnChecks = config.worker_roles.map(role => {
const hasSpawn = skillMd.includes(`name: "${role.name}"`)
const hasSkillCall = skillMd.includes(`Skill(skill="${config.skill_name}", args="--role=${role.name}")`)
const hasTaskPrefix = skillMd.includes(`${role.task_prefix}-*`)
return {
role: role.name,
spawn_present: hasSpawn,
skill_call_correct: hasSkillCall,
prefix_in_prompt: hasTaskPrefix,
status: (hasSpawn && hasSkillCall && hasTaskPrefix) ? 'PASS' : 'FAIL'
}
})
```
### Step 6: Role File Pattern Compliance
```javascript
const patternChecks = Object.entries(roleFiles).map(([name, content]) => {
if (!content) return { role: name, status: 'MISSING' }
const checks = {
has_role_identity: /## Role Identity/.test(content),
has_5_phases: /Phase 1/.test(content) && /Phase 5/.test(content),
has_task_lifecycle: /TaskList/.test(content) && /TaskGet/.test(content) && /TaskUpdate/.test(content),
has_message_bus: /team_msg/.test(content),
has_send_message: /SendMessage/.test(content),
has_error_handling: /## Error Handling/.test(content)
}
const passCount = Object.values(checks).filter(Boolean).length
return {
role: name,
checks: checks,
pass_count: passCount,
total: Object.keys(checks).length,
status: passCount === Object.keys(checks).length ? 'PASS' : 'PARTIAL'
}
})
```
### Step 6b: Command File Verification
```javascript
const commandChecks = config.worker_roles.map(role => {
const commands = role.commands || []
if (commands.length === 0) return { role: role.name, status: 'SKIP', reason: 'No commands' }
const checks = commands.map(cmd => {
const cmdPath = `${previewDir}/roles/${role.name}/commands/${cmd}.md`
let content = null
try { content = Read(cmdPath) } catch {}
if (!content) return { command: cmd, status: 'MISSING' }
const requiredSections = {
has_strategy: /## Strategy/.test(content),
has_execution_steps: /## Execution Steps/.test(content),
has_error_handling: /## Error Handling/.test(content),
has_when_to_use: /## When to Use/.test(content),
is_self_contained: !/Read\("\.\.\//.test(content) // No cross-command references
}
const passCount = Object.values(requiredSections).filter(Boolean).length
return {
command: cmd,
checks: requiredSections,
pass_count: passCount,
total: Object.keys(requiredSections).length,
status: passCount === Object.keys(requiredSections).length ? 'PASS' : 'PARTIAL'
}
})
return { role: role.name, commands: checks, status: checks.every(c => c.status === 'PASS') ? 'PASS' : 'NEEDS_ATTENTION' }
})
```
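The self-containment probe in Step 6b rejects any `Read("../…")` cross-reference. A quick standalone check of that regex against two hypothetical command-file snippets:

```javascript
// Flags command files that read sibling command files via relative paths.
const isSelfContained = (content) => !/Read\("\.\.\//.test(content)

console.log(isSelfContained('Read("../review/commands/analyze.md")')) // false
console.log(isSelfContained('Bash("npm test 2>&1 || true")'))         // true
```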
### Step 7: Generate Report
```javascript
const overallStatus = [
...routerChecks.map(c => c.status),
prefixCheck.status,
...spawnChecks.map(c => c.status),
...patternChecks.map(c => c.status),
...commandChecks.filter(c => c.status !== 'SKIP').map(c => c.status)
].every(s => s === 'PASS') ? 'PASS' : 'NEEDS_ATTENTION'
const report = {
team_name: config.team_name,
skill_name: config.skill_name,
checks: {
router_consistency: routerChecks,
prefix_uniqueness: prefixCheck,
message_types: msgChecks,
spawn_template: spawnChecks,
pattern_compliance: patternChecks,
command_files: commandChecks
},
overall: overallStatus,
file_count: {
skill_md: 1,
role_files: Object.keys(roleFiles).length,
total: 1 + Object.keys(roleFiles).length + 1 // SKILL.md + roles + config
}
}
Write(`${workDir}/integration-report.json`, JSON.stringify(report, null, 2))
```
## Output
- **File**: `integration-report.json`
- **Format**: JSON
- **Location**: `{workDir}/integration-report.json`
## Quality Checklist
- [ ] Every role in config has a router entry in SKILL.md
- [ ] Every role has a file in roles/
- [ ] Task prefixes are unique
- [ ] Spawn template uses correct `Skill(skill="...", args="--role=...")`
- [ ] All role files have 5-phase structure
- [ ] All role files have message bus integration
## Next Phase
-> [Phase 5: Validation](05-validation.md)


@@ -1,244 +0,0 @@
# Phase 5: Validation
Verify quality and deliver the final skill package.
## Objective
- Per-role structural completeness check
- Per-role pattern compliance check
- Quality scoring
- Deliver final skill package to `.claude/skills/team-{name}/`
## Input
- Dependency: `{workDir}/preview/` (Phase 3), `integration-report.json` (Phase 4)
- Specification: `specs/quality-standards.md`
## Execution Steps
### Step 1: Load Files
```javascript
const config = JSON.parse(Read(`${workDir}/team-config.json`))
const integration = JSON.parse(Read(`${workDir}/integration-report.json`))
const previewDir = `${workDir}/preview`
const skillMd = Read(`${previewDir}/SKILL.md`)
const roleContents = {}
for (const role of config.roles) {
try {
roleContents[role.name] = Read(`${previewDir}/roles/${role.name}/role.md`)
} catch {
roleContents[role.name] = null
}
}
```
### Step 2: SKILL.md Structural Check
```javascript
const skillChecks = [
{ name: "Frontmatter", pattern: /^---\n[\s\S]+?\n---/ },
{ name: "Architecture Overview", pattern: /## Architecture Overview/ },
{ name: "Role Router", pattern: /## Role Router/ },
{ name: "Role Dispatch Code", pattern: /VALID_ROLES/ },
{ name: "Available Roles Table", pattern: /\| Role \| Task Prefix/ },
{ name: "Shared Infrastructure", pattern: /## Shared Infrastructure/ },
{ name: "Message Bus Section", pattern: /Message Bus/ },
{ name: "team_msg Example", pattern: /team_msg/ },
{ name: "CLI Fallback", pattern: /ccw team log/ },
{ name: "Task Lifecycle", pattern: /Task Lifecycle/ },
{ name: "Pipeline Diagram", pattern: /## Pipeline/ },
{ name: "Coordinator Spawn Template", pattern: /Coordinator Spawn/ },
{ name: "Error Handling", pattern: /## Error Handling/ }
]
const skillResults = skillChecks.map(c => ({
check: c.name,
status: c.pattern.test(skillMd) ? 'PASS' : 'FAIL'
}))
const skillScore = skillResults.filter(r => r.status === 'PASS').length / skillResults.length * 100
```
### Step 3: Per-Role Structural Check
```javascript
const roleChecks = [
{ name: "Role Identity", pattern: /## Role Identity/ },
{ name: "Message Types Table", pattern: /## Message Types/ },
{ name: "5-Phase Execution", pattern: /## Execution/ },
{ name: "Phase 1 Task Discovery", pattern: /Phase 1.*Task Discovery/i },
{ name: "TaskList Usage", pattern: /TaskList/ },
{ name: "TaskGet Usage", pattern: /TaskGet/ },
{ name: "TaskUpdate Usage", pattern: /TaskUpdate/ },
{ name: "team_msg Before SendMessage", pattern: /team_msg/ },
{ name: "SendMessage to Coordinator", pattern: /SendMessage/ },
{ name: "Error Handling", pattern: /## Error Handling/ }
]
const roleResults = {}
for (const [name, content] of Object.entries(roleContents)) {
if (!content) {
roleResults[name] = { status: 'MISSING', checks: [], score: 0 }
continue
}
const checks = roleChecks.map(c => ({
check: c.name,
status: c.pattern.test(content) ? 'PASS' : 'FAIL'
}))
const score = checks.filter(c => c.status === 'PASS').length / checks.length * 100
roleResults[name] = { status: score >= 80 ? 'PASS' : 'PARTIAL', checks, score }
}
```
### Step 3b: Command File Quality Check
```javascript
const commandQuality = {}
for (const [name, content] of Object.entries(roleContents)) {
if (!content) continue
// Check if role has commands directory
const role = config.roles.find(r => r.name === name)
const commands = role?.commands || []
if (commands.length === 0) {
commandQuality[name] = { status: 'N/A', score: 100 }
continue
}
const cmdChecks = commands.map(cmd => {
let cmdContent = null
try { cmdContent = Read(`${previewDir}/roles/${name}/commands/${cmd}.md`) } catch {}
if (!cmdContent) return { command: cmd, score: 0 }
const checks = [
{ name: "When to Use section", pass: /## When to Use/.test(cmdContent) },
{ name: "Strategy section", pass: /## Strategy/.test(cmdContent) },
{ name: "Delegation mode declared", pass: /Delegation Mode/.test(cmdContent) },
{ name: "Execution Steps section", pass: /## Execution Steps/.test(cmdContent) },
{ name: "Error Handling section", pass: /## Error Handling/.test(cmdContent) },
{ name: "Output Format section", pass: /## Output Format/.test(cmdContent) },
{ name: "Self-contained (no cross-ref)", pass: !/Read\("\.\.\//.test(cmdContent) }
]
const score = checks.filter(c => c.pass).length / checks.length * 100
return { command: cmd, checks, score }
})
const avgScore = cmdChecks.reduce((sum, c) => sum + c.score, 0) / cmdChecks.length
commandQuality[name] = { status: avgScore >= 80 ? 'PASS' : 'PARTIAL', checks: cmdChecks, score: avgScore }
}
```
### Step 4: Quality Scoring
```javascript
const scores = {
skill_md: skillScore,
roles_avg: Object.values(roleResults).reduce((sum, r) => sum + r.score, 0) / Object.keys(roleResults).length,
integration: integration.overall === 'PASS' ? 100 : 50,
consistency: checkConsistency(),
command_quality: Object.values(commandQuality).reduce((sum, c) => sum + c.score, 0) / Math.max(Object.keys(commandQuality).length, 1)
}
function checkConsistency() {
let score = 100
// Check skill name in SKILL.md matches config
if (!skillMd.includes(config.skill_name)) score -= 20
// Check team name consistency
if (!skillMd.includes(config.team_name)) score -= 20
// Check all roles referenced in SKILL.md
for (const role of config.roles) {
if (!skillMd.includes(role.name)) score -= 10
}
return Math.max(0, score)
}
const overallScore = Object.values(scores).reduce((a, b) => a + b, 0) / Object.keys(scores).length
const qualityGate = overallScore >= 80 ? 'PASS' : overallScore >= 60 ? 'REVIEW' : 'FAIL'
```
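The gate arithmetic from Step 4, run with hypothetical dimension scores to show how the thresholds resolve:

```javascript
// Quality-gate computation with assumed scores: mean of five dimensions, gated at 80/60.
const scores = { skill_md: 92, roles_avg: 85, integration: 100, consistency: 80, command_quality: 90 }
const overallScore = Object.values(scores).reduce((a, b) => a + b, 0) / Object.keys(scores).length
const qualityGate = overallScore >= 80 ? "PASS" : overallScore >= 60 ? "REVIEW" : "FAIL"

console.log(overallScore.toFixed(1), qualityGate) // 89.4 PASS
```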
### Step 5: Generate Validation Report
```javascript
const report = {
team_name: config.team_name,
skill_name: config.skill_name,
timestamp: new Date().toISOString(),
scores: scores,
overall_score: overallScore,
quality_gate: qualityGate,
skill_md_checks: skillResults,
role_results: roleResults,
integration_status: integration.overall,
delivery: {
source: previewDir,
destination: `.claude/skills/${config.skill_name}/`,
ready: qualityGate !== 'FAIL'
}
}
Write(`${workDir}/validation-report.json`, JSON.stringify(report, null, 2))
```
### Step 6: Deliver Final Package
```javascript
if (report.delivery.ready) {
const destDir = `.claude/skills/${config.skill_name}`
// Create directory structure
Bash(`mkdir -p "${destDir}/roles" "${destDir}/specs"`)
// Copy all files
Write(`${destDir}/SKILL.md`, skillMd)
for (const [name, content] of Object.entries(roleContents)) {
if (content) {
Write(`${destDir}/roles/${name}/role.md`, content)
}
}
// Copy team config
const teamConfig = Read(`${previewDir}/specs/team-config.json`)
Write(`${destDir}/specs/team-config.json`, teamConfig)
// Report
console.log(`\nTeam skill delivered to: ${destDir}/`)
console.log(`Skill name: ${config.skill_name}`)
console.log(`Quality score: ${overallScore.toFixed(1)}% (${qualityGate})`)
console.log(`Roles: ${config.role_list}`)
console.log(`\nUsage:`)
console.log(` Skill(skill="${config.skill_name}", args="--role=planner")`)
console.log(` Skill(skill="${config.skill_name}", args="--role=executor")`)
console.log(`\nFile structure:`)
Bash(`find "${destDir}" -type f | sort`)
} else {
console.log(`Validation FAILED (score: ${overallScore.toFixed(1)}%)`)
console.log('Fix issues and re-run Phase 3-5')
}
```
## Output
- **File**: `validation-report.json`
- **Format**: JSON
- **Location**: `{workDir}/validation-report.json`
- **Delivery**: `.claude/skills/team-{name}/` (if validation passes)
## Quality Checklist
- [ ] SKILL.md passes all 13 structural checks
- [ ] All role files pass structural checks (>= 80%)
- [ ] Integration report is PASS
- [ ] Overall score >= 80%
- [ ] Final package delivered to `.claude/skills/team-{name}/`
- [ ] Usage instructions provided
## Completion
This is the final phase. The unified team skill is ready for use.


@@ -1,171 +0,0 @@
# Quality Standards for Team Commands
Quality assessment criteria for generated team command .md files.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| Phase 5 | Score generated command | All dimensions |
| Phase 3 | Guide generation quality | Checklist |
---
## Quality Dimensions
### 1. Completeness (25%)
| Score | Criteria |
|-------|----------|
| 100% | All 15 required sections present with substantive content |
| 80% | 12+ sections present, minor gaps in non-critical areas |
| 60% | Core sections present (front matter, message bus, 5 phases, error handling) |
| 40% | Missing critical sections |
| 0% | Skeleton only |
**Required Sections Checklist (role.md files):**
- [ ] Role Identity (name, responsibility, communication)
- [ ] Message Bus section with team_msg examples
- [ ] Message Types table
- [ ] Toolbox section (Available Commands, Subagent Capabilities, CLI Capabilities)
- [ ] Phase 1: Task Discovery implementation
- [ ] Phase 2: Context Loading / delegation to commands
- [ ] Phase 3: Core Work / delegation to commands
- [ ] Phase 4: Validation/Summary / delegation to commands
- [ ] Phase 5: Report + Loop implementation
- [ ] Error Handling table
- [ ] Code examples in all phases
> **Note**: For `commands/*.md` file quality criteria, see [Command File Quality Standards](#command-file-quality-standards) below.
### 2. Pattern Compliance (25%)
| Score | Criteria |
|-------|----------|
| 100% | All 8 infrastructure patterns + selected collaboration patterns fully implemented |
| 80% | 6 core infra patterns + at least 1 collaboration pattern with convergence |
| 60% | Minimum 6 infra patterns, collaboration patterns present but incomplete |
| 40% | Missing critical patterns (message bus or task lifecycle) |
| 0% | No pattern compliance |
**Infrastructure Pattern Checklist:**
- [ ] Pattern 1: Message bus - team_msg before every SendMessage
- [ ] Pattern 1b: CLI fallback - `ccw team` CLI fallback section with parameter mapping
- [ ] Pattern 2: YAML front matter - all fields present, group: team
- [ ] Pattern 3: Task lifecycle - TaskList/Get/Update flow
- [ ] Pattern 4: Five-phase structure - all 5 phases present
- [ ] Pattern 5: Complexity-adaptive (if applicable)
- [ ] Pattern 6: Coordinator spawn compatible
- [ ] Pattern 7: Error handling table
- [ ] Pattern 8: Session files (if applicable)
**Collaboration Pattern Checklist:**
- [ ] At least one CP selected (CP-1 minimum)
- [ ] Each selected CP has convergence criteria defined
- [ ] Each selected CP has feedback loop mechanism
- [ ] Each selected CP has timeout/fallback behavior
- [ ] CP-specific message types registered in message bus section
- [ ] Escalation path defined (CP-5) for error scenarios
### 3. Integration (25%)
| Score | Criteria |
|-------|----------|
| 100% | All integration checks pass, spawn snippet ready |
| 80% | Minor integration notes, no blocking issues |
| 60% | Some checks need attention but functional |
| 40% | Task prefix conflict or missing critical tools |
| 0% | Incompatible with team system |
### 4. Consistency (25%)
| Score | Criteria |
|-------|----------|
| 100% | Role name, task prefix, message types consistent throughout |
| 80% | Minor inconsistencies in non-critical areas |
| 60% | Some mixed terminology but intent clear |
| 40% | Confusing or contradictory content |
| 0% | Internally inconsistent |
---
## Quality Gates
| Gate | Threshold | Action |
|------|-----------|--------|
| PASS | >= 80% | Deliver to `.claude/commands/team/{team-name}/` |
| REVIEW | 60-79% | Fix recommendations, re-validate |
| FAIL | < 60% | Major rework needed, re-run from Phase 3 |
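The gate decision follows directly from the table above. A minimal sketch, assuming the four dimension scores are already computed on a 0-100 scale (the function and field names here are illustrative, not part of the spec):

```javascript
// Sketch: average the four equally weighted (25%) dimensions and map to a gate.
function qualityGate(scores) {
  const { completeness, patternCompliance, integration, consistency } = scores
  const overall = (completeness + patternCompliance + integration + consistency) / 4
  if (overall >= 80) return { overall, gate: 'PASS' }
  if (overall >= 60) return { overall, gate: 'REVIEW' }
  return { overall, gate: 'FAIL' }
}
```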
---
## Issue Classification
### Errors (Must Fix)
- Missing YAML front matter
- Missing `group: team`
- No message bus section
- No task lifecycle (TaskList/Get/Update)
- No SendMessage to coordinator
- Task prefix conflicts with existing
### Warnings (Should Fix)
- Missing error handling table
- Incomplete Phase implementation (skeleton only)
- Missing team_msg before some SendMessage calls
- Missing CLI fallback section (`### CLI 回退` with `ccw team` examples)
- No complexity-adaptive routing when role is complex
### Info (Nice to Have)
- Code examples could be more detailed
- Additional message type examples
- Session file structure documentation
- CLI integration examples
---
## Command File Quality Standards
Quality assessment criteria for generated command `.md` files in `roles/{name}/commands/`.
### 5. Command File Quality (Applies to folder-based roles)
| Score | Criteria |
|-------|----------|
| 100% | All 4 dimensions pass, all command files self-contained |
| 80% | 3/4 dimensions pass, minor gaps in one area |
| 60% | 2/4 dimensions pass, some cross-references or missing sections |
| 40% | Missing required sections or broken references |
| 0% | No command files or non-functional |
#### Dimension 1: Structural Completeness
Each command file MUST contain:
- [ ] `## When to Use` - Trigger conditions
- [ ] `## Strategy` with `### Delegation Mode` (Subagent Fan-out / CLI Fan-out / Sequential Delegation / Direct)
- [ ] `## Execution Steps` with numbered steps and code blocks
- [ ] `## Error Handling` table with Scenario/Resolution
#### Dimension 2: Self-Containment
- [ ] No `Ref:` or cross-references to other command files
- [ ] No imports or dependencies on sibling commands
- [ ] All context loaded within the command (task, plan, files)
- [ ] Any subagent can `Read()` the command and execute independently
#### Dimension 3: Toolbox Consistency
- [ ] Every command listed in role.md Toolbox has a corresponding file in `commands/`
- [ ] Every file in `commands/` is listed in role.md Toolbox
- [ ] Phase mapping in Toolbox matches command's `## When to Use` phase reference
- [ ] Delegation mode in command matches role's subagent/CLI capabilities
#### Dimension 4: Pattern Compliance
- [ ] Pre-built command patterns (explore, analyze, implement, validate, review, dispatch, monitor) follow templates/role-command-template.md
- [ ] Custom commands follow the template skeleton structure
- [ ] Delegation mode is appropriate for the command's complexity
- [ ] Output format is structured and parseable by the calling role.md
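Dimension 3 is mechanically checkable. A minimal sketch, assuming command names have already been parsed from the role.md Toolbox and the `commands/` directory listing (input shapes are illustrative):

```javascript
// Sketch: set difference in both directions between Toolbox entries and command files.
function toolboxDiff(toolboxCommands, commandFiles) {
  const files = new Set(commandFiles.map(f => f.replace(/\.md$/, '')))
  const listed = new Set(toolboxCommands)
  return {
    missingFiles: [...listed].filter(c => !files.has(c)),  // listed in Toolbox, no file
    unlistedFiles: [...files].filter(c => !listed.has(c))  // file exists, not in Toolbox
  }
}
```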


@@ -1,570 +0,0 @@
# Team Command Design Patterns
> Extracted from 5 production team commands: coordinate, plan, execute, test, review
> Extended with 10 collaboration patterns for diverse team interaction models
---
## Pattern Architecture
```
Team Design Patterns
├── Section A: Infrastructure Patterns (8) ← HOW to build a team command
│ ├── Pattern 1: Message Bus Integration
│ ├── Pattern 2: YAML Front Matter
│ ├── Pattern 3: Task Lifecycle
│ ├── Pattern 4: Five-Phase Execution
│ ├── Pattern 5: Complexity-Adaptive Routing
│ ├── Pattern 6: Coordinator Spawn Integration
│ ├── Pattern 7: Error Handling Table
│ └── Pattern 8: Session File Structure
└── Section B: Collaboration Patterns (10) ← HOW agents interact
├── CP-1: Linear Pipeline
├── CP-2: Review-Fix Cycle
├── CP-3: Parallel Fan-out/Fan-in
├── CP-4: Consensus Gate
├── CP-5: Escalation Chain
├── CP-6: Incremental Delivery
├── CP-7: Swarming
├── CP-8: Consulting/Advisory
├── CP-9: Dual-Track
└── CP-10: Post-Mortem
```
**Section B** collaboration patterns are documented in: [collaboration-patterns.md](collaboration-patterns.md)
---
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| Phase 0 | Understand all patterns before design | All sections |
| Phase 2 | Select applicable infrastructure + collaboration patterns | Pattern catalog |
| Phase 3 | Apply patterns during generation | Implementation details |
| Phase 4 | Verify compliance | Checklists |
---
# Section A: Infrastructure Patterns
## Pattern 1: Message Bus Integration
Every teammate must use `mcp__ccw-tools__team_msg` for persistent logging before every `SendMessage`.
### Structure
```javascript
// BEFORE every SendMessage, call:
mcp__ccw-tools__team_msg({
operation: "log",
team: teamName,
from: "<role-name>", // planner | executor | tester | <new-role>
to: "coordinator",
type: "<message-type>",
summary: "<human-readable summary>",
ref: "<optional file path>",
data: { /* optional structured payload */ }
})
```
### Standard Message Types
| Type | Direction | Trigger | Payload |
|------|-----------|---------|---------|
| `plan_ready` | planner -> coordinator | Plan generation complete | `{ taskCount, complexity }` |
| `plan_approved` | coordinator -> planner | Plan reviewed | `{ approved: true }` |
| `plan_revision` | planner -> coordinator | Plan modified per feedback | `{ changes }` |
| `task_unblocked` | coordinator -> any | Dependency resolved | `{ taskId }` |
| `impl_complete` | executor -> coordinator | Implementation done | `{ changedFiles, syntaxClean }` |
| `impl_progress` | any -> coordinator | Progress update | `{ batch, total }` |
| `test_result` | tester -> coordinator | Test cycle end | `{ passRate, iterations }` |
| `review_result` | tester -> coordinator | Review done | `{ verdict, findings }` |
| `fix_required` | any -> coordinator | Critical issues | `{ details[] }` |
| `error` | any -> coordinator | Blocking error | `{ message }` |
| `shutdown` | coordinator -> all | Team dissolved | `{}` |
### Collaboration Pattern Message Types
| Type | Used By | Direction | Trigger |
|------|---------|-----------|---------|
| `vote` | CP-4 Consensus | any -> coordinator | Agent casts vote on proposal |
| `escalate` | CP-5 Escalation | any -> coordinator | Agent escalates unresolved issue |
| `increment_ready` | CP-6 Incremental | executor -> coordinator | Increment delivered for validation |
| `swarm_join` | CP-7 Swarming | any -> coordinator | Agent joins swarm on blocker |
| `consult_request` | CP-8 Consulting | any -> specialist | Agent requests expert advice |
| `consult_response` | CP-8 Consulting | specialist -> requester | Expert provides advice |
| `sync_checkpoint` | CP-9 Dual-Track | any -> coordinator | Track reaches sync point |
| `retro_finding` | CP-10 Post-Mortem | any -> coordinator | Retrospective insight |
### Adding New Message Types
When designing a new role, define role-specific message types following the convention:
- `{action}_ready` - Work product ready for review
- `{action}_complete` - Work phase finished
- `{action}_progress` - Intermediate progress update
### CLI Fallback
When `mcp__ccw-tools__team_msg` MCP is unavailable, use `ccw team` CLI as equivalent fallback:
```javascript
// Fallback: Replace MCP call with Bash CLI (parameters map 1:1)
Bash(`ccw team log --team "${teamName}" --from "<role>" --to "coordinator" --type "<type>" --summary "<summary>" [--ref <path>] [--data '<json>'] --json`)
```
**Parameter mapping**: `team_msg(params)``ccw team <operation> --team <team> [--from/--to/--type/--summary/--ref/--data/--id/--last] [--json]`
**Coordinator** uses all 4 operations: `log`, `list`, `status`, `read`
**Teammates** primarily use: `log`
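Since the parameters map 1:1, the fallback command can be built mechanically. A minimal sketch (shell escaping of quotes embedded in the summary is deliberately ignored here):

```javascript
// Sketch: build the `ccw team` fallback command from team_msg-style params.
// Optional flags (--ref, --data) are emitted only when present; --json is always appended.
function teamMsgToCli(p) {
  const parts = [
    `ccw team ${p.operation}`, `--team "${p.team}"`,
    `--from "${p.from}"`, `--to "${p.to}"`,
    `--type "${p.type}"`, `--summary "${p.summary}"`
  ]
  if (p.ref) parts.push(`--ref ${p.ref}`)
  if (p.data) parts.push(`--data '${JSON.stringify(p.data)}'`)
  parts.push('--json')
  return parts.join(' ')
}
```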
### Message Bus Section Template
```markdown
## 消息总线
每次 SendMessage **前**,必须调用 `mcp__ccw-tools__team_msg` 记录消息:
\`\`\`javascript
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "<role>", to: "coordinator", type: "<type>", summary: "<summary>" })
\`\`\`
### 支持的 Message Types
| Type | 方向 | 触发时机 | 说明 |
|------|------|----------|------|
| `<type>` | <role> → coordinator | <when> | <what> |
### CLI 回退
`mcp__ccw-tools__team_msg` MCP 不可用时,使用 `ccw team` CLI 作为等效回退:
\`\`\`javascript
// 回退: 将 MCP 调用替换为 Bash CLI参数一一对应
Bash(\`ccw team log --team "${teamName}" --from "<role>" --to "coordinator" --type "<type>" --summary "<summary>" --json\`)
\`\`\`
**参数映射**: `team_msg(params)``ccw team log --team <team> --from <role> --to coordinator --type <type> --summary "<text>" [--ref <path>] [--data '<json>'] [--json]`
```
---
## Pattern 2: YAML Front Matter
Every team command file must start with standardized YAML front matter.
### Structure
```yaml
---
name: <command-name>
description: Team <role> - <capabilities in Chinese>
argument-hint: ""
allowed-tools: SendMessage(*), TaskUpdate(*), TaskList(*), TaskGet(*), TodoWrite(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), Task(*)
group: team
---
```
### Field Rules
| Field | Rule | Example |
|-------|------|---------|
| `name` | Lowercase, matches filename | `plan`, `execute`, `test` |
| `description` | `Team <role> -` prefix + Chinese capability list | `Team planner - 多角度代码探索、结构化实现规划` |
| `argument-hint` | Empty string for teammates, has hint for coordinator | `""` |
| `allowed-tools` | Start with `SendMessage(*), TaskUpdate(*), TaskList(*), TaskGet(*)` | See each role |
| `group` | Always `team` | `team` |
### Minimum Tool Set (All Teammates)
```
SendMessage(*), TaskUpdate(*), TaskList(*), TaskGet(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Grep(*)
```
### Role-Specific Additional Tools
| Role Type | Additional Tools |
|-----------|-----------------|
| Read-only (reviewer, analyzer) | None extra |
| Write-capable (executor, fixer) | `Write(*), Edit(*)` |
| Agent-delegating (planner, executor) | `Task(*)` |
---
## Pattern 3: Task Lifecycle
All teammates follow the same task discovery and lifecycle pattern.
### Standard Flow
```javascript
// Step 1: Find my tasks
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith('<PREFIX>-') && // PLAN-*, IMPL-*, TEST-*, REVIEW-*
t.owner === '<role-name>' &&
t.status === 'pending' &&
t.blockedBy.length === 0 // Not blocked
)
// Step 2: No tasks -> idle
if (myTasks.length === 0) return
// Step 3: Claim task (lowest ID first)
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
// Step 4: Execute work
// ... role-specific logic ...
// Step 5: Complete and loop
TaskUpdate({ taskId: task.id, status: 'completed' })
// Step 6: Check for next task
const nextTasks = TaskList().filter(t =>
t.subject.startsWith('<PREFIX>-') &&
t.owner === '<role-name>' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
// Continue with next task -> back to Step 3
}
```
### Task Prefix Convention
| Prefix | Role | Example |
|--------|------|---------|
| `PLAN-` | planner | `PLAN-001: Explore and plan implementation` |
| `IMPL-` | executor | `IMPL-001: Implement approved plan` |
| `TEST-` | tester | `TEST-001: Test-fix cycle` |
| `REVIEW-` | tester | `REVIEW-001: Code review and requirement verification` |
| `<NEW>-` | new role | Must be unique, uppercase, hyphen-suffixed |
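The format and uniqueness rules for a new prefix can be checked before generation. A minimal sketch, with the four built-in prefixes assumed as the existing set (a real check would also scan `team/**/*.md`):

```javascript
// Sketch: validate a candidate task prefix against format rules and known prefixes.
const RESERVED = ['PLAN-', 'IMPL-', 'TEST-', 'REVIEW-']
function validatePrefix(prefix, existing = RESERVED) {
  if (!/^[A-Z]+-$/.test(prefix)) return { ok: false, reason: 'must be uppercase letters ending in "-"' }
  if (existing.includes(prefix)) return { ok: false, reason: 'conflicts with existing prefix' }
  return { ok: true }
}
```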
### Task Chain (defined in coordinate.md)
```
PLAN-001 → IMPL-001 → TEST-001 + REVIEW-001
           ↑ blockedBy  ↑ blockedBy
```
---
## Pattern 4: Five-Phase Execution Structure
Every team command follows a consistent 5-phase internal structure.
### Standard Phases
| Phase | Purpose | Common Actions |
|-------|---------|----------------|
| Phase 1: Task Discovery | Find and claim assigned tasks | TaskList, TaskGet, TaskUpdate |
| Phase 2: Context Loading | Load necessary context for work | Read plan/config, detect framework |
| Phase 3: Core Work | Execute primary responsibility | Role-specific logic |
| Phase 4: Validation/Summary | Verify work quality | Syntax check, criteria verification |
| Phase 5: Report + Loop | Report to coordinator, check next | SendMessage, TaskUpdate, TaskList |
### Phase Structure Template
```markdown
### Phase N: <Phase Name>
\`\`\`javascript
// Implementation code
\`\`\`
```
---
## Pattern 5: Complexity-Adaptive Routing
All roles that process varying-difficulty tasks should implement adaptive routing.
### Decision Logic
```javascript
function assessComplexity(description) {
let score = 0
if (/refactor|architect|restructure|module|system/.test(description)) score += 2
if (/multiple|across|cross/.test(description)) score += 2
if (/integrate|api|database/.test(description)) score += 1
if (/security|performance/.test(description)) score += 1
return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}
```
### Routing Table
| Complexity | Direct Claude | CLI Agent | Sub-agent |
|------------|---------------|-----------|-----------|
| Low | Direct execution | - | - |
| Medium | - | `cli-explore-agent` / `cli-lite-planning-agent` | - |
| High | - | CLI agent | `code-developer` / `universal-executor` |
### Sub-agent Delegation Pattern
```javascript
Task({
subagent_type: "<agent-type>",
run_in_background: false,
description: "<brief description>",
prompt: `
## Task Objective
${taskDescription}
## Output Location
${sessionFolder}/${outputFile}
## MANDATORY FIRST STEPS
1. Read: .workflow/project-tech.json (if exists)
2. Read: .workflow/project-guidelines.json (if exists)
## Expected Output
${expectedFormat}
`
})
```
---
## Pattern 6: Coordinator Spawn Integration
New teammates must be spawnable from coordinate.md using standard pattern.
### Skill Path Format (Folder-Based)
Team commands use folder-based organization with colon-separated skill paths:
```
File location: .claude/commands/team/{team-name}/{role-name}.md
Skill path: team:{team-name}:{role-name}
Example:
.claude/commands/team/spec/analyst.md → team:spec:analyst
.claude/commands/team/security/scanner.md → team:security:scanner
```
### Spawn Template
```javascript
Task({
subagent_type: "general-purpose",
team_name: teamName,
name: "<role-name>",
prompt: `You are team "${teamName}" <ROLE>.
When you receive <PREFIX>-* tasks, call Skill(skill="team:${teamName}:<role-name>") to execute.
Current requirement: ${taskDescription}
Constraints: ${constraints}
## Message Bus (Required)
Before each SendMessage, call mcp__ccw-tools__team_msg:
mcp__ccw-tools__team_msg({ operation: "log", team: "${teamName}", from: "<role>", to: "coordinator", type: "<type>", summary: "<summary>" })
Workflow:
1. TaskList -> find <PREFIX>-* tasks assigned to you
2. Skill(skill="team:${teamName}:<role-name>") to execute
3. team_msg log + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
})
```
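The folder-based skill path can be derived mechanically from the file location. A minimal sketch:

```javascript
// Sketch: convert a team command file path to its colon-separated skill path.
// Returns null when the path does not match the folder-based convention.
function toSkillPath(filePath) {
  const m = filePath.match(/\.claude\/commands\/team\/([^/]+)\/([^/]+)\.md$/)
  return m ? `team:${m[1]}:${m[2]}` : null
}
```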
---
## Pattern 7: Error Handling Table
Every command ends with a standardized error handling table.
### Template
```markdown
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No tasks available | Idle, wait for coordinator assignment |
| Plan/Context file not found | Notify coordinator, request location |
| Sub-agent failure | Retry once, then fallback to direct execution |
| Max iterations exceeded | Report to coordinator, suggest intervention |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
```
---
## Pattern 8: Session File Structure
Roles that produce artifacts follow standard session directory patterns.
### Convention
```
.workflow/.team-<purpose>/{identifier}-{YYYY-MM-DD}/
├── <work-product-files>
├── manifest.json (if multiple outputs)
└── .task/ (if generating task files)
├── TASK-001.json
└── TASK-002.json
```
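A directory name following this convention can be built as below, assuming the date component is ISO `YYYY-MM-DD` (the helper name is illustrative):

```javascript
// Sketch: build a session directory path per the convention above.
function sessionDir(purpose, identifier, date = new Date()) {
  const day = date.toISOString().slice(0, 10) // YYYY-MM-DD
  return `.workflow/.team-${purpose}/${identifier}-${day}`
}
```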
---
# Section B: Collaboration Patterns
> Complete specification: [collaboration-patterns.md](collaboration-patterns.md)
## Collaboration Pattern Quick Reference
Every collaboration pattern has these standard elements:
| Element | Description |
|---------|-------------|
| **Entry Condition** | When to activate this pattern |
| **Workflow** | Step-by-step execution flow |
| **Convergence Criteria** | How the pattern terminates successfully |
| **Feedback Loop** | How information flows back to enable correction |
| **Timeout/Fallback** | What happens when the pattern doesn't converge |
| **Max Iterations** | Hard limit on cycles (where applicable) |
### Pattern Selection Guide
| Scenario | Recommended Pattern | Why |
|----------|-------------------|-----|
| Standard feature development | CP-1: Linear Pipeline | Well-defined sequential stages |
| Code review with fixes needed | CP-2: Review-Fix Cycle | Iterative improvement until quality met |
| Multi-angle analysis needed | CP-3: Fan-out/Fan-in | Parallel exploration, aggregated results |
| Critical decision (architecture, security) | CP-4: Consensus Gate | Multiple perspectives before committing |
| Agent stuck / self-repair failed | CP-5: Escalation Chain | Progressive expertise levels |
| Large feature (many files) | CP-6: Incremental Delivery | Validated increments reduce risk |
| Blocking issue stalls pipeline | CP-7: Swarming | All resources on one problem |
| Domain-specific expertise needed | CP-8: Consulting | Expert advice without role change |
| Design + Implementation parallel | CP-9: Dual-Track | Faster delivery with sync checkpoints |
| Post-completion learning | CP-10: Post-Mortem | Capture insights for future improvement |
---
## Pattern Summary Checklist
When designing a new team command, verify:
### Infrastructure Patterns
- [ ] YAML front matter with `group: team`
- [ ] Message bus section with `team_msg` logging
- [ ] CLI fallback section with `ccw team` CLI examples and parameter mapping
- [ ] Role-specific message types defined
- [ ] Task lifecycle: TaskList -> TaskGet -> TaskUpdate flow
- [ ] Unique task prefix (no collision with existing PLAN/IMPL/TEST/REVIEW, scan `team/**/*.md`)
- [ ] 5-phase execution structure
- [ ] Complexity-adaptive routing (if applicable)
- [ ] Coordinator spawn template integration
- [ ] Error handling table
- [ ] SendMessage communication to coordinator only
- [ ] Session file structure (if producing artifacts)
### Collaboration Patterns
- [ ] At least one collaboration pattern selected
- [ ] Convergence criteria defined (max iterations / quality gate / timeout)
- [ ] Feedback loop implemented (how results flow back)
- [ ] Timeout/fallback behavior specified
- [ ] Pattern-specific message types registered
- [ ] Coordinator aware of pattern (can route messages accordingly)
---
## Pattern 9: Parallel Subagent Orchestration
Roles that need to perform complex, multi-perspective work can delegate to subagents or CLI tools rather than executing everything inline. This pattern defines three delegation modes and context management rules.
### Delegation Modes
#### Mode A: Subagent Fan-out
Launch multiple Task agents in parallel for independent work streams.
```javascript
// Launch 2-4 parallel agents for different perspectives
const agents = [
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: "Explore angle 1",
prompt: `Analyze from perspective 1: ${taskDescription}`
}),
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: "Explore angle 2",
prompt: `Analyze from perspective 2: ${taskDescription}`
})
]
// Aggregate results after all complete
```
**When to use**: Multi-angle exploration, parallel code analysis, independent subtask execution.
#### Mode B: CLI Fan-out
Launch multiple `ccw cli` calls for multi-perspective analysis.
```javascript
// Parallel CLI calls for different analysis angles
Bash(`ccw cli -p "PURPOSE: Analyze from security angle..." --tool gemini --mode analysis`, { run_in_background: true })
Bash(`ccw cli -p "PURPOSE: Analyze from performance angle..." --tool gemini --mode analysis`, { run_in_background: true })
// Wait for all CLI results, then synthesize
```
**When to use**: Multi-dimensional code review, architecture analysis, security + performance audits.
#### Mode C: Sequential Delegation
Delegate a single heavy task to a specialized agent.
```javascript
Task({
subagent_type: "code-developer",
run_in_background: false,
description: "Implement complex feature",
prompt: `## Goal\n${plan.summary}\n\n## Tasks\n${taskDetails}`
})
```
**When to use**: Complex implementation, test-fix cycles, large-scope refactoring.
### Context Management Hierarchy
| Level | Location | Context Size | Use Case |
|-------|----------|-------------|----------|
| Small | role.md inline | < 200 lines | Simple logic, direct execution |
| Medium | commands/*.md | 200-500 lines | Structured delegation with strategy |
| Large | Subagent prompt | Unlimited | Full autonomous execution |
**Rule**: role.md Phase 1/5 are always inline (standardized). Phases 2-4 either inline (small) or delegate to commands (medium/large).
### Command File Extraction Criteria
Extract a phase into a command file when ANY of these conditions are met:
1. **Subagent delegation**: Phase launches Task() agents
2. **CLI fan-out**: Phase runs parallel `ccw cli` calls
3. **Complex strategy**: Phase has >3 conditional branches
4. **Reusable logic**: Same logic used by multiple roles
If none apply, keep the phase inline in role.md.
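The four criteria reduce to a simple predicate. A minimal sketch, assuming the flags are determined during phase design (field names are illustrative):

```javascript
// Sketch: decide whether a phase should be extracted into commands/*.md.
function shouldExtract(phase) {
  return Boolean(
    phase.usesSubagents ||          // launches Task() agents
    phase.usesCliFanout ||          // parallel `ccw cli` calls
    phase.conditionalBranches > 3 ||
    phase.sharedByMultipleRoles
  )
}
```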
### Relationship to Other Patterns
- **Pattern 5 (Complexity-Adaptive)**: Pattern 9 provides the delegation mechanisms that Pattern 5 routes to. Low complexity → inline, Medium → CLI agent, High → Subagent fan-out.
- **CP-3 (Parallel Fan-out)**: Pattern 9 Mode A/B are the implementation mechanisms for CP-3 at the role level.
- **Pattern 4 (Five-Phase)**: Pattern 9 does NOT replace the 5-phase structure. It provides delegation options WITHIN phases 2-4.
### Checklist
- [ ] Delegation mode selected based on task characteristics
- [ ] Context management level appropriate (small/medium/large)
- [ ] Command files extracted only when criteria met
- [ ] Subagent prompts include mandatory first steps (read project config)
- [ ] CLI fan-out uses `--mode analysis` by default
- [ ] Results aggregated after parallel completion
- [ ] Error handling covers agent/CLI failure with fallback


@@ -1,725 +0,0 @@
# Role Command Template
Template for generating command files in `roles/{role-name}/commands/{command}.md`.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 0 | Read to understand command file structure |
| Phase 3 | Apply with role-specific content |
---
## Template
```markdown
# Command: {{command_name}}
> {{command_description}}
## When to Use
{{when_to_use_description}}
**Trigger conditions**:
{{#each triggers}}
- {{this}}
{{/each}}
## Strategy
### Delegation Mode
**Mode**: {{delegation_mode}}
{{#if delegation_mode_subagent}}
**Subagent Type**: `{{subagent_type}}`
**Parallel Count**: {{parallel_count}} (1-4)
{{/if}}
{{#if delegation_mode_cli}}
**CLI Tool**: `{{cli_tool}}`
**CLI Mode**: `{{cli_mode}}`
**Parallel Perspectives**: {{cli_perspectives}}
{{/if}}
{{#if delegation_mode_sequential}}
**Agent Type**: `{{agent_type}}`
**Delegation Scope**: {{delegation_scope}}
{{/if}}
### Decision Logic
\`\`\`javascript
{{decision_logic}}
\`\`\`
## Execution Steps
### Step 1: Context Preparation
\`\`\`javascript
{{context_preparation_code}}
\`\`\`
### Step 2: Execute Strategy
\`\`\`javascript
{{execution_code}}
\`\`\`
### Step 3: Result Processing
\`\`\`javascript
{{result_processing_code}}
\`\`\`
## Output Format
\`\`\`
{{output_format}}
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
{{#each error_handlers}}
| {{this.scenario}} | {{this.resolution}} |
{{/each}}
| Agent/CLI failure | Retry once, then fallback to inline execution |
| Timeout (>5 min) | Report partial results, notify coordinator |
```
---
## 7 Pre-built Command Patterns
### 1. explore.md (Multi-angle Exploration)
**Delegation Mode**: Subagent Fan-out
**Source Pattern**: team-lifecycle planner Phase 2
**Maps to**: Orchestration roles
```markdown
# Command: explore
> Multi-angle codebase exploration using parallel cli-explore-agent instances.
## When to Use
- Phase 2 of Orchestration roles
- Task requires understanding existing code patterns
- Multiple exploration angles needed (architecture, patterns, dependencies)
**Trigger conditions**:
- New feature planning
- Codebase unfamiliar to the agent
- Cross-module impact analysis
## Strategy
### Delegation Mode
**Mode**: Subagent Fan-out
**Subagent Type**: `cli-explore-agent`
**Parallel Count**: 2-4 (based on complexity)
### Decision Logic
\`\`\`javascript
const angles = []
if (/architect|structure|design/.test(task.description)) angles.push("architecture")
if (/pattern|convention|style/.test(task.description)) angles.push("patterns")
if (/depend|import|module/.test(task.description)) angles.push("dependencies")
if (/test|spec|coverage/.test(task.description)) angles.push("testing")
if (angles.length === 0) angles.push("general", "patterns")
\`\`\`
## Execution Steps
### Step 1: Context Preparation
\`\`\`javascript
const taskDescription = task.description
const projectRoot = Bash(\`git rev-parse --show-toplevel\`).trim()
\`\`\`
### Step 2: Execute Strategy
\`\`\`javascript
// Launch parallel exploration agents (1 per angle)
for (const angle of angles) {
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: \`Explore: \${angle}\`,
prompt: \`Explore the codebase from the perspective of \${angle}.
Focus on: \${taskDescription}
Project root: \${projectRoot}
Report findings as structured markdown with file references.\`
})
}
\`\`\`
### Step 3: Result Processing
\`\`\`javascript
// Aggregate exploration results
const aggregated = {
angles_explored: angles,
key_findings: [], // merge from all agents
relevant_files: [], // deduplicate across agents
patterns_found: []
}
\`\`\`
## Output Format
\`\`\`
## Exploration Results
### Angles Explored: [list]
### Key Findings
- [finding with file:line reference]
### Relevant Files
- [file path with relevance note]
### Patterns Found
- [pattern name: description]
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Agent returns no results | Retry with broader search scope |
| Agent timeout | Use partial results, note incomplete angles |
| Project root not found | Fall back to current directory |
```
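Step 3 of this template leaves the merge schematic. One possible aggregation, assuming each agent returns a record shaped like `{ angle, findings, files, patterns }` (a hypothetical shape, adapt to the actual agent output):

```javascript
// Sketch: merge per-agent exploration results, deduplicating files by path.
function aggregateExploration(results) {
  const seen = new Set()
  const out = { angles_explored: [], key_findings: [], relevant_files: [], patterns_found: [] }
  for (const r of results) {
    out.angles_explored.push(r.angle)
    out.key_findings.push(...(r.findings || []))
    out.patterns_found.push(...(r.patterns || []))
    for (const f of r.files || []) {
      if (!seen.has(f.path)) { seen.add(f.path); out.relevant_files.push(f) }
    }
  }
  return out
}
```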
### 2. analyze.md (Multi-perspective Analysis)
**Delegation Mode**: CLI Fan-out
**Source Pattern**: analyze-with-file Phase 2
**Maps to**: Read-only analysis roles
```markdown
# Command: analyze
> Multi-perspective code analysis using parallel ccw cli calls.
## When to Use
- Phase 3 of Read-only analysis roles
- Multiple analysis dimensions needed (security, performance, quality)
- Deep analysis beyond inline capability
**Trigger conditions**:
- Code review with specific focus areas
- Security/performance audit
- Architecture assessment
## Strategy
### Delegation Mode
**Mode**: CLI Fan-out
**CLI Tool**: `gemini` (primary), `codex` (secondary)
**CLI Mode**: `analysis`
**Parallel Perspectives**: 2-4
### Decision Logic
\`\`\`javascript
const perspectives = []
if (/security|auth|inject|xss/.test(task.description)) perspectives.push("security")
if (/performance|speed|optimize|memory/.test(task.description)) perspectives.push("performance")
if (/quality|clean|maintain|debt/.test(task.description)) perspectives.push("code-quality")
if (/architect|pattern|structure/.test(task.description)) perspectives.push("architecture")
if (perspectives.length === 0) perspectives.push("code-quality", "architecture")
\`\`\`
## Execution Steps
### Step 1: Context Preparation
\`\`\`javascript
const targetFiles = Bash(\`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached\`)
.split('\\n').filter(Boolean)
const fileContext = targetFiles.map(f => \`@\${f}\`).join(' ')
\`\`\`
### Step 2: Execute Strategy
\`\`\`javascript
for (const perspective of perspectives) {
Bash(\`ccw cli -p "PURPOSE: Analyze code from \${perspective} perspective
TASK: Review changes in: \${targetFiles.join(', ')}
MODE: analysis
CONTEXT: \${fileContext}
EXPECTED: Findings with severity, file:line references, remediation
CONSTRAINTS: Focus on \${perspective}" --tool gemini --mode analysis\`, { run_in_background: true })
}
// Wait for all CLI results
\`\`\`
### Step 3: Result Processing
\`\`\`javascript
// Aggregate findings across all perspectives
const findings = { critical: [], high: [], medium: [], low: [] }
// Merge, deduplicate, prioritize
\`\`\`
## Output Format
\`\`\`
## Analysis Results
### Perspectives Analyzed: [list]
### Findings by Severity
#### Critical
- [finding with file:line]
#### High
- [finding]
...
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI tool unavailable | Fall back to secondary tool (codex) |
| CLI returns empty | Retry with broader scope |
| Too many findings | Prioritize critical/high, summarize medium/low |
```
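Likewise, Step 3 of the analyze template can bucket results by severity. A minimal sketch, assuming each CLI result has been parsed into `{ severity, message, location }` records (a hypothetical shape):

```javascript
// Sketch: bucket parsed findings by severity, dropping exact duplicates and
// skipping records with an unknown severity.
function mergeFindings(allFindings) {
  const buckets = { critical: [], high: [], medium: [], low: [] }
  const seen = new Set()
  for (const f of allFindings) {
    const key = `${f.severity}:${f.location}:${f.message}`
    if (seen.has(key) || !buckets[f.severity]) continue
    seen.add(key)
    buckets[f.severity].push(f)
  }
  return buckets
}
```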
### 3. implement.md (Code Implementation)
**Delegation Mode**: Sequential Delegation
**Source Pattern**: team-lifecycle executor Phase 3
**Maps to**: Code generation roles
```markdown
# Command: implement
> Code implementation via code-developer subagent delegation with batch routing.
## When to Use
- Phase 3 of Code generation roles
- Implementation involves >2 files or complex logic
- Plan tasks available with file specifications
**Trigger conditions**:
- Plan approved and tasks defined
- Multi-file implementation needed
- Complex logic requiring specialized agent
## Strategy
### Delegation Mode
**Mode**: Sequential Delegation (with batch routing)
**Agent Type**: `code-developer`
**Delegation Scope**: Per-batch (group related tasks)
### Decision Logic
\`\`\`javascript
const taskCount = planTasks.length
if (taskCount <= 2) {
// Direct: inline Edit/Write
mode = "direct"
} else if (taskCount <= 5) {
// Single agent: one code-developer for all
mode = "single-agent"
} else {
// Batch: group by module, one agent per batch
mode = "batch-agent"
}
\`\`\`
## Execution Steps
### Step 1: Context Preparation
\`\`\`javascript
const plan = JSON.parse(Read(planPath))
const planTasks = plan.task_ids.map(id =>
JSON.parse(Read(\`\${planDir}/.task/\${id}.json\`))
)
\`\`\`
### Step 2: Execute Strategy
\`\`\`javascript
if (mode === "direct") {
for (const pt of planTasks) {
for (const f of (pt.files || [])) {
Read(f.path)
Edit({ file_path: f.path, old_string: "...", new_string: "..." })
}
}
} else {
const batches = mode === "batch-agent"
? groupByModule(planTasks)
: [planTasks]
for (const batch of batches) {
Task({
subagent_type: "code-developer",
run_in_background: false,
description: \`Implement \${batch.length} tasks\`,
prompt: \`## Goal\\n\${plan.summary}\\n\\n## Tasks\\n\${
batch.map(t => \`### \${t.title}\\n\${t.description}\`).join('\\n\\n')
}\\n\\nComplete each task according to its convergence criteria.\`
})
}
}
\`\`\`
### Step 3: Result Processing
\`\`\`javascript
const changedFiles = Bash(\`git diff --name-only\`).split('\\n').filter(Boolean)
const syntaxClean = !Bash(\`tsc --noEmit 2>&1 || true\`).includes('error TS')
\`\`\`
## Output Format
\`\`\`
## Implementation Results
### Changed Files: [count]
- [file path]
### Syntax Check: PASS/FAIL
### Tasks Completed: [count]/[total]
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Plan file not found | Notify coordinator, request plan path |
| Agent fails on task | Retry once, then mark task as blocked |
| Syntax errors after impl | Attempt auto-fix, report if unresolved |
```
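Step 2 of the implement template calls `groupByModule(planTasks)` without defining it. A minimal sketch, assuming each plan task carries a `files[].path` array; grouping by the first path segment is an illustrative choice, not a spec:

```javascript
// Sketch of the groupByModule helper referenced in batch-agent mode.
// Tasks with no files fall into a catch-all "misc" batch.
function groupByModule(planTasks) {
  const byModule = new Map()
  for (const t of planTasks) {
    const firstPath = (t.files && t.files[0] && t.files[0].path) || ''
    const module = firstPath.split('/')[0] || 'misc'
    if (!byModule.has(module)) byModule.set(module, [])
    byModule.get(module).push(t)
  }
  return [...byModule.values()]              // one batch per module
}
```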
### 4. validate.md (Test-Fix Cycle)
**Delegation Mode**: Sequential Delegation
**Source Pattern**: team-lifecycle tester
**Maps to**: Validation roles
```markdown
# Command: validate
> Iterative test-fix cycle with max iteration control.
## When to Use
- Phase 3 of Validation roles
- After implementation, before review
- Automated test suite available
## Strategy
### Delegation Mode
**Mode**: Sequential Delegation
**Agent Type**: `code-developer` (for fix iterations)
**Max Iterations**: 5
## Execution Steps
### Step 1: Context Preparation
\`\`\`javascript
const testCommand = detectTestCommand() // npm test, pytest, etc.
const changedFiles = Bash(\`git diff --name-only\`).split('\\n').filter(Boolean)
\`\`\`
### Step 2: Execute Strategy
\`\`\`javascript
let iteration = 0
const MAX_ITERATIONS = 5
let lastResult = null
while (iteration < MAX_ITERATIONS) {
lastResult = Bash(\`\${testCommand} 2>&1 || true\`)
const passed = !lastResult.includes('FAIL') && !lastResult.includes('Error')
if (passed) break
// Delegate fix to code-developer
Task({
subagent_type: "code-developer",
run_in_background: false,
description: \`Fix test failures (iteration \${iteration + 1})\`,
prompt: \`Test failures:\\n\${lastResult}\\n\\nFix the failing tests. Changed files: \${changedFiles.join(', ')}\`
})
iteration++
}
\`\`\`
### Step 3: Result Processing
\`\`\`javascript
const result = {
iterations: iteration,
passed: iteration < MAX_ITERATIONS,
lastOutput: lastResult
}
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No test command found | Notify coordinator |
| Max iterations exceeded | Report failures, suggest manual intervention |
| Test environment broken | Report environment issue |
```
### 5. review.md (Multi-dimensional Review)
**Delegation Mode**: CLI Fan-out
**Source Pattern**: team-lifecycle reviewer
**Maps to**: Read-only analysis roles
```markdown
# Command: review
> 4-dimensional code review with optional codex review integration.
## When to Use
- Phase 3 of Read-only analysis roles (reviewer type)
- After implementation and testing
- Quality gate before delivery
## Strategy
### Delegation Mode
**Mode**: CLI Fan-out
**CLI Tool**: `gemini` + optional `codex` (review mode)
**Dimensions**: correctness, completeness, maintainability, requirement-fit
## Execution Steps
### Step 2: Execute Strategy
\`\`\`javascript
// Dimension 1-3: Parallel CLI analysis
const dimensions = ["correctness", "completeness", "maintainability"]
for (const dim of dimensions) {
Bash(\`ccw cli -p "PURPOSE: Review code for \${dim}
TASK: Evaluate changes against \${dim} criteria
MODE: analysis
CONTEXT: @\${changedFiles.join(' @')}
EXPECTED: Findings with severity and file:line references" --tool gemini --mode analysis\`, { run_in_background: true })
}
// Dimension 4: Optional codex review
Bash(\`ccw cli --tool codex --mode review --uncommitted\`, { run_in_background: true })
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Codex unavailable | Skip dimension 4, report 3-dimension review |
| No changed files | Review full scope of plan files |
```
### 6. dispatch.md (Task Distribution)
**Delegation Mode**: N/A (Coordinator-only)
**Source Pattern**: auto-parallel + CP-3
**Maps to**: Coordinator role
```markdown
# Command: dispatch
> Task chain creation with dependency management for coordinator.
## When to Use
- Phase 3 of Coordinator role
- After requirement clarification
- When creating and assigning tasks to teammates
## Strategy
### Delegation Mode
**Mode**: Direct (no delegation - coordinator acts directly)
## Execution Steps
### Step 1: Context Preparation
\`\`\`javascript
const config = TEAM_CONFIG
const pipeline = config.pipeline
\`\`\`
### Step 2: Execute Strategy
\`\`\`javascript
const taskIds = {}
for (const stage of pipeline.stages) {
const blockedByIds = stage.blockedBy.map(dep => taskIds[dep]).filter(Boolean)
TaskCreate({
subject: \`\${stage.name}-001: \${stage.role} work\`,
description: taskDescription,
activeForm: \`\${stage.name} in progress\`
})
// Record task ID
taskIds[stage.name] = newTaskId
// Set owner and dependencies
TaskUpdate({
taskId: newTaskId,
owner: stage.role,
addBlockedBy: blockedByIds
})
}
\`\`\`
### Step 3: Result Processing
\`\`\`javascript
// Verify task chain created correctly
const allTasks = TaskList()
const chainValid = pipeline.stages.every(s => taskIds[s.name])
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Task creation fails | Retry, then report to user |
| Dependency cycle detected | Flatten dependencies, warn |
| Role not spawned yet | Queue task, spawn role first |
```
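The dispatch error-handling table mentions detecting dependency cycles, but the template does not show how. A minimal depth-first sketch over the `{ name, blockedBy }` stage shape used in Step 2:

```javascript
// Sketch of the cycle check implied by "Dependency cycle detected".
// Returns the offending trail (e.g. ['a', 'b', 'a']) or null when acyclic.
function findDependencyCycle(stages) {
  const edges = new Map(stages.map(s => [s.name, s.blockedBy || []]))
  const state = new Map()                    // name -> 'visiting' | 'done'
  const visit = (name, trail) => {
    if (state.get(name) === 'done') return null
    if (state.get(name) === 'visiting') return [...trail, name]
    state.set(name, 'visiting')
    for (const dep of edges.get(name) || []) {
      const cycle = visit(dep, [...trail, name])
      if (cycle) return cycle
    }
    state.set(name, 'done')
    return null
  }
  for (const s of stages) {
    const cycle = visit(s.name, [])
    if (cycle) return cycle
  }
  return null
}
```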
### 7. monitor.md (Progress Monitoring)
**Delegation Mode**: N/A (Coordinator-only)
**Source Pattern**: coordinate.md Phase 4
**Maps to**: Coordinator role
```markdown
# Command: monitor
> Message bus polling and coordination loop for coordinator.
## When to Use
- Phase 4 of Coordinator role
- After task dispatch
- Continuous monitoring until all tasks complete
## Strategy
### Delegation Mode
**Mode**: Direct (coordinator polls and routes)
## Execution Steps
### Step 1: Context Preparation
\`\`\`javascript
const routingTable = {}
for (const role of config.worker_roles) {
const resultType = role.message_types.find(mt =>
!mt.type.includes('error') && !mt.type.includes('progress')
)
routingTable[resultType?.type || \`\${role.name}_complete\`] = {
role: role.name,
action: "Mark task completed, check downstream dependencies"
}
}
routingTable["error"] = { role: "*", action: "Assess severity, retry or escalate" }
routingTable["fix_required"] = { role: "*", action: "Create fix task for executor" }
\`\`\`
### Step 2: Execute Strategy
\`\`\`javascript
// Coordination loop
let allComplete = false
while (!allComplete) {
// Poll message bus
const messages = mcp__ccw-tools__team_msg({
operation: "list",
team: teamName,
last: 10
})
// Route each message
for (const msg of messages) {
const handler = routingTable[msg.type]
if (handler) {
// Execute handler action
}
}
// Check completion
const tasks = TaskList()
allComplete = tasks.filter(t =>
t.owner !== 'coordinator' && t.status !== 'completed'
).length === 0
}
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Message bus unavailable | Fall back to TaskList polling |
| Teammate unresponsive | Send follow-up, 2x → respawn |
| Deadlock detected | Identify cycle, break with manual unblock |
```
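The coordination loop above routes messages and checks completion inline. The two decisions can be factored into small helpers; this is a sketch, with the handler shape matching the routing table built in Step 1:

```javascript
// Route one message through the table; unknown types hit a fallback hook.
function routeMessage(routingTable, msg, onUnrouted = () => {}) {
  const handler = routingTable[msg.type]
  if (!handler) { onUnrouted(msg); return null }
  return { role: handler.role, action: handler.action, msg }
}

// Completion condition from the loop: all non-coordinator tasks completed.
function allWorkerTasksComplete(tasks) {
  return tasks.filter(t => t.owner !== 'coordinator' && t.status !== 'completed').length === 0
}
```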
---
## Variable Reference
| Variable | Source | Description |
|----------|--------|-------------|
| `{{command_name}}` | Command identifier | e.g., "explore", "analyze" |
| `{{command_description}}` | One-line description | What this command does |
| `{{delegation_mode}}` | Mode selection | "Subagent Fan-out", "CLI Fan-out", "Sequential Delegation", "Direct" |
| `{{when_to_use_description}}` | Usage context | When to invoke this command |
| `{{triggers}}` | Trigger conditions | List of conditions |
| `{{decision_logic}}` | Strategy selection code | JavaScript decision code |
| `{{context_preparation_code}}` | Context setup | JavaScript setup code |
| `{{execution_code}}` | Core execution | JavaScript execution code |
| `{{result_processing_code}}` | Result aggregation | JavaScript result code |
| `{{output_format}}` | Expected output structure | Markdown format spec |
| `{{error_handlers}}` | Error handling entries | Array of {scenario, resolution} |
## Self-Containment Rules
1. **No cross-command references**: Each command.md must be executable independently
2. **Include all imports**: List all required context (files, configs) in Step 1
3. **Complete error handling**: Every command handles its own failures
4. **Explicit output format**: Define what the command produces
5. **Strategy declaration**: State delegation mode and decision logic upfront


@@ -1,454 +0,0 @@
# Role File Template
Template for generating per-role execution detail files in `roles/{role-name}/role.md`.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 0 | Read to understand role file structure |
| Phase 3 | Apply with role-specific content |
---
## Template
```markdown
# Role: {{role_name}}
{{role_description}}
## Role Identity
- **Name**: `{{role_name}}`
- **Task Prefix**: `{{task_prefix}}-*`
- **Responsibility**: {{responsibility_type}}
- **Communication**: SendMessage to coordinator only
- **Output Tag**: `[{{role_name}}]`
## Role Boundaries
### MUST
- Only handle tasks with the `{{task_prefix}}-*` prefix
- Tag every output (SendMessage, team_msg, logs) with the `[{{role_name}}]` identifier
- Communicate with the coordinator only, via SendMessage
- Work strictly within the {{responsibility_type}} scope of responsibility
### MUST NOT
- ❌ Perform work that belongs to another role's scope
- ❌ Communicate directly with other worker roles (all messages go through the coordinator)
- ❌ Create tasks for other roles (TaskCreate is reserved for the coordinator)
- ❌ Modify files or resources outside this role's responsibility
- ❌ Omit the `[{{role_name}}]` identifier from any output
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
{{#each message_types}}
| `{{this.type}}` | {{../role_name}} → coordinator | {{this.trigger}} | {{this.description}} |
{{/each}}
## Toolbox
### Available Commands
| Command | File | Phase | Description |
|---------|------|-------|-------------|
{{#each commands}}
| `{{this.name}}` | [commands/{{this.name}}.md](commands/{{this.name}}.md) | Phase {{this.phase}} | {{this.description}} |
{{/each}}
{{#if has_no_commands}}
> No command files — all phases execute inline.
{{/if}}
### Subagent Capabilities
| Agent Type | Used By | Purpose |
|------------|---------|---------|
{{#each subagents}}
| `{{this.type}}` | {{this.used_by}} | {{this.purpose}} |
{{/each}}
### CLI Capabilities
| CLI Tool | Mode | Used By | Purpose |
|----------|------|---------|---------|
{{#each cli_tools}}
| `{{this.tool}}` | {{this.mode}} | {{this.used_by}} | {{this.purpose}} |
{{/each}}
## Execution (5-Phase)
### Phase 1: Task Discovery
\`\`\`javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith('{{task_prefix}}-') &&
t.owner === '{{role_name}}' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
\`\`\`
### Phase 2: {{phase2_name}}
{{#if phase2_command}}
\`\`\`javascript
// Delegate to command file
try {
const commandContent = Read("commands/{{phase2_command}}.md")
// Execute strategy defined in command file
} catch {
// Fallback: inline execution
}
\`\`\`
**Command**: [commands/{{phase2_command}}.md](commands/{{phase2_command}}.md)
{{else}}
{{phase2_content}}
{{/if}}
### Phase 3: {{phase3_name}}
{{#if phase3_command}}
\`\`\`javascript
// Delegate to command file
try {
const commandContent = Read("commands/{{phase3_command}}.md")
// Execute strategy defined in command file
} catch {
// Fallback: inline execution
}
\`\`\`
**Command**: [commands/{{phase3_command}}.md](commands/{{phase3_command}}.md)
{{else}}
{{phase3_content}}
{{/if}}
### Phase 4: {{phase4_name}}
{{#if phase4_command}}
\`\`\`javascript
// Delegate to command file
try {
const commandContent = Read("commands/{{phase4_command}}.md")
// Execute strategy defined in command file
} catch {
// Fallback: inline execution
}
\`\`\`
**Command**: [commands/{{phase4_command}}.md](commands/{{phase4_command}}.md)
{{else}}
{{phase4_content}}
{{/if}}
### Phase 5: Report to Coordinator
\`\`\`javascript
// Log message before SendMessage — every output must carry the [{{role_name}}] tag
mcp__ccw-tools__team_msg({
operation: "log",
team: teamName,
from: "{{role_name}}",
to: "coordinator",
type: "{{primary_message_type}}",
summary: \`[{{role_name}}] {{task_prefix}} complete: \${task.subject}\`
})
SendMessage({
type: "message",
recipient: "coordinator",
content: \`## [{{role_name}}] {{display_name}} Results
**Task**: \${task.subject}
**Status**: \${resultStatus}
### Summary
\${resultSummary}
### Details
\${resultDetails}\`,
summary: \`[{{role_name}}] {{task_prefix}} complete\`
})
// Mark task completed
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task
const nextTasks = TaskList().filter(t =>
t.subject.startsWith('{{task_prefix}}-') &&
t.owner === '{{role_name}}' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
// Continue with next task → back to Phase 1
}
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No {{task_prefix}}-* tasks available | Idle, wait for coordinator assignment |
| Context/Plan file not found | Notify coordinator, request location |
{{#if has_commands}}
| Command file not found | Fall back to inline execution |
{{/if}}
{{#if adaptive_routing}}
| Sub-agent failure | Retry once, then fallback to direct execution |
{{/if}}
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |
```
---
## Template Sections by Responsibility Type
### Read-only analysis
**Phase 2: Context Loading**
```javascript
// Load plan for criteria reference
const planPathMatch = task.description.match(/\.workflow\/\.team-plan\/[^\s]+\/plan\.json/)
let plan = null
if (planPathMatch) {
try { plan = JSON.parse(Read(planPathMatch[0])) } catch {}
}
// Get changed files
const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached`)
.split('\n').filter(Boolean)
// Read file contents for analysis
const fileContents = {}
for (const file of changedFiles.slice(0, 20)) {
try { fileContents[file] = Read(file) } catch {}
}
```
**Phase 3: Analysis Execution**
```javascript
// Core analysis logic
// Customize per specific analysis domain
```
**Phase 4: Finding Summary**
```javascript
// Classify findings by severity
const findings = {
critical: [],
high: [],
medium: [],
low: []
}
```
### Code generation
**Phase 2: Task & Plan Loading**
```javascript
const planPathMatch = task.description.match(/\.workflow\/\.team-plan\/[^\s]+\/plan\.json/)
if (!planPathMatch) {
SendMessage({ type: "message", recipient: "coordinator",
content: `Cannot find plan.json in ${task.subject}`, summary: "Plan not found" })
return
}
const plan = JSON.parse(Read(planPathMatch[0]))
const planTasks = plan.task_ids.map(id =>
JSON.parse(Read(`${planPathMatch[0].replace('plan.json', '')}.task/${id}.json`))
)
```
**Phase 3: Code Implementation**
```javascript
// Complexity-adaptive execution
if (complexity === 'Low') {
// Direct file editing
} else {
Task({
subagent_type: "code-developer",
run_in_background: false,
description: "Implement plan tasks",
prompt: `...`
})
}
```
**Phase 4: Self-Validation**
```javascript
const syntaxResult = Bash(`tsc --noEmit 2>&1 || true`)
const hasSyntaxErrors = syntaxResult.includes('error TS')
```
### Orchestration
**Phase 2: Context & Complexity Assessment**
```javascript
function assessComplexity(desc) {
let score = 0
if (/refactor|architect|restructure|module|system/.test(desc)) score += 2
if (/multiple|across|cross/.test(desc)) score += 2
if (/integrate|api|database/.test(desc)) score += 1
if (/security|performance/.test(desc)) score += 1
return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}
const complexity = assessComplexity(task.description)
```
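To make the scorer's thresholds concrete, the function is repeated here standalone with a few worked classifications:

```javascript
// Same regexes and thresholds as the snippet above, repeated so the
// block runs standalone.
function assessComplexity(desc) {
  let score = 0
  if (/refactor|architect|restructure|module|system/.test(desc)) score += 2
  if (/multiple|across|cross/.test(desc)) score += 2
  if (/integrate|api|database/.test(desc)) score += 1
  if (/security|performance/.test(desc)) score += 1
  return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}
// "fix typo in README"                    -> 0 -> Low
// "restructure the database layer"       -> 2 + 1 = 3 -> Medium
// "refactor auth module across services" -> 2 + 2 = 4 -> High
```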
**Phase 3: Orchestrated Execution**
```javascript
// Launch parallel sub-agents or sequential stages
```
**Phase 4: Result Aggregation**
```javascript
// Merge and summarize sub-agent results
```
### Validation
**Phase 2: Environment Detection**
```javascript
const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached`)
.split('\n').filter(Boolean)
```
**Phase 3: Execution & Fix Cycle**
```javascript
// Run validation, collect failures, attempt fixes, re-validate
let iteration = 0
const MAX_ITERATIONS = 5
while (iteration < MAX_ITERATIONS) {
const result = runValidation()
if (result.passRate >= 0.95) break
applyFixes(result.failures)
iteration++
}
```
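The cycle above calls `runValidation()` without showing how a pass rate might be derived. A minimal sketch that parses a hypothetical "N passed, M failed" runner summary; the output format is an assumption, not a real runner's contract:

```javascript
// Sketch of the pass-rate computation behind result.passRate >= 0.95.
function parsePassRate(testOutput) {
  const count = (re) => { const hit = testOutput.match(re); return hit ? Number(hit[1]) : 0 }
  const passed = count(/(\d+) passed/)
  const failed = count(/(\d+) failed/)
  const total = passed + failed
  return total === 0 ? 0 : passed / total   // 0 when nothing ran
}
```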
**Phase 4: Result Analysis**
```javascript
// Analyze pass/fail patterns, coverage gaps
```
---
## Coordinator Role Template
The coordinator role is special and always generated. Its template differs from worker roles:
```markdown
# Role: coordinator
Team coordinator. Orchestrates the pipeline: requirement clarification → task chain creation → dispatch → monitoring → reporting.
## Role Identity
- **Name**: `coordinator`
- **Task Prefix**: N/A (coordinator creates tasks, doesn't receive them)
- **Responsibility**: Orchestration
- **Communication**: SendMessage to all teammates
- **Output Tag**: `[coordinator]`
## Role Boundaries
### MUST
- Tag every output (SendMessage, team_msg, logs) with the `[coordinator]` identifier
- Handle only requirement clarification, task creation/dispatch, progress monitoring, and result reporting
- Create tasks via TaskCreate and assign them to worker roles
- Monitor worker progress and route messages through the message bus
### MUST NOT
- ❌ **Directly execute any business task** (code writing, analysis, testing, review, etc.)
- ❌ Directly invoke implementation subagents such as code-developer or cli-explore-agent
- ❌ Directly modify source code or generate artifact files
- ❌ Bypass worker roles to complete work that should be delegated
- ❌ Omit the `[coordinator]` identifier from any output
> **Core principle**: the coordinator directs; it does not execute. All real work must be delegated to worker roles via TaskCreate.
## Execution
### Phase 1: Requirement Clarification
Parse $ARGUMENTS, use AskUserQuestion for MVP scope and constraints.
### Phase 2: Create Team + Spawn Teammates
\`\`\`javascript
TeamCreate({ team_name: teamName })
// Spawn each worker role
{{#each worker_roles}}
Task({
subagent_type: "general-purpose",
team_name: teamName,
name: "{{this.name}}",
prompt: \`...Skill(skill="team-{{team_name}}", args="--role={{this.name}}")...\`
})
{{/each}}
\`\`\`
### Phase 3: Create Task Chain
\`\`\`javascript
{{task_chain_creation_code}}
\`\`\`
### Phase 4: Coordination Loop
| Received Message | Action |
|-----------------|--------|
{{#each coordination_handlers}}
| {{this.trigger}} | {{this.action}} |
{{/each}}
### Phase 5: Report + Persist
Summarize results. AskUserQuestion for next requirement or shutdown.
```
---
## Variable Reference
| Variable | Source | Description |
|----------|--------|-------------|
| `{{role_name}}` | config.role_name | Role identifier |
| `{{task_prefix}}` | config.task_prefix | UPPERCASE task prefix |
| `{{responsibility_type}}` | config.responsibility_type | Role type |
| `{{display_name}}` | config.display_name | Human-readable |
| `{{phase2_name}}` | patterns.phase_structure.phase2 | Phase 2 label |
| `{{phase3_name}}` | patterns.phase_structure.phase3 | Phase 3 label |
| `{{phase4_name}}` | patterns.phase_structure.phase4 | Phase 4 label |
| `{{phase2_content}}` | Generated from responsibility template | Phase 2 code |
| `{{phase3_content}}` | Generated from responsibility template | Phase 3 code |
| `{{phase4_content}}` | Generated from responsibility template | Phase 4 code |
| `{{message_types}}` | config.message_types | Array of message types |
| `{{primary_message_type}}` | config.message_types[0].type | Primary type |
| `{{adaptive_routing}}` | config.adaptive_routing | Boolean |
| `{{commands}}` | config.commands | Array of command definitions |
| `{{has_commands}}` | config.commands.length > 0 | Boolean: has extracted commands |
| `{{has_no_commands}}` | config.commands.length === 0 | Boolean: all phases inline |
| `{{subagents}}` | config.subagents | Array of subagent capabilities |
| `{{cli_tools}}` | config.cli_tools | Array of CLI tool capabilities |
| `{{phase2_command}}` | config.phase2_command | Command name for Phase 2 (if extracted) |
| `{{phase3_command}}` | config.phase3_command | Command name for Phase 3 (if extracted) |
| `{{phase4_command}}` | config.phase4_command | Command name for Phase 4 (if extracted) |


@@ -1,289 +0,0 @@
# Skill Router Template
Template for the generated SKILL.md with role-based routing.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 0 | Read to understand generated SKILL.md structure |
| Phase 3 | Apply with team-specific content |
---
## Template
```markdown
---
name: team-{{team_name}}
description: Unified team skill for {{team_name}} team. All roles invoke this skill with --role arg for role-specific execution. Triggers on "team {{team_name}}".
allowed-tools: {{all_roles_tools_union}}
---
# Team {{team_display_name}}
Unified team skill. All team members invoke this skill with `--role=xxx` to route to role-specific execution.
## Architecture Overview
\`\`\`
┌───────────────────────────────────────────┐
│ Skill(skill="team-{{team_name}}") │
│ args="--role=xxx" │
└───────────────┬───────────────────────────┘
│ Role Router
┌───────────┼───────────┬───────────┐
↓ ↓ ↓ ↓
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│coordinator│ │{{role_1}}│ │{{role_2}}│ │{{role_3}}│
│ roles/ │ │ roles/ │ │ roles/ │ │ roles/ │
└──────────┘ └──────────┘ └──────────┘ └──────────┘
\`\`\`
## Command Architecture
Each role is organized as a folder with a `role.md` orchestrator and optional `commands/` for delegation:
\`\`\`
roles/
{{#each roles}}
├── {{this.name}}/
│ ├── role.md # Orchestrator (Phase 1/5 inline, Phase 2-4 delegate)
│ └── commands/ # Optional: extracted command files
│ └── *.md # Self-contained command modules
{{/each}}
\`\`\`
**Design principle**: role.md keeps Phase 1 (Task Discovery) and Phase 5 (Report) inline. Phases 2-4 either stay inline (simple logic) or delegate to `commands/*.md` via `Read("commands/xxx.md")` when they involve subagent delegation, CLI fan-out, or complex strategies.
**Command files** are self-contained: each includes Strategy, Execution Steps, and Error Handling. Any subagent can `Read()` a command file and execute it independently.
## Role Router
### Input Parsing
Parse `$ARGUMENTS` to extract `--role`:
\`\`\`javascript
const args = "$ARGUMENTS"
const roleMatch = args.match(/--role[=\s]+(\w+)/)
if (!roleMatch) {
// ERROR: --role is required
// This skill must be invoked with: Skill(skill="team-{{team_name}}", args="--role=xxx")
throw new Error("Missing --role argument. Available roles: {{role_list}}")
}
const role = roleMatch[1]
const teamName = "{{team_name}}"
\`\`\`
### Role Dispatch
\`\`\`javascript
const VALID_ROLES = {
{{#each roles}}
"{{this.name}}": { file: "roles/{{this.name}}/role.md", prefix: "{{this.task_prefix}}" },
{{/each}}
}
if (!VALID_ROLES[role]) {
throw new Error(\`Unknown role: \${role}. Available: \${Object.keys(VALID_ROLES).join(', ')}\`)
}
// Read and execute role-specific logic
Read(VALID_ROLES[role].file)
// → Execute the 5-phase process defined in that file
\`\`\`
### Available Roles
| Role | Task Prefix | Responsibility | Role File |
|------|-------------|----------------|-----------|
{{#each roles}}
| `{{this.name}}` | {{this.task_prefix}}-* | {{this.responsibility}} | [roles/{{this.name}}/role.md](roles/{{this.name}}/role.md) |
{{/each}}
## Shared Infrastructure
### Role Isolation Rules
**Core principle**: each role may only perform work within its own scope of responsibility.
#### Output Tagging (mandatory)
Every role's output must carry the `[role_name]` identifier prefix:
\`\`\`javascript
// SendMessage — both content and summary must carry the tag
SendMessage({
content: \`## [\\${role}] ...\`,
summary: \`[\\${role}] ...\`
})
// team_msg — summary must carry the tag
mcp__ccw-tools__team_msg({
summary: \`[\\${role}] ...\`
})
\`\`\`
#### Coordinator Isolation
| Allowed | Forbidden |
|------|------|
| Requirement clarification (AskUserQuestion) | ❌ Writing/modifying code directly |
| Creating the task chain (TaskCreate) | ❌ Invoking implementation subagents (code-developer, etc.) |
| Dispatching tasks to workers | ❌ Performing analysis/testing/review directly |
| Monitoring progress (message bus) | ❌ Bypassing workers to complete tasks itself |
| Reporting results to the user | ❌ Modifying source code or artifact files |
#### Worker Isolation
| Allowed | Forbidden |
|------|------|
| Handling tasks with its own prefix | ❌ Handling tasks with another role's prefix |
| SendMessage to the coordinator | ❌ Communicating directly with other workers |
| Using tools declared in the Toolbox | ❌ Creating tasks for other roles (TaskCreate) |
| Delegating to commands in commands/ | ❌ Modifying resources outside this role's responsibility |
### Team Configuration
\`\`\`javascript
const TEAM_CONFIG = {
name: "{{team_name}}",
sessionDir: ".workflow/.team-plan/{{team_name}}/",
msgDir: ".workflow/.team-msg/{{team_name}}/",
roles: {{roles_json}}
}
\`\`\`
### Message Bus (All Roles)
Before every SendMessage, the role must call `mcp__ccw-tools__team_msg` to log:
\`\`\`javascript
mcp__ccw-tools__team_msg({
operation: "log",
team: "{{team_name}}",
from: role, // current role name
to: "coordinator",
type: "<type>",
summary: "<summary>",
ref: "<file_path>" // optional
})
\`\`\`
**Message types by role**:
| Role | Types |
|------|-------|
{{#each roles}}
| {{this.name}} | {{this.message_types_list}} |
{{/each}}
### CLI Fallback
When the `mcp__ccw-tools__team_msg` MCP tool is unavailable:
\`\`\`javascript
Bash(\`ccw team log --team "{{team_name}}" --from "\${role}" --to "coordinator" --type "<type>" --summary "<summary>" --json\`)
\`\`\`
### Task Lifecycle (All Roles)
\`\`\`javascript
// Standard task lifecycle every role follows
// Phase 1: Discovery
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith(\`\${VALID_ROLES[role].prefix}-\`) &&
t.owner === role &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
// Phase 2-4: Role-specific (see roles/{role}.md)
// Phase 5: Report + Loop — every output must carry the [role] tag
mcp__ccw-tools__team_msg({ operation: "log", team: "{{team_name}}", from: role, to: "coordinator", type: "...", summary: \`[\${role}] ...\` })
SendMessage({ type: "message", recipient: "coordinator", content: \`## [\${role}] ...\`, summary: \`[\${role}] ...\` })
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task → back to Phase 1
\`\`\`
## Pipeline
\`\`\`
{{pipeline_diagram}}
\`\`\`
## Coordinator Spawn Template
When coordinator creates teammates, use this pattern:
\`\`\`javascript
TeamCreate({ team_name: "{{team_name}}" })
{{#each worker_roles}}
// {{this.display_name}}
Task({
subagent_type: "general-purpose",
team_name: "{{../team_name}}",
name: "{{this.name}}",
prompt: \`You are the {{this.name_upper}} of team "{{../team_name}}".
When you receive a {{this.task_prefix}}-* task, invoke Skill(skill="team-{{../team_name}}", args="--role={{this.name}}") to execute it.
Current requirement: \${taskDescription}
Constraints: \${constraints}
## Role Rules (mandatory)
- Only handle tasks with the {{this.task_prefix}}-* prefix; never perform another role's work
- Tag every output (SendMessage, team_msg) with the [{{this.name}}] identifier prefix
- Communicate only with the coordinator; never contact other workers directly
- Never use TaskCreate to create tasks for other roles
## Message Bus (required)
Before every SendMessage, call mcp__ccw-tools__team_msg to log it.
Workflow:
1. TaskList → find {{this.task_prefix}}-* tasks
2. Execute via Skill(skill="team-{{../team_name}}", args="--role={{this.name}}")
3. team_msg log + SendMessage the result to the coordinator (with the [{{this.name}}] tag)
4. TaskUpdate completed → check for the next task\`
})
{{/each}}
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Error with usage hint |
| Role file not found | Error with expected path (roles/{name}/role.md) |
| Command file not found | Fall back to inline execution in role.md |
| Task prefix conflict | Log warning, proceed |
```
---
## Variable Reference
| Variable | Source | Description |
|----------|--------|-------------|
| `{{team_name}}` | config.team_name | Team identifier (lowercase) |
| `{{team_display_name}}` | config.team_display_name | Human-readable team name |
| `{{all_roles_tools_union}}` | Union of all roles' allowed-tools | Combined tool list |
| `{{roles}}` | config.roles[] | Array of role definitions |
| `{{role_list}}` | Role names joined by comma | e.g., "coordinator, planner, executor" |
| `{{roles_json}}` | JSON.stringify(roles) | Roles as JSON |
| `{{pipeline_diagram}}` | Generated from task chain | ASCII pipeline |
| `{{worker_roles}}` | config.roles excluding coordinator | Non-coordinator roles |
| `{{role.name}}` | Per-role name | e.g., "planner" |
| `{{role.task_prefix}}` | Per-role task prefix | e.g., "PLAN" |
| `{{role.responsibility}}` | Per-role responsibility | e.g., "Code exploration and planning" |
| `{{role.message_types_list}}` | Per-role message types | e.g., "`plan_ready`, `error`" |

92
.gitignore vendored

@@ -33,3 +33,95 @@ COMMAND_TEMPLATE_ORCHESTRATOR.md
# CCW worktrees for parallel execution
.ccw/worktrees/
.claude/skills_lib/codex-skill-designer/SKILL.md
.claude/skills_lib/codex-skill-designer/phases/01-requirements-analysis.md
.claude/skills_lib/codex-skill-designer/phases/02-orchestrator-design.md
.claude/skills_lib/codex-skill-designer/phases/03-agent-design.md
.claude/skills_lib/codex-skill-designer/phases/04-validation.md
.claude/skills_lib/codex-skill-designer/specs/codex-agent-patterns.md
.claude/skills_lib/codex-skill-designer/specs/conversion-rules.md
.claude/skills_lib/codex-skill-designer/specs/quality-standards.md
.claude/skills_lib/codex-skill-designer/templates/agent-role-template.md
.claude/skills_lib/codex-skill-designer/templates/command-pattern-template.md
.claude/skills_lib/codex-skill-designer/templates/orchestrator-template.md
.claude/skills_lib/copyright-docs/SKILL.md
.claude/skills_lib/copyright-docs/phases/01-metadata-collection.md
.claude/skills_lib/copyright-docs/phases/01.5-project-exploration.md
.claude/skills_lib/copyright-docs/phases/02-deep-analysis.md
.claude/skills_lib/copyright-docs/phases/02.5-consolidation.md
.claude/skills_lib/copyright-docs/phases/04-document-assembly.md
.claude/skills_lib/copyright-docs/phases/05-compliance-refinement.md
.claude/skills_lib/copyright-docs/specs/cpcc-requirements.md
.claude/skills_lib/copyright-docs/templates/agent-base.md
.claude/skills_lib/flow-coordinator/SKILL.md
.claude/skills_lib/flow-coordinator/spec/unified-workflow-spec.md
.claude/skills_lib/flow-coordinator/templates/analyze.json
.claude/skills_lib/flow-coordinator/templates/brainstorm-to-issue.json
.claude/skills_lib/flow-coordinator/templates/brainstorm.json
.claude/skills_lib/flow-coordinator/templates/bugfix-hotfix.json
.claude/skills_lib/flow-coordinator/templates/bugfix.json
.claude/skills_lib/flow-coordinator/templates/coupled.json
.claude/skills_lib/flow-coordinator/templates/debug.json
.claude/skills_lib/flow-coordinator/templates/docs.json
.claude/skills_lib/flow-coordinator/templates/full.json
.claude/skills_lib/flow-coordinator/templates/issue.json
.claude/skills_lib/flow-coordinator/templates/multi-cli-plan.json
.claude/skills_lib/flow-coordinator/templates/rapid-to-issue.json
.claude/skills_lib/flow-coordinator/templates/rapid.json
.claude/skills_lib/flow-coordinator/templates/review.json
.claude/skills_lib/flow-coordinator/templates/tdd.json
.claude/skills_lib/flow-coordinator/templates/test-fix.json
.claude/skills_lib/issue-discover/SKILL.md
.claude/skills_lib/issue-discover/phases/01-issue-new.md
.claude/skills_lib/issue-discover/phases/02-discover.md
.claude/skills_lib/issue-discover/phases/03-discover-by-prompt.md
.claude/skills_lib/issue-resolve/SKILL.md
.claude/skills_lib/issue-resolve/phases/01-issue-plan.md
.claude/skills_lib/issue-resolve/phases/02-convert-to-plan.md
.claude/skills_lib/issue-resolve/phases/03-from-brainstorm.md
.claude/skills_lib/issue-resolve/phases/04-issue-queue.md
.claude/skills_lib/project-analyze/SKILL.md
.claude/skills_lib/project-analyze/phases/01-requirements-discovery.md
.claude/skills_lib/project-analyze/phases/02-project-exploration.md
.claude/skills_lib/project-analyze/phases/03-deep-analysis.md
.claude/skills_lib/project-analyze/phases/03.5-consolidation.md
.claude/skills_lib/project-analyze/phases/04-report-generation.md
.claude/skills_lib/project-analyze/phases/05-iterative-refinement.md
.claude/skills_lib/project-analyze/specs/quality-standards.md
.claude/skills_lib/project-analyze/specs/writing-style.md
.claude/skills_lib/software-manual/SKILL.md
.claude/skills_lib/software-manual/phases/01-requirements-discovery.md
.claude/skills_lib/software-manual/phases/02-project-exploration.md
.claude/skills_lib/software-manual/phases/02.5-api-extraction.md
.claude/skills_lib/software-manual/phases/03-parallel-analysis.md
.claude/skills_lib/software-manual/phases/03.5-consolidation.md
.claude/skills_lib/software-manual/phases/04-screenshot-capture.md
.claude/skills_lib/software-manual/phases/05-html-assembly.md
.claude/skills_lib/software-manual/phases/06-iterative-refinement.md
.claude/skills_lib/software-manual/scripts/api-extractor.md
.claude/skills_lib/software-manual/scripts/assemble_docsify.py
.claude/skills_lib/software-manual/scripts/bundle-libraries.md
.claude/skills_lib/software-manual/scripts/extract_apis.py
.claude/skills_lib/software-manual/scripts/screenshot-helper.md
.claude/skills_lib/software-manual/scripts/swagger-runner.md
.claude/skills_lib/software-manual/scripts/typedoc-runner.md
.claude/skills_lib/software-manual/specs/html-template.md
.claude/skills_lib/software-manual/specs/quality-standards.md
.claude/skills_lib/software-manual/specs/writing-style.md
.claude/skills_lib/software-manual/templates/docsify-shell.html
.claude/skills_lib/software-manual/templates/tiddlywiki-shell.html
.claude/skills_lib/software-manual/templates/css/docsify-base.css
.claude/skills_lib/software-manual/templates/css/wiki-base.css
.claude/skills_lib/software-manual/templates/css/wiki-dark.css
.claude/skills_lib/team-skill-designer/SKILL.md
.claude/skills_lib/team-skill-designer/phases/01-requirements-collection.md
.claude/skills_lib/team-skill-designer/phases/02-pattern-analysis.md
.claude/skills_lib/team-skill-designer/phases/03-skill-generation.md
.claude/skills_lib/team-skill-designer/phases/04-integration-verification.md
.claude/skills_lib/team-skill-designer/phases/05-validation.md
.claude/skills_lib/team-skill-designer/specs/collaboration-patterns.md
.claude/skills_lib/team-skill-designer/specs/quality-standards.md
.claude/skills_lib/team-skill-designer/specs/team-design-patterns.md
.claude/skills_lib/team-skill-designer/templates/role-command-template.md
.claude/skills_lib/team-skill-designer/templates/role-template.md
.claude/skills_lib/team-skill-designer/templates/skill-router-template.md