feat: update empty state messages and hints in English and Chinese locales

refactor: rename variables for clarity in ReviewSessionPage and SessionsPage

fix: update version check logic in SettingsPage

chore: remove unused imports in TeamPage and session-detail components

fix: enhance error handling in MCP server

fix: apply default mode in edit-file tool handler

chore: remove tsbuildinfo file

docs: add Quick Plan & Execute phase documentation for issue discovery

chore: clean up ping output file
catlog22
2026-02-12 23:15:48 +08:00
parent fd6262b78b
commit e44a97e812
32 changed files with 912 additions and 1046 deletions


@@ -65,6 +65,33 @@ Interactive collaborative analysis workflow with **documented discussion process
**Core workflow**: Topic → Explore → Discuss → Document → Refine → Conclude
### Decision Recording Protocol
**⚠️ CRITICAL**: During analysis, the following situations **MUST** trigger immediate recording to discussion.md:
| Trigger | What to Record | Target Section |
|---------|---------------|----------------|
| **Direction choice** | What was chosen, why, what alternatives were discarded | `#### Decision Log` |
| **Key finding** | Finding content, impact scope, confidence level | `#### Key Findings` |
| **Assumption change** | Old assumption → new understanding, reason for change, impact | `#### Corrected Assumptions` |
| **User feedback** | User's original input, rationale for adoption/adjustment | `#### User Input` |
| **Disagreement & trade-off** | Conflicting viewpoints, trade-off basis, final choice | `#### Decision Log` |
| **Scope adjustment** | Before/after scope, trigger reason for adjustment | `#### Decision Log` |
**Decision Record Format**:
```markdown
> **Decision**: [Description of the decision]
> - **Context**: [What triggered this decision]
> - **Options considered**: [Alternatives evaluated]
> - **Chosen**: [Selected approach] — **Reason**: [Rationale]
> - **Impact**: [Effect on analysis direction/conclusions]
```
**Recording Principles**:
- **Immediacy**: Record decisions as they happen, not at the end of a phase
- **Completeness**: Capture context, options, chosen approach, and reason
- **Traceability**: Later phases must be able to trace back why a decision was made
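The format above is cheap to automate. A minimal sketch, assuming the Read/Write tools used elsewhere in this skill and a hypothetical `appendDecisionRecord` helper (not part of the skill itself):
```javascript
// Illustrative sketch only: append one Decision Record to discussion.md the moment it is made.
function appendDecisionRecord(sessionFolder, rec) {
  const entry = [
    `> **Decision**: ${rec.decision}`,
    `> - **Context**: ${rec.context}`,
    `> - **Options considered**: ${rec.options.join(' / ')}`,
    `> - **Chosen**: ${rec.chosen} — **Reason**: ${rec.reason}`,
    `> - **Impact**: ${rec.impact}`
  ].join('\n')
  const discussionPath = `${sessionFolder}/discussion.md`
  // Appends under the current round; in practice the round's "#### Decision Log" heading already exists.
  Write(discussionPath, Read(discussionPath) + '\n' + entry + '\n')
}
```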
```
┌─────────────────────────────────────────────────────────────────────────┐
│ INTERACTIVE ANALYSIS WORKFLOW │
@@ -164,11 +191,18 @@ Interactive collaborative analysis workflow with **documented discussion process
- Add user context: focus areas, analysis depth
- Add initial understanding: dimensions, scope, key questions
- Create empty sections for discussion timeline
- **📌 Record initial decisions**: Document dimension selection rationale, excluded dimensions with reasons, intent behind user preferences
4. **📌 Record Phase 1 Decisions**
- Record why these dimensions were selected (keyword match + user confirmation)
- Record the rationale behind analysis depth selection
- If user adjusted recommended focus, record the adjustment reason
**Success Criteria**:
- Session folder created with discussion.md initialized
- Analysis dimensions identified
- User preferences captured (focus, depth)
- **Phase 1 decisions recorded** with context and rationale
### Phase 2: CLI Exploration
@@ -353,6 +387,8 @@ CONSTRAINTS: ${perspective.constraints}
- explorations.json (single) or perspectives.json (multi) created with findings
- discussion.md updated with Round 1 results
- All agents and CLI calls completed successfully
- **📌 Key findings recorded** with evidence references and confidence levels
- **📌 Exploration decisions recorded** (why chose certain perspectives, tool selection rationale)
### Phase 3: Interactive Discussion
@@ -381,21 +417,30 @@ CONSTRAINTS: ${perspective.constraints}
3. **Process User Response**
**📌 Recording Checkpoint**: Regardless of which option the user selects, the following MUST be recorded to discussion.md:
- User's original choice and expression
- Impact of this choice on analysis direction
- If direction changed, record a full Decision Record
**Agree, Deepen**:
- Continue analysis in current direction
- Use CLI for deeper exploration
- **📌 Record**: Which assumptions were confirmed, specific angles for deeper exploration
**Adjust Direction**:
- AskUserQuestion for adjusted focus (code details / architecture / best practices)
- Launch new CLI exploration with adjusted scope
- **📌 Record Decision**: Trigger reason for direction adjustment, old vs new direction comparison, expected impact
**Specific Questions**:
- Capture user questions
- Use CLI or direct analysis to answer
- Document Q&A in discussion.md
- **📌 Record**: Knowledge gaps revealed by the question, new understanding gained from the answer
**Complete**:
- Exit discussion loop, proceed to Phase 4
- **📌 Record**: Why concluding at this round (sufficient information / scope fully focused / user satisfied)
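For reference, a sketch of how the per-round prompt might be presented, assuming the AskUserQuestion shape used elsewhere in this skill; the labels and descriptions are illustrative, not the skill's exact wording:
```javascript
// Sketch: per-round prompt covering the four response paths described above.
const response = AskUserQuestion({
  questions: [{
    question: "How should this analysis round proceed?",
    options: [
      { label: "Agree, Deepen", description: "Keep the current direction and explore deeper" },
      { label: "Adjust Direction", description: "Shift focus (code details / architecture / best practices)" },
      { label: "Specific Questions", description: "Ask targeted questions about the findings" },
      { label: "Complete", description: "Conclude discussion and proceed to Phase 4" }
    ]
  }]
})
// Whichever option is selected, apply the Recording Checkpoint above before acting on it.
```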
4. **Update discussion.md**
- Append Round N section with:
@@ -423,6 +468,8 @@ CONSTRAINTS: ${perspective.constraints}
- discussion.md updated with all discussion rounds
- Assumptions corrected and documented
- Exit condition reached (user selects "Complete" or max rounds)
- **📌 All decision points recorded** with Decision Record format
- **📌 Direction changes documented** with before/after comparison and rationale
### Phase 4: Synthesis & Conclusion
@@ -437,10 +484,12 @@ CONSTRAINTS: ${perspective.constraints}
1. **Consolidate Insights**
- Extract all findings from discussion timeline
- **📌 Compile Decision Trail**: Aggregate all Decision Records from Phases 1-3 into a consolidated decision log
- **Key conclusions**: Main points with evidence and confidence levels (high/medium/low)
- **Recommendations**: Action items with rationale and priority (high/medium/low)
- **Open questions**: Remaining unresolved questions
- **Follow-up suggestions**: Issue/task creation suggestions
- **📌 Decision summary**: How key decisions shaped the final conclusions (link conclusions back to decisions)
- Write to conclusions.json
2. **Final discussion.md Update**
@@ -453,7 +502,11 @@ CONSTRAINTS: ${perspective.constraints}
- **What We Established**: Confirmed points
- **What Was Clarified/Corrected**: Important corrections
- **Key Insights**: Valuable learnings
- **📌 Add "Decision Trail" section**:
- **Critical Decisions**: List of pivotal decisions that shaped the analysis outcome
- **Direction Changes**: Timeline of scope/focus adjustments with rationale
- **Trade-offs Made**: Key trade-offs and why certain paths were chosen over others
- Add session statistics: rounds, duration, sources, artifacts, **decision count**
3. **Post-Completion Options** (AskUserQuestion)
- **Create Issue**: Launch issue:new with conclusions
@@ -471,12 +524,14 @@ CONSTRAINTS: ${perspective.constraints}
- `recommendations[]`: {action, rationale, priority}
- `open_questions[]`: Unresolved questions
- `follow_up_suggestions[]`: {type, summary}
- `decision_trail[]`: {round, decision, context, options_considered, chosen, reason, impact}
**Success Criteria**:
- conclusions.json created with final synthesis
- discussion.md finalized with conclusions and decision trail
- User offered next step options
- Session complete
- **📌 Complete decision trail** documented and traceable from initial scoping to final conclusions
## Configuration
@@ -572,12 +627,14 @@ In round 1 we discussed X, then in round 2 user said Y...
## Best Practices
1. **Clear Topic Definition**: Detailed topics lead to better dimension identification
2. **Agent-First for Complex Tasks**: For code analysis, implementation, or refactoring tasks during discussion, delegate to agents via Task tool (cli-explore-agent, code-developer, universal-executor) or CLI calls (ccw cli). Avoid direct analysis/execution in main process
3. **Review discussion.md**: Check understanding evolution before conclusions
4. **Embrace Corrections**: Track wrong-to-right transformations as learnings
5. **Document Evolution**: discussion.md captures full thinking process
6. **Use Continue Mode**: Resume sessions to build on previous analysis
7. **Record Decisions Immediately**: Never defer recording - capture decisions as they happen using the Decision Record format. A decision not recorded in-the-moment is a decision lost
8. **Link Decisions to Outcomes**: When writing conclusions, explicitly reference which decisions led to which outcomes. This creates an auditable trail from initial scoping to final recommendations
## Templates
@@ -587,11 +644,12 @@ In round 1 we discussed X, then in round 2 user said Y...
- **Header**: Session metadata (ID, topic, started, dimensions)
- **User Context**: Focus areas, analysis depth
- **Discussion Timeline**: Round-by-round findings
  - Round 1: Initial Understanding + Exploration Results + **Initial Decision Log**
  - Round 2-N: User feedback, adjusted understanding, corrections, new insights, **Decision Log per round**
- **Decision Trail**: Consolidated critical decisions across all rounds
- **Conclusions**: Summary, key conclusions, recommendations
- **Current Understanding (Final)**: Consolidated insights
- **Session Statistics**: Rounds, duration, sources, artifacts, decision count
Example sections:
@@ -601,6 +659,13 @@ Example sections:
#### User Input
User agrees with current direction, wants deeper code analysis
#### Decision Log
> **Decision**: Shift focus from high-level architecture to implementation-level code analysis
> - **Context**: User confirmed architectural understanding is sufficient
> - **Options considered**: Continue architecture analysis / Deep-dive into code patterns / Focus on testing gaps
> - **Chosen**: Deep-dive into code patterns — **Reason**: User explicitly requested code-level analysis
> - **Impact**: Subsequent exploration will target specific modules rather than system overview
#### Updated Understanding
- Identified session management uses database-backed approach
- Rate limiting applied at gateway, not application level


@@ -644,29 +644,3 @@ Why is config value None during update?
| >5 iterations | Review consolidated understanding, escalate to `/workflow:lite-fix` with full context |
| Gemini unavailable | Fallback to manual hypothesis generation, document without Gemini insights |
| Understanding too long | Consolidate aggressively, archive old iterations to separate file |
## Comparison with /workflow:debug
| Feature | /workflow:debug | /workflow:debug-with-file |
|---------|-----------------|---------------------------|
| NDJSON debug logging | ✅ | ✅ |
| Hypothesis generation | Manual | Gemini-assisted |
| Exploration documentation | ❌ | ✅ understanding.md |
| Understanding evolution | ❌ | ✅ Timeline + corrections |
| Error correction | ❌ | ✅ Strikethrough + reasoning |
| Consolidated learning | ❌ | ✅ Current understanding section |
| Hypothesis history | ❌ | ✅ hypotheses.json |
| Gemini validation | ❌ | ✅ At key decision points |
## Usage Recommendations (Requires User Confirmation)
**Use `Skill(skill="workflow:debug-with-file", args="\"bug description\"")` when:**
- Complex bugs requiring multiple investigation rounds
- Learning from debugging process is valuable
- Team needs to understand debugging rationale
- Bug might recur, documentation helps prevention
**Use `Skill(skill="ccw-debug", args="--mode cli \"issue\"")` when:**
- Simple, quick bugs
- One-off issues
- Documentation overhead not needed


@@ -1,6 +1,6 @@
---
description: "Interactive pre-flight checklist for ccw-loop. Discovers .task/*.json from collaborative-plan-with-file, analyze-with-file, brainstorm-to-cycle sessions; validates, transforms to ccw-loop task format, writes prep-package.json + .task/*.json, then launches the loop."
argument-hint: '[SOURCE="<path-to-.task/-dir-or-session-folder>"] [MAX_ITER=10]'
---
# Pre-Flight Checklist for CCW Loop
@@ -19,24 +19,27 @@ Scan for upstream artifacts from the three supported source skills:
const projectRoot = Bash('git rev-parse --show-toplevel 2>/dev/null || pwd').trim()
// Source 1: collaborative-plan-with-file
const cplanSessions = Glob(`${projectRoot}/.workflow/.planning/CPLAN-*/.task/*.json`)
  .map(p => ({
    path: p.replace(/\/\.task\/[^/]+$/, '/.task'),
    source: 'collaborative-plan-with-file',
    type: 'task-dir',
    session: p.match(/CPLAN-[^/]+/)?.[0],
    mtime: fs.statSync(p).mtime
  }))
  // Deduplicate by session
  .filter((v, i, a) => a.findIndex(x => x.session === v.session) === i)
// Source 2: analyze-with-file
const anlSessions = Glob(`${projectRoot}/.workflow/.analysis/ANL-*/.task/*.json`)
  .map(p => ({
    path: p.replace(/\/\.task\/[^/]+$/, '/.task'),
    source: 'analyze-with-file',
    type: 'task-dir',
    session: p.match(/ANL-[^/]+/)?.[0],
    mtime: fs.statSync(p).mtime
  }))
  .filter((v, i, a) => a.findIndex(x => x.session === v.session) === i)
// Source 3: brainstorm-to-cycle
const bsSessions = Glob(`${projectRoot}/.workflow/.brainstorm/*/cycle-task.md`)
@@ -59,25 +62,25 @@ const allSources = [...cplanSessions, ...anlSessions, ...bsSessions]
════════════════
collaborative-plan-with-file:
  1. CPLAN-auth-redesign-20260208   .task/   (5 tasks, 2h ago)
  2. CPLAN-api-cleanup-20260205     .task/   (3 days ago)
analyze-with-file:
  3. ANL-perf-audit-20260207        .task/   (8 tasks, 1d ago)
brainstorm-to-cycle:
  4. BS-notification-system         cycle-task.md   (1d ago)
Manual input:
  5. Custom path (enter a .task/ directory path or a task description)
```
### 1.3 User Selection
Ask the user to select a source:
> "Select a task source (enter its number), or enter the full path to a .task/ directory:
> You can also enter 'manual' to provide a task description directly (without upstream task files)"
**If `$SOURCE` argument provided**, skip discovery and use directly:
@@ -85,13 +88,13 @@ Ask the user to select a source:
if (options.SOURCE) {
  // Validate path exists
  if (!fs.existsSync(options.SOURCE)) {
    console.error(`Path not found: ${options.SOURCE}`)
    return
  }
  selectedSource = {
    path: options.SOURCE,
    source: inferSource(options.SOURCE),
    type: fs.statSync(options.SOURCE).isDirectory() ? 'task-dir' : 'markdown'
  }
}
```
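`inferSource` is referenced above but not shown; a rough sketch of what it might do, derived from the path conventions in Step 1.1 (hypothetical helper, not the actual implementation):
```javascript
// Hypothetical sketch: infer the upstream skill from the session path layout.
function inferSource(p) {
  if (p.includes('/.planning/CPLAN-')) return 'collaborative-plan-with-file'
  if (p.includes('/.analysis/ANL-')) return 'analyze-with-file'
  if (p.includes('/.brainstorm/')) return 'brainstorm-to-cycle'
  return 'manual'
}
```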
@@ -100,24 +103,24 @@ if (options.SOURCE) {
## Step 2: Source Validation & Task Loading
### 2.1 For .task/ Sources (collaborative-plan / analyze-with-file)
```javascript
function validateAndLoadTaskDir(taskDirPath) {
  const taskFiles = Glob(`${taskDirPath}/*.json`).sort()
  const tasks = []
  const errors = []
  for (let i = 0; i < taskFiles.length; i++) {
    try {
      const content = Read(taskFiles[i])
      const task = JSON.parse(content)
      // Required fields check (task-schema.json: id, title, description, depends_on, convergence)
      const requiredFields = ['id', 'title', 'description']
      const missing = requiredFields.filter(f => !task[f])
      if (missing.length > 0) {
        errors.push(`${taskFiles[i]}: missing fields: ${missing.join(', ')}`)
        continue
      }
@@ -126,11 +129,11 @@ function validateAndLoadJsonl(jsonlPath) {
        tasks.push(task)
      }
    } catch (e) {
      errors.push(`${taskFiles[i]}: invalid JSON: ${e.message}`)
    }
  }
  return { tasks, errors, total_files: taskFiles.length }
}
```
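A typical call site, sketched under the assumption that `selectedSource` comes from Step 1.3; the messages are illustrative:
```javascript
// Sketch: validate the selected .task/ directory before transformation.
const { tasks, errors, total_files } = validateAndLoadTaskDir(selectedSource.path)
if (errors.length > 0) {
  console.warn(`${errors.length}/${total_files} task files failed validation:`)
  errors.forEach(e => console.warn(`  - ${e}`))
}
if (tasks.length === 0) {
  console.error('No valid tasks found; aborting before the loop starts.')
  return
}
```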
@@ -139,10 +142,10 @@ Display validation results:
```
JSONL validation
══════════
Directory: .workflow/.planning/CPLAN-auth-redesign-20260208/.task/
Source: collaborative-plan-with-file
✓ 5/5 task files parsed successfully
✓ Required fields complete (id, title, description)
✓ 3 tasks include convergence criteria (convergence)
⚠ 2 tasks missing convergence criteria (defaults will be used)
@@ -228,7 +231,7 @@ If validation has errors:
## Step 3: Task Transformation
Transform unified task JSON files → ccw-loop `develop.tasks[]` format.
```javascript
function transformToCcwLoopTasks(sourceTasks) {
@@ -277,7 +280,7 @@ Display transformed tasks:
```
Task transformation
════════
Source format: .task/*.json (collaborative-plan-with-file)
Target format: ccw-loop develop.tasks
task-001 [P1] Implement JWT token service: Create JWT service... gemini/write pending
@@ -373,7 +376,7 @@ Write to `{projectRoot}/.workflow/.loop/prep-package.json`:
"source": { "source": {
"tool": "collaborative-plan-with-file", "tool": "collaborative-plan-with-file",
"session_id": "CPLAN-auth-redesign-20260208", "session_id": "CPLAN-auth-redesign-20260208",
"jsonl_path": "{projectRoot}/.workflow/.planning/CPLAN-auth-redesign-20260208/tasks.jsonl", "task_dir": "{projectRoot}/.workflow/.planning/CPLAN-auth-redesign-20260208/.task",
"task_count": 5, "task_count": 5,
"tasks_with_convergence": 3 "tasks_with_convergence": 3
}, },
@@ -393,20 +396,25 @@ Write to `{projectRoot}/.workflow/.loop/prep-package.json`:
}
```
### 6.2 Write .task/*.json
Write transformed tasks to `{projectRoot}/.workflow/.loop/.task/` directory (one file per task, following task-schema.json):
```javascript
const taskDir = `${projectRoot}/.workflow/.loop/.task`
Bash(`mkdir -p ${taskDir}`)
for (const task of transformedTasks) {
  const fileName = `TASK-${task.id.replace(/^task-/, '')}.json`
  Write(`${taskDir}/${fileName}`, JSON.stringify(task, null, 2))
}
```
Confirm:
```
✓ prep-package.json → .workflow/.loop/prep-package.json
✓ .task/ directory  → .workflow/.loop/.task/ (5 task files)
```
---
@@ -422,7 +430,7 @@ $ccw-loop --auto TASK="Execute tasks from {source.tool} session {source.session_
Where:
- `$ccw-loop` — expands to a skill invocation
- `--auto` — enables fully automatic mode
- The skill side detects `prep-package.json` and loads `.task/*.json`
**The skill side performs the following checks** (see Phase 1 Step 1.1):
1. Check that `prep-package.json` exists
@@ -430,15 +438,15 @@ $ccw-loop --auto TASK="Execute tasks from {source.tool} session {source.session_
3. Verify `target_skill === "ccw-loop"`
4. Verify `project_root` matches the current project
5. Verify file freshness (generated within 24h)
6. Verify that the `.task/` directory exists and contains valid task files
7. All pass → load the pre-built task list; any failure → fall back to the default INIT behavior (see the sketch below)
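A sketch of what those checks might look like on the skill side; the `created_at` and `project_root` field names are assumptions based on the checklist above, not confirmed fields of prep-package.json:
```javascript
// Illustrative sketch of the Phase 1 Step 1.1 checks; not the actual skill code.
const prepPath = `${projectRoot}/.workflow/.loop/prep-package.json`
let prep = null
if (fs.existsSync(prepPath)) {
  prep = JSON.parse(Read(prepPath))
  const ageMs = Date.now() - new Date(prep.created_at).getTime() // assumed timestamp field
  const taskFiles = Glob(`${projectRoot}/.workflow/.loop/.task/*.json`)
  const valid =
    prep.target_skill === 'ccw-loop' &&
    prep.project_root === projectRoot && // assumed field, per check 4
    ageMs < 24 * 60 * 60 * 1000 &&
    taskFiles.length > 0
  if (!valid) prep = null // fall back to the default INIT behavior
}
```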
Print:
```
Starting ccw-loop (auto mode)...
prep-package.json → auto-loaded and validated in Phase 1
.task/*.json → 5 pre-built tasks loaded into develop.tasks
Loop: develop → validate → complete (max 10 iterations)
```
@@ -450,7 +458,7 @@ Print:
|------|------|
| No upstream sessions available | Prompt the user to run collaborative-plan / analyze-with-file / brainstorm first, or choose manual input |
| All JSONL invalid | Report the errors, **do not start the loop** |
| Some JSONL invalid | Warn about the invalid files, continue with the valid tasks |
| brainstorm cycle-task.md is empty | Report the error, suggest completing the brainstorm flow |
| User cancels confirmation | Save prep-package.json (prep_status="cancelled"), note that it can be adjusted and re-run |
| Skill-side prep-package validation fails | Skill prints a warning and falls back to the default INIT behavior without prep (execution is not blocked) |


@@ -7,14 +7,16 @@
## Execution Flow
```
conclusions.json → .task/*.json → User Confirmation → Direct Inline Execution → execution.md + execution-events.md
```
---
## Step 1: Generate .task/*.json
Convert `conclusions.json` recommendations directly into individual task JSON files. Each file is a self-contained task with convergence criteria, compatible with `unified-execute-with-file`.
**Schema**: `cat ~/.ccw/workflows/cli-templates/schemas/task-schema.json`
**Conversion Logic**:
@@ -51,9 +53,11 @@ const tasks = conclusions.recommendations.map((rec, index) => ({
  }
}))
// Write each task as individual JSON file
Bash(`mkdir -p ${sessionFolder}/.task`)
tasks.forEach(task => {
  Write(`${sessionFolder}/.task/${task.id}.json`, JSON.stringify(task, null, 2))
})
```
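For orientation, a rough sketch of the `conclusions.recommendations.map(...)` step elided above, assuming recommendation fields `{action, rationale, priority}`; `inferType` and `estimateEffort` are hypothetical stand-ins for the inference logic described in the next section:
```javascript
// Sketch only; the real mapping lives in this file's conversion logic.
const tasks = conclusions.recommendations.map((rec, index) => ({
  id: `TASK-${String(index + 1).padStart(3, '0')}`,
  title: rec.action,
  description: `${rec.action}\n\nRationale: ${rec.rationale}`,
  type: inferType(rec),        // hypothetical helper, see Task Type Inference below
  priority: rec.priority,
  effort: estimateEffort(rec), // hypothetical helper
  files: [],                   // filled from analysis evidence when available
  depends_on: [],
  convergence: {
    criteria: [],              // derived from the recommendation's acceptance conditions
    verification: '',
    definition_of_done: rec.action
  },
  evidence: [],
  source: { tool: 'analyze-with-file', session_id: sessionId, original_id: `TASK-${String(index + 1).padStart(3, '0')}` }
}))
```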
**Task Type Inference**:
@@ -106,13 +110,39 @@ tasks.forEach(task => {
})
```
**Output**: `${sessionFolder}/.task/TASK-*.json`
**Task JSON Schema** (one file per task, e.g. `.task/TASK-001.json`):
```json
{
"id": "TASK-001",
"title": "Fix authentication token refresh",
"description": "Token refresh fails silently when...",
"type": "fix",
"priority": "high",
"effort": "large",
"files": [
{ "path": "src/auth/token.ts", "action": "modify" },
{ "path": "src/middleware/auth.ts", "action": "modify" }
],
"depends_on": [],
"convergence": {
"criteria": [
"Token refresh returns new valid token",
"Expired token triggers refresh automatically",
"Failed refresh redirects to login"
],
"verification": "jest --testPathPattern=token.test.ts",
"definition_of_done": "Users remain logged in across token expiration without manual re-login"
},
"evidence": [],
"source": {
"tool": "analyze-with-file",
"session_id": "ANL-xxx",
"original_id": "TASK-001"
}
}
```
---
@@ -124,8 +154,8 @@ Validate feasibility before starting execution. Reference: unified-execute-with-
##### Step 2.1: Build Execution Order
```javascript
const taskFiles = Glob(`${sessionFolder}/.task/*.json`)
const tasks = taskFiles.map(f => JSON.parse(Read(f)))
// 1. Dependency validation
const taskIds = new Set(tasks.map(t => t.id))
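// --- Illustrative sketch (not necessarily the actual implementation) ---
// Mirrors the error-handling table later in this file: missing deps and cycles must be reported.
const missingDeps = tasks.flatMap(t =>
  (t.depends_on || []).filter(d => !taskIds.has(d)).map(d => `${t.id} -> ${d}`))
// 2. Circular dependency detection + execution order via iterative pruning (Kahn-style)
const remaining = new Map(tasks.map(t => [t.id, new Set(t.depends_on || [])]))
const orderedIds = []
while (remaining.size > 0) {
  const ready = [...remaining.entries()].filter(([, deps]) => [...deps].every(d => !remaining.has(d)))
  if (ready.length === 0) break // cycle detected -> stop and report (see error handling)
  ready.forEach(([id]) => { orderedIds.push(id); remaining.delete(id) })
}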
@@ -220,7 +250,7 @@ const executionMd = `# Execution Overview
## Session Info
- **Session ID**: ${sessionId}
- **Plan Source**: .task/*.json (from analysis conclusions)
- **Started**: ${getUtc8ISOString()}
- **Total Tasks**: ${tasks.length}
- **Execution Mode**: Direct inline (serial)
@@ -270,7 +300,7 @@ const eventsHeader = `# Execution Events
**Session**: ${sessionId}
**Started**: ${getUtc8ISOString()}
**Source**: .task/*.json
---
@@ -302,11 +332,11 @@ if (!autoYes) {
    options: [
      { label: "Start Execution", description: "Execute all tasks serially" },
      { label: "Adjust Tasks", description: "Modify, reorder, or remove tasks" },
      { label: "Cancel", description: "Cancel execution, keep .task/" }
    ]
  }]
})
// "Adjust Tasks": display task list, user deselects/reorders, regenerate .task/*.json
// "Cancel": end workflow, keep artifacts
}
```
@@ -321,7 +351,7 @@ Execute tasks one by one directly using tools (Read, Edit, Write, Grep, Glob, Ba
```
For each taskId in executionOrder:
  ├─ Load task from .task/{taskId}.json
  ├─ Check dependencies satisfied (all deps completed)
  ├─ Record START event to execution-events.md
  ├─ Execute task directly:
@@ -506,7 +536,7 @@ ${[...failedTasks].map(id => {
}).join('\n')}
` : ''}
### Artifacts
- **Execution Plan**: ${sessionFolder}/.task/
- **Execution Overview**: ${sessionFolder}/execution.md
- **Execution Events**: ${sessionFolder}/execution-events.md
`
@@ -530,16 +560,16 @@ appendToEvents(`
`)
```
##### Step 6.3: Update .task/*.json
Write back `_execution` state to each task file:
```javascript
tasks.forEach(task => {
  const updatedTask = {
    ...task,
    _execution: {
      status: task._status, // "completed" | "failed" | "skipped" | "pending"
      executed_at: task._executed_at, // ISO timestamp
      result: {
        success: task._status === 'completed',
        files_modified: task._result?.files_modified || [],
@@ -548,8 +578,8 @@ const updatedJsonl = tasks.map(task => JSON.stringify({
        convergence_verified: task._result?.convergence_verified || []
      }
    }
  }
  Write(`${sessionFolder}/.task/${task.id}.json`, JSON.stringify(updatedTask, null, 2))
})
```
---
@@ -589,7 +619,7 @@ if (!autoYes) {
- Filter tasks with `_execution.status === 'failed'`
- Re-execute in original dependency order
- Append retry events to execution-events.md with `[RETRY]` prefix
- Update execution.md and .task/*.json (see the sketch below)
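A minimal sketch of the retry selection, assuming the `_execution` state written back in Step 6.3 (illustrative only):
```javascript
// Sketch: collect failed tasks in their original dependency order and re-run them.
const failedIds = executionOrder.filter(id => {
  const t = JSON.parse(Read(`${sessionFolder}/.task/${id}.json`))
  return t._execution?.status === 'failed'
})
// Re-execute failedIds with the same per-task loop as Step 4,
// appending events to execution-events.md with a [RETRY] prefix.
```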
---
@@ -600,14 +630,17 @@ When Quick Execute is activated, session folder expands with:
```
{projectRoot}/.workflow/.analysis/ANL-{slug}-{date}/
├── ...                     # Phase 1-4 artifacts
├── .task/                  # Individual task JSON files (one per task, with convergence + source)
│   ├── TASK-001.json
│   ├── TASK-002.json
│   └── ...
├── execution.md            # Plan overview + task table + execution summary
└── execution-events.md     # ⭐ Unified event log (all task executions with details)
```
| File | Purpose |
|------|---------|
| `.task/*.json` | Individual task files from conclusions, each with convergence criteria and source provenance |
| `execution.md` | Overview: plan source, task table, pre-execution analysis, final summary |
| `execution-events.md` | Chronological event stream: task start/complete/fail with details, changes, verification results |
@@ -620,7 +653,7 @@ When Quick Execute is activated, session folder expands with:
**Session**: ANL-xxx-2025-01-21
**Started**: 2025-01-21T10:00:00+08:00
**Source**: .task/*.json
---
@@ -681,9 +714,9 @@ When Quick Execute is activated, session folder expands with:
|-----------|--------|----------|
| Task execution fails | Record failure in execution-events.md, ask user | Retry, skip, or abort |
| Verification command fails | Mark criterion as unverified, continue | Note in events, manual check needed |
| No recommendations in conclusions | Cannot generate .task/*.json | Inform user, suggest lite-plan |
| File conflict during execution | Document in execution-events.md | Resolve in dependency order |
| Circular dependencies detected | Stop, report error | Fix dependencies in .task/*.json |
| All tasks fail | Record all failures, suggest analysis review | Re-run analysis or manual intervention |
| Missing target file | Attempt to create if task.type is "feature" | Log as warning for other types |
@@ -691,10 +724,10 @@ When Quick Execute is activated, session folder expands with:
## Success Criteria
- `.task/*.json` generated with convergence criteria and source provenance per task
- `execution.md` contains plan overview, task table, pre-execution analysis, final summary
- `execution-events.md` contains chronological event stream with convergence verification
- All tasks executed (or explicitly skipped) via direct inline execution
- Each task's convergence criteria checked and recorded
- `_execution` state written back to .task/*.json after completion
- User informed of results and next steps


@@ -14,10 +14,38 @@ Interactive collaborative analysis workflow with **documented discussion process
**Key features**:
- **Documented discussion timeline**: Captures understanding evolution across all phases
- **Decision recording at every critical point**: Mandatory recording of key findings, direction changes, and trade-offs
- **Multi-perspective analysis**: Supports up to 4 analysis perspectives (serial, inline)
- **Interactive discussion**: Multi-round Q&A with user feedback and direction adjustments
- **Quick execute**: Convert conclusions directly to executable tasks
### Decision Recording Protocol
**CRITICAL**: During analysis, the following situations **MUST** trigger immediate recording to discussion.md:
| Trigger | What to Record | Target Section |
|---------|---------------|----------------|
| **Direction choice** | What was chosen, why, what alternatives were discarded | `#### Decision Log` |
| **Key finding** | Finding content, impact scope, confidence level | `#### Key Findings` |
| **Assumption change** | Old assumption → new understanding, reason, impact | `#### Corrected Assumptions` |
| **User feedback** | User's original input, rationale for adoption/adjustment | `#### User Input` |
| **Disagreement & trade-off** | Conflicting viewpoints, trade-off basis, final choice | `#### Decision Log` |
| **Scope adjustment** | Before/after scope, trigger reason | `#### Decision Log` |
**Decision Record Format**:
```markdown
> **Decision**: [Description of the decision]
> - **Context**: [What triggered this decision]
> - **Options considered**: [Alternatives evaluated]
> - **Chosen**: [Selected approach] — **Reason**: [Rationale]
> - **Impact**: [Effect on analysis direction/conclusions]
```
**Recording Principles**:
- **Immediacy**: Record decisions as they happen, not at the end of a phase
- **Completeness**: Capture context, options, chosen approach, and reason
- **Traceability**: Later phases must be able to trace back why a decision was made
## Auto Mode
When `--yes` or `-y`: Auto-confirm exploration decisions, use recommended analysis angles, skip interactive scoping.
@@ -82,7 +110,7 @@ Step 4: Synthesis & Conclusion
  └─ Offer options: quick execute / create issue / generate task / export / done
Step 5: Quick Execute (Optional - user selects)
  ├─ Convert conclusions.recommendations → .task/TASK-*.json (individual task files with convergence)
  ├─ Pre-execution analysis (dependencies, file conflicts, execution order)
  ├─ User confirmation
  ├─ Direct inline execution (Read/Edit/Write/Grep/Glob/Bash)
@@ -215,11 +243,21 @@ const discussionMd = `# Analysis Discussion
## Initial Questions
${generateInitialQuestions(topic, dimensions).map(q => `- ${q}`).join('\n')}
## Initial Decisions
> Record why these dimensions and focus areas were selected.
---
## Discussion Timeline
> Rounds will be appended below as analysis progresses.
> Each round MUST include a Decision Log section for any decisions made.
---
## Decision Trail
> Consolidated critical decisions across all rounds (populated in Phase 4).
---
@@ -234,6 +272,7 @@ Write(`${sessionFolder}/discussion.md`, discussionMd)
- Session folder created with discussion.md initialized
- Analysis dimensions identified
- User preferences captured (focus, perspectives, depth)
- **Initial decisions recorded**: Dimension selection rationale, excluded dimensions with reasons, user preference intent
### Phase 2: Exploration
@@ -390,6 +429,8 @@ Append Round 1 with exploration results:
- explorations.json (single) or perspectives.json (multi) created with findings
- discussion.md updated with Round 1 results
- Ready for interactive discussion
- **Key findings recorded** with evidence references and confidence levels
- **Exploration decisions recorded** (why certain perspectives/search strategies were chosen)
### Phase 3: Interactive Discussion
@@ -424,6 +465,11 @@ if (!autoYes) {
##### Step 3.2: Process User Response
**Recording Checkpoint**: Regardless of which option the user selects, the following MUST be recorded to discussion.md:
- User's original choice and expression
- Impact of this choice on analysis direction
- If direction changed, record a full Decision Record
**Deepen** — continue analysis in current direction:
```javascript
// Deeper inline analysis using search tools
@@ -432,6 +478,7 @@ if (!autoYes) {
// Suggest improvement approaches
// Provide risk/impact assessments
// Update explorations.json with deepening findings
// Record: Which assumptions were confirmed, specific angles for deeper exploration
```
**Adjust Direction** — new focus area:
@@ -454,6 +501,7 @@ const adjustedFocus = AskUserQuestion({
// Compare new insights with prior analysis
// Identify what was missed and why
// Update explorations.json with adjusted findings
// Record Decision: Trigger reason for direction adjustment, old vs new direction, expected impact
```
**Specific Questions** — answer directly:
@@ -463,9 +511,13 @@ const adjustedFocus = AskUserQuestion({
// Provide evidence and file references
// Rate confidence for each answer (high/medium/low)
// Document Q&A in discussion.md
// Record: Knowledge gaps revealed by the question, new understanding from the answer
```
**Analysis Complete** — exit loop, proceed to Phase 4.
```javascript
// Record: Why concluding at this round (sufficient information / scope fully focused / user satisfied)
```
##### Step 3.3: Document Each Round
@@ -474,6 +526,7 @@ Update discussion.md with results from each discussion round:
| Section | Content |
|---------|---------|
| User Direction | Action taken (deepen/adjust/questions) and focus area |
| Decision Log | Decisions made this round using Decision Record format |
| Analysis Results | Key findings, insights, evidence with file references |
| Insights | New learnings or clarifications from this round |
| Corrected Assumptions | Important wrong→right transformations with explanation |
@@ -491,6 +544,8 @@ Update discussion.md with results from each discussion round:
- discussion.md updated with all discussion rounds
- Assumptions documented and corrected
- Exit condition reached (user selects complete or max rounds)
- **All decision points recorded** with Decision Record format
- **Direction changes documented** with before/after comparison and rationale
### Phase 4: Synthesis & Conclusion
@@ -514,6 +569,9 @@ const conclusions = {
  open_questions: [...],          // Unresolved questions
  follow_up_suggestions: [        // Next steps
    { type: 'issue|task|research', summary: '...' }
  ],
  decision_trail: [               // Consolidated decisions from all phases
    { round: 1, decision: '...', context: '...', options_considered: [...], chosen: '...', reason: '...', impact: '...' }
  ]
}
Write(`${sessionFolder}/conclusions.json`, JSON.stringify(conclusions, null, 2))
@@ -537,7 +595,15 @@ Append conclusions section and finalize:
| What Was Clarified | Important corrections (~~wrong→right~~) |
| Key Insights | Valuable learnings for future reference |
**Decision Trail Section**:
| Subsection | Content |
|------------|---------|
| Critical Decisions | Pivotal decisions that shaped the analysis outcome |
| Direction Changes | Timeline of scope/focus adjustments with rationale |
| Trade-offs Made | Key trade-offs and why certain paths were chosen |
**Session Statistics**: Total discussion rounds, key findings count, dimensions covered, artifacts generated, **decision count**.
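One way the Decision Trail section could be rendered from `conclusions.decision_trail`; a sketch only, with heading names taken from the table above and a naive filter for direction changes:
```javascript
// Sketch: turn the consolidated decision_trail into the Decision Trail markdown section.
const trail = conclusions.decision_trail || []
const decisionTrailMd = [
  '## Decision Trail',
  '### Critical Decisions',
  ...trail.map(d => `- Round ${d.round}: ${d.decision} (chosen: ${d.chosen}; reason: ${d.reason})`),
  '### Direction Changes',
  ...trail.filter(d => /direction|scope|focus/i.test(d.decision)).map(d => `- Round ${d.round}: ${d.decision} (${d.impact})`),
  `Decision count: ${trail.length}`
].join('\n')
// Append decisionTrailMd to discussion.md ahead of the Session Statistics.
```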
##### Step 4.3: Post-Completion Options
@@ -570,24 +636,27 @@ if (!autoYes) {
**Success Criteria**:
- conclusions.json created with complete synthesis
- discussion.md finalized with conclusions and decision trail
- User offered meaningful next step options
- **Complete decision trail** documented and traceable from initial scoping to final conclusions
### Phase 5: Quick Execute (Optional)
**Objective**: Convert analysis conclusions into individual task JSON files with convergence criteria, then execute tasks directly inline.
**Trigger**: User selects "Quick Execute" in Phase 4.
**Key Principle**: No additional exploration — analysis phase has already collected all necessary context. No CLI delegation — execute directly using tools.
**Flow**: `conclusions.json → .task/*.json → User Confirmation → Direct Inline Execution → execution.md + execution-events.md`
**Full specification**: See `EXECUTE.md` for detailed step-by-step implementation.
**Schema**: `cat ~/.ccw/workflows/cli-templates/schemas/task-schema.json`
##### Step 5.1: Generate .task/*.json
Convert `conclusions.recommendations` into individual task JSON files. Each file is a self-contained task with convergence criteria:
```javascript
const conclusions = JSON.parse(Read(`${sessionFolder}/conclusions.json`))
@@ -623,8 +692,11 @@ const tasks = conclusions.recommendations.map((rec, index) => ({
}))
// Validate convergence quality (same as req-plan-with-file)
// Write each task as individual JSON file
Bash(`mkdir -p ${sessionFolder}/.task`)
tasks.forEach(task => {
Write(`${sessionFolder}/.task/${task.id}.json`, JSON.stringify(task, null, 2))
})
```
##### Step 5.2: Pre-Execution Analysis
@@ -647,7 +719,7 @@ if (!autoYes) {
    options: [
      { label: "Start Execution", description: "Execute all tasks serially" },
      { label: "Adjust Tasks", description: "Modify, reorder, or remove tasks" },
      { label: "Cancel", description: "Cancel execution, keep .task/" }
    ]
  }]
})
@@ -670,7 +742,7 @@ For each task in execution order:
- Update `execution.md` with final summary (statistics, task results table)
- Finalize `execution-events.md` with session footer
- Update `.task/*.json` with `_execution` state per task
```javascript
if (!autoYes) {
@@ -691,7 +763,7 @@ if (!autoYes) {
```
**Success Criteria**:
- `.task/*.json` generated with convergence criteria and source provenance per task
- `execution.md` contains plan overview, task table, pre-execution analysis, final summary
- `execution-events.md` contains chronological event stream with convergence verification
- All tasks executed (or explicitly skipped) via direct inline execution
@@ -710,7 +782,10 @@ if (!autoYes) {
├── explorations.json       # Phase 2: Single perspective aggregated findings
├── perspectives.json       # Phase 2: Multi-perspective findings with synthesis
├── conclusions.json        # Phase 4: Final synthesis with recommendations
├── .task/                  # Phase 5: Individual task JSON files (if quick execute)
│ ├── TASK-001.json # One file per task with convergence + source
│ ├── TASK-002.json
│ └── ...
├── execution.md            # Phase 5: Execution overview + task table + summary (if quick execute)
└── execution-events.md     # Phase 5: Chronological event log (if quick execute)
```
@@ -723,7 +798,7 @@ if (!autoYes) {
| `explorations.json` | 2 | Single perspective aggregated findings |
| `perspectives.json` | 2 | Multi-perspective findings with cross-perspective synthesis |
| `conclusions.json` | 4 | Final synthesis: conclusions, recommendations, open questions |
| `.task/*.json` | 5 | Individual task files from recommendations, each with convergence criteria and source provenance |
| `execution.md` | 5 | Execution overview: plan source, task table, pre-execution analysis, final summary |
| `execution-events.md` | 5 | Chronological event stream with task details and convergence verification |
@@ -822,12 +897,14 @@ The discussion.md file evolves through the analysis:
- **Header**: Session ID, topic, start time, identified dimensions - **Header**: Session ID, topic, start time, identified dimensions
- **Analysis Context**: Focus areas, perspectives, depth level - **Analysis Context**: Focus areas, perspectives, depth level
- **Initial Questions**: Key questions to guide the analysis - **Initial Questions**: Key questions to guide the analysis
- **Initial Decisions**: Why these dimensions and focus areas were selected
- **Discussion Timeline**: Round-by-round findings - **Discussion Timeline**: Round-by-round findings
- Round 1: Initial Understanding + Exploration Results - Round 1: Initial Understanding + Exploration Results + **Initial Decision Log**
- Round 2-N: User feedback + direction adjustments + new insights - Round 2-N: User feedback + direction adjustments + new insights + **Decision Log per round**
- **Decision Trail**: Consolidated critical decisions across all rounds
- **Synthesis & Conclusions**: Summary, key conclusions, recommendations - **Synthesis & Conclusions**: Summary, key conclusions, recommendations
- **Current Understanding (Final)**: Consolidated insights - **Current Understanding (Final)**: Consolidated insights
- **Session Statistics**: Rounds completed, findings count, artifacts generated - **Session Statistics**: Rounds completed, findings count, artifacts generated, decision count
### Round Documentation Pattern ### Round Documentation Pattern
@@ -839,6 +916,13 @@ Each discussion round follows a consistent structure:
#### User Input #### User Input
What the user indicated they wanted to focus on What the user indicated they wanted to focus on
#### Decision Log
> **Decision**: [Description of direction/scope/approach decision made this round]
> - **Context**: [What triggered this decision]
> - **Options considered**: [Alternatives evaluated]
> - **Chosen**: [Selected approach] — **Reason**: [Rationale]
> - **Impact**: [Effect on analysis direction/conclusions]
#### Analysis Results #### Analysis Results
New findings from this round's analysis New findings from this round's analysis
- Finding 1 (evidence: file:line) - Finding 1 (evidence: file:line)
@@ -867,7 +951,7 @@ Remaining questions or areas for investigation
| Session folder conflict | Append timestamp suffix | Create unique folder and continue | | Session folder conflict | Append timestamp suffix | Create unique folder and continue |
| Quick execute: task fails | Record failure in execution-events.md | User can retry, skip, or abort | | Quick execute: task fails | Record failure in execution-events.md | User can retry, skip, or abort |
| Quick execute: verification fails | Mark criterion as unverified, continue | Note in events, manual check | | Quick execute: verification fails | Mark criterion as unverified, continue | Note in events, manual check |
| Quick execute: no recommendations | Cannot generate tasks.jsonl | Suggest using lite-plan instead | | Quick execute: no recommendations | Cannot generate .task/*.json | Suggest using lite-plan instead |
## Best Practices ## Best Practices
@@ -889,6 +973,7 @@ Remaining questions or areas for investigation
3. **Use Continue Mode**: Resume sessions to build on previous findings rather than starting over 3. **Use Continue Mode**: Resume sessions to build on previous findings rather than starting over
4. **Embrace Corrections**: Track wrong→right transformations as valuable learnings 4. **Embrace Corrections**: Track wrong→right transformations as valuable learnings
5. **Iterate Thoughtfully**: Each discussion round should meaningfully refine understanding 5. **Iterate Thoughtfully**: Each discussion round should meaningfully refine understanding
6. **Record Decisions Immediately**: Never defer recording — capture decisions as they happen using the Decision Record format. A decision not recorded in-the-moment is a decision lost
### Documentation Practices ### Documentation Practices
@@ -898,6 +983,7 @@ Remaining questions or areas for investigation
4. **Evolution Tracking**: Document how understanding changed across rounds 4. **Evolution Tracking**: Document how understanding changed across rounds
5. **Action Items**: Generate specific, actionable recommendations 5. **Action Items**: Generate specific, actionable recommendations
6. **Multi-Perspective Synthesis**: When using multiple perspectives, document convergent/conflicting themes 6. **Multi-Perspective Synthesis**: When using multiple perspectives, document convergent/conflicting themes
7. **Link Decisions to Outcomes**: When writing conclusions, explicitly reference which decisions led to which outcomes — this creates an auditable trail from initial scoping to final recommendations
## When to Use ## When to Use
@@ -911,7 +997,7 @@ Remaining questions or areas for investigation
**Use Quick Execute (Phase 5) when:** **Use Quick Execute (Phase 5) when:**
- Analysis conclusions contain clear, actionable recommendations - Analysis conclusions contain clear, actionable recommendations
- Context is already sufficient — no additional exploration needed - Context is already sufficient — no additional exploration needed
- Want a streamlined analyze → JSONL plan → direct execute pipeline - Want a streamlined analyze → .task/*.json plan → direct execute pipeline
- Tasks are relatively independent and can be executed serially - Tasks are relatively independent and can be executed serially
**Consider alternatives when:** **Consider alternatives when:**


@@ -1,463 +0,0 @@
---
name: brainstorm-to-cycle
description: Convert brainstorm session output to parallel-dev-cycle input with idea selection and context enrichment. Unified parameter format.
argument-hint: "--session=<id> [--idea=<index>] [--auto] [--launch]"
---
# Brainstorm to Cycle Adapter
## Overview
Bridge workflow that converts **brainstorm-with-file** output to **parallel-dev-cycle** input. Reads synthesis.json, allows user to select an idea, and formats it as an enriched TASK description.
**Core workflow**: Load Session → Select Idea → Format Task → Launch Cycle
## Inputs
| Argument | Required | Description |
|----------|----------|-------------|
| --session | Yes | Brainstorm session ID (e.g., `BS-rate-limiting-2025-01-28`) |
| --idea | No | Pre-select idea by index (0-based, from top_ideas) |
| --auto | No | Auto-select top-scored idea without confirmation |
| --launch | No | Auto-launch parallel-dev-cycle without preview |
## Output
Launches `/parallel-dev-cycle` with enriched TASK containing:
- Primary recommendation or selected idea
- Key strengths and challenges
- Suggested implementation steps
- Alternative approaches for reference
## Execution Process
```
Phase 1: Session Loading
├─ Validate session folder exists
├─ Read synthesis.json
├─ Parse top_ideas and recommendations
└─ Validate data structure
Phase 2: Idea Selection
├─ --auto mode → Select highest scored idea
├─ --idea=N → Select specified index
└─ Interactive → Present options, await selection
Phase 3: Task Formatting
├─ Build enriched task description
├─ Include context from brainstorm
└─ Generate parallel-dev-cycle command
Phase 4: Cycle Launch
├─ Confirm with user (unless --auto)
└─ Execute parallel-dev-cycle
```
## Implementation
### Phase 1: Session Loading
##### Step 0: Determine Project Root
Detect the project root so that `.workflow/` artifacts land in the correct location:
```bash
PROJECT_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
```
Prefer the repository root reported by git; for non-git projects, fall back to `pwd` for the current absolute path.
Store it as `{projectRoot}`; all subsequent `.workflow/` paths must be prefixed with it.
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const projectRoot = bash('git rev-parse --show-toplevel 2>/dev/null || pwd').trim()
// Parse arguments
const args = "$ARGUMENTS"
const sessionId = "$SESSION"
const ideaIndexMatch = args.match(/--idea=(\d+)/)
const preSelectedIdea = ideaIndexMatch ? parseInt(ideaIndexMatch[1]) : null
const isAutoMode = args.includes('--auto')
// Validate session
const sessionFolder = `${projectRoot}/.workflow/.brainstorm/${sessionId}`
const synthesisPath = `${sessionFolder}/synthesis.json`
const brainstormPath = `${sessionFolder}/brainstorm.md`
function fileExists(p) {
try { return bash(`test -f "${p}" && echo "yes"`).includes('yes') } catch { return false }
}
if (!fileExists(synthesisPath)) {
console.error(`
## Error: Session Not Found
Session ID: ${sessionId}
Expected path: ${synthesisPath}
**Available sessions**:
`)
bash(`ls -1 ${projectRoot}/.workflow/.brainstorm/ 2>/dev/null | head -10`)
return { status: 'error', message: 'Session not found' }
}
// Load synthesis
const synthesis = JSON.parse(Read(synthesisPath))
// Validate structure
if (!synthesis.top_ideas || synthesis.top_ideas.length === 0) {
console.error(`
## Error: No Ideas Found
The brainstorm session has no top_ideas.
Please complete the brainstorm workflow first.
`)
return { status: 'error', message: 'No ideas in synthesis' }
}
console.log(`
## Brainstorm Session Loaded
**Session**: ${sessionId}
**Topic**: ${synthesis.topic}
**Completed**: ${synthesis.completed}
**Ideas Found**: ${synthesis.top_ideas.length}
`)
```
---
### Phase 2: Idea Selection
```javascript
let selectedIdea = null
let selectionSource = ''
// Auto mode: select highest scored
if (isAutoMode) {
selectedIdea = synthesis.top_ideas.reduce((best, idea) =>
idea.score > best.score ? idea : best
)
selectionSource = 'auto (highest score)'
console.log(`
**Auto-selected**: ${selectedIdea.title} (Score: ${selectedIdea.score}/10)
`)
}
// Pre-selected by index
else if (preSelectedIdea !== null) {
if (preSelectedIdea >= synthesis.top_ideas.length) {
console.error(`
## Error: Invalid Idea Index
Requested: --idea=${preSelectedIdea}
Available: 0 to ${synthesis.top_ideas.length - 1}
`)
return { status: 'error', message: 'Invalid idea index' }
}
selectedIdea = synthesis.top_ideas[preSelectedIdea]
selectionSource = `index ${preSelectedIdea}`
console.log(`
**Pre-selected**: ${selectedIdea.title} (Index: ${preSelectedIdea})
`)
}
// Interactive selection
else {
// Display options
console.log(`
## Select Idea for Development
| # | Title | Score | Feasibility |
|---|-------|-------|-------------|
${synthesis.top_ideas.map((idea, i) =>
`| ${i} | ${idea.title.substring(0, 40)} | ${idea.score}/10 | ${idea.feasibility || 'N/A'} |`
).join('\n')}
**Primary Recommendation**: ${synthesis.recommendations?.primary?.substring(0, 60) || 'N/A'}
`)
// Build options for AskUser
const ideaOptions = synthesis.top_ideas.slice(0, 4).map((idea, i) => ({
label: `#${i}: ${idea.title.substring(0, 30)}`,
description: `Score: ${idea.score}/10 - ${idea.description?.substring(0, 50) || ''}`
}))
// Add primary recommendation option if different
if (synthesis.recommendations?.primary) {
ideaOptions.unshift({
label: "Primary Recommendation",
description: synthesis.recommendations.primary.substring(0, 60)
})
}
const selection = ASK_USER([{
id: "idea", type: "select",
prompt: "Which idea should be developed?",
options: ideaOptions
}]) // BLOCKS (wait for user response)
// Parse selection
if (selection.idea === "Primary Recommendation") {
// Use primary recommendation as task
selectedIdea = {
title: "Primary Recommendation",
description: synthesis.recommendations.primary,
key_strengths: synthesis.key_insights || [],
main_challenges: [],
next_steps: synthesis.follow_up?.filter(f => f.type === 'implementation').map(f => f.summary) || []
}
selectionSource = 'primary recommendation'
} else {
const match = selection.idea.match(/^#(\d+):/)
const idx = match ? parseInt(match[1]) : 0
selectedIdea = synthesis.top_ideas[idx]
selectionSource = `user selected #${idx}`
}
}
console.log(`
### Selected Idea
**Title**: ${selectedIdea.title}
**Source**: ${selectionSource}
**Description**: ${selectedIdea.description?.substring(0, 200) || 'N/A'}
`)
```
---
### Phase 3: Task Formatting
```javascript
// Build enriched task description
function formatTask(idea, synthesis) {
const sections = []
// Main objective
sections.push(`# Main Objective\n\n${idea.title}`)
// Description
if (idea.description) {
sections.push(`# Description\n\n${idea.description}`)
}
// Key strengths
if (idea.key_strengths?.length > 0) {
sections.push(`# Key Strengths\n\n${idea.key_strengths.map(s => `- ${s}`).join('\n')}`)
}
// Main challenges (important for RA agent)
if (idea.main_challenges?.length > 0) {
sections.push(`# Main Challenges to Address\n\n${idea.main_challenges.map(c => `- ${c}`).join('\n')}`)
}
// Recommended steps
if (idea.next_steps?.length > 0) {
sections.push(`# Recommended Implementation Steps\n\n${idea.next_steps.map((s, i) => `${i + 1}. ${s}`).join('\n')}`)
}
// Alternative approaches (for RA consideration)
if (synthesis.recommendations?.alternatives?.length > 0) {
sections.push(`# Alternative Approaches (for reference)\n\n${synthesis.recommendations.alternatives.map(a => `- ${a}`).join('\n')}`)
}
// Key insights from brainstorm
if (synthesis.key_insights?.length > 0) {
const relevantInsights = synthesis.key_insights.slice(0, 3)
sections.push(`# Key Insights from Brainstorm\n\n${relevantInsights.map(i => `- ${i}`).join('\n')}`)
}
// Source reference
sections.push(`# Source\n\nBrainstorm Session: ${synthesis.session_id}\nTopic: ${synthesis.topic}`)
return sections.join('\n\n')
}
const enrichedTask = formatTask(selectedIdea, synthesis)
// Display formatted task
console.log(`
## Formatted Task for parallel-dev-cycle
\`\`\`markdown
${enrichedTask}
\`\`\`
`)
// Save task to session folder for reference
Write(`${sessionFolder}/cycle-task.md`, `# Generated Task\n\n**Generated**: ${getUtc8ISOString()}\n**Idea**: ${selectedIdea.title}\n**Selection**: ${selectionSource}\n\n---\n\n${enrichedTask}`)
```
---
### Phase 4: Cycle Launch
```javascript
// Confirm launch (unless auto mode)
let shouldLaunch = isAutoMode
if (!isAutoMode) {
const confirmation = ASK_USER([{
id: "launch", type: "select",
prompt: "Launch parallel-dev-cycle with this task?",
options: [
{ label: "Yes, launch cycle (Recommended)", description: "Start parallel-dev-cycle with enriched task" },
{ label: "No, just save task", description: "Save formatted task for manual use" }
]
}]) // BLOCKS (wait for user response)
shouldLaunch = confirmation.launch.includes("Yes")
}
if (shouldLaunch) {
console.log(`
## Launching parallel-dev-cycle
**Task**: ${selectedIdea.title}
**Source Session**: ${sessionId}
`)
// Escape task for command line
const escapedTask = enrichedTask
.replace(/\\/g, '\\\\')
.replace(/"/g, '\\"')
.replace(/\$/g, '\\$')
.replace(/`/g, '\\`')
// Launch parallel-dev-cycle
// Note: In actual execution, this would invoke the skill
console.log(`
### Cycle Command
\`\`\`bash
/parallel-dev-cycle TASK="${escapedTask.substring(0, 100)}..."
\`\`\`
**Full task saved to**: ${sessionFolder}/cycle-task.md
`)
// Return success with cycle trigger
return {
status: 'success',
action: 'launch_cycle',
session_id: sessionId,
idea: selectedIdea.title,
task_file: `${sessionFolder}/cycle-task.md`,
cycle_command: `/parallel-dev-cycle TASK="${enrichedTask}"`
}
} else {
console.log(`
## Task Saved (Not Launched)
**Task file**: ${sessionFolder}/cycle-task.md
To launch manually:
\`\`\`bash
/parallel-dev-cycle TASK="$(cat ${sessionFolder}/cycle-task.md)"
\`\`\`
`)
return {
status: 'success',
action: 'saved_only',
session_id: sessionId,
task_file: `${sessionFolder}/cycle-task.md`
}
}
```
---
## Session Files
After execution:
```
{projectRoot}/.workflow/.brainstorm/{session-id}/
├── brainstorm.md # Original brainstorm
├── synthesis.json # Synthesis data (input)
├── perspectives.json # Perspectives data
├── ideas/ # Idea deep-dives
└── cycle-task.md # ⭐ Generated task (output)
```
## Task Format
The generated task includes:
| Section | Purpose | Used By |
|---------|---------|---------|
| Main Objective | Clear goal statement | RA: Primary requirement |
| Description | Detailed explanation | RA: Requirement context |
| Key Strengths | Why this approach | RA: Design decisions |
| Main Challenges | Known issues to address | RA: Edge cases, risks |
| Implementation Steps | Suggested approach | EP: Planning guidance |
| Alternatives | Other valid approaches | RA: Fallback options |
| Key Insights | Learnings from brainstorm | RA: Domain context |
## Error Handling
| Situation | Action |
|-----------|--------|
| Session not found | List available sessions, abort |
| synthesis.json missing | Suggest completing brainstorm first |
| No top_ideas | Report error, abort |
| Invalid --idea index | Show valid range, abort |
| Task too long | Truncate with reference to file |
## Examples
### Auto Mode (Quick Launch)
```bash
/brainstorm-to-cycle SESSION="BS-rate-limiting-2025-01-28" --auto
# → Selects highest-scored idea
# → Launches parallel-dev-cycle immediately
```
### Pre-Selected Idea
```bash
/brainstorm-to-cycle SESSION="BS-auth-system-2025-01-28" --idea=2
# → Selects top_ideas[2]
# → Confirms before launch
```
### Interactive Selection
```bash
/brainstorm-to-cycle SESSION="BS-caching-2025-01-28"
# → Displays all ideas with scores
# → User selects from options
# → Confirms and launches
```
## Integration Flow
```
brainstorm-with-file
synthesis.json
brainstorm-to-cycle ◄─── This command
enriched TASK
parallel-dev-cycle
RA → EP → CD → VAS
```
---
**Now execute brainstorm-to-cycle** with session: $SESSION


@@ -21,7 +21,7 @@ Stateless iterative development loop using Codex single-agent deep interaction p
| loop-v2-routes.ts (Control Plane) | | loop-v2-routes.ts (Control Plane) |
| | | |
| State: {projectRoot}/.workflow/.loop/{loopId}.json (MASTER) | | State: {projectRoot}/.workflow/.loop/{loopId}.json (MASTER) |
| Tasks: {projectRoot}/.workflow/.loop/{loopId}.tasks.jsonl | | Tasks: {projectRoot}/.workflow/.loop/{loopId}/.task/*.json |
| | | |
| /start -> Trigger ccw-loop skill with --loop-id | | /start -> Trigger ccw-loop skill with --loop-id |
| /pause -> Set status='paused' (skill checks before action) | | /pause -> Set status='paused' (skill checks before action) |
@@ -60,13 +60,13 @@ Stateless iterative development loop using Codex single-agent deep interaction p
## Prep Package Integration ## Prep Package Integration
When `prep-package.json` exists at `{projectRoot}/.workflow/.loop/prep-package.json`, Phase 1 consumes it to: When `prep-package.json` exists at `{projectRoot}/.workflow/.loop/prep-package.json`, Phase 1 consumes it to:
- Load pre-built task list from `prep-tasks.jsonl` instead of generating tasks from scratch - Load pre-built task list from `.task/*.json` files instead of generating tasks from scratch
- Apply auto-loop config (max_iterations, timeout) - Apply auto-loop config (max_iterations, timeout)
- Preserve source provenance and convergence criteria from upstream planning/analysis skills - Preserve source provenance and convergence criteria from upstream planning/analysis skills
Prep packages are generated by the interactive prompt `/prompts:prep-loop`, which accepts JSONL from: Prep packages are generated by the interactive prompt `/prompts:prep-loop`, which accepts JSONL from:
- `collaborative-plan-with-file` (tasks.jsonl) - `collaborative-plan-with-file` (.task/*.json)
- `analyze-with-file` (tasks.jsonl) - `analyze-with-file` (.task/*.json)
- `brainstorm-to-cycle` (cycle-task.md → converted to task format) - `brainstorm-to-cycle` (cycle-task.md → converted to task format)
See [phases/00-prep-checklist.md](phases/00-prep-checklist.md) for schema and validation rules. See [phases/00-prep-checklist.md](phases/00-prep-checklist.md) for schema and validation rules.
@@ -151,7 +151,8 @@ close_agent → return finalState
``` ```
{projectRoot}/.workflow/.loop/ {projectRoot}/.workflow/.loop/
├── {loopId}.json # Master state file (API + Skill shared) ├── {loopId}.json # Master state file (API + Skill shared)
├── {loopId}.tasks.jsonl # Task list (API managed) ├── {loopId}/.task/ # Task files directory (API managed)
│ └── TASK-{id}.json # Individual task files (task-schema.json)
└── {loopId}.progress/ # Skill progress files └── {loopId}.progress/ # Skill progress files
├── develop.md # Development progress timeline ├── develop.md # Development progress timeline
├── debug.md # Understanding evolution document ├── debug.md # Understanding evolution document


@@ -20,7 +20,7 @@ Schema reference for `prep-package.json` consumed by ccw-loop Phase 1. Generated
"source": { "source": {
"tool": "collaborative-plan-with-file | analyze-with-file | brainstorm-to-cycle | manual", "tool": "collaborative-plan-with-file | analyze-with-file | brainstorm-to-cycle | manual",
"session_id": "string", "session_id": "string",
"jsonl_path": "absolute path to original JSONL", "task_dir": "absolute path to source .task/ directory",
"task_count": "number", "task_count": "number",
"tasks_with_convergence": "number" "tasks_with_convergence": "number"
}, },
@@ -40,9 +40,9 @@ Schema reference for `prep-package.json` consumed by ccw-loop Phase 1. Generated
} }
``` ```
## prep-tasks.jsonl Schema ## .task/*.json Schema (task-schema.json)
One task per line, each in ccw-loop `develop.tasks[]` format with extended fields: One task per file in `.task/` directory, each following `task-schema.json` with ccw-loop extended fields:
```json ```json
{ {
@@ -71,8 +71,8 @@ One task per line, each in ccw-loop `develop.tasks[]` format with extended field
| 2 | target_skill | `=== "ccw-loop"` | Skip prep, use default INIT | | 2 | target_skill | `=== "ccw-loop"` | Skip prep, use default INIT |
| 3 | project_root | Matches current `projectRoot` | Skip prep, warn mismatch | | 3 | project_root | Matches current `projectRoot` | Skip prep, warn mismatch |
| 4 | freshness | `generated_at` within 24h | Skip prep, warn stale | | 4 | freshness | `generated_at` within 24h | Skip prep, warn stale |
| 5 | tasks file | `prep-tasks.jsonl` exists and readable | Skip prep, use default INIT | | 5 | tasks dir | `.task/` directory exists with *.json files | Skip prep, use default INIT |
| 6 | tasks content | At least 1 valid task line in JSONL | Skip prep, use default INIT | | 6 | tasks content | At least 1 valid task JSON in `.task/` | Skip prep, use default INIT |
## Integration Points ## Integration Points
@@ -89,9 +89,9 @@ if (fs.existsSync(prepPath)) {
if (checks.valid) { if (checks.valid) {
prepPackage = raw prepPackage = raw
// Load pre-built tasks from prep-tasks.jsonl // Load pre-built tasks from .task/*.json
const tasksPath = `${projectRoot}/.workflow/.loop/prep-tasks.jsonl` const taskDir = `${projectRoot}/.workflow/.loop/.task`
const prepTasks = loadPrepTasks(tasksPath) const prepTasks = loadPrepTasks(taskDir)
// → Inject into state.skill_state.develop.tasks // → Inject into state.skill_state.develop.tasks
// → Set max_iterations from auto_loop config // → Set max_iterations from auto_loop config
} else { } else {


@@ -46,14 +46,14 @@ if (fs.existsSync(prepPath)) {
prepPackage = raw prepPackage = raw
// Load pre-built tasks // Load pre-built tasks
const tasksPath = `${projectRoot}/.workflow/.loop/prep-tasks.jsonl` const taskDir = `${projectRoot}/.workflow/.loop/.task`
prepTasks = loadPrepTasks(tasksPath) prepTasks = loadPrepTasks(taskDir)
if (prepTasks && prepTasks.length > 0) { if (prepTasks && prepTasks.length > 0) {
console.log(`✓ Prep package loaded: ${prepTasks.length} tasks from ${prepPackage.source.tool}`) console.log(`✓ Prep package loaded: ${prepTasks.length} tasks from ${prepPackage.source.tool}`)
console.log(` Checks passed: ${checks.passed.join(', ')}`) console.log(` Checks passed: ${checks.passed.join(', ')}`)
} else { } else {
console.warn(` Prep tasks file empty or invalid, falling back to default INIT`) console.warn(`Warning: Prep tasks directory empty or invalid, falling back to default INIT`)
prepPackage = null prepPackage = null
prepTasks = null prepTasks = null
} }
@@ -103,12 +103,13 @@ function validateLoopPrepPackage(prep, projectRoot) {
failures.push(`prep-package is ${Math.round(hoursSince)}h old (max 24h), may be stale`) failures.push(`prep-package is ${Math.round(hoursSince)}h old (max 24h), may be stale`)
} }
// Check 5: prep-tasks.jsonl must exist // Check 5: .task/ directory must exist with task files
const tasksPath = `${projectRoot}/.workflow/.loop/prep-tasks.jsonl` const taskDir = `${projectRoot}/.workflow/.loop/.task`
if (fs.existsSync(tasksPath)) { const taskFiles = Glob(`${taskDir}/*.json`)
passed.push('prep-tasks.jsonl exists') if (fs.existsSync(taskDir) && taskFiles.length > 0) {
passed.push(`.task/ exists (${taskFiles.length} files)`)
} else { } else {
failures.push('prep-tasks.jsonl not found') failures.push('.task/ directory not found or empty')
} }
// Check 6: task count > 0 // Check 6: task count > 0
@@ -126,24 +127,24 @@ function validateLoopPrepPackage(prep, projectRoot) {
} }
/** /**
* Load pre-built tasks from prep-tasks.jsonl. * Load pre-built tasks from .task/*.json directory.
* Returns array of task objects or null on failure. * Returns array of task objects or null on failure.
*/ */
function loadPrepTasks(tasksPath) { function loadPrepTasks(taskDir) {
if (!fs.existsSync(tasksPath)) return null if (!fs.existsSync(taskDir)) return null
const content = Read(tasksPath) const taskFiles = Glob(`${taskDir}/*.json`).sort()
const lines = content.trim().split('\n').filter(l => l.trim())
const tasks = [] const tasks = []
for (const line of lines) { for (const filePath of taskFiles) {
try { try {
const task = JSON.parse(line) const content = Read(filePath)
const task = JSON.parse(content)
if (task.id && task.description) { if (task.id && task.description) {
tasks.push(task) tasks.push(task)
} }
} catch (e) { } catch (e) {
console.warn(` Skipping invalid task line: ${e.message}`) console.warn(`Warning: Skipping invalid task file ${filePath}: ${e.message}`)
} }
} }


@@ -29,11 +29,23 @@ Unified issue discovery and creation skill covering three entry points: manual i
Issue Discoveries Discoveries │ Issue Discoveries Discoveries │
(registered) (export) (export) │ (registered) (export) (export) │
│ │ │ │ │ │ │ │
└───────────┴─────────── │ ├───────────
↓ │
issue-resolve (plan/queue) ┌───────────┐
Phase 4 │
/issue:execute │Quick Plan │
│ │& Execute │ │
│ └─────┬─────┘ │
│ ↓ │
│ .task/*.json │
│ ↓ │
│ Direct Execution │
│ │ │
└───────────┴──────────────────────┘
↓ (fallback/remaining)
issue-resolve (plan/queue)
/issue:execute
``` ```
## Key Design Principles ## Key Design Principles
@@ -107,6 +119,7 @@ Post-Phase:
| Phase 1 | [phases/01-issue-new.md](phases/01-issue-new.md) | Action = Create New | Create issue from GitHub URL or text description | | Phase 1 | [phases/01-issue-new.md](phases/01-issue-new.md) | Action = Create New | Create issue from GitHub URL or text description |
| Phase 2 | [phases/02-discover.md](phases/02-discover.md) | Action = Discover | Multi-perspective issue discovery (bug, security, test, etc.) | | Phase 2 | [phases/02-discover.md](phases/02-discover.md) | Action = Discover | Multi-perspective issue discovery (bug, security, test, etc.) |
| Phase 3 | [phases/03-discover-by-prompt.md](phases/03-discover-by-prompt.md) | Action = Discover by Prompt | Prompt-driven iterative exploration with Gemini planning | | Phase 3 | [phases/03-discover-by-prompt.md](phases/03-discover-by-prompt.md) | Action = Discover by Prompt | Prompt-driven iterative exploration with Gemini planning |
| Phase 4 | [phases/04-quick-execute.md](phases/04-quick-execute.md) | Post-Phase = Quick Plan & Execute | Convert high-confidence findings to tasks and execute directly |
## Core Rules ## Core Rules
@@ -321,13 +334,15 @@ ASK_USER([{
ASK_USER([{ ASK_USER([{
id: "next_after_discover", id: "next_after_discover",
type: "select", type: "select",
prompt: "Discovery complete. What next?", prompt: `Discovery complete: ${findings.length} findings, ${executableFindings.length} executable. What next?`,
options: [ options: [
{ label: "Quick Plan & Execute (Recommended)", description: `Fix ${executableFindings.length} high-confidence findings directly` },
{ label: "Export to Issues", description: "Convert discoveries to issues" }, { label: "Export to Issues", description: "Convert discoveries to issues" },
{ label: "Plan Solutions", description: "Plan solutions for exported issues via issue-resolve" }, { label: "Plan Solutions", description: "Plan solutions for exported issues via issue-resolve" },
{ label: "Done", description: "Exit workflow" } { label: "Done", description: "Exit workflow" }
] ]
}]); // BLOCKS (wait for user response) }]); // BLOCKS (wait for user response)
// If "Quick Plan & Execute" → Read phases/04-quick-execute.md, execute
``` ```
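The `executableFindings` count in the prompt above comes from the same filter Phase 4 applies; a minimal sketch, assuming each finding carries `confidence` and `priority` fields:

```javascript
// High-confidence, high-priority findings that Quick Plan & Execute can act on directly.
const executableFindings = findings.filter(f =>
  (f.confidence || 0) >= 0.7 &&
  ['critical', 'high'].includes(f.priority)
)
```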
## Related Skills & Commands ## Related Skills & Commands


@@ -0,0 +1,241 @@
# Phase 4: Quick Plan & Execute
> Source: analysis session `ANL-issue-discover规划执行能力-2026-02-11`
## Overview
Converts high-confidence discovery findings directly into `.task/*.json` files and executes them inline.
Skips issue registration and the full planning pipeline; suited to problems that are clearly fixable.
**Core workflow**: Load Findings → Filter → Convert to Tasks → Pre-Execution → User Confirmation → Execute → Finalize
**Trigger**: after Phase 2/3 completes, the user selects "Quick Plan & Execute"
**Output Directory**: inherits `{outputDir}` from the discovery session
**Filter**: `confidence ≥ 0.7 AND priority ∈ {critical, high}`
## Prerequisites
- Phase 2 (Discover) or Phase 3 (Discover by Prompt) has completed
- Discovery output exists under `{outputDir}` (perspectives/*.json or discovery-issues.jsonl)
## Auto Mode
When `--yes` or `-y`: auto-filter → auto-generate tasks → auto-confirm execution → auto-skip failures → auto-select Done.
## Execution Steps
### Step 4.1: Load & Filter Findings
**Load priority** (tried in order):
```
1. perspectives/*.json — Phase 2 multi-perspective findings (each file contains findings[])
2. discovery-issues.jsonl — Phase 2/3 aggregated output (one JSON finding per line)
3. iterations/*.json — Phase 3 iteration output (each file contains findings[])
→ If all are empty: report "No discoveries found. Run discover first." and exit
```
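A minimal sketch of this fallback order, assuming plain Node.js `fs`/`path` in place of the skill's `Read`/`Glob` helpers and the file layouts listed above:

```javascript
const fs = require('fs')
const path = require('path')

// Try each source in priority order; return the first non-empty findings list.
function loadAllFindings(outputDir) {
  const findings = []

  // 1. perspectives/*.json — each file contains a findings[] array
  const perspectivesDir = path.join(outputDir, 'perspectives')
  if (fs.existsSync(perspectivesDir)) {
    for (const name of fs.readdirSync(perspectivesDir).filter(f => f.endsWith('.json'))) {
      const doc = JSON.parse(fs.readFileSync(path.join(perspectivesDir, name), 'utf8'))
      findings.push(...(doc.findings || []))
    }
  }
  if (findings.length > 0) return findings

  // 2. discovery-issues.jsonl — one JSON finding per line
  const jsonlPath = path.join(outputDir, 'discovery-issues.jsonl')
  if (fs.existsSync(jsonlPath)) {
    const lines = fs.readFileSync(jsonlPath, 'utf8').split('\n').filter(l => l.trim())
    findings.push(...lines.map(l => JSON.parse(l)))
  }
  if (findings.length > 0) return findings

  // 3. iterations/*.json — each file contains a findings[] array
  const iterationsDir = path.join(outputDir, 'iterations')
  if (fs.existsSync(iterationsDir)) {
    for (const name of fs.readdirSync(iterationsDir).filter(f => f.endsWith('.json'))) {
      const doc = JSON.parse(fs.readFileSync(path.join(iterationsDir, name), 'utf8'))
      findings.push(...(doc.findings || []))
    }
  }

  if (findings.length === 0) throw new Error('No discoveries found. Run discover first.')
  return findings
}
```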
**Filter rule**:
```
executableFindings = allFindings.filter(f =>
(f.confidence || 0) >= 0.7 &&
['critical', 'high'].includes(f.priority)
)
```
- If there are 0 executable findings → show "No executable findings (all below threshold)" and suggest the "Export to Issues" path
- If there are more than 10 findings → ASK_USER to confirm executing all or selecting a subset (Auto mode: execute all)
**Same-file aggregation**:
```
Group by finding.file:
- 1 finding in a file → generate 1 standalone task
- 2+ findings in the same file → merge into 1 task (mergeFindingsToTask)
```
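A sketch of the filter plus same-file grouping, assuming the finding fields shown above; `convertFindingToTask` and `mergeFindingsToTask` are the conversions described in Step 4.2:

```javascript
// Keep only high-confidence, high-priority findings.
function filterExecutable(allFindings) {
  return allFindings.filter(f =>
    (f.confidence || 0) >= 0.7 &&
    ['critical', 'high'].includes(f.priority)
  )
}

// Group executable findings by target file, then convert each group into one task.
function groupAndConvert(executableFindings, discoveryId) {
  const byFile = new Map()
  for (const finding of executableFindings) {
    const key = finding.file || 'unknown'
    if (!byFile.has(key)) byFile.set(key, [])
    byFile.get(key).push(finding)
  }

  const tasks = []
  let seq = 1
  for (const [file, group] of byFile) {
    const id = `TASK-${String(seq++).padStart(3, '0')}`
    tasks.push(group.length === 1
      ? convertFindingToTask(group[0], id, discoveryId)    // 1 finding → standalone task
      : mergeFindingsToTask(group, file, id, discoveryId)) // 2+ findings → merged task
  }
  return tasks
}
```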
### Step 4.2: Generate .task/*.json
For each filtered finding (or file group), generate a task file in task-schema.json format.
#### Single-Finding Conversion (convertFindingToTask)
```
Finding field → Task-Schema field → Conversion logic
─────────────────────────────────────────────────────────────
id (dsc-bug-001-...) → id (TASK-001) → Renumber: TASK-{sequential:3}
title → title → Use directly
description+impact+rec → description → Concatenate: "{description}\n\nImpact: {impact}\nRecommendation: {recommendation}"
(none) → depends_on → Default []
(derived) → convergence → Derived from the perspective/category template (see table below)
suggested_issue.type → type → Map: bug→fix, feature→feature, enhancement→enhancement, refactor→refactor, test→testing
priority → priority → Use directly (already matches enum)
file + line → files[] → [{path: file, action: "modify", changes: [recommendation], target: "line:{line}"}]
snippet + file:line → evidence[] → ["{file}:{line}", snippet]
recommendation → implementation[] → [recommendation]
(fixed) → source → {tool: "issue-discover", session_id: discoveryId, original_id: finding.id}
```
**Type mapping**:
```
suggested_issue.type → task type:
bug → fix, feature → feature, enhancement → enhancement,
refactor → refactor, test → testing, docs → enhancement
perspective fallback (when suggested_issue.type is absent):
bug/security → fix, test → testing, quality/maintainability/best-practices → refactor,
performance/ux → enhancement
```
**Effort derivation**:
```
critical priority → large
high priority → medium
otherwise → small
```
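Putting the field mapping, type mapping, and effort derivation together, a sketch of `convertFindingToTask`; field names follow the tables above, and `deriveConvergence` is sketched under the Convergence Templates section:

```javascript
const TYPE_MAP = {
  bug: 'fix', feature: 'feature', enhancement: 'enhancement',
  refactor: 'refactor', test: 'testing', docs: 'enhancement'
}
const PERSPECTIVE_FALLBACK = {
  bug: 'fix', security: 'fix', test: 'testing',
  quality: 'refactor', maintainability: 'refactor', 'best-practices': 'refactor',
  performance: 'enhancement', ux: 'enhancement'
}

function convertFindingToTask(finding, taskId, discoveryId) {
  const type = TYPE_MAP[finding.suggested_issue?.type] ||
    PERSPECTIVE_FALLBACK[finding.perspective] || 'fix'
  const effort = finding.priority === 'critical' ? 'large'
    : finding.priority === 'high' ? 'medium' : 'small'

  return {
    id: taskId,
    title: finding.title,
    description: `${finding.description}\n\nImpact: ${finding.impact}\nRecommendation: ${finding.recommendation}`,
    depends_on: [],
    convergence: deriveConvergence(finding),
    type,
    priority: finding.priority,
    effort,
    files: [{
      path: finding.file,
      action: 'modify',
      changes: [finding.recommendation],
      target: `line:${finding.line}`
    }],
    evidence: [`${finding.file}:${finding.line}`, finding.snippet],
    implementation: [finding.recommendation],
    source: { tool: 'issue-discover', session_id: discoveryId, original_id: finding.id }
  }
}
```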
#### Merged-Finding Conversion (mergeFindingsToTask)
2+ findings in the same file merge into one task:
```
1. Sort by priority: critical > high > medium > low
2. Use the priority of the highest-priority finding as the task priority
3. Use the type of the highest-priority finding as the task type
4. title: "Fix {findings.length} issues in {basename(file)}"
5. description: list each finding by number (### Finding N: title + description + impact + recommendation + line)
6. convergence.criteria: generate one criterion per finding
7. verification: pick the strictest verification command (jest > eslint > tsc > Manual)
8. definition_of_done: "Fix the {N} issues in {file}: {categories.join(', ')}"
9. effort: 1 finding = original, 2 = medium, 3+ = large
10. source.original_id: findings.map(f => f.id).join(',')
```
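A sketch of the merge path, building on the `TYPE_MAP`, `PERSPECTIVE_FALLBACK`, and `deriveConvergence` helpers from the earlier sketches and the ordering rules above:

```javascript
const PRIORITY_RANK = { critical: 0, high: 1, medium: 2, low: 3 }

// Pick the strictest verification among the merged findings: jest > eslint > tsc > Manual.
function pickStrictestVerification(verifications) {
  for (const prefix of ['npx jest', 'npx eslint', 'npx tsc']) {
    const match = verifications.find(v => v && v.startsWith(prefix))
    if (match) return match
  }
  return verifications.find(v => v && v.startsWith('Manual:')) || 'Manual: review the change'
}

function mergeFindingsToTask(findings, file, taskId, discoveryId) {
  const sorted = [...findings].sort((a, b) => PRIORITY_RANK[a.priority] - PRIORITY_RANK[b.priority])
  const top = sorted[0]
  const perFinding = sorted.map(f => deriveConvergence(f))
  const basename = file.split('/').pop()

  return {
    id: taskId,
    title: `Fix ${findings.length} issues in ${basename}`,
    description: sorted.map((f, i) =>
      `### Finding ${i + 1}: ${f.title}\n${f.description}\nImpact: ${f.impact}\nRecommendation: ${f.recommendation}\nLine: ${f.line}`
    ).join('\n\n'),
    depends_on: [],
    type: TYPE_MAP[top.suggested_issue?.type] || PERSPECTIVE_FALLBACK[top.perspective] || 'fix',
    priority: top.priority,
    effort: findings.length >= 3 ? 'large' : 'medium',
    files: [{ path: file, action: 'modify', changes: sorted.map(f => f.recommendation) }],
    convergence: {
      criteria: perFinding.flatMap(c => c.criteria),   // one criterion set per finding
      verification: pickStrictestVerification(perFinding.map(c => c.verification)),
      definition_of_done: `Fix ${findings.length} issues in ${file}: ${[...new Set(sorted.map(f => f.category))].join(', ')}`
    },
    source: { tool: 'issue-discover', session_id: discoveryId, original_id: findings.map(f => f.id).join(',') }
  }
}
```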
#### Convergence Templates (derived by perspective/category)
| Perspective | criteria template | verification | definition_of_done |
|-------------|--------------|-------------|-------------------|
| **bug** | "Fix the {category} issue at {file}:{line}", "Related module tests pass" | `npx tsc --noEmit` | "Eliminate the {impact} risk" |
| **security** | "Fix the {category} vulnerability in {file}", "Security checks pass" | `npx eslint {file} --rule 'security/*'` | "Eliminate the {impact} security risk" |
| **test** | "Add tests covering the {file}:{line} scenario", "New tests pass" | `npx jest --testPathPattern={testFile}` | "Improve test coverage of the {file} module" |
| **quality** | "Refactor {file}:{line} to reduce {category}", "Lint checks pass" | `npx eslint {file}` | "Improve code {category}" |
| **performance** | "Optimize the {category} issue at {file}:{line}", "No performance regression" | `npx tsc --noEmit` | "Improve performance for {impact}" |
| **maintainability** | "Refactor {file}:{line} to improve {category}", "Build passes" | `npx tsc --noEmit` | "Reduce inter-module {category}" |
| **ux** | "Improve the {category} at {file}:{line}", "UI test verification" | `Manual: check UI behavior` | "Improve user-perceived {category}" |
| **best-practices** | "Correct the {category} at {file}:{line}", "Lint passes" | `npx eslint {file}` | "Conform to {category} best practices" |
**Low-confidence handling**: for findings with confidence < 0.8, prefix the verification command with `Manual: `
**Output**: write `{outputDir}/.task/TASK-{seq}.json`; validate that convergence is non-empty and not vague.
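A sketch of the template derivation for two of the perspectives above, including the `Manual: ` prefix for low-confidence findings (the remaining perspectives follow the same pattern):

```javascript
function deriveConvergence(finding) {
  const { file, line, category, impact, confidence } = finding

  const templates = {
    bug: {
      criteria: [`Fix the ${category} issue at ${file}:${line}`, 'Related module tests pass'],
      verification: 'npx tsc --noEmit',
      definition_of_done: `Eliminate the ${impact} risk`
    },
    quality: {
      criteria: [`Refactor ${file}:${line} to reduce ${category}`, 'Lint checks pass'],
      verification: `npx eslint ${file}`,
      definition_of_done: `Improve code ${category}`
    }
    // ...security, test, performance, maintainability, ux, best-practices as in the table above
  }

  const base = templates[finding.perspective] || {
    // Fallback when no template or recommendation applies
    criteria: [`Review and fix ${category} in ${file}:${line}`],
    verification: 'Manual: review the change',
    definition_of_done: `Resolve the ${category} finding in ${file}`
  }

  // Low-confidence findings get a Manual: prefix so the executor does not auto-run the command.
  if ((confidence ?? 0.5) < 0.8 && !base.verification.startsWith('Manual:')) {
    base.verification = `Manual: ${base.verification}`
  }
  return base
}
```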
### Step 4.3: Pre-Execution Analysis
> Reference: analyze-with-file/EXECUTE.md Step 2-3
Reuse the Pre-Execution logic from EXECUTE.md:
1. **Dependency check**: verify that every `depends_on` reference exists
2. **Cycle detection**: if acyclic, a topological sort determines the execution order (see the sketch below)
3. **File conflict analysis**: check whether multiple tasks modify the same file (same-file findings are already aggregated; this detects cross-task conflicts)
4. **Generate execution.md**: task list, execution order, conflict report
5. **Generate execution-events.md**: empty event log, filled in as execution proceeds
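A sketch of the dependency check and cycle detection (Kahn's algorithm) that yields the execution order:

```javascript
// Validate depends_on references and compute a topological execution order.
// Throws on missing references or cycles.
function buildExecutionOrder(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const indegree = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))

  for (const task of tasks) {
    for (const dep of task.depends_on || []) {
      if (!byId.has(dep)) throw new Error(`${task.id} depends on unknown task ${dep}`)
      indegree.set(task.id, indegree.get(task.id) + 1)
      dependents.get(dep).push(task.id)
    }
  }

  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length > 0) {
    const id = queue.shift()
    order.push(id)
    for (const next of dependents.get(id)) {
      indegree.set(next, indegree.get(next) - 1)
      if (indegree.get(next) === 0) queue.push(next)
    }
  }

  if (order.length !== tasks.length) throw new Error('Circular dependency detected among tasks')
  return order
}
```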
### Step 4.4: User Confirmation
Display the task summary:
```
Quick Execute Summary:
- Total findings: {allFindings.length}
- Executable (filtered): {executableFindings.length}
- Tasks generated: {tasks.length}
- File conflicts: {conflicts.length}
```
ASK_USER:
```javascript
ASK_USER([{
id: "confirm_execute",
type: "select",
prompt: `${tasks.length} tasks ready. Start execution?`,
options: [
{ label: "Start Execution", description: "Execute all tasks" },
{ label: "Adjust Filter", description: "Change confidence/priority threshold" },
{ label: "Cancel", description: "Skip execution, return to post-phase options" }
]
}]);
// Auto mode: Start Execution
```
- "Adjust Filter" → 重新 ASK_USER 输入 confidence 和 priority 阈值,返回 Step 4.1
- "Cancel" → 退出 Phase 4
### Step 4.5: Direct Inline Execution
> Reference: analyze-with-file/EXECUTE.md Step 5
Execute tasks one by one (in topological order):
```
for each task in sortedTasks:
1. Read target file(s)
2. Analyze current state vs task.description
3. Apply changes (Edit/Write)
4. Verify convergence:
- Execute task.convergence.verification command
- Check criteria fulfillment
5. Record event to execution-events.md:
- TASK_START → TASK_COMPLETE / TASK_FAILED
6. Update .task/TASK-{id}.json _execution status
7. If failed:
- Auto mode: Skip & Continue
- Interactive: ASK_USER → Retry / Skip / Abort
```
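A sketch of the convergence check in step 4, assuming Node's `child_process` in place of the skill's Bash helper; commands prefixed with `Manual: ` are never executed automatically:

```javascript
const { execSync } = require('child_process')

// Run the task's verification command and report per-criterion status.
// This simplification ties all criteria to the command result; manual review can still override.
function verifyConvergence(task) {
  const { criteria, verification } = task.convergence
  if (!verification || verification.startsWith('Manual:')) {
    return { passed: false, manual: true, verified: criteria.map(() => false) }
  }
  try {
    execSync(verification, { stdio: 'pipe', timeout: 120000 })
    return { passed: true, manual: false, verified: criteria.map(() => true) }
  } catch (err) {
    return { passed: false, manual: false, verified: criteria.map(() => false), error: String(err.message || err) }
  }
}
```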
**Optional auto-commit**: after each successful task, run `git add {files} && git commit -m "fix: {task.title}"`
### Step 4.6: Finalize
> Reference: analyze-with-file/EXECUTE.md Step 6-7
1. **Update execution.md**: execution statistics (succeeded/failed/skipped)
2. **Update .task/*.json**: `_execution.status` = completed/failed/skipped
3. **Post-Execute options**:
```javascript
// Compute findings that were not executed
const remainingFindings = allFindings.filter(f => !executedFindingIds.has(f.id))
ASK_USER([{
id: "post_quick_execute",
type: "select",
prompt: `Quick Execute: ${completedCount}/${tasks.length} succeeded. ${remainingFindings.length} findings not executed.`,
options: [
{ label: "Retry Failed", description: `Re-execute ${failedCount} failed tasks` },
{ label: "Export Remaining", description: `Export ${remainingFindings.length} remaining findings to issues` },
{ label: "View Events", description: "Display execution-events.md" },
{ label: "Done", description: "End workflow" }
]
}]);
// Auto mode: Done
```
**"Export Remaining" 逻辑**: 将未执行的 findings 通过现有 Phase 2/3 的 "Export to Issues" 流程注册为 issues进入 issue-resolve 完整管道。
## Edge Cases
| Edge Case | Handling Strategy |
|---------|---------|
| 0 executable findings | Show "No executable findings", suggest Export to Issues |
| Only 1 finding | Generate a single TASK-001.json as usual, simplify the confirmation dialog |
| More than 10 findings | ASK_USER to confirm executing all or selecting a subset |
| Finding missing recommendation | Criteria degrade to "Review and fix {category} in {file}:{line}" |
| Finding missing confidence | Default to confidence=0.5, which falls below the filter threshold → excluded |
| Discovery output missing | Report "No discoveries found. Run discover first." |
| .task/ directory already exists | ASK_USER to append (TASK-{max+1}) or overwrite |
| File modified externally during execution | Convergence verification detects the difference and marks it FAIL |
| All tasks fail | Suggest the full "Export to Issues → issue-resolve" path |
| Findings from different perspectives in the same file | Still merged into one task; convergence.criteria keeps each finding's criteria |


@@ -2,7 +2,7 @@
## Overview ## Overview
Serial lightweight planning with CLI-powered exploration and search verification. Produces unified JSONL (`tasks.jsonl`) compatible with `collaborative-plan-with-file` output format, consumable by `unified-execute-with-file`. Serial lightweight planning with CLI-powered exploration and search verification. Produces `.task/TASK-*.json` (one file per task) compatible with `collaborative-plan-with-file` output format, consumable by `unified-execute-with-file`.
**Core capabilities:** **Core capabilities:**
- Intelligent task analysis with automatic exploration detection - Intelligent task analysis with automatic exploration detection
@@ -10,7 +10,7 @@ Serial lightweight planning with CLI-powered exploration and search verification
- Search verification after each CLI exploration (ACE search, Grep, Glob) - Search verification after each CLI exploration (ACE search, Grep, Glob)
- Interactive clarification after exploration to gather missing information - Interactive clarification after exploration to gather missing information
- Direct planning by Claude (all complexity levels, no agent delegation) - Direct planning by Claude (all complexity levels, no agent delegation)
- Unified JSONL output (`tasks.jsonl`) with convergence criteria - Unified multi-file task output (`.task/TASK-*.json`) with convergence criteria
## Parameters ## Parameters
@@ -28,7 +28,7 @@ Serial lightweight planning with CLI-powered exploration and search verification
| `explorations-manifest.json` | Index of all exploration files | | `explorations-manifest.json` | Index of all exploration files |
| `exploration-notes.md` | Synthesized exploration notes (all angles combined) | | `exploration-notes.md` | Synthesized exploration notes (all angles combined) |
| `requirement-analysis.json` | Complexity assessment and session metadata | | `requirement-analysis.json` | Complexity assessment and session metadata |
| `tasks.jsonl` | ⭐ Unified JSONL (collaborative-plan-with-file compatible) | | `.task/TASK-*.json` | Multi-file task output (one JSON file per task) |
| `plan.md` | Human-readable summary with execution command | | `plan.md` | Human-readable summary with execution command |
**Output Directory**: `{projectRoot}/.workflow/.lite-plan/{session-id}/` **Output Directory**: `{projectRoot}/.workflow/.lite-plan/{session-id}/`
@@ -62,10 +62,10 @@ Phase 2: Clarification (optional, multi-round)
├─ Deduplicate similar questions ├─ Deduplicate similar questions
└─ ASK_USER (max 4 questions per round, multiple rounds) └─ ASK_USER (max 4 questions per round, multiple rounds)
Phase 3: Planning → tasks.jsonl (NO CODE EXECUTION) Phase 3: Planning → .task/*.json (NO CODE EXECUTION)
├─ Load exploration notes + clarifications + project context ├─ Load exploration notes + clarifications + project context
├─ Direct Claude planning (following unified JSONL schema) ├─ Direct Claude planning (following unified task JSON schema)
├─ Generate tasks.jsonl (one task per line) ├─ Generate .task/TASK-*.json (one file per task)
└─ Generate plan.md (human-readable summary) └─ Generate plan.md (human-readable summary)
Phase 4: Confirmation Phase 4: Confirmation
@@ -73,7 +73,7 @@ Phase 4: Confirmation
└─ ASK_USER: Allow / Modify / Cancel └─ ASK_USER: Allow / Modify / Cancel
Phase 5: Handoff Phase 5: Handoff
└─ → unified-execute-with-file with tasks.jsonl └─ → unified-execute-with-file with .task/ directory
``` ```
## Implementation ## Implementation
@@ -334,7 +334,7 @@ Aggregated from all exploration angles, deduplicated
--- ---
### Phase 3: Planning → tasks.jsonl ### Phase 3: Planning → .task/*.json
**IMPORTANT**: Phase 3 is **planning only** — NO code execution. All implementation happens via unified-execute-with-file. **IMPORTANT**: Phase 3 is **planning only** — NO code execution. All implementation happens via unified-execute-with-file.
@@ -358,9 +358,9 @@ Write(`${sessionFolder}/requirement-analysis.json`, JSON.stringify({
}, null, 2)) }, null, 2))
``` ```
#### Step 3.3: Generate tasks.jsonl #### Step 3.3: Generate .task/*.json
Direct Claude planning — synthesize exploration findings and clarifications into unified JSONL tasks: Direct Claude planning — synthesize exploration findings and clarifications into individual task JSON files:
**Task Grouping Rules**: **Task Grouping Rules**:
1. **Group by feature**: All changes for one feature = one task (even if 3-5 files) 1. **Group by feature**: All changes for one feature = one task (even if 3-5 files)
@@ -370,7 +370,7 @@ Direct Claude planning — synthesize exploration findings and clarifications in
5. **True dependencies only**: Only use depends_on when Task B cannot start without Task A's output 5. **True dependencies only**: Only use depends_on when Task B cannot start without Task A's output
6. **Prefer parallel**: Most tasks should be independent (no depends_on) 6. **Prefer parallel**: Most tasks should be independent (no depends_on)
**Unified JSONL Task Format** (one JSON object per line): **Unified Task JSON Format** (one JSON file per task, stored in `.task/` directory):
```javascript ```javascript
{ {
@@ -406,10 +406,15 @@ Direct Claude planning — synthesize exploration findings and clarifications in
} }
``` ```
**Write tasks.jsonl**: **Write .task/*.json**:
```javascript ```javascript
const jsonlContent = tasks.map(t => JSON.stringify(t)).join('\n') // Create .task/ directory
Write(`${sessionFolder}/tasks.jsonl`, jsonlContent) Bash(`mkdir -p ${sessionFolder}/.task`)
// Write each task as an individual JSON file
tasks.forEach(task => {
Write(`${sessionFolder}/.task/${task.id}.json`, JSON.stringify(task, null, 2))
})
``` ```
#### Step 3.4: Generate plan.md #### Step 3.4: Generate plan.md
@@ -449,7 +454,7 @@ ${t.convergence.criteria.map(c => \` - ${c}\`).join('\n')}
## 执行 ## 执行
\`\`\`bash \`\`\`bash
$unified-execute-with-file PLAN="${sessionFolder}/tasks.jsonl" $unified-execute-with-file PLAN="${sessionFolder}/.task/"
\`\`\` \`\`\`
**Session artifacts**: \`${sessionFolder}/\` **Session artifacts**: \`${sessionFolder}/\`
@@ -463,7 +468,7 @@ Write(`${sessionFolder}/plan.md`, planMd)
#### Step 4.1: Display Plan #### Step 4.1: Display Plan
Read `{sessionFolder}/tasks.jsonl` and display summary: Read `{sessionFolder}/.task/` directory and display summary:
- **Summary**: Overall approach (from requirement understanding) - **Summary**: Overall approach (from requirement understanding)
- **Tasks**: Numbered list with ID, title, type, effort - **Tasks**: Numbered list with ID, title, type, effort
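A sketch of assembling this summary from the `.task/` directory, assuming plain Node.js in place of the skill's `Glob`/`Read` helpers:

```javascript
const fs = require('fs')
const path = require('path')

// Load every task file and print a numbered overview for confirmation.
function displayPlanSummary(sessionFolder) {
  const taskDir = path.join(sessionFolder, '.task')
  const tasks = fs.readdirSync(taskDir)
    .filter(f => f.endsWith('.json'))
    .sort()
    .map(f => JSON.parse(fs.readFileSync(path.join(taskDir, f), 'utf8')))

  console.log(`Plan contains ${tasks.length} tasks:`)
  tasks.forEach((t, i) => {
    console.log(`${i + 1}. [${t.id}] ${t.title} (${t.type}, effort: ${t.effort})`)
  })
  return tasks
}
```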
@@ -488,7 +493,7 @@ Read `{sessionFolder}/tasks.jsonl` and display summary:
**Output**: `userSelection` — `{ confirmation: "Allow" | "Modify" | "Cancel" }` **Output**: `userSelection` — `{ confirmation: "Allow" | "Modify" | "Cancel" }`
**Modify Loop**: If "Modify" selected, display current tasks.jsonl content, accept user edits (max 3 rounds), regenerate plan.md, re-confirm. **Modify Loop**: If "Modify" selected, display current `.task/*.json` content, accept user edits (max 3 rounds), regenerate plan.md, re-confirm.
--- ---
@@ -509,7 +514,10 @@ Read `{sessionFolder}/tasks.jsonl` and display summary:
├── explorations-manifest.json # Exploration index ├── explorations-manifest.json # Exploration index
├── exploration-notes.md # Synthesized exploration notes ├── exploration-notes.md # Synthesized exploration notes
├── requirement-analysis.json # Complexity assessment ├── requirement-analysis.json # Complexity assessment
├── tasks.jsonl # ⭐ Unified JSONL output ├── .task/ # ⭐ Task JSON files (one per task)
│ ├── TASK-001.json
│ ├── TASK-002.json
│ └── ...
└── plan.md # Human-readable summary └── plan.md # Human-readable summary
``` ```
@@ -522,7 +530,10 @@ Read `{sessionFolder}/tasks.jsonl` and display summary:
├── explorations-manifest.json ├── explorations-manifest.json
├── exploration-notes.md ├── exploration-notes.md
├── requirement-analysis.json ├── requirement-analysis.json
├── tasks.jsonl ├── .task/
│ ├── TASK-001.json
│ ├── TASK-002.json
│ └── ...
└── plan.md └── plan.md
``` ```
@@ -543,7 +554,7 @@ Read `{sessionFolder}/tasks.jsonl` and display summary:
## Post-Phase Update ## Post-Phase Update
After Phase 1 (Lite Plan) completes: After Phase 1 (Lite Plan) completes:
- **Output Created**: `tasks.jsonl` + `plan.md` + exploration artifacts in session folder - **Output Created**: `.task/TASK-*.json` + `plan.md` + exploration artifacts in session folder
- **Session Artifacts**: All files in `{projectRoot}/.workflow/.lite-plan/{session-id}/` - **Session Artifacts**: All files in `{projectRoot}/.workflow/.lite-plan/{session-id}/`
- **Next Action**: Auto-continue to [Phase 2: Execution Handoff](02-lite-execute.md) - **Next Action**: Auto-continue to [Phase 2: Execution Handoff](02-lite-execute.md)
- **TodoWrite**: Mark "Lite Plan - Planning" as completed, start "Execution (unified-execute)" - **TodoWrite**: Mark "Lite Plan - Planning" as completed, start "Execution (unified-execute)"


@@ -2,12 +2,12 @@
## Overview ## Overview
Consumes the unified JSONL (`tasks.jsonl`) produced by Phase 1, executes tasks serially with convergence verification, and tracks progress via `execution.md` + `execution-events.md`. Consumes the `.task/*.json` files (multi-file task definitions) produced by Phase 1, executes tasks serially with convergence verification, and tracks progress via `execution.md` + `execution-events.md`.
**Core workflow**: Load JSONL → Validate → Pre-Execution Analysis → Execute → Verify Convergence → Track Progress **Core workflow**: Load .task/*.json → Validate → Pre-Execution Analysis → Execute → Verify Convergence → Track Progress
**Key features**: **Key features**:
- **Single format**: consumes only the unified JSONL (`tasks.jsonl`) - **Single format**: consumes only `.task/*.json` (one JSON file per task)
- **Convergence-driven**: verify convergence criteria after each task executes - **Convergence-driven**: verify convergence criteria after each task executes
- **Serial execution**: execute serially in topological order with dependency tracking - **Serial execution**: execute serially in topological order with dependency tracking
- **Dual progress tracking**: `execution.md` (overview) + `execution-events.md` (event stream) - **Dual progress tracking**: `execution.md` (overview) + `execution-events.md` (event stream)
@@ -17,11 +17,11 @@
## Invocation ## Invocation
```javascript ```javascript
$unified-execute-with-file PLAN="${sessionFolder}/tasks.jsonl" $unified-execute-with-file PLAN="${sessionFolder}/.task/"
// With options // With options
$unified-execute-with-file PLAN="${sessionFolder}/tasks.jsonl" --auto-commit $unified-execute-with-file PLAN="${sessionFolder}/.task/" --auto-commit
$unified-execute-with-file PLAN="${sessionFolder}/tasks.jsonl" --dry-run $unified-execute-with-file PLAN="${sessionFolder}/.task/" --dry-run
``` ```
## Output Structure ## Output Structure
@@ -32,7 +32,7 @@ ${projectRoot}/.workflow/.execution/EXEC-{slug}-{date}-{random}/
└── execution-events.md # Unified event log (single source of truth) └── execution-events.md # Unified event log (single source of truth)
``` ```
Additionally, the source `tasks.jsonl` is updated in-place with `_execution` states. Additionally, the source `.task/*.json` files are updated in-place with execution states (`status`, `executed_at`, `result`).
--- ---
@@ -51,11 +51,11 @@ let planPath = planMatch ? planMatch[1] : null
// Auto-detect if no PLAN specified // Auto-detect if no PLAN specified
if (!planPath) { if (!planPath) {
// Search in order (most recent first): // Search in order (most recent first):
// .workflow/.lite-plan/*/tasks.jsonl // .workflow/.lite-plan/*/.task/
// .workflow/.req-plan/*/tasks.jsonl // .workflow/.req-plan/*/.task/
// .workflow/.planning/*/tasks.jsonl // .workflow/.planning/*/.task/
// .workflow/.analysis/*/tasks.jsonl // .workflow/.analysis/*/.task/
// .workflow/.brainstorm/*/tasks.jsonl // .workflow/.brainstorm/*/.task/
} }
// Resolve path // Resolve path
@@ -75,20 +75,19 @@ Bash(`mkdir -p ${sessionFolder}`)
## Phase 1: Load & Validate ## Phase 1: Load & Validate
**Objective**: Parse unified JSONL, validate schema and dependencies, build execution order. **Objective**: Parse `.task/*.json` files, validate schema and dependencies, build execution order.
### Step 1.1: Parse Unified JSONL ### Step 1.1: Parse Task JSON Files
```javascript ```javascript
const content = Read(planPath) // Read all JSON files from .task/ directory
const tasks = content.split('\n') const taskFiles = Glob(`${planPath}/*.json`).sort()
.filter(line => line.trim()) const tasks = taskFiles.map((file, i) => {
.map((line, i) => { try { return JSON.parse(Read(file)) }
try { return JSON.parse(line) } catch (e) { throw new Error(`File ${file}: Invalid JSON — ${e.message}`) }
catch (e) { throw new Error(`Line ${i + 1}: Invalid JSON — ${e.message}`) } })
})
if (tasks.length === 0) throw new Error('No tasks found in JSONL file') if (tasks.length === 0) throw new Error('No task files found in .task/ directory')
``` ```
### Step 1.2: Validate Schema ### Step 1.2: Validate Schema
@@ -300,8 +299,9 @@ for (const taskId of executionOrder) {
if (unmetDeps.length) { if (unmetDeps.length) {
appendToEvents(task, 'BLOCKED', `Unmet dependencies: ${unmetDeps.join(', ')}`) appendToEvents(task, 'BLOCKED', `Unmet dependencies: ${unmetDeps.join(', ')}`)
skippedTasks.add(task.id) skippedTasks.add(task.id)
task._execution = { status: 'skipped', executed_at: startTime, task.status = 'skipped'
result: { success: false, error: `Blocked by: ${unmetDeps.join(', ')}` } } task.executed_at = startTime
task.result = { success: false, error: `Blocked by: ${unmetDeps.join(', ')}` }
continue continue
} }
@@ -321,8 +321,9 @@ ${task.convergence.criteria.map(c => `- [ ] ${c}`).join('\n')}
if (dryRun) { if (dryRun) {
// Simulate: mark as completed without changes // Simulate: mark as completed without changes
appendToEvents(`\n**Status**: ⏭ DRY RUN (no changes)\n\n---\n`) appendToEvents(`\n**Status**: ⏭ DRY RUN (no changes)\n\n---\n`)
task._execution = { status: 'completed', executed_at: startTime, task.status = 'completed'
result: { success: true, summary: 'Dry run — no changes made' } } task.executed_at = startTime
task.result = { success: true, summary: 'Dry run — no changes made' }
completedTasks.add(task.id) completedTasks.add(task.id)
continue continue
} }
@@ -358,15 +359,14 @@ ${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ?
--- ---
`) `)
task._execution = { task.status = 'completed'
status: 'completed', executed_at: endTime, task.executed_at = endTime
result: { task.result = {
success: true, success: true,
files_modified: filesModified, files_modified: filesModified,
summary: changeSummary, summary: changeSummary,
convergence_verified: convergenceResults.verified convergence_verified: convergenceResults.verified
} }
}
completedTasks.add(task.id) completedTasks.add(task.id)
} else { } else {
// 5b. Record FAILURE // 5b. Record FAILURE
@@ -374,7 +374,7 @@ ${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ?
} }
// 6. Auto-commit if enabled // 6. Auto-commit if enabled
if (autoCommit && task._execution.status === 'completed') { if (autoCommit && task.status === 'completed') {
autoCommitTask(task, filesModified) autoCommitTask(task, filesModified)
} }
} }
@@ -440,14 +440,13 @@ ${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ?
--- ---
`) `)
task._execution = { task.status = 'failed'
status: 'failed', executed_at: endTime, task.executed_at = endTime
result: { task.result = {
success: false, success: false,
error: 'Convergence verification failed', error: 'Convergence verification failed',
convergence_verified: convergenceResults.verified convergence_verified: convergenceResults.verified
} }
}
failedTasks.add(task.id) failedTasks.add(task.id)
// Ask user // Ask user
@@ -518,7 +517,7 @@ const summary = `
| ID | Title | Status | Convergence | Files Modified | | ID | Title | Status | Convergence | Files Modified |
|----|-------|--------|-------------|----------------| |----|-------|--------|-------------|----------------|
${tasks.map(t => { ${tasks.map(t => {
const ex = t._execution || {} const ex = t || {}
const convergenceStatus = ex.result?.convergence_verified const convergenceStatus = ex.result?.convergence_verified
? `${ex.result.convergence_verified.filter(v => v).length}/${ex.result.convergence_verified.length}` ? `${ex.result.convergence_verified.filter(v => v).length}/${ex.result.convergence_verified.length}`
: '-' : '-'
@@ -529,7 +528,7 @@ ${failedTasks.size > 0 ? `### Failed Tasks
${[...failedTasks].map(id => { ${[...failedTasks].map(id => {
const t = tasks.find(t => t.id === id) const t = tasks.find(t => t.id === id)
return `- **${t.id}**: ${t.title}${t._execution?.result?.error || 'Unknown'}` return `- **${t.id}**: ${t.title}${t.result?.error || 'Unknown'}`
}).join('\n')} }).join('\n')}
` : ''} ` : ''}
### Artifacts ### Artifacts
@@ -555,31 +554,31 @@ appendToEvents(`
`) `)
``` ```
### Step 4.3: Write Back tasks.jsonl with _execution ### Step 4.3: Write Back .task/*.json with Execution State
Update the source JSONL file with execution states: Update each task JSON file in-place with execution state:
```javascript ```javascript
const updatedJsonl = tasks.map(task => JSON.stringify(task)).join('\n') tasks.forEach(task => {
Write(planPath, updatedJsonl) const taskFile = `${planPath}/${task.id}.json`
// Each task now has _execution: { status, executed_at, result } Write(taskFile, JSON.stringify(task, null, 2))
})
// Each task now has status, executed_at, result fields
``` ```
**_execution State** (added to each task): **Execution State** (added to each task JSON file):
```javascript ```javascript
{ {
// ... original task fields ... // ... original task fields ...
_execution: { status: "completed" | "failed" | "skipped",
status: "completed" | "failed" | "skipped", executed_at: "ISO timestamp",
executed_at: "ISO timestamp", result: {
result: { success: boolean,
success: boolean, files_modified: string[], // list of modified file paths
files_modified: string[], // list of modified file paths summary: string, // change description
summary: string, // change description convergence_verified: boolean[], // per criterion
convergence_verified: boolean[], // per criterion error: string // if failed
error: string // if failed
}
} }
} }
``` ```
@@ -604,7 +603,7 @@ AskUserQuestion({
| Selection | Action | | Selection | Action |
|-----------|--------| |-----------|--------|
| Retry Failed | Filter tasks with `_execution.status === 'failed'`, re-execute, append `[RETRY]` events | | Retry Failed | Filter tasks with `status === 'failed'`, re-execute, append `[RETRY]` events |
| View Events | Display execution-events.md content | | View Events | Display execution-events.md content |
| Create Issue | `$issue:new` from failed task details | | Create Issue | `$issue:new` from failed task details |
| Done | Display artifact paths, end workflow | | Done | Display artifact paths, end workflow |
@@ -615,10 +614,10 @@ AskUserQuestion({
| Situation | Action | Recovery | | Situation | Action | Recovery |
|-----------|--------|----------| |-----------|--------|----------|
| JSONL file not found | Report error with path | Check path, verify planning phase output | | .task/ directory not found | Report error with path | Check path, verify planning phase output |
| Invalid JSON line | Report line number and error | Fix JSONL file manually | | Invalid JSON file | Report filename and error | Fix task JSON file manually |
| Missing convergence | Report validation error | Add convergence fields to tasks | | Missing convergence | Report validation error | Add convergence fields to tasks |
| Circular dependency | Stop, report cycle path | Fix dependencies in JSONL | | Circular dependency | Stop, report cycle path | Fix dependencies in task files |
| Task execution fails | Record in events, ask user | Retry, skip, accept, or abort | | Task execution fails | Record in events, ask user | Retry, skip, accept, or abort |
| Convergence verification fails | Mark task failed, ask user | Fix code and retry, or accept | | Convergence verification fails | Mark task failed, ask user | Fix code and retry, or accept |
| Verification command timeout | Mark as unverified | Manual verification needed | | Verification command timeout | Mark as unverified | Manual verification needed |


@@ -3,18 +3,21 @@
// ======================================== // ========================================
// Combined dashboard widget: project info + stats + workflow status + orchestrator + task carousel // Combined dashboard widget: project info + stats + workflow status + orchestrator + task carousel
import { memo, useMemo, useState, useEffect } from 'react'; import { memo, useMemo, useState, useEffect, useCallback } from 'react';
import { useNavigate } from 'react-router-dom';
import { useIntl } from 'react-intl'; import { useIntl } from 'react-intl';
import { PieChart, Pie, Cell, ResponsiveContainer } from 'recharts'; import { PieChart, Pie, Cell, ResponsiveContainer } from 'recharts';
import { Card } from '@/components/ui/Card'; import { Card, CardContent } from '@/components/ui/Card';
import { Progress } from '@/components/ui/Progress'; import { Progress } from '@/components/ui/Progress';
import { Button } from '@/components/ui/Button'; import { Button } from '@/components/ui/Button';
import { Sparkline } from '@/components/charts/Sparkline'; import { Sparkline } from '@/components/charts/Sparkline';
import { useWorkflowStatusCounts, generateMockWorkflowStatusCounts } from '@/hooks/useWorkflowStatusCounts'; import { useWorkflowStatusCounts } from '@/hooks/useWorkflowStatusCounts';
import { useDashboardStats } from '@/hooks/useDashboardStats'; import { useDashboardStats } from '@/hooks/useDashboardStats';
import { useProjectOverview } from '@/hooks/useProjectOverview'; import { useProjectOverview } from '@/hooks/useProjectOverview';
import { useIndexStatus } from '@/hooks/useIndex'; import { useIndexStatus } from '@/hooks/useIndex';
import { useSessions } from '@/hooks/useSessions';
import { cn } from '@/lib/utils'; import { cn } from '@/lib/utils';
import type { TaskData } from '@/types/store';
import { import {
ListChecks, ListChecks,
Clock, Clock,
@@ -26,7 +29,6 @@ import {
ChevronRight, ChevronRight,
ChevronDown, ChevronDown,
ChevronUp, ChevronUp,
Tag,
Calendar, Calendar,
Code2, Code2,
Server, Server,
@@ -46,12 +48,13 @@ export interface WorkflowTaskWidgetProps {
} }
// ---- Workflow Status section ---- // ---- Workflow Status section ----
const statusColors: Record<string, { bg: string; text: string; dot: string }> = { // Unified color configuration for workflow status
completed: { bg: 'bg-success', text: 'text-success', dot: 'bg-emerald-500' }, const statusColors: Record<string, { bg: string; text: string; dot: string; fill: string }> = {
in_progress: { bg: 'bg-warning', text: 'text-warning', dot: 'bg-amber-500' }, completed: { bg: 'bg-success', text: 'text-success', dot: 'bg-emerald-500', fill: '#10b981' },
planning: { bg: 'bg-violet-500', text: 'text-violet-600', dot: 'bg-violet-500' }, in_progress: { bg: 'bg-warning', text: 'text-warning', dot: 'bg-amber-500', fill: '#f59e0b' },
paused: { bg: 'bg-slate-400', text: 'text-slate-500', dot: 'bg-slate-400' }, planning: { bg: 'bg-violet-500', text: 'text-violet-600', dot: 'bg-violet-500', fill: '#8b5cf6' },
archived: { bg: 'bg-slate-300', text: 'text-slate-400', dot: 'bg-slate-300' }, paused: { bg: 'bg-slate-400', text: 'text-slate-500', dot: 'bg-slate-400', fill: '#94a3b8' },
archived: { bg: 'bg-slate-300', text: 'text-slate-400', dot: 'bg-slate-300', fill: '#cbd5e1' },
}; };
const statusLabelKeys: Record<string, string> = { const statusLabelKeys: Record<string, string> = {
@@ -63,87 +66,54 @@ const statusLabelKeys: Record<string, string> = {
}; };
// ---- Task List section ---- // ---- Task List section ----
interface TaskItem { // Task status colors for the task list display
id: string; type TaskStatusDisplay = 'pending' | 'completed' | 'in_progress' | 'blocked' | 'skipped';
name: string;
status: 'pending' | 'completed';
}
// Session with its tasks
interface SessionWithTasks {
id: string;
name: string;
description?: string;
status: 'planning' | 'in_progress' | 'completed' | 'paused';
tags: string[];
createdAt: string;
updatedAt: string;
tasks: TaskItem[];
}
// Mock sessions with their tasks
const MOCK_SESSIONS: SessionWithTasks[] = [
{
id: 'WFS-auth-001',
name: 'User Authentication System',
description: 'Implement OAuth2 and JWT based authentication with role-based access control',
status: 'in_progress',
tags: ['auth', 'security', 'backend'],
createdAt: '2024-01-15',
updatedAt: '2024-01-20',
tasks: [
{ id: '1', name: 'Implement user authentication', status: 'pending' },
{ id: '2', name: 'Design database schema', status: 'completed' },
{ id: '3', name: 'Setup CI/CD pipeline', status: 'pending' },
],
},
{
id: 'WFS-api-002',
name: 'API Documentation',
description: 'Create comprehensive API documentation with OpenAPI 3.0 specification',
status: 'planning',
tags: ['docs', 'api'],
createdAt: '2024-01-18',
updatedAt: '2024-01-19',
tasks: [
{ id: '4', name: 'Write API documentation', status: 'pending' },
{ id: '5', name: 'Create OpenAPI spec', status: 'pending' },
],
},
{
id: 'WFS-perf-003',
name: 'Performance Optimization',
description: 'Optimize database queries and implement caching strategies',
status: 'completed',
tags: ['performance', 'optimization', 'database'],
createdAt: '2024-01-10',
updatedAt: '2024-01-17',
tasks: [
{ id: '6', name: 'Performance optimization', status: 'completed' },
{ id: '7', name: 'Security audit', status: 'completed' },
],
},
{
id: 'WFS-test-004',
name: 'Integration Testing',
description: 'Setup E2E testing framework and write integration tests',
status: 'in_progress',
tags: ['testing', 'e2e', 'ci'],
createdAt: '2024-01-19',
updatedAt: '2024-01-20',
tasks: [
{ id: '8', name: 'Integration testing', status: 'completed' },
{ id: '9', name: 'Deploy to staging', status: 'pending' },
{ id: '10', name: 'E2E test setup', status: 'pending' },
],
},
];
const taskStatusColors: Record<string, { bg: string; text: string; icon: typeof CheckCircle2 }> = { const taskStatusColors: Record<string, { bg: string; text: string; icon: typeof CheckCircle2 }> = {
pending: { bg: 'bg-muted', text: 'text-muted-foreground', icon: Clock }, pending: { bg: 'bg-muted', text: 'text-muted-foreground', icon: Clock },
completed: { bg: 'bg-success/20', text: 'text-success', icon: CheckCircle2 }, completed: { bg: 'bg-success/20', text: 'text-success', icon: CheckCircle2 },
in_progress: { bg: 'bg-warning/20', text: 'text-warning', icon: Clock },
blocked: { bg: 'bg-destructive/20', text: 'text-destructive', icon: XCircle },
skipped: { bg: 'bg-slate-400/20', text: 'text-slate-500', icon: Clock },
}; };
// ---- Empty State Component ----
interface HomeEmptyStateProps {
className?: string;
}
function HomeEmptyState({ className }: HomeEmptyStateProps) {
const { formatMessage } = useIntl();
return (
<div className={cn('flex items-center justify-center h-full', className)}>
<Card className="max-w-sm w-full border-dashed">
<CardContent className="flex flex-col items-center gap-4 py-8">
<div className="w-14 h-14 rounded-full bg-muted flex items-center justify-center">
<ListChecks className="w-7 h-7 text-muted-foreground" />
</div>
<div className="text-center space-y-2">
<h3 className="text-base font-semibold">
{formatMessage({ id: 'home.emptyState.noSessions.title' })}
</h3>
<p className="text-sm text-muted-foreground">
{formatMessage({ id: 'home.emptyState.noSessions.message' })}
</p>
</div>
<div className="flex flex-col gap-2 w-full">
<code className="px-3 py-2 bg-muted rounded text-xs font-mono text-center">
/workflow:plan
</code>
<p className="text-xs text-muted-foreground text-center">
{formatMessage({ id: 'home.emptyState.noSessions.hint' })}
</p>
</div>
</CardContent>
</Card>
</div>
);
}
const sessionStatusColors: Record<string, { bg: string; text: string }> = { const sessionStatusColors: Record<string, { bg: string; text: string }> = {
planning: { bg: 'bg-violet-500/20', text: 'text-violet-600' }, planning: { bg: 'bg-violet-500/20', text: 'text-violet-600' },
in_progress: { bg: 'bg-warning/20', text: 'text-warning' }, in_progress: { bg: 'bg-warning/20', text: 'text-warning' },
@@ -209,13 +179,20 @@ function generateSparklineData(currentValue: number, variance = 0.3): number[] {
function WorkflowTaskWidgetComponent({ className }: WorkflowTaskWidgetProps) { function WorkflowTaskWidgetComponent({ className }: WorkflowTaskWidgetProps) {
const { formatMessage } = useIntl(); const { formatMessage } = useIntl();
const navigate = useNavigate();
const { data, isLoading } = useWorkflowStatusCounts(); const { data, isLoading } = useWorkflowStatusCounts();
const { stats, isLoading: statsLoading } = useDashboardStats({ refetchInterval: 60000 }); const { stats, isLoading: statsLoading } = useDashboardStats({ refetchInterval: 60000 });
const { projectOverview, isLoading: projectLoading } = useProjectOverview(); const { projectOverview, isLoading: projectLoading } = useProjectOverview();
const { status: indexStatus } = useIndexStatus({ refetchInterval: 30000 }); const { status: indexStatus } = useIndexStatus({ refetchInterval: 30000 });
const chartData = data || generateMockWorkflowStatusCounts(); // Fetch real sessions data
const { activeSessions, isLoading: sessionsLoading } = useSessions({
filter: { location: 'active' },
});
const chartData = data || [];
const total = chartData.reduce((sum, item) => sum + item.count, 0); const total = chartData.reduce((sum, item) => sum + item.count, 0);
const hasChartData = chartData.length > 0;
// Generate sparkline data for each stat // Generate sparkline data for each stat
const sparklines = useMemo(() => ({ const sparklines = useMemo(() => ({
@@ -230,24 +207,47 @@ function WorkflowTaskWidgetComponent({ className }: WorkflowTaskWidgetProps) {
// Project info expanded state // Project info expanded state
const [projectExpanded, setProjectExpanded] = useState(false); const [projectExpanded, setProjectExpanded] = useState(false);
// Session carousel state // Session carousel state - use real sessions
const [currentSessionIndex, setCurrentSessionIndex] = useState(0); const [currentSessionIndex, setCurrentSessionIndex] = useState(0);
const currentSession = MOCK_SESSIONS[currentSessionIndex]; const sessionsCount = activeSessions.length;
const currentSession = activeSessions[currentSessionIndex];
// Auto-rotate carousel every 5 seconds // Format relative time
const formatRelativeTime = useCallback((dateStr: string | undefined): string => {
if (!dateStr) return '';
const date = new Date(dateStr);
if (isNaN(date.getTime())) return '';
return date.toLocaleDateString();
}, []);
// Auto-rotate carousel every 5 seconds (only if more than one session)
useEffect(() => { useEffect(() => {
if (sessionsCount <= 1) return;
const timer = setInterval(() => { const timer = setInterval(() => {
setCurrentSessionIndex((prev) => (prev + 1) % MOCK_SESSIONS.length); setCurrentSessionIndex((prev) => (prev + 1) % sessionsCount);
}, 5000); }, 5000);
return () => clearInterval(timer); return () => clearInterval(timer);
}, []); }, [sessionsCount]);
// Manual navigation // Manual navigation
const handlePrevSession = () => { const handlePrevSession = () => {
setCurrentSessionIndex((prev) => (prev === 0 ? MOCK_SESSIONS.length - 1 : prev - 1)); setCurrentSessionIndex((prev) => (prev === 0 ? sessionsCount - 1 : prev - 1));
}; };
const handleNextSession = () => { const handleNextSession = () => {
setCurrentSessionIndex((prev) => (prev + 1) % MOCK_SESSIONS.length); setCurrentSessionIndex((prev) => (prev + 1) % sessionsCount);
};
// Navigate to session detail
const handleSessionClick = (sessionId: string) => {
navigate(`/sessions/${sessionId}`);
};
// Map task status to display status
const mapTaskStatus = (status: TaskData['status']): TaskStatusDisplay => {
if (status === 'in_progress') return 'in_progress';
if (status === 'blocked') return 'blocked';
if (status === 'skipped') return 'skipped';
return status;
}; };
return ( return (
@@ -551,6 +551,15 @@ function WorkflowTaskWidgetComponent({ className }: WorkflowTaskWidgetProps) {
<div className="flex-1 flex items-center justify-center"> <div className="flex-1 flex items-center justify-center">
<div className="w-24 h-24 rounded-full bg-muted animate-pulse" /> <div className="w-24 h-24 rounded-full bg-muted animate-pulse" />
</div> </div>
) : !hasChartData ? (
<div className="flex-1 flex items-center justify-center">
<div className="text-center">
<PieChartIcon className="w-12 h-12 mx-auto text-muted-foreground/30 mb-2" />
<p className="text-xs text-muted-foreground">
{formatMessage({ id: 'home.emptyState.noSessions.message' })}
</p>
</div>
</div>
) : ( ) : (
<div className="flex-1 flex flex-col"> <div className="flex-1 flex flex-col">
{/* Mini Donut Chart */} {/* Mini Donut Chart */}
@@ -568,16 +577,8 @@ function WorkflowTaskWidgetComponent({ className }: WorkflowTaskWidgetProps) {
> >
{chartData.map((item) => { {chartData.map((item) => {
const colors = statusColors[item.status] || statusColors.completed; const colors = statusColors[item.status] || statusColors.completed;
const fillColor = colors.dot.replace('bg-', '');
const colorMap: Record<string, string> = {
'emerald-500': '#10b981',
'amber-500': '#f59e0b',
'violet-500': '#8b5cf6',
'slate-400': '#94a3b8',
'slate-300': '#cbd5e1',
};
return ( return (
<Cell key={item.status} fill={colorMap[fillColor] || '#94a3b8'} /> <Cell key={item.status} fill={colors.fill} />
); );
})} })}
</Pie> </Pie>
@@ -615,31 +616,48 @@ function WorkflowTaskWidgetComponent({ className }: WorkflowTaskWidgetProps) {
<ListChecks className="h-4 w-4" /> <ListChecks className="h-4 w-4" />
{formatMessage({ id: 'home.sections.taskDetails' })} {formatMessage({ id: 'home.sections.taskDetails' })}
</h3> </h3>
<div className="flex items-center gap-1.5"> {sessionsCount > 0 && (
<Button variant="ghost" size="sm" className="h-6 w-6 p-0" onClick={handlePrevSession}> <div className="flex items-center gap-1.5">
<ChevronLeft className="h-4 w-4" /> <Button variant="ghost" size="sm" className="h-6 w-6 p-0" onClick={handlePrevSession} disabled={sessionsCount <= 1}>
</Button> <ChevronLeft className="h-4 w-4" />
<span className="text-xs text-muted-foreground min-w-[45px] text-center"> </Button>
{currentSessionIndex + 1} / {MOCK_SESSIONS.length} <span className="text-xs text-muted-foreground min-w-[45px] text-center">
</span> {currentSessionIndex + 1} / {sessionsCount}
<Button variant="ghost" size="sm" className="h-6 w-6 p-0" onClick={handleNextSession}> </span>
<ChevronRight className="h-4 w-4" /> <Button variant="ghost" size="sm" className="h-6 w-6 p-0" onClick={handleNextSession} disabled={sessionsCount <= 1}>
</Button> <ChevronRight className="h-4 w-4" />
</div> </Button>
</div>
)}
</div> </div>
{/* Session Card (Carousel Item) */} {/* Loading State */}
{currentSession && ( {sessionsLoading ? (
<div className="flex-1 flex flex-col min-h-0 rounded-lg border border-border bg-accent/20 p-3 overflow-hidden"> <div className="flex-1 flex items-center justify-center">
<div className="w-full max-w-sm space-y-3">
<div className="h-8 bg-muted rounded animate-pulse" />
<div className="h-4 bg-muted rounded animate-pulse w-3/4" />
<div className="h-20 bg-muted rounded animate-pulse" />
</div>
</div>
) : sessionsCount === 0 ? (
/* Empty State */
<HomeEmptyState />
) : currentSession ? (
/* Session Card (Carousel Item) */
<div
className="flex-1 flex flex-col min-h-0 rounded-lg border border-border bg-accent/20 p-3 overflow-hidden cursor-pointer hover:border-primary/30 transition-colors"
onClick={() => handleSessionClick(currentSession.session_id)}
>
{/* Session Header */} {/* Session Header */}
<div className="mb-2 pb-2 border-b border-border shrink-0"> <div className="mb-2 pb-2 border-b border-border shrink-0">
<div className="flex items-start gap-2"> <div className="flex items-start gap-2">
<div className={cn('px-2 py-1 rounded text-xs font-medium shrink-0', sessionStatusColors[currentSession.status].bg, sessionStatusColors[currentSession.status].text)}> <div className={cn('px-2 py-1 rounded text-xs font-medium shrink-0', sessionStatusColors[currentSession.status]?.bg || 'bg-muted', sessionStatusColors[currentSession.status]?.text || 'text-muted-foreground')}>
{formatMessage({ id: `common.status.${currentSession.status === 'in_progress' ? 'inProgress' : currentSession.status}` })} {formatMessage({ id: `common.status.${currentSession.status === 'in_progress' ? 'inProgress' : currentSession.status}` })}
</div> </div>
<div className="flex-1 min-w-0"> <div className="flex-1 min-w-0">
<p className="text-sm font-medium text-foreground truncate">{currentSession.name}</p> <p className="text-sm font-medium text-foreground truncate">{currentSession.title || currentSession.session_id}</p>
<p className="text-xs text-muted-foreground">{currentSession.id}</p> <p className="text-xs text-muted-foreground">{currentSession.session_id}</p>
</div> </div>
</div> </div>
{/* Description */} {/* Description */}
@@ -649,78 +667,85 @@ function WorkflowTaskWidgetComponent({ className }: WorkflowTaskWidgetProps) {
</p> </p>
)} )}
{/* Progress bar */} {/* Progress bar */}
<div className="mt-2.5 space-y-1"> {currentSession.tasks && currentSession.tasks.length > 0 && (
<div className="flex items-center justify-between text-xs"> <div className="mt-2.5 space-y-1">
<span className="text-muted-foreground"> <div className="flex items-center justify-between text-xs">
{formatMessage({ id: 'common.labels.progress' })} <span className="text-muted-foreground">
</span> {formatMessage({ id: 'common.labels.progress' })}
<span className="font-medium text-foreground"> </span>
{currentSession.tasks.filter(t => t.status === 'completed').length}/{currentSession.tasks.length} <span className="font-medium text-foreground">
</span> {currentSession.tasks.filter(t => t.status === 'completed').length}/{currentSession.tasks.length}
</span>
</div>
<Progress
value={currentSession.tasks.length > 0 ? (currentSession.tasks.filter(t => t.status === 'completed').length / currentSession.tasks.length) * 100 : 0}
className="h-1.5 bg-muted"
indicatorClassName="bg-success"
/>
</div> </div>
<Progress )}
value={currentSession.tasks.length > 0 ? (currentSession.tasks.filter(t => t.status === 'completed').length / currentSession.tasks.length) * 100 : 0} {/* Date */}
className="h-1.5 bg-muted"
indicatorClassName="bg-success"
/>
</div>
{/* Tags and Date */}
<div className="flex items-center gap-2 mt-2 flex-wrap"> <div className="flex items-center gap-2 mt-2 flex-wrap">
{currentSession.tags.map((tag) => (
<span key={tag} className="inline-flex items-center gap-1 px-2 py-0.5 rounded bg-primary/10 text-primary text-[10px]">
<Tag className="h-2.5 w-2.5" />
{tag}
</span>
))}
<span className="inline-flex items-center gap-1 text-[10px] text-muted-foreground ml-auto"> <span className="inline-flex items-center gap-1 text-[10px] text-muted-foreground ml-auto">
<Calendar className="h-3 w-3" /> <Calendar className="h-3 w-3" />
{currentSession.updatedAt} {formatRelativeTime(currentSession.updated_at || currentSession.created_at)}
</span> </span>
</div> </div>
</div> </div>
{/* Task List for this Session - Two columns */} {/* Task List for this Session - Two columns */}
<div className="flex-1 overflow-auto min-h-0"> {currentSession.tasks && currentSession.tasks.length > 0 ? (
<div className="grid grid-cols-2 gap-2 w-full"> <div className="flex-1 overflow-auto min-h-0">
{currentSession.tasks.map((task, index) => { <div className="grid grid-cols-2 gap-2 w-full">
const config = taskStatusColors[task.status]; {currentSession.tasks.map((task, index) => {
const StatusIcon = config.icon; const displayStatus = mapTaskStatus(task.status);
const isLastOdd = currentSession.tasks.length % 2 === 1 && index === currentSession.tasks.length - 1; const config = taskStatusColors[displayStatus] || taskStatusColors.pending;
return ( const StatusIcon = config.icon;
<div const isLastOdd = currentSession.tasks!.length % 2 === 1 && index === currentSession.tasks!.length - 1;
key={task.id} return (
className={cn( <div
'flex items-center gap-2 p-2 rounded hover:bg-background/50 transition-colors cursor-pointer', key={task.task_id}
isLastOdd && 'col-span-2' className={cn(
)} 'flex items-center gap-2 p-2 rounded hover:bg-background/50 transition-colors',
> isLastOdd && 'col-span-2'
<div className={cn('p-1 rounded shrink-0', config.bg)}> )}
<StatusIcon className={cn('h-3 w-3', config.text)} /> >
<div className={cn('p-1 rounded shrink-0', config.bg)}>
<StatusIcon className={cn('h-3 w-3', config.text)} />
</div>
<p className={cn('flex-1 text-xs font-medium truncate', task.status === 'completed' ? 'text-muted-foreground line-through' : 'text-foreground')}>
{task.title || task.task_id}
</p>
</div> </div>
<p className={cn('flex-1 text-xs font-medium truncate', task.status === 'completed' ? 'text-muted-foreground line-through' : 'text-foreground')}> );
{task.name} })}
</p> </div>
</div>
);
})}
</div> </div>
</div> ) : (
<div className="flex-1 flex items-center justify-center">
<p className="text-xs text-muted-foreground">
{formatMessage({ id: 'home.emptyState.noTasks.message' })}
</p>
</div>
)}
</div>
) : null}
{/* Carousel dots - only show if more than one session */}
{sessionsCount > 1 && (
<div className="flex items-center justify-center gap-1 mt-2">
{activeSessions.map((_, idx) => (
<button
key={idx}
onClick={() => setCurrentSessionIndex(idx)}
className={cn(
'w-1.5 h-1.5 rounded-full transition-colors',
idx === currentSessionIndex ? 'bg-primary' : 'bg-muted hover:bg-muted-foreground/50'
)}
/>
))}
</div> </div>
)} )}
{/* Carousel dots */}
<div className="flex items-center justify-center gap-1 mt-2">
{MOCK_SESSIONS.map((_, idx) => (
<button
key={idx}
onClick={() => setCurrentSessionIndex(idx)}
className={cn(
'w-1.5 h-1.5 rounded-full transition-colors',
idx === currentSessionIndex ? 'bg-primary' : 'bg-muted hover:bg-muted-foreground/50'
)}
/>
))}
</div>
</div> </div>
</Card> </Card>
</div> </div>

View File

@@ -44,12 +44,13 @@
}, },
"emptyState": { "emptyState": {
"noSessions": { "noSessions": {
"title": "No Sessions Found", "title": "No Active Sessions",
"message": "No workflow sessions match your current filter." "message": "Start your first workflow session to begin tracking tasks and progress.",
"hint": "Create a planning session to get started"
}, },
"noTasks": { "noTasks": {
"title": "No Tasks", "title": "No Tasks",
"message": "No tasks match your current filter." "message": "No tasks available in this session."
}, },
"noLoops": { "noLoops": {
"title": "No Active Loops", "title": "No Active Loops",

View File

@@ -44,12 +44,13 @@
}, },
"emptyState": { "emptyState": {
"noSessions": { "noSessions": {
"title": "未找到会话", "title": "暂无活跃会话",
"message": "没有符合当前筛选条件的工作流会话。" "message": "开始你的第一个工作流会话,追踪任务和进度。",
"hint": "创建一个规划会话以开始使用"
}, },
"noTasks": { "noTasks": {
"title": "暂无任务", "title": "暂无任务",
"message": "没有符合当前筛选条件的任务。" "message": "此会话中没有可用任务。"
}, },
"noLoops": { "noLoops": {
"title": "无活跃循环", "title": "无活跃循环",

View File

@@ -42,9 +42,9 @@ import { Button } from '@/components/ui/Button';
import { Badge } from '@/components/ui/Badge'; import { Badge } from '@/components/ui/Badge';
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/Card'; import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/Card';
import { Tabs, TabsContent } from '@/components/ui/Tabs'; import { Tabs, TabsContent } from '@/components/ui/Tabs';
import { TabsNavigation, type TabItem } from '@/components/ui/TabsNavigation'; import { TabsNavigation } from '@/components/ui/TabsNavigation';
import { Collapsible, CollapsibleTrigger, CollapsibleContent } from '@/components/ui/Collapsible'; import { Collapsible, CollapsibleTrigger, CollapsibleContent } from '@/components/ui/Collapsible';
import type { LiteTask, LiteTaskSession } from '@/lib/api'; import type { LiteTask } from '@/lib/api';
// ======================================== // ========================================
// Type Definitions // Type Definitions
@@ -57,22 +57,6 @@ type MultiCliTab = 'tasks' | 'discussion' | 'context';
type TaskTabValue = 'task' | 'context'; type TaskTabValue = 'task' | 'context';
// Context Package Structure
interface ContextPackage {
task_description?: string;
constraints?: string[];
focus_paths?: string[];
relevant_files?: Array<string | { path: string; reason?: string }>;
dependencies?: string[] | Array<{ name: string; type: string; version: string }>;
conflict_risks?: string[] | Array<{ description: string; severity: string }>;
session_id?: string;
metadata?: {
created_at: string;
version: string;
source: string;
};
}
// Exploration Structure // Exploration Structure
interface Exploration { interface Exploration {
name: string; name: string;
@@ -80,22 +64,6 @@ interface Exploration {
content?: string; content?: string;
} }
interface ExplorationData {
manifest?: {
task_description: string;
complexity: 'low' | 'medium' | 'high';
exploration_count: number;
created_at: string;
};
data?: {
architecture?: ExplorationAngle;
dependencies?: ExplorationAngle;
patterns?: ExplorationAngle;
'integration-points'?: ExplorationAngle;
testing?: ExplorationAngle;
};
}
interface ExplorationAngle { interface ExplorationAngle {
findings: string[]; findings: string[];
recommendations: string[]; recommendations: string[];
@@ -103,44 +71,6 @@ interface ExplorationAngle {
risks: string[]; risks: string[];
} }
// Diagnosis Structure
interface Diagnosis {
symptom: string;
root_cause: string;
issues: Array<{
file: string;
line: number;
severity: 'high' | 'medium' | 'low';
message: string;
}>;
affected_files: string[];
fix_hints: string[];
recommendations: string[];
}
// Discussion/Round Structure
interface DiscussionRound {
metadata: {
roundId: number;
timestamp: string;
durationSeconds: number;
contributingAgents: Array<{ name: string; id: string }>;
};
solutions: DiscussionSolution[];
_internal: {
convergence: {
score: number;
recommendation: 'proceed' | 'continue' | 'pause';
reasoning: string;
};
cross_verification: {
agreements: string[];
disagreements: string[];
resolution: string;
};
};
}
interface ImplementationTask { interface ImplementationTask {
id: string; id: string;
title: string; title: string;
@@ -171,56 +101,6 @@ interface DiscussionSolution {
}; };
} }
// Synthesis Structure
interface Synthesis {
convergence: {
summary: string | { en: string; zh: string };
score: number;
recommendation: 'proceed' | 'continue' | 'pause' | 'complete' | 'halt';
};
cross_verification: {
agreements: string[];
disagreements: string[];
resolution: string;
};
final_solution: DiscussionSolution;
alternative_solutions: DiscussionSolution[];
}
// ========================================
// Helper Functions
// ========================================
/**
* Get i18n text (handles both string and {en, zh} object)
*/
function getI18nText(text: string | { en?: string; zh?: string } | undefined, locale: string = 'zh'): string {
if (!text) return '';
if (typeof text === 'string') return text;
return text[locale as keyof typeof text] || text.en || text.zh || '';
}
/**
* Get task status badge configuration
*/
function getTaskStatusBadge(
status: LiteTask['status'],
formatMessage: (key: { id: string }) => string
) {
switch (status) {
case 'completed':
return { variant: 'success' as const, label: formatMessage({ id: 'sessionDetail.status.completed' }), icon: CheckCircle };
case 'in_progress':
return { variant: 'warning' as const, label: formatMessage({ id: 'sessionDetail.status.inProgress' }), icon: Loader2 };
case 'blocked':
return { variant: 'destructive' as const, label: formatMessage({ id: 'sessionDetail.status.blocked' }), icon: XCircle };
case 'failed':
return { variant: 'destructive' as const, label: formatMessage({ id: 'fixSession.status.failed' }), icon: XCircle };
default:
return { variant: 'secondary' as const, label: formatMessage({ id: 'sessionDetail.status.pending' }), icon: Clock };
}
}
// ======================================== // ========================================
// Main Component // Main Component
// ======================================== // ========================================
@@ -237,7 +117,7 @@ function getTaskStatusBadge(
export function LiteTaskDetailPage() { export function LiteTaskDetailPage() {
const { sessionId } = useParams<{ sessionId: string }>(); const { sessionId } = useParams<{ sessionId: string }>();
const navigate = useNavigate(); const navigate = useNavigate();
const { formatMessage, locale } = useIntl(); const { formatMessage } = useIntl();
// Session type state // Session type state
const [sessionType, setSessionType] = React.useState<SessionType>('lite-plan'); const [sessionType, setSessionType] = React.useState<SessionType>('lite-plan');

View File

@@ -35,8 +35,7 @@ import { useLiteTasks } from '@/hooks/useLiteTasks';
import { Button } from '@/components/ui/Button'; import { Button } from '@/components/ui/Button';
import { Badge } from '@/components/ui/Badge'; import { Badge } from '@/components/ui/Badge';
import { Card, CardContent } from '@/components/ui/Card'; import { Card, CardContent } from '@/components/ui/Card';
import { Tabs, TabsContent } from '@/components/ui/Tabs'; import { TabsNavigation } from '@/components/ui/TabsNavigation';
import { TabsNavigation, type TabItem } from '@/components/ui/TabsNavigation';
import { TaskDrawer } from '@/components/shared/TaskDrawer'; import { TaskDrawer } from '@/components/shared/TaskDrawer';
import { fetchLiteSessionContext, type LiteTask, type LiteTaskSession, type LiteSessionContext } from '@/lib/api'; import { fetchLiteSessionContext, type LiteTask, type LiteTaskSession, type LiteSessionContext } from '@/lib/api';
import { LiteContextContent } from '@/components/lite-tasks/LiteContextContent'; import { LiteContextContent } from '@/components/lite-tasks/LiteContextContent';

View File

@@ -739,7 +739,7 @@ export function McpManagerPage() {
</h3> </h3>
</div> </div>
<Card className="p-4"> <Card className="p-4">
<CrossCliSyncPanel onSuccess={(count, direction) => refetch()} /> <CrossCliSyncPanel onSuccess={() => refetch()} />
</Card> </Card>
</section> </section>

View File

@@ -72,7 +72,7 @@ export function PromptHistoryPage() {
// Insight detail state // Insight detail state
const [selectedInsight, setSelectedInsight] = React.useState<InsightHistory | null>(null); const [selectedInsight, setSelectedInsight] = React.useState<InsightHistory | null>(null);
const [insightDetailOpen, setInsightDetailOpen] = React.useState(false); const [, setInsightDetailOpen] = React.useState(false);
// Batch operations state // Batch operations state
const [selectedPromptIds, setSelectedPromptIds] = React.useState<Set<string>>(new Set()); const [selectedPromptIds, setSelectedPromptIds] = React.useState<Set<string>>(new Set());

View File

@@ -11,8 +11,8 @@ import type { IssueQueue } from '@/lib/api';
// Mock queue data // Mock queue data
const mockQueueData = { const mockQueueData = {
tasks: ['task1', 'task2'], tasks: [] as any[],
solutions: ['solution1'], solutions: [] as any[],
conflicts: [], conflicts: [],
execution_groups: ['group-1'], execution_groups: ['group-1'],
grouped_items: { 'parallel-group': [] as any[] }, grouped_items: { 'parallel-group': [] as any[] },

View File

@@ -15,7 +15,6 @@ import {
Info, Info,
FileText, FileText,
Download, Download,
ChevronDown,
ChevronRight, ChevronRight,
ChevronLeft as ChevronLeftIcon, ChevronLeft as ChevronLeftIcon,
ChevronRight as ChevronRightIcon, ChevronRight as ChevronRightIcon,
@@ -291,7 +290,6 @@ export function ReviewSessionPage() {
const [sortField, setSortField] = React.useState<SortField>('severity'); const [sortField, setSortField] = React.useState<SortField>('severity');
const [sortOrder, setSortOrder] = React.useState<SortOrder>('desc'); const [sortOrder, setSortOrder] = React.useState<SortOrder>('desc');
const [selectedFindings, setSelectedFindings] = React.useState<Set<string>>(new Set()); const [selectedFindings, setSelectedFindings] = React.useState<Set<string>>(new Set());
const [expandedFindings, setExpandedFindings] = React.useState<Set<string>>(new Set());
const [selectedFindingId, setSelectedFindingId] = React.useState<string | null>(null); const [selectedFindingId, setSelectedFindingId] = React.useState<string | null>(null);
const handleBack = () => { const handleBack = () => {
@@ -353,18 +351,6 @@ export function ReviewSessionPage() {
setSelectedFindings(new Set()); setSelectedFindings(new Set());
}; };
const toggleExpandFinding = (findingId: string) => {
setExpandedFindings(prev => {
const next = new Set(prev);
if (next.has(findingId)) {
next.delete(findingId);
} else {
next.add(findingId);
}
return next;
});
};
const handleFindingClick = (findingId: string) => { const handleFindingClick = (findingId: string) => {
setSelectedFindingId(findingId); setSelectedFindingId(findingId);
}; };

View File

@@ -40,7 +40,7 @@ import {
DropdownMenuSeparator, DropdownMenuSeparator,
DropdownMenuLabel, DropdownMenuLabel,
} from '@/components/ui/Dropdown'; } from '@/components/ui/Dropdown';
import { TabsNavigation, type TabItem } from '@/components/ui/TabsNavigation'; import { TabsNavigation } from '@/components/ui/TabsNavigation';
import { cn } from '@/lib/utils'; import { cn } from '@/lib/utils';
import type { SessionMetadata } from '@/types/store'; import type { SessionMetadata } from '@/types/store';

View File

@@ -712,7 +712,7 @@ function VersionCheckSection() {
{formatMessage({ id: 'settings.versionCheck.latestVersion' })} {formatMessage({ id: 'settings.versionCheck.latestVersion' })}
</span> </span>
<Badge <Badge
variant={versionData?.updateAvailable ? 'default' : 'secondary'} variant={versionData?.hasUpdate ? 'default' : 'secondary'}
className="font-mono text-xs" className="font-mono text-xs"
> >
{versionData?.latestVersion ?? '...'} {versionData?.latestVersion ?? '...'}

View File

@@ -31,11 +31,11 @@ export function TeamPage() {
// Data hooks // Data hooks
const { teams, isLoading: teamsLoading } = useTeams(); const { teams, isLoading: teamsLoading } = useTeams();
const { messages, total: messageTotal, isLoading: messagesLoading } = useTeamMessages( const { messages, total: messageTotal } = useTeamMessages(
selectedTeam, selectedTeam,
messageFilter messageFilter
); );
const { members, totalMessages, isLoading: statusLoading } = useTeamStatus(selectedTeam); const { members, totalMessages } = useTeamStatus(selectedTeam);
// Auto-select first team if none selected // Auto-select first team if none selected
useEffect(() => { useEffect(() => {

View File

@@ -1224,7 +1224,6 @@ function SaveAsTemplateButton({ nodeId, nodeLabel }: { nodeId: string; nodeLabel
const [name, setName] = useState(''); const [name, setName] = useState('');
const [desc, setDesc] = useState(''); const [desc, setDesc] = useState('');
const [color, setColor] = useState('bg-blue-500'); const [color, setColor] = useState('bg-blue-500');
const saveNodeAsTemplate = useFlowStore((s) => s.saveNodeAsTemplate);
const addCustomTemplate = useFlowStore((s) => s.addCustomTemplate); const addCustomTemplate = useFlowStore((s) => s.addCustomTemplate);
const nodes = useFlowStore((s) => s.nodes); const nodes = useFlowStore((s) => s.nodes);

View File

@@ -8,7 +8,6 @@ import { useIntl } from 'react-intl';
import { FileText, Eye } from 'lucide-react'; import { FileText, Eye } from 'lucide-react';
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/Card'; import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/Card';
import { Button } from '@/components/ui/Button'; import { Button } from '@/components/ui/Button';
import { Badge } from '@/components/ui/Badge';
import MarkdownModal from '@/components/shared/MarkdownModal'; import MarkdownModal from '@/components/shared/MarkdownModal';
// ======================================== // ========================================

View File

@@ -7,13 +7,11 @@ import { useState } from 'react';
import { useIntl } from 'react-intl'; import { useIntl } from 'react-intl';
import { import {
ListChecks, ListChecks,
Code,
GitBranch, GitBranch,
Calendar, Calendar,
FileCode, FileCode,
Layers, Layers,
} from 'lucide-react'; } from 'lucide-react';
import { Badge } from '@/components/ui/Badge';
import { Card, CardContent } from '@/components/ui/Card'; import { Card, CardContent } from '@/components/ui/Card';
import { TaskStatsBar, TaskStatusDropdown } from '@/components/session-detail/tasks'; import { TaskStatsBar, TaskStatusDropdown } from '@/components/session-detail/tasks';
import type { SessionMetadata, TaskData } from '@/types/store'; import type { SessionMetadata, TaskData } from '@/types/store';

View File

@@ -15,9 +15,6 @@ import {
OrchestratorPage, OrchestratorPage,
LoopMonitorPage, LoopMonitorPage,
IssueHubPage, IssueHubPage,
IssueManagerPage,
QueuePage,
DiscoveryPage,
SkillsManagerPage, SkillsManagerPage,
CommandsManagerPage, CommandsManagerPage,
MemoryPage, MemoryPage,

View File

@@ -155,7 +155,16 @@ async function main(): Promise<void> {
// Connect server to transport // Connect server to transport
await server.connect(transport); await server.connect(transport);
// Error handling // Error handling - prevent process crashes from closing transport
process.on('uncaughtException', (error) => {
console.error(`[${SERVER_NAME}] Uncaught exception:`, error.message);
console.error(error.stack);
});
process.on('unhandledRejection', (reason) => {
console.error(`[${SERVER_NAME}] Unhandled rejection:`, reason);
});
process.on('SIGINT', async () => { process.on('SIGINT', async () => {
await server.close(); await server.close();
process.exit(0); process.exit(0);

View File

@@ -570,7 +570,9 @@ interface CompactEditResult {
// Handler function // Handler function
export async function handler(params: Record<string, unknown>): Promise<ToolResult<CompactEditResult>> { export async function handler(params: Record<string, unknown>): Promise<ToolResult<CompactEditResult>> {
const parsed = ParamsSchema.safeParse(params); // Apply default mode before discriminatedUnion check (Zod doesn't apply defaults on discriminator)
const normalizedParams = params.mode === undefined ? { ...params, mode: 'update' } : params;
const parsed = ParamsSchema.safeParse(normalizedParams);
if (!parsed.success) { if (!parsed.success) {
return { success: false, error: `Invalid params: ${parsed.error.message}` }; return { success: false, error: `Invalid params: ${parsed.error.message}` };
} }
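The normalization above works around a subtlety in `z.discriminatedUnion`: Zod picks a branch by the literal value of the discriminator, so when `mode` is missing the parse fails before any per-branch default could apply. A minimal sketch of the behavior (the schema shape is illustrative, not the actual `ParamsSchema`):
```javascript
import { z } from 'zod'

const Schema = z.discriminatedUnion('mode', [
  z.object({ mode: z.literal('update'), path: z.string() }),
  z.object({ mode: z.literal('create'), path: z.string(), content: z.string() }),
])

Schema.safeParse({ path: 'a.md' })                   // fails: no discriminator to select a branch
Schema.safeParse({ mode: 'update', path: 'a.md' })   // succeeds: defaulting `mode` up front fixes it
```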
