mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-28 09:23:08 +08:00
feat: add Discuss and Explore subagents for dynamic critique and code exploration
- Implement Discuss Subagent for multi-perspective critique with dynamic perspectives.
- Create Explore Subagent for shared codebase exploration with centralized caching.
- Add tests for CcwToolsMcpCard component to ensure enabled tools are preserved on config save.
- Introduce SessionPreviewPanel component for previewing and selecting sessions for Memory V2 extraction.
- Develop CommandCreateDialog component for creating/importing commands with import and CLI generate modes.
@@ -30,6 +30,7 @@ RULES: [templates | additional constraints]

## Execution Flow

0. **Load Project Specs** - MANDATORY first step: run `ccw spec load` to retrieve project specifications and constraints before any analysis. Adapt analysis scope and standards based on loaded specs
1. **Parse** all 6 fields (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
2. **Read** and analyze CONTEXT files thoroughly
3. **Identify** patterns, issues, and dependencies

@@ -40,6 +41,7 @@ RULES: [templates | additional constraints]

## Core Requirements

**ALWAYS**:
- Run `ccw spec load` FIRST to obtain project specifications before starting any work
- Analyze ALL CONTEXT files completely
- Apply RULES (templates + constraints) exactly
- Provide code evidence with `file:line` references
@@ -24,6 +24,7 @@ RULES: [templates | additional constraints]

## Execution Flow

### MODE: write

0. **Load Project Specs** - MANDATORY first step: run `ccw spec load` to retrieve project specifications and constraints before any implementation. Apply loaded specs to guide coding standards, architecture decisions, and quality gates
1. **Parse** all 6 fields (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
2. **Read** CONTEXT files, find 3+ similar patterns
3. **Plan** implementation following RULES

@@ -34,6 +35,7 @@ RULES: [templates | additional constraints]

## Core Requirements

**ALWAYS**:
- Run `ccw spec load` FIRST to obtain project specifications before starting any work
- Study CONTEXT files - find 3+ similar patterns before implementing
- Apply RULES exactly
- Test continuously (auto mode)
.claude/skills/command-generator/SKILL.md (190 lines, new file)
@@ -0,0 +1,190 @@
---
name: command-generator
description: Command file generator - 5-phase workflow for creating Claude Code command files with YAML frontmatter. Generates .md command files for project or user scope. Triggers on "create command", "new command", "command generator".
allowed-tools: Read, Write, Edit, Bash, Glob
---

# Command Generator

CLI-based command file generator that produces Claude Code command .md files through a structured 5-phase workflow. Supports both project-level (`.claude/commands/`) and user-level (`~/.claude/commands/`) command locations.

## Architecture Overview

```
+-----------------------------------------------------------+
|  Command Generator                                        |
|                                                           |
|  Input: skillName, description, location, [group], [hint] |
|        |                                                  |
|  +-------------------------------------------------+      |
|  |  Phase 1-5: Sequential Pipeline                 |      |
|  |                                                 |      |
|  |  [P1] --> [P2] --> [P3] --> [P4] --> [P5]       |      |
|  |  Param    Target   Template  Content   File     |      |
|  |  Valid    Path     Loading   Format    Gen      |      |
|  +-------------------------------------------------+      |
|        |                                                  |
|  Output: {scope}/.claude/commands/{group}/{name}.md       |
|                                                           |
+-----------------------------------------------------------+
```

## Key Design Principles

1. **Single Responsibility**: Generates one command file per invocation
2. **Scope Awareness**: Supports project and user-level command locations
3. **Template-Driven**: Uses a consistent template for all generated commands
4. **Validation First**: Validates all required parameters before file operations
5. **Non-Destructive**: Warns if the command file already exists

---

## Execution Flow

```
Phase 1: Parameter Validation
  - Ref: phases/01-parameter-validation.md
  - Validate: skillName (required), description (required), location (required)
  - Optional: group, argumentHint
  - Output: validated params object

Phase 2: Target Path Resolution
  - Ref: phases/02-target-path-resolution.md
  - Resolve: location -> target commands directory
  - Support: project (.claude/commands/) vs user (~/.claude/commands/)
  - Handle: group subdirectory if provided
  - Output: targetPath string

Phase 3: Template Loading
  - Ref: phases/03-template-loading.md
  - Load: templates/command-md.md
  - Template contains YAML frontmatter with placeholders
  - Output: templateContent string

Phase 4: Content Formatting
  - Ref: phases/04-content-formatting.md
  - Substitute: {{name}}, {{description}}, {{group}}, {{argumentHint}}
  - Handle: optional fields (group, argumentHint)
  - Output: formattedContent string

Phase 5: File Generation
  - Ref: phases/05-file-generation.md
  - Check: file existence (warn if exists)
  - Write: formatted content to target path
  - Output: success confirmation with file path
```

## Usage Examples

### Basic Command (Project Scope)
```javascript
Skill(skill="command-generator", args={
  skillName: "deploy",
  description: "Deploy application to production environment",
  location: "project"
})
// Output: .claude/commands/deploy.md
```

### Grouped Command with Argument Hint
```javascript
Skill(skill="command-generator", args={
  skillName: "create",
  description: "Create new issue from GitHub URL or text",
  location: "project",
  group: "issue",
  argumentHint: "[-y|--yes] <github-url | text-description> [--priority 1-5]"
})
// Output: .claude/commands/issue/create.md
```

### User-Level Command
```javascript
Skill(skill="command-generator", args={
  skillName: "global-status",
  description: "Show global Claude Code status",
  location: "user"
})
// Output: ~/.claude/commands/global-status.md
```

---

## Reference Documents by Phase

### Phase 1: Parameter Validation
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/01-parameter-validation.md](phases/01-parameter-validation.md) | Validate required parameters | Phase 1 execution |

### Phase 2: Target Path Resolution
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/02-target-path-resolution.md](phases/02-target-path-resolution.md) | Resolve target directory | Phase 2 execution |

### Phase 3: Template Loading
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/03-template-loading.md](phases/03-template-loading.md) | Load command template | Phase 3 execution |
| [templates/command-md.md](templates/command-md.md) | Command file template | Template reference |

### Phase 4: Content Formatting
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/04-content-formatting.md](phases/04-content-formatting.md) | Format content with params | Phase 4 execution |

### Phase 5: File Generation
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/05-file-generation.md](phases/05-file-generation.md) | Write final file | Phase 5 execution |

### Design Specifications
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [specs/command-design-spec.md](specs/command-design-spec.md) | Command design guidelines | Understanding best practices |

---

## Output Structure

### Generated Command File

```markdown
---
name: {skillName}
description: {description}
group: {group}
argument-hint: {argumentHint}
---

# {skillName} Command

## Overview
{Auto-generated placeholder for command overview}

## Usage
{Auto-generated placeholder for usage examples}

## Execution Flow
{Auto-generated placeholder for execution steps}
```

The `group` and `argument-hint` frontmatter lines are emitted only when the corresponding optional parameter was provided.

---

## Error Handling

| Error | Stage | Action |
|-------|-------|--------|
| Missing skillName | Phase 1 | Error: "skillName is required" |
| Missing description | Phase 1 | Error: "description is required" |
| Missing location | Phase 1 | Error: "location is required (project or user)" |
| Invalid location | Phase 2 | Error: "location must be 'project' or 'user'" |
| Template not found | Phase 3 | Error: "Command template not found" |
| File exists | Phase 5 | Warning: "Command file already exists, will overwrite" |
| Write failure | Phase 5 | Error: "Failed to write command file" |

---

## Related Skills

- **skill-generator**: Create complete skills with phases, templates, and specs
- **flow-coordinator**: Orchestrate multi-step command workflows

@@ -0,0 +1,174 @@

# Phase 1: Parameter Validation

Validate all required parameters for command generation.

## Objective

Ensure all required parameters are provided before proceeding with command generation:

- **skillName**: Command identifier (required)
- **description**: Command description (required)
- **location**: Target scope - "project" or "user" (required)
- **group**: Optional grouping subdirectory
- **argumentHint**: Optional argument hint string

## Input

Parameters received from skill invocation:

- `skillName`: string (required)
- `description`: string (required)
- `location`: "project" | "user" (required)
- `group`: string (optional)
- `argumentHint`: string (optional)

## Validation Rules

### Required Parameters

```javascript
const requiredParams = {
  skillName: {
    type: 'string',
    minLength: 1,
    pattern: /^[a-z][a-z0-9-]*$/, // lowercase, alphanumeric, hyphens
    error: 'skillName must be lowercase alphanumeric with hyphens, starting with a letter'
  },
  description: {
    type: 'string',
    minLength: 10,
    error: 'description must be at least 10 characters'
  },
  location: {
    type: 'string',
    enum: ['project', 'user'],
    error: 'location must be "project" or "user"'
  }
};
```
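As a quick sanity check, the `skillName` pattern above can be exercised directly in Node. The sample names and the `skillNamePattern` variable are illustrative, not part of the skill:

```javascript
// The skillName rule from the table above: a lowercase letter first,
// then lowercase letters, digits, or hyphens.
const skillNamePattern = /^[a-z][a-z0-9-]*$/;

const samples = ['deploy', 'global-status', 'issue2', '2fast', 'Deploy', 'my_cmd', '-x'];
for (const name of samples) {
  console.log(`${name} -> ${skillNamePattern.test(name) ? 'valid' : 'invalid'}`);
}
// deploy, global-status, and issue2 are valid; the rest are rejected
// (leading digit, uppercase letter, underscore, leading hyphen).
```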
### Optional Parameters

```javascript
const optionalParams = {
  group: {
    type: 'string',
    pattern: /^[a-z][a-z0-9-]*$/,
    default: null,
    error: 'group must be lowercase alphanumeric with hyphens'
  },
  argumentHint: {
    type: 'string',
    default: '',
    error: 'argumentHint must be a string'
  }
};
```

## Execution Steps

### Step 1: Extract Parameters

```javascript
// Extract from skill args
const params = {
  skillName: args.skillName,
  description: args.description,
  location: args.location,
  group: args.group || null,
  argumentHint: args.argumentHint || ''
};
```

### Step 2: Validate Required Parameters

```javascript
function validateRequired(params, rules) {
  const errors = [];

  for (const [key, rule] of Object.entries(rules)) {
    const value = params[key];

    // Check existence
    if (value === undefined || value === null || value === '') {
      errors.push(`${key} is required`);
      continue;
    }

    // Check type
    if (typeof value !== rule.type) {
      errors.push(`${key} must be a ${rule.type}`);
      continue;
    }

    // Check minLength
    if (rule.minLength && value.length < rule.minLength) {
      errors.push(`${key} must be at least ${rule.minLength} characters`);
    }

    // Check pattern
    if (rule.pattern && !rule.pattern.test(value)) {
      errors.push(rule.error);
    }

    // Check enum
    if (rule.enum && !rule.enum.includes(value)) {
      errors.push(`${key} must be one of: ${rule.enum.join(', ')}`);
    }
  }

  return errors;
}

const requiredErrors = validateRequired(params, requiredParams);
if (requiredErrors.length > 0) {
  throw new Error(`Validation failed:\n${requiredErrors.join('\n')}`);
}
```

### Step 3: Validate Optional Parameters

```javascript
function validateOptional(params, rules) {
  const warnings = [];

  for (const [key, rule] of Object.entries(rules)) {
    const value = params[key];

    if (value !== null && value !== undefined && value !== '') {
      if (rule.pattern && !rule.pattern.test(value)) {
        warnings.push(`${key}: ${rule.error}`);
      }
    }
  }

  return warnings;
}

const optionalWarnings = validateOptional(params, optionalParams);
// Log warnings but continue
```

### Step 4: Normalize Parameters

```javascript
const validatedParams = {
  skillName: params.skillName.trim().toLowerCase(),
  description: params.description.trim(),
  location: params.location.trim().toLowerCase(),
  group: params.group ? params.group.trim().toLowerCase() : null,
  argumentHint: params.argumentHint ? params.argumentHint.trim() : ''
};
```
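The steps above can be run end to end as a self-contained sketch. The `rules` and `validate` names are condensed, illustrative versions of the `requiredParams`/`validateRequired` definitions, not the skill's exact code:

```javascript
// Condensed required-parameter rules from this phase.
const rules = {
  skillName: { type: 'string', minLength: 1, pattern: /^[a-z][a-z0-9-]*$/ },
  description: { type: 'string', minLength: 10 },
  location: { type: 'string', enum: ['project', 'user'] }
};

function validate(params) {
  const errors = [];
  for (const [key, rule] of Object.entries(rules)) {
    const value = params[key];
    if (value === undefined || value === null || value === '') {
      errors.push(`${key} is required`);
      continue;
    }
    if (typeof value !== rule.type) {
      errors.push(`${key} must be a ${rule.type}`);
      continue;
    }
    if (rule.minLength && value.length < rule.minLength) {
      errors.push(`${key} must be at least ${rule.minLength} characters`);
    }
    if (rule.pattern && !rule.pattern.test(value)) {
      errors.push(`${key} fails pattern ${rule.pattern}`);
    }
    if (rule.enum && !rule.enum.includes(value)) {
      errors.push(`${key} must be one of: ${rule.enum.join(', ')}`);
    }
  }
  return errors;
}

// Normalize first (Step 4), then validate (Step 2).
const params = {
  skillName: ' Deploy '.trim().toLowerCase(),
  description: 'Deploy application to production',
  location: 'project'
};
const errors = validate(params);
console.log(errors.length === 0 ? 'validated' : errors.join('; '));
// -> validated
```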

## Output

```javascript
{
  status: 'validated',
  params: validatedParams,
  warnings: optionalWarnings
}
```

## Next Phase

Proceed to [Phase 2: Target Path Resolution](02-target-path-resolution.md) with `validatedParams`.
@@ -0,0 +1,171 @@

# Phase 2: Target Path Resolution

Resolve the target commands directory based on the location parameter.

## Objective

Determine the correct target path for the command file based on:

- **location**: "project" or "user" scope
- **group**: Optional subdirectory for command organization
- **skillName**: Command filename (with .md extension)

## Input

From Phase 1 validation:

```javascript
{
  skillName: string,        // e.g., "create"
  description: string,
  location: "project" | "user",
  group: string | null,     // e.g., "issue"
  argumentHint: string
}
```

## Path Resolution Rules

### Location Mapping

```javascript
const locationMap = {
  project: '.claude/commands',
  user: '~/.claude/commands'  // Expands to user home directory
};
```

### Path Construction

```javascript
function resolveTargetPath(params) {
  const baseDir = locationMap[params.location];

  if (!baseDir) {
    throw new Error(`Invalid location: ${params.location}. Must be "project" or "user".`);
  }

  // Expand ~ to user home if present
  const expandedBase = baseDir.startsWith('~')
    ? path.join(os.homedir(), baseDir.slice(1))
    : baseDir;

  // Build full path
  let targetPath;
  if (params.group) {
    // Grouped command: .claude/commands/{group}/{skillName}.md
    targetPath = path.join(expandedBase, params.group, `${params.skillName}.md`);
  } else {
    // Top-level command: .claude/commands/{skillName}.md
    targetPath = path.join(expandedBase, `${params.skillName}.md`);
  }

  return targetPath;
}
```

## Execution Steps

### Step 1: Get Base Directory

```javascript
const location = validatedParams.location;
const baseDir = locationMap[location];

if (!baseDir) {
  throw new Error(`Invalid location: ${location}. Must be "project" or "user".`);
}
```

### Step 2: Expand User Path (if applicable)

```javascript
const os = require('os');
const path = require('path');

let expandedBase = baseDir;
if (baseDir.startsWith('~')) {
  expandedBase = path.join(os.homedir(), baseDir.slice(1));
}
```

### Step 3: Construct Full Path

```javascript
let targetPath;
let targetDir;

if (validatedParams.group) {
  // Command with group subdirectory
  targetDir = path.join(expandedBase, validatedParams.group);
  targetPath = path.join(targetDir, `${validatedParams.skillName}.md`);
} else {
  // Top-level command
  targetDir = expandedBase;
  targetPath = path.join(targetDir, `${validatedParams.skillName}.md`);
}
```

### Step 4: Ensure Target Directory Exists

```javascript
// Check and create directory if needed
Bash(`mkdir -p "${targetDir}"`);
```

### Step 5: Check File Existence

```javascript
const fileExists = Bash(`test -f "${targetPath}" && echo "EXISTS" || echo "NOT_FOUND"`);

if (fileExists.includes('EXISTS')) {
  console.warn(`Warning: Command file already exists at ${targetPath}. Will overwrite.`);
}
```

## Output

```javascript
{
  status: 'resolved',
  targetPath: targetPath,   // Full path to command file
  targetDir: targetDir,     // Directory containing command
  fileName: `${skillName}.md`,
  fileExists: fileExists.includes('EXISTS'),
  params: validatedParams   // Pass through to next phase
}
```

## Path Examples

### Project Scope (No Group)
```
location: "project"
skillName: "deploy"
-> .claude/commands/deploy.md
```

### Project Scope (With Group)
```
location: "project"
skillName: "create"
group: "issue"
-> .claude/commands/issue/create.md
```

### User Scope (No Group)
```
location: "user"
skillName: "global-status"
-> ~/.claude/commands/global-status.md
```

### User Scope (With Group)
```
location: "user"
skillName: "sync"
group: "session"
-> ~/.claude/commands/session/sync.md
```

## Next Phase

Proceed to [Phase 3: Template Loading](03-template-loading.md) with `targetPath` and `params`.
.claude/skills/command-generator/phases/03-template-loading.md (123 lines, new file)
@@ -0,0 +1,123 @@

# Phase 3: Template Loading

Load the command template file for content generation.

## Objective

Load the command template from the skill's templates directory. The template provides:

- YAML frontmatter structure
- Placeholder variables for substitution
- Standard command file sections

## Input

From Phase 2:

```javascript
{
  targetPath: string,
  targetDir: string,
  fileName: string,
  fileExists: boolean,
  params: {
    skillName: string,
    description: string,
    location: string,
    group: string | null,
    argumentHint: string
  }
}
```

## Template Location

```
.claude/skills/command-generator/templates/command-md.md
```

## Execution Steps

### Step 1: Locate Template File

```javascript
// Template is located in the skill's templates directory
const skillDir = '.claude/skills/command-generator';
const templatePath = `${skillDir}/templates/command-md.md`;
```

### Step 2: Read Template Content

```javascript
const templateContent = Read(templatePath);

if (!templateContent) {
  throw new Error(`Command template not found at ${templatePath}`);
}
```

### Step 3: Validate Template Structure

```javascript
// Verify template contains expected placeholders
const requiredPlaceholders = ['{{name}}', '{{description}}'];
const optionalPlaceholders = ['{{group}}', '{{argumentHint}}'];

for (const placeholder of requiredPlaceholders) {
  if (!templateContent.includes(placeholder)) {
    throw new Error(`Template missing required placeholder: ${placeholder}`);
  }
}
```
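Step 3 can be tried standalone against an inline sample. `Read` above is a Claude Code tool; here the template is just a string literal, and the `missing` filter is an illustrative variation on the loop:

```javascript
// Inline stand-in for the content Read(templatePath) would return.
const templateContent = `---
name: {{name}}
description: {{description}}
---`;

const requiredPlaceholders = ['{{name}}', '{{description}}'];

// Collect any required placeholders the template lacks.
const missing = requiredPlaceholders.filter(p => !templateContent.includes(p));
if (missing.length > 0) {
  throw new Error(`Template missing required placeholder: ${missing.join(', ')}`);
}
console.log('template OK');
// -> template OK
```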

### Step 4: Store Template for Next Phase

```javascript
const template = {
  content: templateContent,
  requiredPlaceholders: requiredPlaceholders,
  optionalPlaceholders: optionalPlaceholders
};
```

## Template Format Reference

The template should follow this structure:

```markdown
---
name: {{name}}
description: {{description}}
{{#if group}}group: {{group}}{{/if}}
{{#if argumentHint}}argument-hint: {{argumentHint}}{{/if}}
---

# {{name}} Command

[Template content with placeholders]
```

## Output

```javascript
{
  status: 'loaded',
  template: {
    content: templateContent,
    requiredPlaceholders: requiredPlaceholders,
    optionalPlaceholders: optionalPlaceholders
  },
  targetPath: targetPath,
  params: params
}
```

## Error Handling

| Error | Action |
|-------|--------|
| Template file not found | Throw error with path |
| Missing required placeholder | Throw error with missing placeholder name |
| Empty template | Throw error |

## Next Phase

Proceed to [Phase 4: Content Formatting](04-content-formatting.md) with `template`, `targetPath`, and `params`.
.claude/skills/command-generator/phases/04-content-formatting.md (184 lines, new file)
@@ -0,0 +1,184 @@

# Phase 4: Content Formatting

Format template content by substituting placeholders with parameter values.

## Objective

Replace all placeholder variables in the template with validated parameter values:

- `{{name}}` -> skillName
- `{{description}}` -> description
- `{{group}}` -> group (if provided)
- `{{argumentHint}}` -> argumentHint (if provided)

## Input

From Phase 3:

```javascript
{
  template: {
    content: string,
    requiredPlaceholders: string[],
    optionalPlaceholders: string[]
  },
  targetPath: string,
  params: {
    skillName: string,
    description: string,
    location: string,
    group: string | null,
    argumentHint: string
  }
}
```

## Placeholder Mapping

```javascript
const placeholderMap = {
  '{{name}}': params.skillName,
  '{{description}}': params.description,
  '{{group}}': params.group || '',
  '{{argumentHint}}': params.argumentHint || ''
};
```

## Execution Steps

### Step 1: Initialize Content

```javascript
let formattedContent = template.content;
```

### Step 2: Substitute Required Placeholders

```javascript
// These must always be replaced
formattedContent = formattedContent.replace(/\{\{name\}\}/g, params.skillName);
formattedContent = formattedContent.replace(/\{\{description\}\}/g, params.description);
```

### Step 3: Handle Optional Placeholders

```javascript
// Group placeholder
if (params.group) {
  formattedContent = formattedContent.replace(/\{\{group\}\}/g, params.group);
} else {
  // Remove group line if not provided
  formattedContent = formattedContent.replace(/^group: \{\{group\}\}\n?/gm, '');
  formattedContent = formattedContent.replace(/\{\{group\}\}/g, '');
}

// Argument hint placeholder
if (params.argumentHint) {
  formattedContent = formattedContent.replace(/\{\{argumentHint\}\}/g, params.argumentHint);
} else {
  // Remove argument-hint line if not provided
  formattedContent = formattedContent.replace(/^argument-hint: \{\{argumentHint\}\}\n?/gm, '');
  formattedContent = formattedContent.replace(/\{\{argumentHint\}\}/g, '');
}
```

### Step 4: Handle Conditional Sections

```javascript
// Collapse blank runs left by removed optional fields
formattedContent = formattedContent.replace(/\n{3,}/g, '\n\n');

// Handle {{#if ...}} style conditionals
if (formattedContent.includes('{{#if')) {
  // Process group conditional
  if (params.group) {
    formattedContent = formattedContent.replace(/\{\{#if group\}\}([\s\S]*?)\{\{\/if\}\}/g, '$1');
  } else {
    formattedContent = formattedContent.replace(/\{\{#if group\}\}[\s\S]*?\{\{\/if\}\}/g, '');
  }

  // Process argumentHint conditional
  if (params.argumentHint) {
    formattedContent = formattedContent.replace(/\{\{#if argumentHint\}\}([\s\S]*?)\{\{\/if\}\}/g, '$1');
  } else {
    formattedContent = formattedContent.replace(/\{\{#if argumentHint\}\}[\s\S]*?\{\{\/if\}\}/g, '');
  }
}
```
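Steps 1-4 can be condensed into one runnable helper. `formatContent` is a hypothetical standalone version for experimentation, not the skill's actual implementation:

```javascript
// Substitute required placeholders, then resolve {{#if ...}} conditionals
// for each optional field (keep the body and fill the placeholder when the
// field is set; drop the whole conditional when it is not).
function formatContent(template, params) {
  let out = template
    .replace(/\{\{name\}\}/g, params.skillName)
    .replace(/\{\{description\}\}/g, params.description);

  for (const field of ['group', 'argumentHint']) {
    const cond = new RegExp(`\\{\\{#if ${field}\\}\\}([\\s\\S]*?)\\{\\{\\/if\\}\\}`, 'g');
    out = params[field]
      ? out.replace(cond, '$1').replace(new RegExp(`\\{\\{${field}\\}\\}`, 'g'), params[field])
      : out.replace(cond, '');
  }
  // Collapse blank runs left by removed optional lines
  return out.replace(/\n{3,}/g, '\n\n');
}

const template = '---\nname: {{name}}\ndescription: {{description}}\n{{#if group}}group: {{group}}\n{{/if}}---';
console.log(formatContent(template, { skillName: 'deploy', description: 'Deploy app', group: null }));
// Prints frontmatter with no group line:
// ---
// name: deploy
// description: Deploy app
// ---
```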

### Step 5: Validate Final Content

```javascript
// Ensure no unresolved placeholders remain
const unresolvedPlaceholders = formattedContent.match(/\{\{[^}]+\}\}/g);
if (unresolvedPlaceholders) {
  console.warn(`Warning: Unresolved placeholders found: ${unresolvedPlaceholders.join(', ')}`);
}

// Ensure frontmatter is valid
const frontmatterMatch = formattedContent.match(/^---\n([\s\S]*?)\n---/);
if (!frontmatterMatch) {
  throw new Error('Generated content has invalid frontmatter structure');
}
```

### Step 6: Generate Summary

```javascript
const summary = {
  name: params.skillName,
  description: params.description.substring(0, 50) + (params.description.length > 50 ? '...' : ''),
  location: params.location,
  group: params.group,
  hasArgumentHint: !!params.argumentHint
};
```

## Output

```javascript
{
  status: 'formatted',
  content: formattedContent,
  targetPath: targetPath,
  summary: summary
}
```

## Content Example

### Input Template
```markdown
---
name: {{name}}
description: {{description}}
{{#if group}}group: {{group}}{{/if}}
{{#if argumentHint}}argument-hint: {{argumentHint}}{{/if}}
---

# {{name}} Command
```

### Output (with all fields)
```markdown
---
name: create
description: Create structured issue from GitHub URL or text description
group: issue
argument-hint: [-y|--yes] <github-url | text-description> [--priority 1-5]
---

# create Command
```

### Output (minimal fields)
```markdown
---
name: deploy
description: Deploy application to production environment
---

# deploy Command
```

## Next Phase

Proceed to [Phase 5: File Generation](05-file-generation.md) with `content` and `targetPath`.
.claude/skills/command-generator/phases/05-file-generation.md (185 lines, new file)
@@ -0,0 +1,185 @@

# Phase 5: File Generation

Write the formatted content to the target command file.

## Objective

Generate the final command file by:

1. Checking for an existing file (warn if present)
2. Writing formatted content to the target path
3. Confirming successful generation

## Input

From Phase 4:

```javascript
{
  status: 'formatted',
  content: string,
  targetPath: string,
  summary: {
    name: string,
    description: string,
    location: string,
    group: string | null,
    hasArgumentHint: boolean
  }
}
```

## Execution Steps

### Step 1: Pre-Write Check

```javascript
// Check if file already exists
const fileExists = Bash(`test -f "${targetPath}" && echo "EXISTS" || echo "NOT_FOUND"`);

if (fileExists.includes('EXISTS')) {
  console.warn(`
WARNING: Command file already exists at: ${targetPath}
The file will be overwritten with new content.
`);
}
```

### Step 2: Ensure Directory Exists

```javascript
// Get directory from target path
const targetDir = path.dirname(targetPath);

// Create directory if it doesn't exist
Bash(`mkdir -p "${targetDir}"`);
```

### Step 3: Write File

```javascript
// Write the formatted content
Write(targetPath, content);
```

### Step 4: Verify Write

```javascript
// Confirm file was created
const verifyExists = Bash(`test -f "${targetPath}" && echo "SUCCESS" || echo "FAILED"`);

if (!verifyExists.includes('SUCCESS')) {
  throw new Error(`Failed to create command file at ${targetPath}`);
}

// Verify content was written
const writtenContent = Read(targetPath);
if (!writtenContent || writtenContent.length === 0) {
  throw new Error(`Command file created but appears to be empty`);
}
```

### Step 5: Generate Success Report

```javascript
const report = {
  status: 'completed',
  file: {
    path: targetPath,
    name: summary.name,
    location: summary.location,
    group: summary.group,
    size: writtenContent.length,
    created: new Date().toISOString()
  },
  command: {
    name: summary.name,
    description: summary.description,
    hasArgumentHint: summary.hasArgumentHint
  },
  nextSteps: [
    `Edit ${targetPath} to add implementation details`,
    'Add usage examples and execution flow',
    'Test the command with Claude Code'
  ]
};
```
|
||||
|
||||
## Output
|
||||
|
||||
### Success Output
|
||||
|
||||
```javascript
|
||||
{
|
||||
status: 'completed',
|
||||
file: {
|
||||
path: '.claude/commands/issue/create.md',
|
||||
name: 'create',
|
||||
location: 'project',
|
||||
group: 'issue',
|
||||
size: 1234,
|
||||
created: '2026-02-27T12:00:00.000Z'
|
||||
},
|
||||
command: {
|
||||
name: 'create',
|
||||
description: 'Create structured issue from GitHub URL...',
|
||||
hasArgumentHint: true
|
||||
},
|
||||
nextSteps: [
|
||||
'Edit .claude/commands/issue/create.md to add implementation details',
|
||||
'Add usage examples and execution flow',
|
||||
'Test the command with Claude Code'
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Console Output
|
||||
|
||||
```
|
||||
Command generated successfully!
|
||||
|
||||
File: .claude/commands/issue/create.md
|
||||
Name: create
|
||||
Description: Create structured issue from GitHub URL...
|
||||
Location: project
|
||||
Group: issue
|
||||
|
||||
Next Steps:
|
||||
1. Edit .claude/commands/issue/create.md to add implementation details
|
||||
2. Add usage examples and execution flow
|
||||
3. Test the command with Claude Code
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Action |
|
||||
|-------|--------|
|
||||
| Directory creation failed | Throw error with directory path |
|
||||
| File write failed | Throw error with target path |
|
||||
| Empty file detected | Throw error and attempt cleanup |
|
||||
| Permission denied | Throw error with permission hint |
|
||||
|
||||
## Cleanup on Failure
|
||||
|
||||
```javascript
|
||||
// If any step fails, attempt to clean up partial artifacts
|
||||
function cleanup(targetPath) {
|
||||
try {
|
||||
Bash(`rm -f "${targetPath}"`);
|
||||
} catch (e) {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
}
|
||||
```
|
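The write-verify-cleanup flow above can be wrapped in a small helper so any failing step triggers cleanup before the error propagates. A minimal sketch -- `runWithCleanup` is an illustrative name, not part of the skill's tool API:

```javascript
// Run steps in order; on any failure, attempt cleanup, then rethrow.
function runWithCleanup(steps, cleanup) {
  try {
    for (const step of steps) step();
  } catch (err) {
    try { cleanup(); } catch (_) { /* ignore cleanup errors */ }
    throw err;
  }
}
```

The Step 1-4 bodies would be passed as the `steps` array, with `cleanup` deleting the partial file.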
||||
|
||||
## Completion
|
||||
|
||||
The command file has been successfully generated. The skill execution is complete.
|
||||
|
||||
### Usage Example
|
||||
|
||||
```bash
|
||||
# Use the generated command
|
||||
/issue:create https://github.com/owner/repo/issues/123
|
||||
|
||||
# Or with the group prefix
|
||||
/issue:create "Login fails with special chars"
|
||||
```
|
||||
.claude/skills/command-generator/specs/command-design-spec.md (new file, 160 lines)
|
||||
# Command Design Specification
|
||||
|
||||
Guidelines and best practices for designing Claude Code command files.
|
||||
|
||||
## Command File Structure
|
||||
|
||||
### YAML Frontmatter
|
||||
|
||||
Every command file must start with YAML frontmatter containing:
|
||||
|
||||
```yaml
|
||||
---
|
||||
name: command-name # Required: Command identifier (lowercase, hyphens)
|
||||
description: Description # Required: Brief description of command purpose
|
||||
argument-hint: "[args]" # Optional: Argument format hint
|
||||
allowed-tools: Tool1, Tool2 # Optional: Restricted tool set
|
||||
examples: # Optional: Usage examples
|
||||
- /command:example1
|
||||
- /command:example2 --flag
|
||||
---
|
||||
```
|
||||
|
||||
### Frontmatter Fields
|
||||
|
||||
| Field | Required | Description |
|
||||
|-------|----------|-------------|
|
||||
| `name` | Yes | Command identifier, lowercase with hyphens |
|
||||
| `description` | Yes | Brief description, appears in command listings |
|
||||
| `argument-hint` | No | Usage hint for arguments (shown in help) |
|
||||
| `allowed-tools` | No | Restrict available tools for this command |
|
||||
| `examples` | No | Array of usage examples |
|
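A minimal way to check the required fields is to slice out the frontmatter block and read its top-level `key: value` lines. A sketch only -- it does no real YAML parsing (nested lists such as `examples` are ignored), and `frontmatterFields` is an illustrative name:

```javascript
// Extract top-level key: value pairs from a command file's frontmatter.
function frontmatterFields(md) {
  const m = md.match(/^---\n([\s\S]*?)\n---/);
  if (!m) return null;
  const fields = {};
  for (const line of m[1].split('\n')) {
    const kv = line.match(/^([\w-]+):\s*(.*)$/);
    if (kv) fields[kv[1]] = kv[2];
  }
  return fields;
}
```

A validator would then assert that both `name` and `description` are present.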
||||
|
||||
## Naming Conventions
|
||||
|
||||
### Command Names
|
||||
|
||||
- Use lowercase letters only
|
||||
- Separate words with hyphens (`create-issue`, not `createIssue`)
|
||||
- Keep names short but descriptive (2-3 words max)
|
||||
- Use verbs for actions (`deploy`, `create`, `analyze`)
|
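These conventions are easy to enforce mechanically. A sketch, assuming a three-word cap per the guideline above (`isValidCommandName` is an illustrative name):

```javascript
// Lowercase words separated by hyphens, at most 3 words.
const NAME_RE = /^[a-z]+(-[a-z]+)*$/;
function isValidCommandName(name) {
  return NAME_RE.test(name) && name.split('-').length <= 3;
}
```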
||||
|
||||
### Group Names
|
||||
|
||||
- Groups organize related commands
|
||||
- Use singular nouns (`issue`, `session`, `workflow`)
|
||||
- Common groups: `issue`, `workflow`, `session`, `memory`, `cli`
|
||||
|
||||
### Path Examples
|
||||
|
||||
```
|
||||
.claude/commands/deploy.md # Top-level command
|
||||
.claude/commands/issue/create.md # Grouped command
|
||||
.claude/commands/workflow/init.md # Grouped command
|
||||
```
|
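The path layout above follows directly from the command's name, group, and scope. A sketch of the mapping (`commandPath` is an illustrative helper, not an existing API):

```javascript
// Resolve a command's file path from its metadata.
function commandPath({ name, group, location }) {
  const base = location === 'user' ? '~/.claude/commands' : '.claude/commands';
  return group ? `${base}/${group}/${name}.md` : `${base}/${name}.md`;
}
```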
||||
|
||||
## Content Sections
|
||||
|
||||
### Required Sections
|
||||
|
||||
1. **Overview**: Brief description of command purpose
|
||||
2. **Usage**: Command syntax and examples
|
||||
3. **Execution Flow**: High-level process diagram
|
||||
|
||||
### Recommended Sections
|
||||
|
||||
4. **Implementation**: Code examples for each phase
|
||||
5. **Error Handling**: Error cases and recovery
|
||||
6. **Related Commands**: Links to related functionality
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Clear Purpose
|
||||
|
||||
Each command should do one thing well:
|
||||
|
||||
```
|
||||
Good: /issue:create - Create a new issue
|
||||
Bad: /issue:manage - Create, update, delete issues (too broad)
|
||||
```
|
||||
|
||||
### 2. Consistent Structure
|
||||
|
||||
Follow the same pattern across all commands in a group:
|
||||
|
||||
```markdown
|
||||
# All issue commands should have:
|
||||
- Overview
|
||||
- Usage with examples
|
||||
- Phase-based implementation
|
||||
- Error handling table
|
||||
```
|
||||
|
||||
### 3. Progressive Detail
|
||||
|
||||
Start simple, add detail in phases:
|
||||
|
||||
```
|
||||
Phase 1: Quick overview
|
||||
Phase 2: Implementation details
|
||||
Phase 3: Edge cases and errors
|
||||
```
|
||||
|
||||
### 4. Reusable Patterns
|
||||
|
||||
Use consistent patterns for common operations:
|
||||
|
||||
```javascript
|
||||
// Input parsing pattern
|
||||
const args = parseArguments($ARGUMENTS);
|
||||
const flags = parseFlags($ARGUMENTS);
|
||||
|
||||
// Validation pattern
|
||||
if (!args.required) {
|
||||
throw new Error('Required argument missing');
|
||||
}
|
||||
```
|
||||
|
||||
## Scope Guidelines
|
||||
|
||||
### Project Commands (`.claude/commands/`)
|
||||
|
||||
- Project-specific workflows
|
||||
- Team conventions
|
||||
- Integration with project tools
|
||||
|
||||
### User Commands (`~/.claude/commands/`)
|
||||
|
||||
- Personal productivity tools
|
||||
- Cross-project utilities
|
||||
- Global configuration
|
||||
|
||||
## Error Messages
|
||||
|
||||
### Good Error Messages
|
||||
|
||||
```
|
||||
Error: GitHub issue URL required
|
||||
Usage: /issue:create <github-url>
|
||||
Example: /issue:create https://github.com/owner/repo/issues/123
|
||||
```
|
||||
|
||||
### Bad Error Messages
|
||||
|
||||
```
|
||||
Error: Invalid input
|
||||
```
|
||||
|
||||
## Testing Commands
|
||||
|
||||
After creating a command, test:
|
||||
|
||||
1. **Basic invocation**: Does it run without arguments?
|
||||
2. **Argument parsing**: Does it handle valid arguments?
|
||||
3. **Error cases**: Does it show helpful errors for invalid input?
|
||||
4. **Help text**: Is the usage clear?
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) - Full skill design specification
|
||||
- [../skill-generator/SKILL.md](../skill-generator/SKILL.md) - Meta-skill for creating skills
|
||||
.claude/skills/command-generator/templates/command-md.md (new file, 75 lines)
|
||||
---
|
||||
name: {{name}}
|
||||
description: {{description}}
|
||||
{{#if argumentHint}}argument-hint: {{argumentHint}}
|
||||
{{/if}}---
|
||||
|
||||
# {{name}} Command
|
||||
|
||||
## Overview
|
||||
|
||||
[Describe the command purpose and what it does]
|
||||
|
||||
## Usage
|
||||
|
||||
```bash
|
||||
/{{#if group}}{{group}}:{{/if}}{{name}} [arguments]
|
||||
```
|
||||
|
||||
**Examples**:
|
||||
```bash
|
||||
# Example 1: Basic usage
|
||||
/{{#if group}}{{group}}:{{/if}}{{name}}
|
||||
|
||||
# Example 2: With arguments
|
||||
/{{#if group}}{{group}}:{{/if}}{{name}} --option value
|
||||
```
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```
|
||||
Phase 1: Input Parsing
|
||||
- Parse arguments and flags
|
||||
- Validate input parameters
|
||||
|
||||
Phase 2: Core Processing
|
||||
- Execute main logic
|
||||
- Handle edge cases
|
||||
|
||||
Phase 3: Output Generation
|
||||
- Format results
|
||||
- Display to user
|
||||
```
|
||||
|
||||
## Implementation
|
||||
|
||||
### Phase 1: Input Parsing
|
||||
|
||||
```javascript
|
||||
// Parse command arguments
|
||||
const args = parseArguments($ARGUMENTS);
|
||||
```
|
||||
|
||||
### Phase 2: Core Processing
|
||||
|
||||
```javascript
|
||||
// TODO: Implement core logic
|
||||
```
|
||||
|
||||
### Phase 3: Output Generation
|
||||
|
||||
```javascript
|
||||
// TODO: Format and display output
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Action |
|
||||
|-------|--------|
|
||||
| Invalid input | Show usage and error message |
|
||||
| Processing failure | Log error and suggest recovery |
|
||||
|
||||
## Related Commands
|
||||
|
||||
- [Related command 1]
|
||||
- [Related command 2]
|
||||
.claude/skills/team-coordinate/SKILL.md (new file, 442 lines)
|
||||
---
|
||||
name: team-coordinate
|
||||
description: Universal team coordination skill with dynamic role generation. Only coordinator is built-in -- all worker roles are generated at runtime based on task analysis. Beat/cadence model for orchestration. Triggers on "team coordinate".
|
||||
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
|
||||
---
|
||||
|
||||
# Team Coordinate
|
||||
|
||||
Universal team coordination skill: analyze task -> generate roles -> dispatch -> execute -> deliver. Only the **coordinator** is built-in. All worker roles are **dynamically generated** based on task analysis.
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
+---------------------------------------------------+
|
||||
| Skill(skill="team-coordinate") |
|
||||
| args="task description" |
|
||||
| args="--role=coordinator" |
|
||||
| args="--role=<dynamic> --session=<path>" |
|
||||
+-------------------+-------------------------------+
|
||||
| Role Router
|
||||
+---- --role present? ----+
|
||||
| NO | YES
|
||||
v v
|
||||
Orchestration Mode Role Dispatch
|
||||
(auto -> coordinator) (route to role file)
|
||||
| |
|
||||
coordinator +-------+-------+
|
||||
(built-in) | --role=coordinator?
|
||||
| |
|
||||
YES | | NO
|
||||
v | v
|
||||
built-in | Dynamic Role
|
||||
role.md | <session>/roles/<role>.md
|
||||
|
||||
Subagents (callable by any role, not team members):
|
||||
[discuss-subagent] - multi-perspective critique (dynamic perspectives)
|
||||
[explore-subagent] - codebase exploration with cache
|
||||
```
|
||||
|
||||
## Role Router
|
||||
|
||||
### Input Parsing
|
||||
|
||||
Parse `$ARGUMENTS` to extract `--role` and `--session`. If no `--role` -> Orchestration Mode (auto route to coordinator).
|
||||
|
||||
### Role Registry
|
||||
|
||||
Only the coordinator is statically registered. All other roles are dynamic, stored in `team-session.json#roles`.
|
||||
|
||||
| Role | File | Type |
|
||||
|------|------|------|
|
||||
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | built-in orchestrator |
|
||||
| (dynamic) | `<session>/roles/<role-name>.md` | runtime-generated worker |
|
||||
|
||||
> **COMPACT PROTECTION**: Role files are execution documents. After context compression, role instructions survive only as summaries -- you **MUST immediately `Read` the role.md to reload it before continuing**. Never execute any Phase from summaries alone.
|
||||
|
||||
### Subagent Registry
|
||||
|
||||
| Subagent | Spec | Callable By | Purpose |
|
||||
|----------|------|-------------|---------|
|
||||
| discuss | [subagents/discuss-subagent.md](subagents/discuss-subagent.md) | any role | Multi-perspective critique (dynamic perspectives) |
|
||||
| explore | [subagents/explore-subagent.md](subagents/explore-subagent.md) | any role | Codebase exploration with cache |
|
||||
|
||||
### Dispatch
|
||||
|
||||
1. Extract `--role` and `--session` from arguments
|
||||
2. If no `--role` -> route to coordinator (Orchestration Mode)
|
||||
3. If `--role=coordinator` -> Read built-in `roles/coordinator/role.md` -> Execute its phases
|
||||
4. If `--role=<other>` -> Read `<session>/roles/<role>.md` -> Execute its phases
|
||||
5. If session path not provided -> auto-discover from `.workflow/.team/TC-*/team-session.json`
|
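The dispatch rules above reduce to a small routing function. A sketch, assuming whitespace-free argument values (`routeRole` is illustrative; real session auto-discovery is omitted):

```javascript
// Route an invocation to the correct role file per the dispatch rules.
function routeRole(args) {
  const role = (args.match(/--role=(\S+)/) || [])[1];
  const session = (args.match(/--session=(\S+)/) || [])[1];
  if (!role || role === 'coordinator') {
    return { role: 'coordinator', file: 'roles/coordinator/role.md' };
  }
  // Dynamic roles live under the session folder; auto-discovery not shown.
  return { role, file: `${session || '<auto-discover>'}/roles/${role}.md` };
}
```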
||||
|
||||
### Orchestration Mode
|
||||
|
||||
When invoked without `--role`, the coordinator auto-starts; the user only provides a task description.
|
||||
|
||||
**Invocation**: `Skill(skill="team-coordinate", args="task description")`
|
||||
|
||||
**Lifecycle**:
|
||||
```
|
||||
User provides task description
|
||||
-> coordinator Phase 1: task analysis (detect capabilities, build dependency graph)
|
||||
-> coordinator Phase 2: generate roles + initialize session
|
||||
-> coordinator Phase 3: create task chain from dependency graph
|
||||
-> coordinator Phase 4: spawn first batch workers (background) -> STOP
|
||||
-> Worker executes -> SendMessage callback -> coordinator advances next step
|
||||
-> Loop until pipeline complete -> Phase 5 report
|
||||
```
|
||||
|
||||
**User Commands** (wake paused coordinator):
|
||||
|
||||
| Command | Action |
|
||||
|---------|--------|
|
||||
| `check` / `status` | Output execution status graph, no advancement |
|
||||
| `resume` / `continue` | Check worker states, advance next step |
|
||||
|
||||
---
|
||||
|
||||
## Shared Infrastructure
|
||||
|
||||
The following templates apply to all worker roles. Each generated role.md only needs to define **Phase 2-4** role-specific logic.
|
||||
|
||||
### Worker Phase 1: Task Discovery (all workers shared)
|
||||
|
||||
Each worker on startup executes the same task discovery flow:
|
||||
|
||||
1. Call `TaskList()` to get all tasks
|
||||
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
|
||||
3. No tasks -> idle wait
|
||||
4. Has tasks -> `TaskGet` for details -> `TaskUpdate` mark in_progress
|
||||
|
||||
**Resume Artifact Check** (prevent duplicate output after resume):
|
||||
- Check if this task's output artifacts already exist
|
||||
- Artifacts complete -> skip to Phase 5 report completion
|
||||
- Artifacts incomplete or missing -> normal Phase 2-4 execution
|
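The shared discovery filter can be sketched as a single predicate over `TaskList()` output (the task field names mirror those used above; `nextReadyTask` is illustrative):

```javascript
// First pending, unblocked task that belongs to this role.
function nextReadyTask(tasks, prefix, role) {
  return tasks.find(t =>
    t.subject.startsWith(prefix + '-') &&
    t.owner === role &&
    t.status === 'pending' &&
    (!t.blockedBy || t.blockedBy.length === 0)
  ) || null;
}
```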
||||
|
||||
### Worker Phase 5: Report + Fast-Advance (all workers shared)
|
||||
|
||||
Task completion with optional fast-advance to skip coordinator round-trip:
|
||||
|
||||
1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log message
|
||||
- Params: operation="log", team=<team-name>, from=<role>, to="coordinator", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
|
||||
- **CLI fallback**: When MCP unavailable -> `ccw team log --team <team> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
|
||||
2. **TaskUpdate**: Mark task completed
|
||||
3. **Fast-Advance Check**:
|
||||
- Call `TaskList()`, find pending tasks whose blockedBy are ALL completed
|
||||
- If exactly 1 ready task AND its owner matches a simple successor pattern -> **spawn it directly** (skip coordinator)
|
||||
- Otherwise -> **SendMessage** to coordinator for orchestration
|
||||
4. **Loop**: Back to Phase 1 to check for next task
|
||||
|
||||
**Fast-Advance Rules**:
|
||||
|
||||
| Condition | Action |
|
||||
|-----------|--------|
|
||||
| Same-prefix successor (Inner Loop role) | Do not spawn; continue in the worker's inner loop (Phase 5-L) |
|
||||
| 1 ready task, simple linear successor, different prefix | Spawn directly via Task(run_in_background: true) |
|
||||
| Multiple ready tasks (parallel window) | SendMessage to coordinator (needs orchestration) |
|
||||
| No ready tasks + others running | SendMessage to coordinator (status update) |
|
||||
| No ready tasks + nothing running | SendMessage to coordinator (pipeline may be complete) |
|
||||
|
||||
**Fast-advance failure recovery**: If a fast-advanced task fails, the coordinator detects it as an orphaned in_progress task on next `resume`/`check` and resets it to pending for re-spawn. Self-healing. See [monitor.md](roles/coordinator/commands/monitor.md).
|
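The decision table reduces to a small branch. A sketch only -- the "simple linear successor" heuristic is collapsed to a prefix check here, so treat it as illustrative rather than the exact rule:

```javascript
// Decide how to advance after completing a task, per the rules table.
function fastAdvanceDecision(readyTasks, myPrefix) {
  if (readyTasks.length === 1) {
    return readyTasks[0].subject.startsWith(myPrefix + '-')
      ? 'inner-loop'       // same-prefix successor: handle in this worker
      : 'spawn-directly';  // simple linear successor, different prefix
  }
  return 'notify-coordinator'; // parallel window, status update, or complete
}
```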
||||
|
||||
### Worker Inner Loop (roles with multiple same-prefix serial tasks)
|
||||
|
||||
When a role has **2+ serial same-prefix tasks**, it loops internally instead of spawning new agents:
|
||||
|
||||
**Inner Loop flow**:
|
||||
|
||||
```
|
||||
Phase 1: Discover task (first time)
|
||||
|
|
||||
+- Found task -> Phase 2-3: Load context + Execute work
|
||||
| |
|
||||
| v
|
||||
| Phase 4: Validation (+ optional Inline Discuss)
|
||||
| |
|
||||
| v
|
||||
| Phase 5-L: Loop Completion
|
||||
| |
|
||||
| +- TaskUpdate completed
|
||||
| +- team_msg log
|
||||
| +- Accumulate summary to context_accumulator
|
||||
| |
|
||||
| +- More same-prefix tasks?
|
||||
| | +- YES -> back to Phase 1 (inner loop)
|
||||
| | +- NO -> Phase 5-F: Final Report
|
||||
| |
|
||||
| +- Interrupt conditions?
|
||||
| +- consensus_blocked HIGH -> SendMessage -> STOP
|
||||
| +- Errors >= 3 -> SendMessage -> STOP
|
||||
|
|
||||
+- Phase 5-F: Final Report
|
||||
+- SendMessage (all task summaries)
|
||||
+- STOP
|
||||
```
|
||||
|
||||
**Phase 5-L vs Phase 5-F**:
|
||||
|
||||
| Step | Phase 5-L (looping) | Phase 5-F (final) |
|
||||
|------|---------------------|-------------------|
|
||||
| TaskUpdate completed | YES | YES |
|
||||
| team_msg log | YES | YES |
|
||||
| Accumulate summary | YES | - |
|
||||
| SendMessage to coordinator | NO | YES (all tasks summary) |
|
||||
| Fast-Advance to next prefix | - | YES (check cross-prefix successors) |
|
||||
|
||||
### Inline Discuss Protocol (optional for any role)
|
||||
|
||||
After completing primary output, roles may call the discuss subagent inline. Unlike v4's fixed perspective definitions, team-coordinate uses **dynamic perspectives** specified by the coordinator when generating each role.
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "cli-discuss-agent",
|
||||
run_in_background: false,
|
||||
description: "Discuss <round-id>",
|
||||
prompt: <see subagents/discuss-subagent.md for prompt template>
|
||||
})
|
||||
```
|
||||
|
||||
**Consensus handling**:
|
||||
|
||||
| Verdict | Severity | Role Action |
|
||||
|---------|----------|-------------|
|
||||
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
|
||||
| consensus_blocked | HIGH | SendMessage with structured format. Do NOT self-revise. |
|
||||
| consensus_blocked | MEDIUM | SendMessage with warning. Proceed normally. |
|
||||
| consensus_blocked | LOW | Treat as consensus_reached with notes. |
|
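A sketch of the verdict-to-action mapping (`consensusAction` is an illustrative name):

```javascript
// Map a discuss verdict and severity to the role's next action.
function consensusAction(verdict, severity) {
  if (verdict === 'consensus_reached') return 'proceed';
  if (severity === 'HIGH') return 'escalate';          // SendMessage; do not self-revise
  if (severity === 'MEDIUM') return 'proceed-with-warning';
  return 'proceed';                                    // LOW: treat as reached, with notes
}
```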
||||
|
||||
### Shared Explore Utility
|
||||
|
||||
Any role needing codebase context calls the explore subagent:
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "cli-explore-agent",
|
||||
run_in_background: false,
|
||||
description: "Explore <angle>",
|
||||
prompt: <see subagents/explore-subagent.md for prompt template>
|
||||
})
|
||||
```
|
||||
|
||||
**Cache**: Results are stored in `explorations/` with `cache-index.json`. Always check the cache before exploring.
|
||||
|
||||
### Wisdom Accumulation (all roles)
|
||||
|
||||
Cross-task knowledge accumulation. Coordinator creates `wisdom/` directory at session init.
|
||||
|
||||
**Directory**:
|
||||
```
|
||||
<session-folder>/wisdom/
|
||||
+-- learnings.md # Patterns and insights
|
||||
+-- decisions.md # Design and strategy decisions
|
||||
+-- issues.md # Known risks and issues
|
||||
```
|
||||
|
||||
**Worker load** (Phase 2): Extract `Session: <path>` from task description, read wisdom files.
|
||||
**Worker contribute** (Phase 4/5): Write discoveries to corresponding wisdom files.
|
||||
|
||||
### Role Isolation Rules
|
||||
|
||||
| Allowed | Prohibited |
|
||||
|---------|-----------|
|
||||
| Process own prefix tasks | Process other role's prefix tasks |
|
||||
| SendMessage to coordinator | Directly communicate with other workers |
|
||||
| Use tools appropriate to responsibility | Create tasks for other roles |
|
||||
| Call discuss/explore subagents | Modify resources outside own scope |
|
||||
| Fast-advance simple successors | Spawn parallel worker batches |
|
||||
| Report capability_gap to coordinator | Attempt work outside scope |
|
||||
|
||||
The coordinator is additionally prohibited from directly writing or modifying deliverable artifacts, calling implementation subagents, or executing analysis, tests, or reviews itself.
|
||||
|
||||
---
|
||||
|
||||
## Cadence Control
|
||||
|
||||
**Beat model**: Event-driven, each beat = coordinator wake -> process -> spawn -> STOP.
|
||||
|
||||
```
|
||||
Beat Cycle (single beat)
|
||||
======================================================================
|
||||
Event Coordinator Workers
|
||||
----------------------------------------------------------------------
|
||||
callback/resume --> +- handleCallback -+
|
||||
| mark completed |
|
||||
| check pipeline |
|
||||
+- handleSpawnNext -+
|
||||
| find ready tasks |
|
||||
| spawn workers ---+--> [Worker A] Phase 1-5
|
||||
| (parallel OK) --+--> [Worker B] Phase 1-5
|
||||
+- STOP (idle) -----+ |
|
||||
|
|
||||
callback <-----------------------------------------+
|
||||
(next beat) SendMessage + TaskUpdate(completed)
|
||||
======================================================================
|
||||
|
||||
Fast-Advance (skips coordinator for simple linear successors)
|
||||
======================================================================
|
||||
[Worker A] Phase 5 complete
|
||||
+- 1 ready task? simple successor? --> spawn Worker B directly
|
||||
+- complex case? --> SendMessage to coordinator
|
||||
======================================================================
|
||||
```
|
||||
|
||||
**Pipelines are dynamic**: Unlike v4's predefined pipeline beat views (spec-only, impl-only, etc.), team-coordinate pipelines are generated per-task from the dependency graph. The beat model is the same -- only the pipeline shape varies.
|
||||
|
||||
---
|
||||
|
||||
## Coordinator Spawn Template
|
||||
|
||||
### Standard Worker (single-task role)
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "general-purpose",
|
||||
description: "Spawn <role> worker",
|
||||
team_name: <team-name>,
|
||||
name: "<role>",
|
||||
run_in_background: true,
|
||||
prompt: `You are team "<team-name>" <ROLE>.
|
||||
|
||||
## Primary Instruction
|
||||
All your work MUST be executed by calling Skill to get role definition:
|
||||
Skill(skill="team-coordinate", args="--role=<role> --session=<session-folder>")
|
||||
|
||||
Current requirement: <task-description>
|
||||
Session: <session-folder>
|
||||
|
||||
## Role Guidelines
|
||||
- Only process <PREFIX>-* tasks, do not execute other role work
|
||||
- All output prefixed with [<role>] tag
|
||||
- Only communicate with coordinator
|
||||
- Do not use TaskCreate to create tasks for other roles
|
||||
- Before each SendMessage, call mcp__ccw-tools__team_msg to log
|
||||
- After task completion, check for fast-advance opportunity (see SKILL.md Phase 5)
|
||||
|
||||
## Workflow
|
||||
1. Call Skill -> get role definition and execution logic
|
||||
2. Follow role.md 5-Phase flow
|
||||
3. team_msg + SendMessage results to coordinator
|
||||
4. TaskUpdate completed -> check next task or fast-advance`
|
||||
})
|
||||
```
|
||||
|
||||
### Inner Loop Worker (multi-task role)
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "general-purpose",
|
||||
description: "Spawn <role> worker (inner loop)",
|
||||
team_name: <team-name>,
|
||||
name: "<role>",
|
||||
run_in_background: true,
|
||||
prompt: `You are team "<team-name>" <ROLE>.
|
||||
|
||||
## Primary Instruction
|
||||
All your work MUST be executed by calling Skill to get role definition:
|
||||
Skill(skill="team-coordinate", args="--role=<role> --session=<session-folder>")
|
||||
|
||||
Current requirement: <task-description>
|
||||
Session: <session-folder>
|
||||
|
||||
## Inner Loop Mode
|
||||
You will handle ALL <PREFIX>-* tasks in this session, not just the first one.
|
||||
After completing each task, loop back to find the next <PREFIX>-* task.
|
||||
Only SendMessage to coordinator when:
|
||||
- All <PREFIX>-* tasks are done
|
||||
- A consensus_blocked HIGH occurs
|
||||
- Errors accumulate (>= 3)
|
||||
|
||||
## Role Guidelines
|
||||
- Only process <PREFIX>-* tasks, do not execute other role work
|
||||
- All output prefixed with [<role>] tag
|
||||
- Only communicate with coordinator
|
||||
- Do not use TaskCreate to create tasks for other roles
|
||||
- Before each SendMessage, call mcp__ccw-tools__team_msg to log
|
||||
- Use subagent calls for heavy work, retain summaries in context`
|
||||
})
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Session Directory
|
||||
|
||||
```
|
||||
.workflow/.team/TC-<slug>-<date>/
|
||||
+-- team-session.json # Session state + dynamic role registry
|
||||
+-- task-analysis.json # Phase 1 output: capabilities, dependency graph
|
||||
+-- roles/ # Dynamic role definitions (generated Phase 2)
|
||||
| +-- <role-1>.md
|
||||
| +-- <role-2>.md
|
||||
+-- artifacts/ # All MD deliverables from workers
|
||||
| +-- <artifact>.md
|
||||
+-- shared-memory.json # Cross-role state store
|
||||
+-- wisdom/ # Cross-task knowledge
|
||||
| +-- learnings.md
|
||||
| +-- decisions.md
|
||||
| +-- issues.md
|
||||
+-- explorations/ # Shared explore cache
|
||||
| +-- cache-index.json
|
||||
| +-- explore-<angle>.json
|
||||
+-- discussions/ # Inline discuss records
|
||||
| +-- <round>.md
|
||||
+-- .msg/ # Team message bus logs
|
||||
```
|
||||
|
||||
### team-session.json Schema
|
||||
|
||||
```json
|
||||
{
|
||||
"session_id": "TC-<slug>-<date>",
|
||||
"task_description": "<original user input>",
|
||||
"status": "active | paused | completed",
|
||||
"team_name": "<team-name>",
|
||||
"roles": [
|
||||
{
|
||||
"name": "<role-name>",
|
||||
"prefix": "<PREFIX>",
|
||||
"responsibility_type": "<type>",
|
||||
"inner_loop": false,
|
||||
"role_file": "roles/<role-name>.md"
|
||||
}
|
||||
],
|
||||
"pipeline": {
|
||||
"dependency_graph": {},
|
||||
"tasks_total": 0,
|
||||
"tasks_completed": 0
|
||||
},
|
||||
"active_workers": [],
|
||||
"completed_tasks": [],
|
||||
"created_at": "<timestamp>"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Session Resume
|
||||
|
||||
Coordinator supports `--resume` / `--continue` for interrupted sessions:
|
||||
|
||||
1. Scan `.workflow/.team/TC-*/team-session.json` for active/paused sessions
|
||||
2. Multiple matches -> AskUserQuestion for selection
|
||||
3. Audit TaskList -> reconcile session state <-> task status
|
||||
4. Reset in_progress -> pending (interrupted tasks)
|
||||
5. Rebuild team and spawn needed workers only
|
||||
6. Create missing tasks with correct blockedBy
|
||||
7. Kick first executable task -> Phase 4 coordination loop
|
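Step 4's reset is the core of reconciliation: any task a dead worker left `in_progress` goes back to `pending` so it can be re-spawned. A minimal sketch (`reconcile` is illustrative):

```javascript
// Reset interrupted tasks so resume can re-dispatch them.
function reconcile(tasks) {
  return tasks.map(t =>
    t.status === 'in_progress' ? { ...t, status: 'pending' } : t
  );
}
```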
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Scenario | Resolution |
|
||||
|----------|------------|
|
||||
| Unknown --role value | Check if `<session>/roles/<role>.md` exists; error with message if not |
|
||||
| Missing --role arg | Orchestration Mode -> coordinator |
|
||||
| Dynamic role file not found | Error with expected path, coordinator may need to regenerate |
|
||||
| Built-in role file not found | Error with expected path |
|
||||
| Command file not found | Fallback to inline execution |
|
||||
| Discuss subagent fails | Role proceeds without discuss, logs warning |
|
||||
| Explore cache corrupt | Clear cache, re-explore |
|
||||
| Fast-advance spawns wrong task | Coordinator reconciles on next callback |
|
||||
| Session path not provided | Auto-discover from `.workflow/.team/TC-*/team-session.json` |
|
||||
| capability_gap reported | Coordinator generates new role via handleAdapt |
|
||||
(new file, 175 lines)
|
||||
# Command: analyze-task
|
||||
|
||||
## Purpose
|
||||
|
||||
Parse user task description -> detect required capabilities -> build dependency graph -> design dynamic roles. This replaces v4's static mode selection with intelligent task decomposition.
|
||||
|
||||
## Phase 2: Context Loading
|
||||
|
||||
| Input | Source | Required |
|
||||
|-------|--------|----------|
|
||||
| Task description | User input from Phase 1 | Yes |
|
||||
| Clarification answers | AskUserQuestion results (if any) | No |
|
||||
| Session folder | From coordinator Phase 2 | Yes |
|
||||
|
||||
## Phase 3: Task Analysis
|
||||
|
||||
### Step 1: Signal Detection
|
||||
|
||||
Scan task description for capability keywords:
|
||||
|
||||
| Signal | Keywords | Capability | Prefix | Responsibility Type |
|
||||
|--------|----------|------------|--------|---------------------|
|
||||
| Research | investigate, explore, compare, survey, find, research, discover, benchmark, study | researcher | RESEARCH | orchestration |
|
||||
| Writing | write, draft, document, article, report, blog, describe, explain, summarize, content | writer | DRAFT | code-gen (docs) |
|
||||
| Coding | implement, build, code, fix, refactor, develop, create app, program, migrate, port | developer | IMPL | code-gen (code) |
|
||||
| Design | design, architect, plan, structure, blueprint, model, schema, wireframe, layout | designer | DESIGN | orchestration |
|
||||
| Analysis | analyze, review, audit, assess, evaluate, inspect, examine, diagnose, profile | analyst | ANALYSIS | read-only |
|
||||
| Testing | test, verify, validate, QA, quality, check, assert, coverage, regression | tester | TEST | validation |
|
||||
| Planning | plan, breakdown, organize, schedule, decompose, roadmap, strategy, prioritize | planner | PLAN | orchestration |
|
||||
|
||||
**Multi-match**: A task may trigger multiple capabilities. E.g., "research and write a technical article" triggers both `researcher` and `writer`.
|
||||
|
||||
**No match**: If no keywords match, default to a single `general` capability with `TASK` prefix.
|
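Signal detection is plain keyword matching over the description. A sketch with abbreviated keyword lists (the full lists are in the table above; `detectCapabilities` is an illustrative name):

```javascript
// Abbreviated signal table; see the full keyword lists above.
const SIGNALS = {
  researcher: ['research', 'investigate', 'compare', 'survey'],
  writer: ['write', 'draft', 'article', 'document'],
  developer: ['implement', 'build', 'code', 'refactor'],
  tester: ['test', 'verify', 'validate']
};

// Return matched capabilities, or ['general'] when nothing matches.
function detectCapabilities(description) {
  const text = description.toLowerCase();
  const found = Object.keys(SIGNALS).filter(cap =>
    SIGNALS[cap].some(kw => text.includes(kw))
  );
  return found.length ? found : ['general'];
}
```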
||||
|
||||
### Step 2: Artifact Inference
|
||||
|
||||
Each capability produces default output artifacts:
|
||||
|
||||
| Capability | Default Artifact | Format |
|
||||
|------------|-----------------|--------|
|
||||
| researcher | Research findings | `<session>/artifacts/research-findings.md` |
|
||||
| writer | Written document(s) | `<session>/artifacts/<doc-name>.md` |
|
||||
| developer | Code implementation | Source files + `<session>/artifacts/implementation-summary.md` |
|
||||
| designer | Design document | `<session>/artifacts/design-spec.md` |
|
||||
| analyst | Analysis report | `<session>/artifacts/analysis-report.md` |
|
||||
| tester | Test results | `<session>/artifacts/test-report.md` |
|
||||
| planner | Execution plan | `<session>/artifacts/execution-plan.md` |
|
||||
|
||||
### Step 3: Dependency Graph Construction

Build a DAG of work streams using these inference rules:

| Pattern | Shape | Example |
|---------|-------|---------|
| Knowledge -> Creation | research blockedBy nothing, creation blockedBy research | RESEARCH-001 -> DRAFT-001 |
| Design -> Build | design first, build after | DESIGN-001 -> IMPL-001 |
| Build -> Validate | build first, test/review after | IMPL-001 -> TEST-001 + ANALYSIS-001 |
| Plan -> Execute | plan first, execute after | PLAN-001 -> IMPL-001 |
| Independent parallel | no dependency between them | DRAFT-001 \|\| IMPL-001 |
| Analysis -> Revise | analysis finds issues, revise artifact | ANALYSIS-001 -> DRAFT-002 |

**Graph construction algorithm**:

1. Group capabilities by natural ordering: knowledge-gathering -> design/planning -> creation -> validation
2. Within the same tier: capabilities are parallel unless the task description implies a sequence
3. Between tiers: downstream blockedBy upstream
4. Single-capability tasks: one node, no dependencies

**Natural ordering tiers**:

| Tier | Capabilities | Description |
|------|-------------|-------------|
| 0 | researcher, planner | Knowledge gathering / planning |
| 1 | designer | Design (requires context from tier 0 if present) |
| 2 | writer, developer | Creation (requires design/plan if present) |
| 3 | analyst, tester | Validation (requires artifacts to validate) |
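The tier rules above can be sketched as a small helper. This is an illustrative sketch only; the function and tier map are assumptions, not part of the skill implementation:

```python
# Tier map taken from the "Natural ordering tiers" table above.
TIERS = {
    "researcher": 0, "planner": 0,
    "designer": 1,
    "writer": 2, "developer": 2,
    "analyst": 3, "tester": 3,
}

def build_dependency_graph(tasks):
    """tasks: list of (task_id, capability) pairs.
    Returns {task_id: [blockedBy, ...]}.

    Each task is blocked by every task in the nearest lower tier that is
    actually present; same-tier tasks stay parallel (rule 2 above)."""
    graph = {}
    present_tiers = sorted({TIERS[cap] for _, cap in tasks})
    for task_id, cap in tasks:
        tier = TIERS[cap]
        lower = [t for t in present_tiers if t < tier]
        if not lower:
            graph[task_id] = []          # no upstream tier present
        else:
            nearest = max(lower)         # depend on the closest upstream tier
            graph[task_id] = [tid for tid, c in tasks if TIERS[c] == nearest]
    return graph

graph = build_dependency_graph([
    ("RESEARCH-001", "researcher"),
    ("DRAFT-001", "writer"),
    ("ANALYSIS-001", "analyst"),
])
# RESEARCH-001 has no upstream; DRAFT-001 is blockedBy RESEARCH-001;
# ANALYSIS-001 is blockedBy DRAFT-001 (its nearest lower tier is 2, not 0).
```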
### Step 4: Complexity Scoring

| Factor | Weight | Condition |
|--------|--------|-----------|
| Capability count | +1 each | Number of distinct capabilities |
| Cross-domain factor | +2 | Capabilities span 3+ tiers |
| Parallel tracks | +1 each | Independent parallel work streams |
| Serial depth | +1 per level | Longest dependency chain length |

| Total Score | Complexity | Role Limit |
|-------------|------------|------------|
| 1-3 | Low | 1-2 roles |
| 4-6 | Medium | 2-3 roles |
| 7+ | High | 3-5 roles |
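One plausible reading of the two tables above, as a sketch. How "serial depth" is counted is not pinned down by the tables, so treat the exact weights as illustrative:

```python
def complexity_score(capability_count, tiers_spanned, parallel_tracks, serial_depth):
    """Apply the weight table, then map the total onto the score bands."""
    score = capability_count                 # +1 per distinct capability
    if tiers_spanned >= 3:                   # cross-domain factor
        score += 2
    score += parallel_tracks                 # +1 per independent parallel track
    score += serial_depth                    # +1 per level of the longest chain
    if score <= 3:
        return score, "low", "1-2 roles"
    if score <= 6:
        return score, "medium", "2-3 roles"
    return score, "high", "3-5 roles"

# Two capabilities, two tiers spanned, no parallel tracks, one dependency level:
score, level, limit = complexity_score(
    2, tiers_spanned=2, parallel_tracks=0, serial_depth=1)
```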
### Step 5: Role Minimization

Apply merging rules to reduce the role count:

| Rule | Condition | Action |
|------|-----------|--------|
| Absorb trivial | Capability has exactly 1 task AND no explore needed | Merge into nearest related role |
| Merge overlap | Two capabilities share >50% keywords from task description | Combine into single role |
| Coordinator inline | Planner capability with 1 task, no explore | Coordinator handles inline, no separate role |
| Cap at 5 | More than 5 roles after initial assignment | Merge lowest-priority pairs (priority: researcher > designer > developer > writer > analyst > planner > tester) |

**Merge priority** (when two must merge, keep the higher-priority one as the role name):

1. developer (code-gen is hardest to merge)
2. researcher (context-gathering is foundational)
3. writer (document generation has specific patterns)
4. designer (design has specific outputs)
5. analyst (analysis can be absorbed by reviewer pattern)
6. planner (can be absorbed by coordinator)
7. tester (can be absorbed by developer or analyst)
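The cap-at-5 rule can be sketched as follows, using the numbered merge-priority list above (the helper and its merge-into-next-lowest-survivor choice are assumptions for illustration):

```python
# Order from the numbered merge-priority list (index 0 = keep first).
MERGE_PRIORITY = ["developer", "researcher", "writer", "designer",
                  "analyst", "planner", "tester"]

def cap_roles(roles, limit=5):
    """roles: list of role names. Returns (kept_roles, merges), where
    merges maps each absorbed role -> the role that absorbs it."""
    kept = sorted(roles, key=MERGE_PRIORITY.index)
    merges = {}
    while len(kept) > limit:
        absorbed = kept.pop()            # drop the lowest-priority role
        merges[absorbed] = kept[-1]      # absorb into the next-lowest survivor
    return kept, merges

kept, merges = cap_roles(
    ["researcher", "designer", "developer", "writer", "analyst", "tester"])
# tester (priority 7) is absorbed; five roles remain.
```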
## Phase 4: Output

Write `<session-folder>/task-analysis.json`:

```json
{
  "task_description": "<original user input>",
  "capabilities": [
    {
      "name": "researcher",
      "prefix": "RESEARCH",
      "responsibility_type": "orchestration",
      "tasks": [
        { "id": "RESEARCH-001", "description": "..." }
      ],
      "artifacts": ["research-findings.md"]
    }
  ],
  "dependency_graph": {
    "RESEARCH-001": [],
    "DRAFT-001": ["RESEARCH-001"],
    "ANALYSIS-001": ["DRAFT-001"]
  },
  "roles": [
    {
      "name": "researcher",
      "prefix": "RESEARCH",
      "responsibility_type": "orchestration",
      "task_count": 1,
      "inner_loop": false
    },
    {
      "name": "writer",
      "prefix": "DRAFT",
      "responsibility_type": "code-gen (docs)",
      "task_count": 1,
      "inner_loop": false
    }
  ],
  "complexity": {
    "capability_count": 2,
    "cross_domain_factor": false,
    "parallel_tracks": 0,
    "serial_depth": 2,
    "total_score": 3,
    "level": "low"
  },
  "artifacts": [
    { "name": "research-findings.md", "producer": "researcher", "path": "artifacts/research-findings.md" },
    { "name": "article-draft.md", "producer": "writer", "path": "artifacts/article-draft.md" }
  ]
}
```
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No capabilities detected | Default to single `general` role with TASK prefix |
| Circular dependency in graph | Break cycle at lowest-tier edge, warn |
| Task description too vague | Return minimal analysis, coordinator will AskUserQuestion |
| All capabilities merge into one | Valid -- single-role execution, no team overhead |
@@ -0,0 +1,85 @@
# Command: dispatch

## Purpose

Create task chains from dynamic dependency graphs. Unlike v4's static mode-to-pipeline mapping, team-coordinate builds pipelines from the task-analysis.json produced by Phase 1.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task analysis | `<session-folder>/task-analysis.json` | Yes |
| Session file | `<session-folder>/team-session.json` | Yes |
| Role registry | `team-session.json#roles` | Yes |
| Scope | User requirements description | Yes |
## Phase 3: Task Chain Creation

### Workflow

1. **Read dependency graph** from `task-analysis.json#dependency_graph`
2. **Topological sort** tasks to determine creation order
3. **Validate** that all task owners exist in the role registry
4. **For each task** (in topological order):

   ```
   TaskCreate({
     subject: "<PREFIX>-<NNN>",
     owner: "<role-name>",
     description: "<task description from task-analysis>\nSession: <session-folder>\nScope: <scope>\nInnerLoop: <true|false>",
     blockedBy: [<dependency-list from graph>],
     status: "pending"
   })
   ```

5. **Update team-session.json** with pipeline and tasks_total
6. **Validate** the created chain
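Step 2's topological sort, with the cycle check that Dependency Validation requires, can be sketched as Kahn's algorithm over the `dependency_graph` map. An illustrative sketch, not the actual dispatch implementation:

```python
from collections import deque

def topo_sort(graph):
    """graph: {task: [blockedBy, ...]} as in task-analysis.json#dependency_graph.
    Returns tasks in a valid creation order; raises ValueError on a cycle
    (dispatch reports the cycle and halts task creation)."""
    indegree = {t: len(deps) for t, deps in graph.items()}
    dependents = {t: [] for t in graph}
    for task, deps in graph.items():
        for dep in deps:
            dependents[dep].append(task)
    queue = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while queue:
        task = queue.popleft()
        order.append(task)
        for nxt in dependents[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(graph):
        stuck = [t for t, n in indegree.items() if n > 0]
        raise ValueError("circular dependency: " + ", ".join(stuck))
    return order

order = topo_sort({
    "RESEARCH-001": [],
    "DRAFT-001": ["RESEARCH-001"],
    "ANALYSIS-001": ["DRAFT-001"],
})
# -> ["RESEARCH-001", "DRAFT-001", "ANALYSIS-001"]
```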
### Task Description Template

Every task description includes the session path and inner loop flag:

```
<task description>
Session: <session-folder>
Scope: <scope>
InnerLoop: <true|false>
```

### InnerLoop Flag Rules

| Condition | InnerLoop |
|-----------|-----------|
| Role has 2+ serial same-prefix tasks | true |
| Role has 1 task | false |
| Tasks are parallel (no dependency between them) | false |
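The three rules above reduce to one check, sketched here with illustrative names: a role gets InnerLoop only when it owns 2+ tasks and at least one of them is blocked by another task of the same role.

```python
def inner_loop_flag(role_tasks, graph):
    """role_tasks: task IDs owned by one role.
    graph: {task: [blockedBy, ...]}."""
    if len(role_tasks) < 2:
        return False                     # single-task role -> false
    owned = set(role_tasks)
    # True only if some owned task depends on another owned task (serial chain).
    return any(set(graph.get(t, [])) & owned for t in role_tasks)

graph = {"DRAFT-001": [], "DRAFT-002": ["DRAFT-001"], "TEST-001": []}
serial = inner_loop_flag(["DRAFT-001", "DRAFT-002"], graph)
single = inner_loop_flag(["TEST-001"], graph)
parallel = inner_loop_flag(["A-001", "A-002"], {"A-001": [], "A-002": []})
```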
### Dependency Validation

| Check | Criteria |
|-------|----------|
| No orphan tasks | Every task is reachable from at least one root |
| No circular deps | Topological sort succeeds without cycle |
| All owners valid | Every task owner exists in team-session.json#roles |
| All blockedBy valid | Every blockedBy references an existing task subject |
| Session reference | Every task description contains `Session: <session-folder>` |
## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Task count | Matches dependency_graph node count |
| Dependencies | Every blockedBy references an existing task subject |
| Owner assignment | Each task owner is in the role registry |
| Session reference | Every task description contains `Session:` |
| Pipeline integrity | No disconnected subgraphs (warn if found) |
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Circular dependency detected | Report cycle, halt task creation |
| Owner not in role registry | Error, coordinator must fix roles first |
| TaskCreate fails | Log error, report to coordinator |
| Duplicate task subject | Skip creation, log warning |
| Empty dependency graph | Error, task analysis may have failed |
@@ -0,0 +1,274 @@
# Command: monitor

## Purpose

Event-driven pipeline coordination with the Spawn-and-Stop pattern. Adapted from v4 for dynamic roles -- role names are read from `team-session.json#roles` instead of being hardcoded. Includes `handleAdapt` for mid-pipeline capability gap handling.

## Constants

| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
| FAST_ADVANCE_AWARE | true | Workers may skip the coordinator for simple linear successors |
## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Role registry | session.roles[] | Yes |

**Dynamic role resolution**: Known worker roles are loaded from `session.roles[].name` rather than a static list. This is the key difference from v4.
## Phase 3: Handler Routing

### Wake-up Source Detection

Parse `$ARGUMENTS` to determine the handler:

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from session roles | handleCallback |
| 2 | Contains "capability_gap" | handleAdapt |
| 3 | Contains "check" or "status" | handleCheck |
| 4 | Contains "resume", "continue", or "next" | handleResume |
| 5 | None of the above (initial spawn after dispatch) | handleSpawnNext |
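The routing table can be sketched as a priority-ordered chain of checks. A minimal sketch; the function name and role set are illustrative:

```python
import re

def route(message, role_names):
    """Return the handler name for a wake-up message, checking the
    conditions in the priority order of the table above."""
    tags = re.findall(r"\[([\w-]+)\]", message)     # e.g. "[researcher]"
    if any(tag in role_names for tag in tags):
        return "handleCallback"
    if "capability_gap" in message:
        return "handleAdapt"
    if "check" in message or "status" in message:
        return "handleCheck"
    if any(word in message for word in ("resume", "continue", "next")):
        return "handleResume"
    return "handleSpawnNext"

roles = {"researcher", "writer"}
r1 = route("[researcher] RESEARCH-001 complete", roles)
r2 = route("capability_gap: need a data-engineer role", roles)
r3 = route("check", roles)
r4 = route("", roles)
```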
---

### Handler: handleCallback

A worker completed a task. Verify completion, update state, auto-advance.

```
Receive callback from [<role>]
+- Find matching active worker by role (from session.roles)
+- Is this a progress update (not final)? (Inner Loop intermediate task completion)
|  +- YES -> Update session state, do NOT remove from active_workers -> STOP
+- Task status = completed?
|  +- YES -> remove from active_workers -> update session
|  |  +- -> handleSpawnNext
|  +- NO -> progress message, do not advance -> STOP
+- No matching worker found
   +- Scan all active workers for completed tasks
      +- Found completed -> process each -> handleSpawnNext
      +- None completed -> STOP
```

**Fast-advance note**: A worker may have already spawned its successor via fast-advance. When processing a callback:

1. Check if the expected next task is already `in_progress` (fast-advanced)
2. If yes -> skip spawning that task, update active_workers to include the fast-advanced worker
3. If no -> normal handleSpawnNext
---
### Handler: handleCheck

Read-only status report. No pipeline advancement.

**Output format**:

```
[coordinator] Pipeline Status
[coordinator] Progress: <completed>/<total> (<percent>%)

[coordinator] Execution Graph:
<visual representation of dependency graph with status icons>

done=completed  >>>=running  o=pending  .=not created

[coordinator] Active Workers:
> <subject> (<role>) - running <elapsed> [inner-loop: N/M tasks done]

[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```

**Icon mapping**: completed=done, in_progress=>>>, pending=o, not created=.

**Graph rendering**: Read dependency_graph from task-analysis.json, render each node with its status icon. Show parallel branches side-by-side.

Then STOP.
---
### Handler: handleResume

Check active worker completion, process results, advance the pipeline.

```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
   +- status = completed -> mark done, log
   +- status = in_progress -> still running, log
   +- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```
---
### Handler: handleSpawnNext

Find all ready tasks, spawn workers in the background, update the session, STOP.

```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: pending + all blockedBy in completedSubjects

Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> Phase 5
+- HAS ready tasks -> for each:
   +- Is task owner an Inner Loop role AND that role already has an active_worker?
   |  +- YES -> SKIP spawn (existing worker will pick it up via inner loop)
   |  +- NO -> normal spawn below
   +- TaskUpdate -> in_progress
   +- team_msg log -> task_unblocked
   +- Spawn worker (see spawn tool call below)
   +- Add to session.active_workers
Update session file -> output summary -> STOP
```

**Spawn worker tool call** (one per ready task):

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker for <subject>",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: "<worker prompt from SKILL.md Coordinator Spawn Template>"
})
```
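The three state buckets and the completion check at the top of handleSpawnNext can be sketched as follows (task shape and return values are illustrative assumptions):

```python
def ready_subjects(tasks):
    """tasks: list of {subject, status, blockedBy} dicts.
    Returns (ready_list, action) mirroring handleSpawnNext's branching."""
    completed = {t["subject"] for t in tasks if t["status"] == "completed"}
    in_progress = {t["subject"] for t in tasks if t["status"] == "in_progress"}
    ready = [t["subject"] for t in tasks
             if t["status"] == "pending"
             and all(dep in completed for dep in t["blockedBy"])]
    if not ready and not in_progress:
        return ready, "PIPELINE_COMPLETE"    # nothing left anywhere -> Phase 5
    return ready, ("SPAWN" if ready else "WAIT")

tasks = [
    {"subject": "RESEARCH-001", "status": "completed", "blockedBy": []},
    {"subject": "DRAFT-001", "status": "pending", "blockedBy": ["RESEARCH-001"]},
    {"subject": "ANALYSIS-001", "status": "pending", "blockedBy": ["DRAFT-001"]},
]
ready, action = ready_subjects(tasks)
# DRAFT-001 is ready (its only blocker is completed); ANALYSIS-001 is not.
```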
---
### Handler: handleAdapt

Handle mid-pipeline capability gap discovery. A worker reports `capability_gap` when it encounters work outside its scope.

```
Parse capability_gap message:
+- Extract: gap_description, requesting_role, suggested_capability
+- Validate gap is genuine:
   +- Check existing roles in session.roles -> does any role cover this?
   |  +- YES -> redirect: SendMessage to that role's owner -> STOP
   |  +- NO -> genuine gap, proceed to role generation
+- Generate new role:
   1. Read specs/role-template.md
   2. Fill template with capability details from gap description
   3. Write new role file to <session-folder>/roles/<new-role>.md
   4. Add to session.roles[]
+- Create new task(s):
   TaskCreate({
     subject: "<NEW-PREFIX>-001",
     owner: "<new-role>",
     description: "<gap_description>\nSession: <session-folder>\nInnerLoop: false",
     blockedBy: [<requesting task if sequential>],
     status: "pending"
   })
+- Update team-session.json: add role, increment tasks_total
+- Spawn new worker -> STOP
```
---
### Worker Failure Handling

When a worker has an unexpected status (not completed, not in_progress):

1. Reset task -> pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume

### Fast-Advance Failure Recovery

When the coordinator detects that a fast-advanced task has failed (task in_progress but no callback and the worker is gone):

```
handleCallback / handleResume detects:
+- Task is in_progress (was fast-advanced by predecessor)
+- No active_worker entry for this task
+- Original fast-advancing worker has already completed and exited
+- Resolution:
   1. TaskUpdate -> reset task to pending
   2. Remove stale active_worker entry (if any)
   3. Log via team_msg (type: error, summary: "Fast-advanced task <ID> failed, resetting for retry")
   4. -> handleSpawnNext (will re-spawn the task normally)
```

**Detection in handleResume**:

```
For each in_progress task in TaskList():
+- Has matching active_worker? -> normal, skip
+- No matching active_worker? -> orphaned (likely fast-advance failure)
   +- Check creation time: if > 5 minutes with no progress callback
      +- Reset to pending -> handleSpawnNext
```

**Prevention**: Fast-advance failures are self-healing. The coordinator reconciles orphaned tasks on every `resume`/`check` cycle.
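The orphan detection above can be sketched as a single pass over the task list. The `last_activity` field name and the seconds-based threshold are assumptions for illustration:

```python
import time

def find_orphans(task_list, active_workers, now=None, stale_after=300):
    """task_list: list of {subject, status, last_activity} dicts.
    active_workers: set of subjects that have a live active_worker entry.
    Resets stale orphaned in_progress tasks to pending and returns them."""
    now = time.time() if now is None else now
    orphans = []
    for task in task_list:
        if task["status"] != "in_progress":
            continue
        if task["subject"] in active_workers:
            continue                      # normal: a worker owns this task
        if now - task["last_activity"] > stale_after:
            task["status"] = "pending"    # reset for re-spawn via handleSpawnNext
            orphans.append(task["subject"])
    return orphans

tasks = [
    {"subject": "DRAFT-001", "status": "in_progress", "last_activity": 0},
    {"subject": "TEST-001", "status": "in_progress", "last_activity": 0},
]
orphans = find_orphans(tasks, active_workers={"TEST-001"}, now=600)
# DRAFT-001 has no worker and is stale -> reset; TEST-001 has a worker -> kept.
```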
### Consensus-Blocked Handling

When a worker reports `consensus_blocked` in its callback:

```
handleCallback receives message with consensus_blocked flag
+- Extract: divergence_severity, blocked_round, action_recommendation
+- Route by severity:
   |
   +- severity = HIGH
   |  +- Create REVISION task:
   |     +- Same role, same doc type, incremented suffix (e.g., DRAFT-001-R1)
   |     +- Description includes: divergence details + action items from discuss
   |     +- blockedBy: none (immediate execution)
   |     +- Max 1 revision per task (DRAFT-001 -> DRAFT-001-R1, no R2)
   |     +- If already revised once -> PAUSE, escalate to user
   |     +- Update session: mark task as "revised", log revision chain
   |
   +- severity = MEDIUM
   |  +- Proceed with warning: include divergence in next task's context
   |  +- Log action items to wisdom/issues.md
   |  +- Normal handleSpawnNext
   |
   +- severity = LOW
      +- Proceed normally: treat as consensus_reached with notes
      +- Normal handleSpawnNext
```
## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Dynamic roles valid | All task owners exist in session.roles |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect tasks already in_progress via fast-advance, sync to active_workers |
| Fast-advance orphan check | in_progress tasks without an active_worker entry -> reset to pending |
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest checking later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Coordinator reconciles, no duplicate spawns |
| Fast-advance task orphaned | Reset to pending, re-spawn via handleSpawnNext |
| Dynamic role file not found | Error, coordinator must regenerate from task-analysis |
| capability_gap from completed role | Validate gap, generate role if genuine |
| consensus_blocked HIGH | Create revision task (max 1) or pause for user |
| consensus_blocked MEDIUM | Proceed with warning, log to wisdom/issues.md |
.claude/skills/team-coordinate/roles/coordinator/role.md (new file, 233 lines)
@@ -0,0 +1,233 @@
# Coordinator Role

Orchestrate the team-coordinate workflow: task analysis, dynamic role generation, task dispatching, progress monitoring, session state. The sole built-in role -- all worker roles are generated at runtime.

## Identity

- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Analyze task -> Generate roles -> Create team -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries

### MUST
- Analyze the user task to detect capabilities and build the dependency graph
- Dynamically generate worker roles from specs/role-template.md
- Create the team and spawn worker subagents in the background
- Dispatch tasks with proper dependency chains from task-analysis.json
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence (team-session.json)
- Handle capability_gap reports (generate new roles mid-pipeline)
- Handle consensus_blocked HIGH verdicts (create revision tasks or pause)
- Detect fast-advance orphans on resume/check and reset them to pending

### MUST NOT
- Execute task work directly (delegate to workers)
- Modify task output artifacts (workers own their deliverables)
- Call implementation subagents (code-developer, etc.) directly
- Skip dependency validation when creating task chains
- Generate more than 5 worker roles (merge if exceeded)
- Override consensus_blocked HIGH without user confirmation

> **Core principle**: The coordinator is the orchestrator, not the executor. All actual work is delegated to dynamically generated worker roles.
---
## Entry Router

When the coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` from session roles | -> handleCallback |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Capability gap | Message contains "capability_gap" | -> handleAdapt |
| New session | None of the above | -> Phase 0 |

For callback/check/resume/adapt: load `commands/monitor.md` and execute the appropriate handler, then STOP.
---
## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:
1. Scan `.workflow/.team/TC-*/team-session.json` for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection

**Session Reconciliation**:
1. Audit TaskList -> get the real status of all tasks
2. Reconcile: session.completed_tasks <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Detect fast-advance orphans (in_progress without recent activity) -> reset to pending
5. Determine the remaining pipeline from the reconciled state
6. Rebuild the team if disbanded (TeamCreate + spawn needed workers only)
7. Create missing tasks with correct blockedBy dependencies
8. Verify dependency chain integrity
9. Update the session file with the reconciled state
10. Kick the first executable task's worker -> Phase 4
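The session scan in step 1 can be sketched as a glob over the session folders plus a status filter. The helper name is illustrative; the path layout matches the skill's `.workflow/.team/TC-*/team-session.json` convention:

```python
import glob
import json
import os
import tempfile

def resumable_sessions(root):
    """Scan <root>/TC-*/team-session.json for active or paused sessions."""
    found = []
    for path in sorted(glob.glob(os.path.join(root, "TC-*", "team-session.json"))):
        with open(path) as f:
            session = json.load(f)
        if session.get("status") in ("active", "paused"):
            found.append(session["session_id"])
    return found

# Demo against a throwaway directory standing in for .workflow/.team/:
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "TC-demo-2025-01-01"))
with open(os.path.join(root, "TC-demo-2025-01-01", "team-session.json"), "w") as f:
    json.dump({"session_id": "TC-demo-2025-01-01", "status": "paused"}, f)
sessions = resumable_sessions(root)
```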
---
## Phase 1: Task Analysis

**Objective**: Parse the user task, detect capabilities, build the dependency graph, design roles.

**Workflow**:

1. **Parse user task description**

2. **Clarify if ambiguous** via AskUserQuestion:
   - What is the scope? (specific files, module, project-wide)
   - What deliverables are expected? (documents, code, analysis reports)
   - Any constraints? (timeline, technology, style)

3. **Delegate to `commands/analyze-task.md`**:
   - Signal detection: scan keywords -> infer capabilities
   - Artifact inference: each capability -> default output type (.md)
   - Dependency graph: build a DAG of work streams
   - Complexity scoring: count capabilities, cross-domain factor, parallel tracks
   - Role minimization: merge overlapping roles, absorb trivial ones, cap at 5

4. **Output**: Write `<session>/task-analysis.json`

**Success**: Task analyzed, capabilities detected, dependency graph built, roles designed.
---
## Phase 2: Generate Roles + Initialize Session

**Objective**: Create the session, generate dynamic role files, initialize shared infrastructure.

**Workflow**:

1. **Generate session ID**: `TC-<slug>-<date>` (slug from the first 3 meaningful words of the task)

2. **Create session folder structure**:
   ```
   .workflow/.team/<session-id>/
   +-- roles/
   +-- artifacts/
   +-- wisdom/
   +-- explorations/
   +-- discussions/
   +-- .msg/
   ```

3. **Call TeamCreate** with a team name derived from the session ID

4. **Read `specs/role-template.md`** + `task-analysis.json`

5. **For each role in task-analysis.json#roles**:
   - Fill the role template with:
     - role_name, prefix, responsibility_type from the analysis
     - Phase 2-4 content from the responsibility-type reference sections in the template
     - inner_loop flag from the analysis (true if the role has 2+ serial tasks)
     - Task-specific instructions from the task description
   - Write the generated role file to `<session>/roles/<role-name>.md`

6. **Register roles** in team-session.json#roles

7. **Initialize shared infrastructure**:
   - `wisdom/learnings.md`, `wisdom/decisions.md`, `wisdom/issues.md` (empty with headers)
   - `explorations/cache-index.json` (`{ "entries": [] }`)
   - `shared-memory.json` (`{}`)
   - `discussions/` (empty directory)

8. **Write team-session.json** with: session_id, task_description, status="active", roles, pipeline (empty), active_workers=[], created_at

**Success**: Session created, role files generated, shared infrastructure initialized.
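Step 1's session ID scheme can be sketched as below. The skill only says "meaningful words", so the stopword list here is an assumption:

```python
import re
from datetime import date

STOPWORDS = {"a", "an", "the", "and", "for", "to", "of", "in", "on", "with"}

def session_id(task_description, today=None):
    """Build TC-<slug>-<date> from the first 3 meaningful task words."""
    words = re.findall(r"[a-z0-9]+", task_description.lower())
    meaningful = [w for w in words if w not in STOPWORDS][:3]
    today = today or date.today().isoformat()
    return "TC-" + "-".join(meaningful) + "-" + today

sid = session_id("Research and write a technical article", today="2025-06-01")
# -> "TC-research-write-technical-2025-06-01"
```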
---
## Phase 3: Create Task Chain

**Objective**: Dispatch tasks based on the dependency graph with proper dependencies.

Delegate to `commands/dispatch.md`, which creates the full task chain:
1. Reads dependency_graph from task-analysis.json
2. Topologically sorts the tasks
3. Creates tasks via TaskCreate with correct blockedBy
4. Assigns owners based on the role mapping from task-analysis.json
5. Includes `Session: <session-folder>` in every task description
6. Sets the InnerLoop flag for multi-task roles
7. Updates team-session.json with pipeline and tasks_total

**Success**: All tasks created with correct dependency chains, session updated.
---
## Phase 4: Spawn-and-Stop

**Objective**: Spawn the first batch of ready workers in the background, then STOP.

**Design**: Spawn-and-Stop + Callback pattern, with worker fast-advance.
- Spawn workers with `Task(run_in_background: true)` -> immediately return
- Worker completes -> may fast-advance to the next task OR SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- Coordinator does one operation per invocation, then STOPS

**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, all blockedBy resolved, owner assigned
3. For each ready task -> spawn a worker (see SKILL.md Coordinator Spawn Template)
   - Use the Standard Worker template for single-task roles
   - Use the Inner Loop Worker template for multi-task roles
4. Output a status summary with the execution graph
5. STOP

**Pipeline advancement** is driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)
---
## Phase 5: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:
1. Load session state -> count completed tasks, duration
2. List all deliverables with output paths in `<session>/artifacts/`
3. Include discussion summaries (if inline discuss was used)
4. Summarize wisdom accumulated during execution
5. Update session status -> "completed"
6. Offer next steps: exit / view artifacts / extend with additional tasks

**Output format**:

```
[coordinator] ============================================
[coordinator] TASK COMPLETE
[coordinator]
[coordinator] Deliverables:
[coordinator]   - <artifact-1.md> (<producer role>)
[coordinator]   - <artifact-2.md> (<producer role>)
[coordinator]
[coordinator] Pipeline: <completed>/<total> tasks
[coordinator] Roles: <role-list>
[coordinator] Duration: <elapsed>
[coordinator]
[coordinator] Session: <session-folder>
[coordinator] ============================================
```
---
## Error Handling

| Error | Resolution |
|-------|------------|
| Task timeout | Log, mark failed, ask the user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect in task analysis, report to user, halt |
| Task description too vague | AskUserQuestion for clarification |
| Session corruption | Attempt recovery, fall back to manual reconciliation |
| Role generation fails | Fall back to a single general-purpose role |
| capability_gap reported | handleAdapt: generate new role, create tasks, spawn |
| All capabilities merge to one | Valid: single-role execution, reduced overhead |
| No capabilities detected | Default to a single general role with TASK prefix |
.claude/skills/team-coordinate/specs/role-template.md (new file, 432 lines)
@@ -0,0 +1,432 @@
# Dynamic Role Template

Template used by the coordinator to generate worker role.md files at runtime. Each generated role is written to `<session>/roles/<role-name>.md`.

## Template

```markdown
# Role: <role_name>

<role_description>

## Identity

- **Name**: `<role_name>` | **Tag**: `[<role_name>]`
- **Task Prefix**: `<prefix>-*`
- **Responsibility**: <responsibility_type>
<if inner_loop>
- **Mode**: Inner Loop (handle all `<prefix>-*` tasks in a single agent)
</if>

## Boundaries

### MUST
- Only process `<prefix>-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[<role_name>]` identifier
- Only communicate with the coordinator via SendMessage
- Work strictly within the <responsibility_type> responsibility scope
- Use fast-advance for simple linear successors (see SKILL.md Phase 5)
- Produce MD artifacts in `<session>/artifacts/`
<if inner_loop>
- Use a subagent for heavy work (do not execute CLI/generation in the main agent context)
- Maintain context_accumulator across tasks within the inner loop
- Loop through all `<prefix>-*` tasks before reporting to the coordinator
</if>

### MUST NOT
- Execute work outside this role's responsibility scope
- Communicate directly with other worker roles (must go through the coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's scope
- Omit the `[<role_name>]` identifier in any output
- Fast-advance when multiple tasks are ready or at checkpoint boundaries
<if inner_loop>
- Execute heavy work (CLI calls, large document generation) in the main agent (delegate to a subagent)
- SendMessage to the coordinator mid-loop (unless consensus_blocked HIGH or error count >= 3)
</if>

## Toolbox

| Tool | Purpose |
|------|---------|
<tools based on responsibility_type -- see reference sections below>

## Message Types

| Type | Direction | Description |
|------|-----------|-------------|
| `<prefix>_complete` | -> coordinator | Task completed with artifact path |
| `<prefix>_error` | -> coordinator | Error encountered |
| `capability_gap` | -> coordinator | Work outside role scope discovered |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <team-name>,
  from: "<role_name>",
  to: "coordinator",
  type: <message-type>,
  summary: "[<role_name>] <prefix> complete: <task-subject>",
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP is unavailable):

```
Bash("ccw team log --team <team-name> --from <role_name> --to coordinator --type <message-type> --summary \"[<role_name>] <prefix> complete\" --ref <artifact-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `<prefix>-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: <phase2_name>

<phase2_content -- generated by coordinator based on responsibility type>

### Phase 3: <phase3_name>

<phase3_content -- generated by coordinator based on task specifics>
|
||||
|
||||
### Phase 4: <phase4_name>
|
||||
|
||||
<phase4_content -- generated by coordinator based on responsibility type>
|
||||
|
||||
<if inline_discuss>
|
||||
### Phase 4b: Inline Discuss (optional)
|
||||
|
||||
After primary work, optionally call discuss subagent:
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "cli-discuss-agent",
|
||||
run_in_background: false,
|
||||
description: "Discuss <round-id>",
|
||||
prompt: "## Multi-Perspective Critique: <round-id>
|
||||
See subagents/discuss-subagent.md for prompt template.
|
||||
Perspectives: <specified by coordinator when generating this role>"
|
||||
})
|
||||
```
|
||||
|
||||
| Verdict | Severity | Action |
|
||||
|---------|----------|--------|
|
||||
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
|
||||
| consensus_blocked | HIGH | Phase 5 SendMessage includes structured consensus_blocked format. Do NOT self-revise. |
|
||||
| consensus_blocked | MEDIUM | Phase 5 SendMessage includes warning. Proceed normally. |
|
||||
| consensus_blocked | LOW | Treat as consensus_reached with notes. |
|
||||
</if>
|
||||
|
||||
<if inner_loop>
|
||||
### Phase 5-L: Loop Completion (Inner Loop)
|
||||
|
||||
When more same-prefix tasks remain:
|
||||
|
||||
1. **TaskUpdate**: Mark current task completed
|
||||
2. **team_msg**: Log task completion
|
||||
3. **Accumulate summary**:
|
||||
```
|
||||
context_accumulator.append({
|
||||
task: "<task-id>",
|
||||
artifact: "<output-path>",
|
||||
key_decisions: <from subagent return>,
|
||||
discuss_verdict: <from Phase 4>,
|
||||
summary: <from subagent return>
|
||||
})
|
||||
```
|
||||
4. **Interrupt check**:
|
||||
- consensus_blocked HIGH -> SendMessage -> STOP
|
||||
- Error count >= 3 -> SendMessage -> STOP
|
||||
5. **Loop**: Back to Phase 1
|
||||
|
||||
**Does NOT**: SendMessage to coordinator, Fast-Advance spawn.
|
||||
|
||||
### Phase 5-F: Final Report (Inner Loop)
|
||||
|
||||
When all same-prefix tasks are done:
|
||||
|
||||
1. **TaskUpdate**: Mark last task completed
|
||||
2. **team_msg**: Log completion
|
||||
3. **Summary report**: All tasks summary + discuss results + artifact paths
|
||||
4. **Fast-Advance check**: Check cross-prefix successors
|
||||
5. **SendMessage** or **spawn successor**
|
||||
|
||||
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report + Fast-Advance
|
||||
|
||||
<else>
|
||||
### Phase 5: Report + Fast-Advance
|
||||
|
||||
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report + Fast-Advance
|
||||
|
||||
Standard report flow: team_msg log -> SendMessage with `[<role_name>]` prefix -> TaskUpdate completed -> Fast-Advance Check -> Loop to Phase 1 for next task.
|
||||
</if>
|
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Scenario | Resolution |
|
||||
|----------|------------|
|
||||
| No <prefix>-* tasks available | Idle, wait for coordinator assignment |
|
||||
| Context file not found | Notify coordinator, request location |
|
||||
| Subagent fails | Retry once with fallback; still fails -> log error, continue next task |
|
||||
| Fast-advance spawn fails | Fall back to SendMessage to coordinator |
|
||||
<if inner_loop>
|
||||
| Cumulative 3 task failures | SendMessage to coordinator, STOP inner loop |
|
||||
| Agent crash mid-loop | Coordinator detects orphan on resume -> re-spawn -> resume from interrupted task |
|
||||
</if>
|
||||
| Work outside scope discovered | SendMessage capability_gap to coordinator |
|
||||
| Critical issue beyond scope | SendMessage fix_required to coordinator |
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Phase 2-4 Content by Responsibility Type
|
||||
|
||||
Reference sections for coordinator to fill when generating roles. Select the matching section based on `responsibility_type`.
|
||||
|
||||
### orchestration
|
||||
|
||||
**Phase 2: Context Assessment**
|
||||
|
||||
```
|
||||
| Input | Source | Required |
|
||||
|-------|--------|----------|
|
||||
| Task description | From TaskGet | Yes |
|
||||
| Shared memory | <session>/shared-memory.json | No |
|
||||
| Prior artifacts | <session>/artifacts/ | No |
|
||||
| Wisdom | <session>/wisdom/ | No |
|
||||
|
||||
Loading steps:
|
||||
1. Extract session path from task description
|
||||
2. Read shared-memory.json for cross-role context
|
||||
3. Read prior artifacts (if any exist from upstream tasks)
|
||||
4. Load wisdom files for accumulated knowledge
|
||||
5. Optionally call explore subagent for codebase context
|
||||
```
|
||||
|
||||
**Phase 3: Subagent Execution**
|
||||
|
||||
```
|
||||
Delegate to appropriate subagent based on task:
|
||||
|
||||
Task({
|
||||
subagent_type: "general-purpose",
|
||||
run_in_background: false,
|
||||
description: "<task-type> for <task-id>",
|
||||
prompt: "## Task
|
||||
- <task description>
|
||||
- Session: <session-folder>
|
||||
## Context
|
||||
<prior artifacts + shared memory + explore results>
|
||||
## Expected Output
|
||||
Write artifact to: <session>/artifacts/<artifact-name>.md
|
||||
Return JSON summary: { artifact_path, summary, key_decisions[], warnings[] }"
|
||||
})
|
||||
```
|
||||
|
||||
**Phase 4: Result Aggregation**
|
||||
|
||||
```
|
||||
1. Verify subagent output artifact exists
|
||||
2. Read artifact, validate structure/completeness
|
||||
3. Update shared-memory.json with key findings
|
||||
4. Write insights to wisdom/ files
|
||||
```
|
||||
|
||||
### code-gen (docs)
|
||||
|
||||
**Phase 2: Load Prior Context**
|
||||
|
||||
```
|
||||
| Input | Source | Required |
|
||||
|-------|--------|----------|
|
||||
| Task description | From TaskGet | Yes |
|
||||
| Prior artifacts | <session>/artifacts/ from upstream tasks | Conditional |
|
||||
| Shared memory | <session>/shared-memory.json | No |
|
||||
| Wisdom | <session>/wisdom/ | No |
|
||||
|
||||
Loading steps:
|
||||
1. Extract session path from task description
|
||||
2. Read upstream artifacts (e.g., research findings for a writer)
|
||||
3. Read shared-memory.json for cross-role context
|
||||
4. Load wisdom for accumulated decisions
|
||||
```
|
||||
|
||||
**Phase 3: Document Generation**
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "universal-executor",
|
||||
run_in_background: false,
|
||||
description: "Generate <doc-type> for <task-id>",
|
||||
prompt: "## Task
|
||||
- Generate: <document type>
|
||||
- Session: <session-folder>
|
||||
## Prior Context
|
||||
<upstream artifacts + shared memory>
|
||||
## Instructions
|
||||
<task-specific writing instructions from coordinator>
|
||||
## Expected Output
|
||||
Write document to: <session>/artifacts/<doc-name>.md
|
||||
Return JSON: { artifact_path, summary, key_decisions[], sections_generated[], warnings[] }"
|
||||
})
|
||||
```
|
||||
|
||||
**Phase 4: Structure Validation**
|
||||
|
||||
```
|
||||
1. Verify document artifact exists
|
||||
2. Check document has expected sections
|
||||
3. Validate no placeholder text remains
|
||||
4. Update shared-memory.json with document metadata
|
||||
```
|
||||
|
||||
### code-gen (code)
|
||||
|
||||
**Phase 2: Load Plan/Specs**
|
||||
|
||||
```
|
||||
| Input | Source | Required |
|
||||
|-------|--------|----------|
|
||||
| Task description | From TaskGet | Yes |
|
||||
| Plan/design artifacts | <session>/artifacts/ | Conditional |
|
||||
| Shared memory | <session>/shared-memory.json | No |
|
||||
| Wisdom | <session>/wisdom/ | No |
|
||||
|
||||
Loading steps:
|
||||
1. Extract session path from task description
|
||||
2. Read plan/design artifacts from upstream
|
||||
3. Load shared-memory.json for implementation context
|
||||
4. Load wisdom for conventions and patterns
|
||||
```
|
||||
|
||||
**Phase 3: Code Implementation**
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "code-developer",
|
||||
run_in_background: false,
|
||||
description: "Implement <task-id>",
|
||||
prompt: "## Task
|
||||
- <implementation description>
|
||||
- Session: <session-folder>
|
||||
## Plan/Design Context
|
||||
<upstream artifacts>
|
||||
## Instructions
|
||||
<task-specific implementation instructions>
|
||||
## Expected Output
|
||||
Implement code changes.
|
||||
Write summary to: <session>/artifacts/implementation-summary.md
|
||||
Return JSON: { artifact_path, summary, files_changed[], key_decisions[], warnings[] }"
|
||||
})
|
||||
```
|
||||
|
||||
**Phase 4: Syntax Validation**
|
||||
|
||||
```
|
||||
1. Run syntax check (tsc --noEmit or equivalent)
|
||||
2. Verify all planned files exist
|
||||
3. Check no broken imports
|
||||
4. If validation fails -> attempt auto-fix (max 2 attempts)
|
||||
5. Write implementation summary to artifacts/
|
||||
```
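The bounded auto-fix step in the Phase 4 checklist above can be read as a small retry loop. A hypothetical TypeScript sketch (function and type names are illustrative, not part of the spec):

```typescript
// Sketch of "if validation fails -> attempt auto-fix (max 2 attempts)".
// `validate` might wrap `tsc --noEmit`; `autoFix` might re-invoke the
// code-developer subagent with the error list. Both are assumptions here.
type ValidationResult = { ok: boolean; errors: string[] };

function validateWithAutoFix(
  validate: () => ValidationResult,
  autoFix: (errors: string[]) => void,
  maxFixAttempts = 2,
): ValidationResult {
  let result = validate();
  let attempts = 0;
  while (!result.ok && attempts < maxFixAttempts) {
    autoFix(result.errors); // hand the concrete errors to the fixer
    attempts++;
    result = validate(); // re-check after each fix attempt
  }
  return result; // still failing after maxFixAttempts -> caller reports it
}
```

The loop guarantees validation runs at most `maxFixAttempts + 1` times, so a persistently broken build cannot stall the phase.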

### read-only

**Phase 2: Target Loading**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Target artifacts/files | From task description or upstream | Yes |
| Shared memory | <session>/shared-memory.json | No |

Loading steps:
1. Extract session path and target files from task description
2. Read target artifacts or source files for analysis
3. Load shared-memory.json for context
```

**Phase 3: Multi-Dimension Analysis**

```
Task({
  subagent_type: "general-purpose",
  run_in_background: false,
  description: "Analyze <target> for <task-id>",
  prompt: "## Task
- Analyze: <target description>
- Dimensions: <analysis dimensions from coordinator>
- Session: <session-folder>
## Target Content
<artifact content or file content>
## Expected Output
Write report to: <session>/artifacts/analysis-report.md
Return JSON: { artifact_path, summary, findings[], severity_counts: {critical, high, medium, low} }"
})
```

**Phase 4: Severity Classification**

```
1. Verify analysis report exists
2. Classify findings by severity (Critical/High/Medium/Low)
3. Update shared-memory.json with key findings
4. Write issues to wisdom/issues.md
```
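The `severity_counts` object the Phase 3 subagent returns is a straightforward tally over its findings. A minimal sketch, assuming a `Finding` shape inferred from the JSON schema above:

```typescript
type Severity = 'critical' | 'high' | 'medium' | 'low';

// Shape assumed from the Phase 3 return JSON; not a published interface.
interface Finding {
  title: string;
  severity: Severity;
}

// Fold findings into { critical, high, medium, low } counts.
function severityCounts(findings: Finding[]): Record<Severity, number> {
  const counts: Record<Severity, number> = { critical: 0, high: 0, medium: 0, low: 0 };
  for (const f of findings) counts[f.severity]++;
  return counts;
}
```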

### validation

**Phase 2: Environment Detection**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Implementation artifacts | Upstream code changes | Yes |

Loading steps:
1. Detect test framework from project files
2. Get changed files from implementation
3. Identify test command and coverage tool
```

**Phase 3: Test-Fix Cycle**

```
Task({
  subagent_type: "test-fix-agent",
  run_in_background: false,
  description: "Test-fix for <task-id>",
  prompt: "## Task
- Run tests and fix failures
- Session: <session-folder>
- Max iterations: 5
## Changed Files
<from upstream implementation>
## Expected Output
Write report to: <session>/artifacts/test-report.md
Return JSON: { artifact_path, pass_rate, coverage, iterations_used, remaining_failures[] }"
})
```

**Phase 4: Result Analysis**

```
1. Check pass rate >= 95%
2. Check coverage meets threshold
3. Generate test report with pass/fail counts
4. Update shared-memory.json with test results
```
133
.claude/skills/team-coordinate/subagents/discuss-subagent.md
Normal file
@@ -0,0 +1,133 @@
# Discuss Subagent

Lightweight multi-perspective critique engine. Called inline by any role needing peer review. Perspectives are dynamic -- specified by the calling role, not pre-defined.

## Design

Unlike team-lifecycle-v4's fixed perspective definitions (product, technical, quality, risk, coverage), team-coordinate uses **dynamic perspectives** passed in the prompt. The calling role decides what viewpoints matter for its artifact.

## Invocation

Called by roles after artifact creation:

```
Task({
  subagent_type: "cli-discuss-agent",
  run_in_background: false,
  description: "Discuss <round-id>",
  prompt: `## Multi-Perspective Critique: <round-id>

### Input
- Artifact: <artifact-path>
- Round: <round-id>
- Session: <session-folder>

### Perspectives
<Dynamic perspective list -- each entry defines: name, cli_tool, role_label, focus_areas>

Example:
| Perspective | CLI Tool | Role | Focus Areas |
|-------------|----------|------|-------------|
| Feasibility | gemini | Engineer | Implementation complexity, technical risks, resource needs |
| Clarity | codex | Editor | Readability, logical flow, completeness of explanation |
| Accuracy | gemini | Domain Expert | Factual correctness, source reliability, claim verification |

### Execution Steps
1. Read artifact from <artifact-path>
2. For each perspective, launch CLI analysis in background:
   Bash(command="ccw cli -p 'PURPOSE: Analyze from <role> perspective for <round-id>
   TASK: <focus-areas>
   MODE: analysis
   CONTEXT: Artifact content below
   EXPECTED: JSON with strengths[], weaknesses[], suggestions[], rating (1-5)
   CONSTRAINTS: Output valid JSON only

   Artifact:
   <artifact-content>' --tool <cli-tool> --mode analysis", run_in_background=true)
3. Wait for all CLI results
4. Divergence detection:
   - High severity: any rating <= 2, critical issue identified
   - Medium severity: rating spread (max - min) >= 3, or single perspective rated <= 2 with others >= 3
   - Low severity: minor suggestions only, all ratings >= 3
5. Consensus determination:
   - No high-severity divergences AND average rating >= 3.0 -> consensus_reached
   - Otherwise -> consensus_blocked
6. Synthesize:
   - Convergent themes (agreed by 2+ perspectives)
   - Divergent views (conflicting assessments)
   - Action items from suggestions
7. Write discussion record to: <session-folder>/discussions/<round-id>-discussion.md

### Discussion Record Format
# Discussion Record: <round-id>

**Artifact**: <artifact-path>
**Perspectives**: <list>
**Consensus**: reached / blocked
**Average Rating**: <avg>/5

## Convergent Themes
- <theme>

## Divergent Views
- **<topic>** (<severity>): <description>

## Action Items
1. <item>

## Ratings
| Perspective | Rating |
|-------------|--------|
| <name> | <n>/5 |

### Return Value

**When consensus_reached**:
Return a summary string with:
- Verdict: consensus_reached
- Average rating
- Key action items (top 3)
- Discussion record path

**When consensus_blocked**:
Return a structured summary with:
- Verdict: consensus_blocked
- Severity: HIGH | MEDIUM | LOW
- Average rating
- Divergence summary: top 3 divergent points with perspective attribution
- Action items: prioritized list of required changes
- Recommendation: revise | proceed-with-caution | escalate
- Discussion record path

### Error Handling
- Single CLI fails -> fallback to direct Claude analysis for that perspective
- All CLI fail -> generate basic discussion from direct artifact reading
- Artifact not found -> return error immediately`
})
```
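The divergence and consensus rules in steps 4-5 are mechanical enough to sketch in code. This is one illustrative reading of the spec (the spec text leaves some overlap between the High and Medium conditions), not shipped logic:

```typescript
type DivergenceSeverity = 'HIGH' | 'MEDIUM' | 'LOW';

// Shape assumed from the per-perspective CLI return (rating 1-5).
interface PerspectiveResult {
  perspective: string;
  rating: number;
  criticalIssue?: boolean;
}

// Step 4: any rating <= 2 or a flagged critical issue is High;
// a rating spread >= 3 is Medium; otherwise Low.
function classifyDivergence(results: PerspectiveResult[]): DivergenceSeverity {
  if (results.some(r => r.rating <= 2 || r.criticalIssue)) return 'HIGH';
  const ratings = results.map(r => r.rating);
  if (Math.max(...ratings) - Math.min(...ratings) >= 3) return 'MEDIUM';
  return 'LOW';
}

// Step 5: consensus requires no high-severity divergence AND avg >= 3.0.
function verdict(results: PerspectiveResult[]): 'consensus_reached' | 'consensus_blocked' {
  const avg = results.reduce((sum, r) => sum + r.rating, 0) / results.length;
  return classifyDivergence(results) !== 'HIGH' && avg >= 3.0
    ? 'consensus_reached'
    : 'consensus_blocked';
}
```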

## Integration with Calling Role

The calling role is responsible for:

1. **Before calling**: Complete primary artifact output
2. **Calling**: Invoke discuss subagent with appropriate dynamic perspectives
3. **After calling**:

| Verdict | Severity | Role Action |
|---------|----------|-------------|
| consensus_reached | - | Include action items in Phase 5 report, proceed normally |
| consensus_blocked | HIGH | Include divergence details in Phase 5 SendMessage. Do NOT self-revise -- coordinator decides. |
| consensus_blocked | MEDIUM | Include warning in Phase 5 SendMessage. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. Proceed normally. |

**SendMessage format for consensus_blocked (HIGH or MEDIUM)**:

```
[<role>] <task-id> complete. Discuss <round-id>: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <artifact-path>
Discussion: <discussion-record-path>
```
120
.claude/skills/team-coordinate/subagents/explore-subagent.md
Normal file
@@ -0,0 +1,120 @@
# Explore Subagent

Shared codebase exploration utility with centralized caching. Callable by any role needing code context.

## Invocation

```
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore <angle>",
  prompt: `Explore codebase for: <query>

Focus angle: <angle>
Keywords: <keyword-list>
Session folder: <session-folder>

## Cache Check
1. Read <session-folder>/explorations/cache-index.json (if exists)
2. Look for entry with matching angle
3. If found AND file exists -> read cached result, return summary
4. If not found -> proceed to exploration

## Exploration
<angle-specific-focus-from-table-below>

## Output
Write JSON to: <session-folder>/explorations/explore-<angle>.json
Update cache-index.json with new entry

## Output Schema
{
  "angle": "<angle>",
  "query": "<query>",
  "relevant_files": [
    { "path": "...", "rationale": "...", "role": "...", "discovery_source": "...", "key_symbols": [] }
  ],
  "patterns": [],
  "dependencies": [],
  "external_refs": [],
  "_metadata": { "created_by": "<calling-role>", "timestamp": "...", "cache_key": "..." }
}

Return summary: file count, pattern count, top 5 files, output path`
})
```

## Cache Mechanism

### Cache Index Schema

`<session-folder>/explorations/cache-index.json`:

```json
{
  "entries": [
    {
      "angle": "architecture",
      "keywords": ["auth", "middleware"],
      "file": "explore-architecture.json",
      "created_by": "analyst",
      "created_at": "2026-02-27T10:00:00Z",
      "file_count": 15
    }
  ]
}
```

### Cache Lookup Rules

| Condition | Action |
|-----------|--------|
| Exact angle match exists | Return cached result |
| No match | Execute exploration, cache result |
| Cache file missing but index has entry | Remove stale entry, re-explore |

### Cache Invalidation

Cache is session-scoped. No explicit invalidation needed -- each session starts fresh. If a role suspects stale data, it can pass `force_refresh: true` in the prompt to bypass cache.
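The lookup rules and the `force_refresh` bypass can be sketched as a small helper. This is illustrative only -- the real agent follows the textual steps in the prompt, and the function name here is an assumption:

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Shapes mirror the cache-index.json schema above.
interface CacheEntry {
  angle: string;
  keywords: string[];
  file: string;
  created_by: string;
  created_at: string;
  file_count: number;
}
interface CacheIndex {
  entries: CacheEntry[];
}

// Returns the cached exploration file path, or null when the caller must
// explore. Implements the three lookup rules plus the force_refresh bypass.
function lookupCache(sessionFolder: string, angle: string, forceRefresh = false): string | null {
  const indexPath = path.join(sessionFolder, 'explorations', 'cache-index.json');
  if (forceRefresh || !fs.existsSync(indexPath)) return null;
  const index: CacheIndex = JSON.parse(fs.readFileSync(indexPath, 'utf8'));
  const entry = index.entries.find(e => e.angle === angle);
  if (!entry) return null; // no match -> explore and cache
  const filePath = path.join(sessionFolder, 'explorations', entry.file);
  if (!fs.existsSync(filePath)) {
    // Stale entry: index references a missing file -> drop it, re-explore.
    index.entries = index.entries.filter(e => e.angle !== angle);
    fs.writeFileSync(indexPath, JSON.stringify(index, null, 2));
    return null;
  }
  return filePath; // exact angle match -> return cached result
}
```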

## Angle Focus Guide

| Angle | Focus Points | Typical Caller |
|-------|-------------|----------------|
| architecture | Layer boundaries, design patterns, component responsibilities, ADRs | any |
| dependencies | Import chains, external libraries, circular dependencies, shared utilities | any |
| modularity | Module interfaces, separation of concerns, extraction opportunities | any |
| integration-points | API endpoints, data flow between modules, event systems | any |
| security | Auth/authz logic, input validation, sensitive data handling, middleware | any |
| dataflow | Data transformations, state propagation, validation points | any |
| performance | Bottlenecks, N+1 queries, blocking operations, algorithm complexity | any |
| error-handling | Try-catch blocks, error propagation, recovery strategies, logging | any |
| patterns | Code conventions, design patterns, naming conventions, best practices | any |
| testing | Test files, coverage gaps, test patterns, mocking strategies | any |
| general | Broad semantic search for topic-related code | any |

## Exploration Strategies

### Low Complexity (direct search)

For simple queries, use ACE semantic search:

```
mcp__ace-tool__search_context(project_root_path="<project-root>", query="<query>")
```

ACE failure fallback: `rg -l '<keywords>' --type ts`

### Medium/High Complexity (multi-angle)

For complex queries, call cli-explore-agent per angle. The calling role determines complexity and selects angles.

## Search Tool Priority

| Tool | Priority | Use Case |
|------|----------|----------|
| mcp__ace-tool__search_context | P0 | Semantic search |
| Grep / Glob | P1 | Pattern matching |
| cli-explore-agent | Deep | Multi-angle exploration |
| WebSearch | P3 | External docs |
@@ -136,8 +136,9 @@ Each worker on startup executes the same task discovery flow:
Task completion with optional fast-advance to skip coordinator round-trip:

1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log message
-   - Params: operation="log", team=<team-name>, from=<role>, to="coordinator", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
-   - **CLI fallback**: When MCP unavailable -> `ccw team log --team <team> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
+   - Params: operation="log", team=**<session-id>**, from=<role>, to="coordinator", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
+   - **`team` must be session ID** (e.g., `TLS-my-project-2026-02-27`), NOT team name. Extract from task description `Session:` field → take folder name.
+   - **CLI fallback**: `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
2. **TaskUpdate**: Mark task completed
3. **Fast-Advance Check**:
   - Call `TaskList()`, find pending tasks whose blockedBy are ALL completed
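The readiness filter in the Fast-Advance check above can be sketched directly. A minimal sketch, with the task shape assumed from the fields the spec mentions (`status`, `blockedBy`):

```typescript
// Shape assumed from the spec's TaskList fields; not a published interface.
interface TeamTask {
  id: string;
  status: 'pending' | 'in_progress' | 'completed';
  blockedBy: string[];
}

// A pending task is fast-advance ready when every task it is blocked by
// has completed.
function readyTasks(tasks: TeamTask[]): TeamTask[] {
  const done = new Set(tasks.filter(t => t.status === 'completed').map(t => t.id));
  return tasks.filter(
    t => t.status === 'pending' && t.blockedBy.every(id => done.has(id)),
  );
}
```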

@@ -533,13 +534,13 @@ Session: <session-folder>
- All output prefixed with [<role>] tag
- Only communicate with coordinator
- Do not use TaskCreate to create tasks for other roles
-- Before each SendMessage, call mcp__ccw-tools__team_msg to log
+- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- After task completion, check for fast-advance opportunity (see SKILL.md Phase 5)

## Workflow
1. Call Skill -> get role definition and execution logic
2. Follow role.md 5-Phase flow
-3. team_msg + SendMessage results to coordinator
+3. team_msg(team=<session-id>) + SendMessage results to coordinator
4. TaskUpdate completed -> check next task or fast-advance`
})
```

@@ -575,7 +576,7 @@ Only SendMessage to coordinator when:
- All output prefixed with [<role>] tag
- Only communicate with coordinator
- Do not use TaskCreate to create tasks for other roles
-- Before each SendMessage, call mcp__ccw-tools__team_msg to log
+- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- Use subagent calls for heavy work, retain summaries in context`
})
```
@@ -19,6 +19,7 @@ import {
  GitBranch,
  Send,
  FileBarChart,
+  Settings,
} from 'lucide-react';
import { Card } from '@/components/ui/Card';
import { Button } from '@/components/ui/Button';
@@ -31,7 +32,7 @@ import type { HookTriggerType } from './HookCard';
/**
 * Template category type
 */
-export type TemplateCategory = 'notification' | 'indexing' | 'automation';
+export type TemplateCategory = 'notification' | 'indexing' | 'automation' | 'utility';

/**
 * Hook template definition
@@ -226,6 +227,34 @@ export const HOOK_TEMPLATES: readonly HookTemplate[] = [
      '-e',
      'const cp=require("child_process");const payload=JSON.stringify({type:"MEMORY_V2_STATUS_UPDATED",project:process.env.CLAUDE_PROJECT_DIR||process.cwd(),timestamp:Date.now()});cp.spawnSync("curl",["-s","-X","POST","-H","Content-Type: application/json","-d",payload,"http://localhost:3456/api/hook"],{stdio:"inherit",shell:true})'
    ]
  },
  // --- Memory Operations ---
  {
    id: 'memory-auto-compress',
    name: 'Auto Memory Compress',
    description: 'Automatically compress memory when entries exceed threshold',
    category: 'automation',
    trigger: 'Stop',
    command: 'ccw',
    args: ['memory', 'consolidate', '--threshold', '50']
  },
  {
    id: 'memory-preview-extract',
    name: 'Memory Preview & Extract',
    description: 'Preview extraction queue and extract eligible sessions',
    category: 'automation',
    trigger: 'SessionStart',
    command: 'ccw',
    args: ['memory', 'preview', '--include-native']
  },
  {
    id: 'memory-status-check',
    name: 'Memory Status Check',
    description: 'Check memory extraction and consolidation status',
    category: 'utility',
    trigger: 'SessionStart',
    command: 'ccw',
    args: ['memory', 'status']
  }
] as const;

@@ -234,7 +263,8 @@ export const HOOK_TEMPLATES: readonly HookTemplate[] = [
const CATEGORY_ICONS: Record<TemplateCategory, { icon: typeof Bell; color: string; bg: string }> = {
  notification: { icon: Bell, color: 'text-blue-500', bg: 'bg-blue-500/10' },
  indexing: { icon: Database, color: 'text-purple-500', bg: 'bg-purple-500/10' },
-  automation: { icon: Wrench, color: 'text-orange-500', bg: 'bg-orange-500/10' }
+  automation: { icon: Wrench, color: 'text-orange-500', bg: 'bg-orange-500/10' },
+  utility: { icon: Settings, color: 'text-gray-500', bg: 'bg-gray-500/10' }
};

// ========== Template Icons ==========
@@ -258,7 +288,8 @@ function getCategoryName(category: TemplateCategory, formatMessage: ReturnType<t
  const names: Record<TemplateCategory, string> = {
    notification: formatMessage({ id: 'cliHooks.templates.categories.notification' }),
    indexing: formatMessage({ id: 'cliHooks.templates.categories.indexing' }),
-    automation: formatMessage({ id: 'cliHooks.templates.categories.automation' })
+    automation: formatMessage({ id: 'cliHooks.templates.categories.automation' }),
+    utility: formatMessage({ id: 'cliHooks.templates.categories.utility' })
  };
  return names[category];
}
@@ -352,7 +383,9 @@ export function HookQuickTemplates({
        </div>
        <div className="flex-1 min-w-0">
          <h4 className="text-sm font-medium text-foreground leading-tight">
-            {formatMessage({ id: `cliHooks.templates.templates.${template.id}.name` })}
+            {formatMessage(
+              { id: `cliHooks.templates.templates.${template.id}.name`, defaultMessage: template.name }
+            )}
          </h4>
          <div className="flex items-center gap-1.5 mt-1 flex-wrap">
            <Badge variant="secondary" className="text-[10px] px-1.5 py-0">
@@ -394,7 +427,9 @@ export function HookQuickTemplates({

        {/* Description */}
        <p className="text-xs text-muted-foreground leading-relaxed flex-1 pl-11">
-          {formatMessage({ id: `cliHooks.templates.templates.${template.id}.description` })}
+          {formatMessage(
+            { id: `cliHooks.templates.templates.${template.id}.description`, defaultMessage: template.description }
+          )}
        </p>
      </Card>
    );
102
ccw/frontend/src/components/mcp/CcwToolsMcpCard.test.tsx
Normal file
@@ -0,0 +1,102 @@
// ========================================
// CcwToolsMcpCard Component Tests
// ========================================

import { describe, it, expect, beforeEach, vi } from 'vitest';
import { render, screen, waitFor } from '@/test/i18n';
import userEvent from '@testing-library/user-event';

import { CcwToolsMcpCard } from './CcwToolsMcpCard';
import { updateCcwConfig, updateCcwConfigForCodex } from '@/lib/api';

vi.mock('@/lib/api', () => ({
  installCcwMcp: vi.fn(),
  uninstallCcwMcp: vi.fn(),
  updateCcwConfig: vi.fn(),
  installCcwMcpToCodex: vi.fn(),
  uninstallCcwMcpFromCodex: vi.fn(),
  updateCcwConfigForCodex: vi.fn(),
}));

vi.mock('@/hooks/useNotifications', () => ({
  useNotifications: () => ({
    success: vi.fn(),
    error: vi.fn(),
  }),
}));

describe('CcwToolsMcpCard', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  it('preserves enabledTools when saving config (Codex)', async () => {
    const updateCodexMock = vi.mocked(updateCcwConfigForCodex);
    updateCodexMock.mockResolvedValue({
      isInstalled: true,
      enabledTools: [],
      installedScopes: ['global'],
    });

    render(
      <CcwToolsMcpCard
        target="codex"
        isInstalled={true}
        enabledTools={['write_file', 'read_many_files']}
        onToggleTool={vi.fn()}
        onUpdateConfig={vi.fn()}
        onInstall={vi.fn()}
      />,
      { locale: 'en' }
    );

    const user = userEvent.setup();
    await user.click(screen.getByText(/CCW MCP Server|mcp\.ccw\.title/i));
    await user.click(
      screen.getByRole('button', { name: /Save Configuration|mcp\.ccw\.actions\.saveConfig/i })
    );

    await waitFor(() => {
      expect(updateCodexMock).toHaveBeenCalledWith(
        expect.objectContaining({
          enabledTools: ['write_file', 'read_many_files'],
        })
      );
    });
  });

  it('preserves enabledTools when saving config (Claude)', async () => {
    const updateClaudeMock = vi.mocked(updateCcwConfig);
    updateClaudeMock.mockResolvedValue({
      isInstalled: true,
      enabledTools: [],
      installedScopes: ['global'],
    });

    render(
      <CcwToolsMcpCard
        isInstalled={true}
        enabledTools={['write_file', 'smart_search']}
        onToggleTool={vi.fn()}
        onUpdateConfig={vi.fn()}
        onInstall={vi.fn()}
      />,
      { locale: 'en' }
    );

    const user = userEvent.setup();
    await user.click(screen.getByText(/CCW MCP Server|mcp\.ccw\.title/i));
    await user.click(
      screen.getByRole('button', { name: /Save Configuration|mcp\.ccw\.actions\.saveConfig/i })
    );

    await waitFor(() => {
      expect(updateClaudeMock).toHaveBeenCalledWith(
        expect.objectContaining({
          enabledTools: ['write_file', 'smart_search'],
        })
      );
    });
  });
});

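The regression these tests lock in can be reduced to a small standalone sketch. The `buildSavePayload` and `applyConfig` helpers below are hypothetical simplifications, not part of the component or API: they only illustrate that a save payload built without `enabledTools` lets a defaults-based merge silently reset the user's tool selection, while forwarding the current selection preserves it.

```typescript
// Hypothetical reduction of the bug the tests above guard against.
interface CcwConfigPayload {
  enabledTools?: string[];
  projectRoot?: string;
}

const DEFAULT_TOOLS = ['write_file'];

// Simulated server-side merge: a missing enabledTools field
// falls back to defaults, discarding the user's selection.
function applyConfig(payload: CcwConfigPayload): string[] {
  return payload.enabledTools ?? DEFAULT_TOOLS;
}

// Fixed client behavior: always forward the current selection.
function buildSavePayload(enabledTools: string[], projectRoot?: string): CcwConfigPayload {
  return { enabledTools, projectRoot };
}

const current = ['write_file', 'smart_search'];
console.log(applyConfig(buildSavePayload(current)).join(','));
// → write_file,smart_search (selection survives the save)
```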
@@ -37,7 +37,7 @@ import {
  uninstallCcwMcpFromCodex,
  updateCcwConfigForCodex,
} from '@/lib/api';
import { mcpServersKeys } from '@/hooks';
import { mcpServersKeys, useNotifications } from '@/hooks';
import { useQueryClient } from '@tanstack/react-query';
import { cn } from '@/lib/utils';
import { useWorkflowStore, selectProjectPath } from '@/stores/workflowStore';
@@ -128,6 +128,7 @@ export function CcwToolsMcpCard({
}: CcwToolsMcpCardProps) {
  const { formatMessage } = useIntl();
  const queryClient = useQueryClient();
  const { success: notifySuccess, error: notifyError } = useNotifications();
  const currentProjectPath = useWorkflowStore(selectProjectPath);

  // Local state for config inputs
@@ -179,9 +180,19 @@ export function CcwToolsMcpCard({
    onSuccess: () => {
      if (isCodex) {
        queryClient.invalidateQueries({ queryKey: ['codexMcpServers'] });
        queryClient.invalidateQueries({ queryKey: ['ccwMcpConfigCodex'] });
      } else {
        queryClient.invalidateQueries({ queryKey: mcpServersKeys.all });
        queryClient.invalidateQueries({ queryKey: ['ccwMcpConfig'] });
      }
      notifySuccess(formatMessage({ id: 'mcp.ccw.feedback.saveSuccess' }));
    },
    onError: (error) => {
      console.error('Failed to update CCW config:', error);
      notifyError(
        formatMessage({ id: 'mcp.ccw.feedback.saveError' }),
        error instanceof Error ? error.message : String(error)
      );
    },
  });

@@ -201,6 +212,9 @@ export function CcwToolsMcpCard({

  const handleConfigSave = () => {
    updateConfigMutation.mutate({
      // Preserve current tool selection; otherwise updateCcwConfig* falls back to defaults
      // and can unintentionally overwrite user-chosen enabled tools.
      enabledTools,
      projectRoot: projectRootInput || undefined,
      allowedDirs: allowedDirsInput || undefined,
      enableSandbox: enableSandboxInput,

332 ccw/frontend/src/components/memory/SessionPreviewPanel.tsx Normal file
@@ -0,0 +1,332 @@
// ========================================
// SessionPreviewPanel Component
// ========================================
// Preview and select sessions for Memory V2 extraction

import { useState, useMemo } from 'react';
import { useIntl } from 'react-intl';
import { formatDistanceToNow } from 'date-fns';
import { Search, Eye, Loader2, CheckCircle2, XCircle, Clock } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { Badge } from '@/components/ui/Badge';
import { Input } from '@/components/ui/Input';
import { Checkbox } from '@/components/ui/Checkbox';
import {
  usePreviewSessions,
  useTriggerSelectiveExtraction,
} from '@/hooks/useMemoryV2';
import { cn } from '@/lib/utils';

interface SessionPreviewPanelProps {
  onClose?: () => void;
  onExtractComplete?: () => void;
}

// Helper function to format bytes
function formatBytes(bytes: number): string {
  if (bytes === 0) return '0 B';
  const k = 1024;
  const sizes = ['B', 'KB', 'MB', 'GB'];
  const i = Math.floor(Math.log(bytes) / Math.log(k));
  return `${parseFloat((bytes / Math.pow(k, i)).toFixed(1))} ${sizes[i]}`;
}

// Helper function to format timestamp
function formatTimestamp(timestamp: number): string {
  try {
    const date = new Date(timestamp);
    return formatDistanceToNow(date, { addSuffix: true });
  } catch {
    return '-';
  }
}

export function SessionPreviewPanel({ onClose, onExtractComplete }: SessionPreviewPanelProps) {
  const intl = useIntl();
  const [searchQuery, setSearchQuery] = useState('');
  const [selectedIds, setSelectedIds] = useState<Set<string>>(new Set());
  const [includeNative, setIncludeNative] = useState(false);

  const { data, isLoading, refetch } = usePreviewSessions(includeNative);
  const triggerExtraction = useTriggerSelectiveExtraction();

  // Filter sessions based on search query
  const filteredSessions = useMemo(() => {
    if (!data?.sessions) return [];
    if (!searchQuery.trim()) return data.sessions;

    const query = searchQuery.toLowerCase();
    return data.sessions.filter(
      (session) =>
        session.sessionId.toLowerCase().includes(query) ||
        session.tool.toLowerCase().includes(query) ||
        session.source.toLowerCase().includes(query)
    );
  }, [data?.sessions, searchQuery]);

  // Get ready sessions (eligible and not extracted)
  const readySessions = useMemo(() => {
    return filteredSessions.filter((s) => s.eligible && !s.extracted);
  }, [filteredSessions]);

  // Toggle session selection
  const toggleSelection = (sessionId: string) => {
    setSelectedIds((prev) => {
      const next = new Set(prev);
      if (next.has(sessionId)) {
        next.delete(sessionId);
      } else {
        next.add(sessionId);
      }
      return next;
    });
  };

  // Select all ready sessions
  const selectAll = () => {
    setSelectedIds(new Set(readySessions.map((s) => s.sessionId)));
  };

  // Clear selection
  const selectNone = () => {
    setSelectedIds(new Set());
  };

  // Trigger extraction for selected sessions
  const handleExtract = async () => {
    if (selectedIds.size === 0) return;

    triggerExtraction.mutate(
      {
        sessionIds: Array.from(selectedIds),
        includeNative,
      },
      {
        onSuccess: () => {
          setSelectedIds(new Set());
          onExtractComplete?.();
        },
      }
    );
  };

  return (
    <div className="flex flex-col h-full">
      {/* Header */}
      <div className="flex items-center justify-between mb-4">
        <h2 className="text-lg font-semibold flex items-center gap-2">
          <Eye className="w-5 h-5" />
          {intl.formatMessage({ id: 'memory.v2.preview.title', defaultMessage: 'Extraction Queue Preview' })}
        </h2>
        <div className="flex items-center gap-2">
          <label className="flex items-center gap-2 text-sm cursor-pointer">
            <Checkbox
              checked={includeNative}
              onCheckedChange={(checked) => setIncludeNative(checked === true)}
            />
            {intl.formatMessage({ id: 'memory.v2.preview.includeNative', defaultMessage: 'Include Native Sessions' })}
          </label>
          <Button variant="outline" size="sm" onClick={() => refetch()}>
            {isLoading ? (
              <Loader2 className="w-4 h-4 animate-spin" />
            ) : (
              'Refresh'
            )}
          </Button>
        </div>
      </div>

      {/* Summary Bar */}
      {data?.summary && (
        <div className="grid grid-cols-4 gap-2 mb-4">
          <div className="text-center p-2 bg-muted rounded">
            <div className="text-lg font-bold">{data.summary.total}</div>
            <div className="text-xs text-muted-foreground">
              {intl.formatMessage({ id: 'memory.v2.preview.total', defaultMessage: 'Total' })}
            </div>
          </div>
          <div className="text-center p-2 bg-muted rounded">
            <div className="text-lg font-bold text-blue-600">{data.summary.eligible}</div>
            <div className="text-xs text-muted-foreground">
              {intl.formatMessage({ id: 'memory.v2.preview.eligible', defaultMessage: 'Eligible' })}
            </div>
          </div>
          <div className="text-center p-2 bg-muted rounded">
            <div className="text-lg font-bold text-green-600">{data.summary.alreadyExtracted}</div>
            <div className="text-xs text-muted-foreground">
              {intl.formatMessage({ id: 'memory.v2.preview.extracted', defaultMessage: 'Already Extracted' })}
            </div>
          </div>
          <div className="text-center p-2 bg-muted rounded">
            <div className="text-lg font-bold text-amber-600">{data.summary.readyForExtraction}</div>
            <div className="text-xs text-muted-foreground">
              {intl.formatMessage({ id: 'memory.v2.preview.ready', defaultMessage: 'Ready' })}
            </div>
          </div>
        </div>
      )}

      {/* Search and Actions */}
      <div className="flex items-center gap-2 mb-4">
        <div className="relative flex-1">
          <Search className="absolute left-3 top-1/2 transform -translate-y-1/2 w-4 h-4 text-muted-foreground" />
          <Input
            placeholder={intl.formatMessage({
              id: 'memory.v2.preview.selectSessions',
              defaultMessage: 'Search sessions...',
            })}
            value={searchQuery}
            onChange={(e) => setSearchQuery(e.target.value)}
            className="pl-9"
          />
        </div>
        <Button variant="outline" size="sm" onClick={selectAll}>
          {intl.formatMessage({ id: 'memory.v2.preview.selectAll', defaultMessage: 'Select All' })}
        </Button>
        <Button variant="outline" size="sm" onClick={selectNone}>
          {intl.formatMessage({ id: 'memory.v2.preview.selectNone', defaultMessage: 'Select None' })}
        </Button>
      </div>

      {/* Session Table */}
      <div className="flex-1 overflow-auto border rounded-lg">
        {isLoading ? (
          <div className="flex items-center justify-center h-48">
            <Loader2 className="w-6 h-6 animate-spin text-muted-foreground" />
          </div>
        ) : filteredSessions.length === 0 ? (
          <div className="flex items-center justify-center h-48 text-muted-foreground">
            {intl.formatMessage({ id: 'memory.v2.preview.noSessions', defaultMessage: 'No sessions found' })}
          </div>
        ) : (
          <table className="w-full text-sm">
            <thead className="bg-muted sticky top-0">
              <tr>
                <th className="w-10 p-2"></th>
                <th className="text-left p-2">Source</th>
                <th className="text-left p-2">Session ID</th>
                <th className="text-left p-2">Tool</th>
                <th className="text-left p-2">Timestamp</th>
                <th className="text-right p-2">Size</th>
                <th className="text-right p-2">Turns</th>
                <th className="text-center p-2">Status</th>
              </tr>
            </thead>
            <tbody>
              {filteredSessions.map((session) => {
                const isReady = session.eligible && !session.extracted;
                const isSelected = selectedIds.has(session.sessionId);
                const isDisabled = !isReady;

                return (
                  <tr
                    key={session.sessionId}
                    className={cn(
                      'border-b hover:bg-muted/50 transition-colors',
                      isDisabled && 'opacity-60',
                      isSelected && 'bg-blue-50 dark:bg-blue-950/20'
                    )}
                  >
                    <td className="p-2">
                      <Checkbox
                        checked={isSelected}
                        disabled={isDisabled}
                        onCheckedChange={() => toggleSelection(session.sessionId)}
                      />
                    </td>
                    <td className="p-2">
                      <Badge
                        variant="outline"
                        className={cn(
                          session.source === 'ccw'
                            ? 'bg-purple-100 text-purple-800 dark:bg-purple-900/30 dark:text-purple-300'
                            : 'bg-cyan-100 text-cyan-800 dark:bg-cyan-900/30 dark:text-cyan-300'
                        )}
                      >
                        {session.source === 'ccw'
                          ? intl.formatMessage({ id: 'memory.v2.preview.sourceCcw', defaultMessage: 'CCW' })
                          : intl.formatMessage({ id: 'memory.v2.preview.sourceNative', defaultMessage: 'Native' })}
                      </Badge>
                    </td>
                    <td className="p-2 font-mono text-xs truncate max-w-[150px]" title={session.sessionId}>
                      {session.sessionId}
                    </td>
                    <td className="p-2 truncate max-w-[100px]" title={session.tool}>
                      {session.tool || '-'}
                    </td>
                    <td className="p-2 text-muted-foreground">
                      {formatTimestamp(session.timestamp)}
                    </td>
                    <td className="p-2 text-right font-mono text-xs">
                      {formatBytes(session.bytes)}
                    </td>
                    <td className="p-2 text-right">
                      {session.turns}
                    </td>
                    <td className="p-2 text-center">
                      {session.extracted ? (
                        <Badge className="bg-green-100 text-green-800 dark:bg-green-900/30 dark:text-green-300">
                          <CheckCircle2 className="w-3 h-3 mr-1" />
                          {intl.formatMessage({ id: 'memory.v2.preview.extracted', defaultMessage: 'Extracted' })}
                        </Badge>
                      ) : session.eligible ? (
                        <Badge className="bg-amber-100 text-amber-800 dark:bg-amber-900/30 dark:text-amber-300">
                          <Clock className="w-3 h-3 mr-1" />
                          {intl.formatMessage({ id: 'memory.v2.preview.ready', defaultMessage: 'Ready' })}
                        </Badge>
                      ) : (
                        <Badge className="bg-gray-100 text-gray-800 dark:bg-gray-800 dark:text-gray-300">
                          <XCircle className="w-3 h-3 mr-1" />
                          {intl.formatMessage({ id: 'memory.v2.preview.ineligible', defaultMessage: 'Ineligible' })}
                        </Badge>
                      )}
                    </td>
                  </tr>
                );
              })}
            </tbody>
          </table>
        )}
      </div>

      {/* Footer Actions */}
      <div className="flex items-center justify-between mt-4 pt-4 border-t">
        <div className="text-sm text-muted-foreground">
          {selectedIds.size > 0 ? (
            intl.formatMessage(
              { id: 'memory.v2.preview.selected', defaultMessage: '{count} sessions selected' },
              { count: selectedIds.size }
            )
          ) : (
            intl.formatMessage({ id: 'memory.v2.preview.selectHint', defaultMessage: 'Select sessions to extract' })
          )}
        </div>
        <div className="flex items-center gap-2">
          {onClose && (
            <Button variant="outline" onClick={onClose}>
              {intl.formatMessage({ id: 'common.close', defaultMessage: 'Close' })}
            </Button>
          )}
          <Button
            onClick={handleExtract}
            disabled={selectedIds.size === 0 || triggerExtraction.isPending}
          >
            {triggerExtraction.isPending ? (
              <>
                <Loader2 className="w-4 h-4 mr-1 animate-spin" />
                {intl.formatMessage({ id: 'memory.v2.extraction.extracting', defaultMessage: 'Extracting...' })}
              </>
            ) : (
              intl.formatMessage(
                { id: 'memory.v2.preview.extractSelected', defaultMessage: 'Extract Selected ({count})' },
                { count: selectedIds.size }
              )
            )}
          </Button>
        </div>
      </div>
    </div>
  );
}

export default SessionPreviewPanel;

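The `formatBytes` helper in the panel above uses the standard log-base-1024 bucketing. A standalone copy (identical to the source, nothing assumed) shows its rounding behavior; note that values at `1024^4` and beyond would index past the `sizes` array, which only covers B through GB.

```typescript
// Standalone copy of the panel's formatBytes helper for a quick sanity check.
function formatBytes(bytes: number): string {
  if (bytes === 0) return '0 B';
  const k = 1024;
  const sizes = ['B', 'KB', 'MB', 'GB'];
  // Pick the largest unit whose threshold the value has crossed.
  const i = Math.floor(Math.log(bytes) / Math.log(k));
  // toFixed(1) then parseFloat drops a trailing ".0" (e.g. "2.0" → 2).
  return `${parseFloat((bytes / Math.pow(k, i)).toFixed(1))} ${sizes[i]}`;
}

console.log(formatBytes(0));    // → "0 B"
console.log(formatBytes(500));  // → "500 B"
console.log(formatBytes(1536)); // → "1.5 KB"
console.log(formatBytes(2048)); // → "2 KB"
```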
@@ -28,6 +28,7 @@ import { Card } from '@/components/ui/Card';
import { Button } from '@/components/ui/Button';
import { Badge } from '@/components/ui/Badge';
import { Dialog, DialogContent, DialogHeader, DialogTitle } from '@/components/ui/Dialog';
import { SessionPreviewPanel } from '@/components/memory/SessionPreviewPanel';
import 'highlight.js/styles/github-dark.css';
import {
  useExtractionStatus,
@@ -84,6 +85,7 @@ function ExtractionCard() {
  const { data: status, isLoading, refetch } = useExtractionStatus();
  const trigger = useTriggerExtraction();
  const [maxSessions, setMaxSessions] = useState(10);
  const [showPreview, setShowPreview] = useState(false);

  const handleTrigger = () => {
    trigger.mutate(maxSessions);
@@ -94,83 +96,107 @@ function ExtractionCard() {
  const lastRunText = formatRelativeTime(status?.lastRun);

  return (
    <Card className="p-4">
      <div className="flex items-start justify-between mb-4">
        <div>
          <h3 className="font-medium flex items-center gap-2">
            <Zap className="w-5 h-5 text-yellow-500" />
            Phase 1: {intl.formatMessage({ id: 'memory.v2.extraction.title', defaultMessage: 'Extraction' })}
          </h3>
          <p className="text-sm text-muted-foreground mt-1">
            {intl.formatMessage({ id: 'memory.v2.extraction.description', defaultMessage: 'Extract structured memories from CLI sessions' })}
          </p>
          {lastRunText && (
            <p className="text-xs text-muted-foreground mt-1">
              {intl.formatMessage({ id: 'memory.v2.extraction.lastRun', defaultMessage: 'Last run' })}: {lastRunText}
    <>
      <Card className="p-4">
        <div className="flex items-start justify-between mb-4">
          <div>
            <h3 className="font-medium flex items-center gap-2">
              <Zap className="w-5 h-5 text-yellow-500" />
              Phase 1: {intl.formatMessage({ id: 'memory.v2.extraction.title', defaultMessage: 'Extraction' })}
            </h3>
            <p className="text-sm text-muted-foreground mt-1">
              {intl.formatMessage({ id: 'memory.v2.extraction.description', defaultMessage: 'Extract structured memories from CLI sessions' })}
            </p>
            {lastRunText && (
              <p className="text-xs text-muted-foreground mt-1">
                {intl.formatMessage({ id: 'memory.v2.extraction.lastRun', defaultMessage: 'Last run' })}: {lastRunText}
              </p>
            )}
          </div>
          {status && (
            <div className="text-right">
              <div className="text-2xl font-bold">{status.total_stage1}</div>
              <div className="text-xs text-muted-foreground">
                {intl.formatMessage({ id: 'memory.v2.extraction.extracted', defaultMessage: 'Extracted' })}
              </div>
            </div>
          )}
        </div>
        {status && (
          <div className="text-right">
            <div className="text-2xl font-bold">{status.total_stage1}</div>
            <div className="text-xs text-muted-foreground">
              {intl.formatMessage({ id: 'memory.v2.extraction.extracted', defaultMessage: 'Extracted' })}

        <div className="flex items-center gap-2 mb-4">
          <input
            type="number"
            value={maxSessions}
            onChange={(e) => setMaxSessions(Math.max(1, parseInt(e.target.value) || 10))}
            className="w-20 px-2 py-1 text-sm border rounded bg-background"
            min={1}
            max={64}
          />
          <span className="text-sm text-muted-foreground">sessions max</span>
        </div>

        <div className="flex items-center gap-2">
          <Button
            onClick={handleTrigger}
            disabled={trigger.isPending || hasRunningJob}
            size="sm"
          >
            {trigger.isPending || hasRunningJob ? (
              <>
                <Loader2 className="w-4 h-4 mr-1 animate-spin" />
                {intl.formatMessage({ id: 'memory.v2.extraction.extracting', defaultMessage: 'Extracting...' })}
              </>
            ) : (
              <>
                <Play className="w-4 h-4 mr-1" />
                {intl.formatMessage({ id: 'memory.v2.extraction.trigger', defaultMessage: 'Trigger Extraction' })}
              </>
            )}
          </Button>
          <Button
            variant="outline"
            size="sm"
            onClick={() => setShowPreview(true)}
            title={intl.formatMessage({ id: 'memory.v2.preview.previewQueue', defaultMessage: 'Preview Queue' })}
          >
            <Eye className="w-4 h-4 mr-1" />
            {intl.formatMessage({ id: 'memory.v2.preview.previewQueue', defaultMessage: 'Preview Queue' })}
          </Button>
          <Button variant="outline" size="sm" onClick={() => refetch()}>
            <RefreshCw className={cn('w-4 h-4', isLoading && 'animate-spin')} />
          </Button>
        </div>

        {status?.jobs && status.jobs.length > 0 && (
          <div className="mt-4 pt-4 border-t">
            <div className="text-xs text-muted-foreground mb-2">
              {intl.formatMessage({ id: 'memory.v2.extraction.recentJobs', defaultMessage: 'Recent Jobs' })}
            </div>
            <div className="space-y-1 max-h-32 overflow-y-auto">
              {status.jobs.slice(0, 5).map((job) => (
                <div key={job.job_key} className="flex items-center justify-between text-sm">
                  <span className="font-mono text-xs truncate max-w-[150px]">{job.job_key}</span>
                  <StatusBadge status={job.status} />
                </div>
              ))}
            </div>
          </div>
        )}
      </div>
      </Card>

      <div className="flex items-center gap-2 mb-4">
        <input
          type="number"
          value={maxSessions}
          onChange={(e) => setMaxSessions(Math.max(1, parseInt(e.target.value) || 10))}
          className="w-20 px-2 py-1 text-sm border rounded bg-background"
          min={1}
          max={64}
        />
        <span className="text-sm text-muted-foreground">sessions max</span>
      </div>

      <div className="flex items-center gap-2">
        <Button
          onClick={handleTrigger}
          disabled={trigger.isPending || hasRunningJob}
          size="sm"
        >
          {trigger.isPending || hasRunningJob ? (
            <>
              <Loader2 className="w-4 h-4 mr-1 animate-spin" />
              {intl.formatMessage({ id: 'memory.v2.extraction.extracting', defaultMessage: 'Extracting...' })}
            </>
          ) : (
            <>
              <Play className="w-4 h-4 mr-1" />
              {intl.formatMessage({ id: 'memory.v2.extraction.trigger', defaultMessage: 'Trigger Extraction' })}
            </>
          )}
        </Button>
        <Button variant="outline" size="sm" onClick={() => refetch()}>
          <RefreshCw className={cn('w-4 h-4', isLoading && 'animate-spin')} />
        </Button>
      </div>

      {status?.jobs && status.jobs.length > 0 && (
        <div className="mt-4 pt-4 border-t">
          <div className="text-xs text-muted-foreground mb-2">
            {intl.formatMessage({ id: 'memory.v2.extraction.recentJobs', defaultMessage: 'Recent Jobs' })}
          </div>
          <div className="space-y-1 max-h-32 overflow-y-auto">
            {status.jobs.slice(0, 5).map((job) => (
              <div key={job.job_key} className="flex items-center justify-between text-sm">
                <span className="font-mono text-xs truncate max-w-[150px]">{job.job_key}</span>
                <StatusBadge status={job.status} />
              </div>
            ))}
          </div>
        </div>
      )}
      </Card>
      {/* Preview Queue Dialog */}
      <Dialog open={showPreview} onOpenChange={setShowPreview}>
        <DialogContent className="max-w-4xl max-h-[80vh]">
          <SessionPreviewPanel
            onClose={() => setShowPreview(false)}
            onExtractComplete={() => {
              setShowPreview(false);
              refetch();
            }}
          />
        </DialogContent>
      </Dialog>
    </>
  );
}

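The `maxSessions` input in the ExtractionCard above clamps its value with `Math.max(1, parseInt(e.target.value) || 10)`. A small standalone sketch (the `clampMaxSessions` helper is hypothetical, extracted only to show the expression's behavior) makes the edge cases visible: empty or non-numeric input falls back to 10, negatives are raised to 1, and because `0` is falsy in JavaScript, `'0'` also falls back to 10 rather than being clamped to 1. The `max={64}` attribute on the input is only a UI hint; this handler does not enforce an upper bound.

```typescript
// Hypothetical extraction of the onChange clamp used by the maxSessions input.
function clampMaxSessions(raw: string): number {
  // parseInt yields NaN for '' and non-numeric strings; NaN and 0 are
  // both falsy, so either falls through to the default of 10.
  return Math.max(1, parseInt(raw) || 10);
}

console.log(clampMaxSessions(''));   // → 10 (invalid input defaults)
console.log(clampMaxSessions('-5')); // → 1  (floor of 1 enforced)
console.log(clampMaxSessions('0'));  // → 10 (0 is falsy, so it defaults too)
console.log(clampMaxSessions('25')); // → 25
```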
408 ccw/frontend/src/components/shared/CommandCreateDialog.tsx Normal file
@@ -0,0 +1,408 @@
// ========================================
// Command Create Dialog Component
// ========================================
// Modal dialog for creating/importing commands with two modes:
// - Import: import existing command file
// - CLI Generate: AI-generated command from description

import { useState, useCallback } from 'react';
import { useIntl } from 'react-intl';
import {
  Folder,
  User,
  FileCode,
  Sparkles,
  CheckCircle,
  XCircle,
  Loader2,
  Info,
} from 'lucide-react';
import {
  Dialog,
  DialogContent,
  DialogHeader,
  DialogFooter,
  DialogTitle,
  DialogDescription,
} from '@/components/ui/Dialog';
import { Button } from '@/components/ui/Button';
import { Input } from '@/components/ui/Input';
import { Textarea } from '@/components/ui/Textarea';
import { Label } from '@/components/ui/Label';
import { validateCommandImport, createCommand } from '@/lib/api';
import { useWorkflowStore, selectProjectPath } from '@/stores/workflowStore';
import { cn } from '@/lib/utils';

export interface CommandCreateDialogProps {
  open: boolean;
  onOpenChange: (open: boolean) => void;
  onCreated: () => void;
  cliType?: 'claude' | 'codex';
}

type CreateMode = 'import' | 'cli-generate';
type CommandLocation = 'project' | 'user';

interface ValidationResult {
  valid: boolean;
  errors?: string[];
  commandInfo?: { name: string; description: string; usage?: string };
}

export function CommandCreateDialog({ open, onOpenChange, onCreated, cliType = 'claude' }: CommandCreateDialogProps) {
  const { formatMessage } = useIntl();
  const projectPath = useWorkflowStore(selectProjectPath);

  const [mode, setMode] = useState<CreateMode>('import');
  const [location, setLocation] = useState<CommandLocation>('project');

  // Import mode state
  const [sourcePath, setSourcePath] = useState('');
  const [customName, setCustomName] = useState('');
  const [validationResult, setValidationResult] = useState<ValidationResult | null>(null);
  const [isValidating, setIsValidating] = useState(false);

  // CLI Generate mode state
  const [commandName, setCommandName] = useState('');
  const [description, setDescription] = useState('');

  const [isCreating, setIsCreating] = useState(false);

  const resetState = useCallback(() => {
    setMode('import');
    setLocation('project');
    setSourcePath('');
    setCustomName('');
    setValidationResult(null);
    setIsValidating(false);
    setCommandName('');
    setDescription('');
    setIsCreating(false);
  }, []);

  const handleOpenChange = useCallback((open: boolean) => {
    if (!open) {
      resetState();
    }
    onOpenChange(open);
  }, [onOpenChange, resetState]);

  const handleValidate = useCallback(async () => {
    if (!sourcePath.trim()) return;

    setIsValidating(true);
    setValidationResult(null);

    try {
      const result = await validateCommandImport(sourcePath.trim());
      setValidationResult(result);
    } catch (err) {
      setValidationResult({
        valid: false,
        errors: [err instanceof Error ? err.message : String(err)],
      });
    } finally {
      setIsValidating(false);
    }
  }, [sourcePath]);

  const handleCreate = useCallback(async () => {
    if (mode === 'import') {
      if (!sourcePath.trim()) return;
      if (!validationResult?.valid) return;
    } else {
      if (!commandName.trim()) return;
      if (!description.trim()) return;
    }

    setIsCreating(true);

    try {
      await createCommand({
        mode,
        location,
        sourcePath: mode === 'import' ? sourcePath.trim() : undefined,
        commandName: mode === 'import' ? (customName.trim() || undefined) : commandName.trim(),
        description: mode === 'cli-generate' ? description.trim() : undefined,
        generationType: mode === 'cli-generate' ? 'description' : undefined,
        projectPath,
        cliType,
      });

      handleOpenChange(false);
      onCreated();
    } catch (err) {
      console.error('Failed to create command:', err);
      if (mode === 'import') {
        setValidationResult({
          valid: false,
          errors: [err instanceof Error ? err.message : formatMessage({ id: 'commands.create.createError' })],
        });
      }
    } finally {
      setIsCreating(false);
    }
  }, [mode, location, sourcePath, customName, commandName, description, validationResult, projectPath, handleOpenChange, onCreated, formatMessage]);

  const canCreate = mode === 'import'
    ? sourcePath.trim() && validationResult?.valid && !isCreating
    : commandName.trim() && description.trim() && !isCreating;

  return (
    <Dialog open={open} onOpenChange={handleOpenChange}>
      <DialogContent className="max-w-2xl max-h-[90vh] overflow-y-auto">
        <DialogHeader>
          <DialogTitle>{formatMessage({ id: 'commands.create.title' })}</DialogTitle>
          <DialogDescription>
            {formatMessage({ id: 'commands.description' })}
          </DialogDescription>
        </DialogHeader>

        <div className="space-y-5 py-2">
          {/* Location Selection */}
          <div className="space-y-2">
            <Label>{formatMessage({ id: 'commands.create.location' })}</Label>
            <div className="grid grid-cols-2 gap-3">
              <button
                type="button"
                className={cn(
                  'px-4 py-3 text-left border-2 rounded-lg transition-all',
                  location === 'project'
                    ? 'border-primary bg-primary/10'
                    : 'border-border hover:border-primary/50'
                )}
                onClick={() => setLocation('project')}
              >
                <div className="flex items-center gap-2">
                  <Folder className="w-5 h-5" />
                  <div>
                    <div className="font-medium text-sm">{formatMessage({ id: 'commands.create.locationProject' })}</div>
                    <div className="text-xs text-muted-foreground">{`.${cliType}/commands/`}</div>
                  </div>
                </div>
              </button>
              <button
                type="button"
                className={cn(
                  'px-4 py-3 text-left border-2 rounded-lg transition-all',
                  location === 'user'
                    ? 'border-primary bg-primary/10'
                    : 'border-border hover:border-primary/50'
                )}
                onClick={() => setLocation('user')}
              >
                <div className="flex items-center gap-2">
                  <User className="w-5 h-5" />
                  <div>
                    <div className="font-medium text-sm">{formatMessage({ id: 'commands.create.locationUser' })}</div>
                    <div className="text-xs text-muted-foreground">{`~/.${cliType}/commands/`}</div>
                  </div>
                </div>
              </button>
            </div>
          </div>

          {/* Mode Selection */}
          <div className="space-y-2">
            <Label>{formatMessage({ id: 'commands.create.mode' })}</Label>
            <div className="grid grid-cols-2 gap-3">
              <button
                type="button"
                className={cn(
                  'px-4 py-3 text-left border-2 rounded-lg transition-all',
                  mode === 'import'
                    ? 'border-primary bg-primary/10'
                    : 'border-border hover:border-primary/50'
                )}
                onClick={() => setMode('import')}
              >
                <div className="flex items-center gap-2">
                  <FileCode className="w-5 h-5" />
                  <div>
                    <div className="font-medium text-sm">{formatMessage({ id: 'commands.create.modeImport' })}</div>
                    <div className="text-xs text-muted-foreground">{formatMessage({ id: 'commands.create.modeImportHint' })}</div>
                  </div>
                </div>
              </button>
              <button
                type="button"
                className={cn(
                  'px-4 py-3 text-left border-2 rounded-lg transition-all',
                  mode === 'cli-generate'
                    ? 'border-primary bg-primary/10'
                    : 'border-border hover:border-primary/50'
                )}
                onClick={() => setMode('cli-generate')}
              >
                <div className="flex items-center gap-2">
                  <Sparkles className="w-5 h-5" />
                  <div>
                    <div className="font-medium text-sm">{formatMessage({ id: 'commands.create.modeGenerate' })}</div>
                    <div className="text-xs text-muted-foreground">{formatMessage({ id: 'commands.create.modeGenerateHint' })}</div>
                  </div>
                </div>
              </button>
            </div>
          </div>

          {/* Import Mode Content */}
          {mode === 'import' && (
            <div className="space-y-4">
              <div className="space-y-2">
                <Label htmlFor="sourcePath">{formatMessage({ id: 'commands.create.sourcePath' })}</Label>
                <Input
                  id="sourcePath"
                  value={sourcePath}
                  onChange={(e) => {
                    setSourcePath(e.target.value);
                    setValidationResult(null);
                  }}
                  placeholder={formatMessage({ id: 'commands.create.sourcePathPlaceholder' })}
                  className="font-mono text-sm"
                />
                <p className="text-xs text-muted-foreground">{formatMessage({ id: 'commands.create.sourcePathHint' })}</p>
              </div>

              <div className="space-y-2">
                <Label htmlFor="customName">
                  {formatMessage({ id: 'commands.create.customName' })}
                  <span className="text-muted-foreground ml-1">({formatMessage({ id: 'commands.create.customNameHint' })})</span>
|
||||
</Label>
|
||||
<Input
|
||||
id="customName"
|
||||
value={customName}
|
||||
onChange={(e) => setCustomName(e.target.value)}
|
||||
placeholder={formatMessage({ id: 'commands.create.customNamePlaceholder' })}
|
||||
/>
|
||||
</div>
|
||||
|
||||
{/* Validation Result */}
|
||||
{isValidating && (
|
||||
<div className="flex items-center gap-2 p-3 bg-muted/50 rounded-lg">
|
||||
<Loader2 className="w-4 h-4 animate-spin" />
|
||||
<span className="text-sm text-muted-foreground">{formatMessage({ id: 'commands.create.validating' })}</span>
|
||||
</div>
|
||||
)}
|
||||
{validationResult && !isValidating && (
|
||||
validationResult.valid ? (
|
||||
<div className="p-4 bg-green-500/10 border border-green-500/20 rounded-lg">
|
||||
<div className="flex items-center gap-2 text-green-600 mb-2">
|
||||
<CheckCircle className="w-5 h-5" />
|
||||
<span className="font-medium">{formatMessage({ id: 'commands.create.validCommand' })}</span>
|
||||
</div>
|
||||
{validationResult.commandInfo && (
|
||||
<div className="space-y-1 text-sm">
|
||||
<div>
|
||||
<span className="text-muted-foreground">{formatMessage({ id: 'commands.card.name' })}: </span>
|
||||
<span>{validationResult.commandInfo.name}</span>
|
||||
</div>
|
||||
{validationResult.commandInfo.description && (
|
||||
<div>
|
||||
<span className="text-muted-foreground">{formatMessage({ id: 'commands.card.description' })}: </span>
|
||||
<span>{validationResult.commandInfo.description}</span>
|
||||
</div>
|
||||
)}
|
||||
{validationResult.commandInfo.usage && (
|
||||
<div>
|
||||
<span className="text-muted-foreground">{formatMessage({ id: 'commands.card.usage' })}: </span>
|
||||
<span>{validationResult.commandInfo.usage}</span>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
) : (
|
||||
<div className="p-4 bg-destructive/10 border border-destructive/20 rounded-lg">
|
||||
<div className="flex items-center gap-2 text-destructive mb-2">
|
||||
<XCircle className="w-5 h-5" />
|
||||
<span className="font-medium">{formatMessage({ id: 'commands.create.invalidCommand' })}</span>
|
||||
</div>
|
||||
{validationResult.errors && (
|
||||
<ul className="space-y-1 text-sm">
|
||||
{validationResult.errors.map((error, i) => (
|
||||
<li key={i} className="text-destructive">{error}</li>
|
||||
))}
|
||||
</ul>
|
||||
)}
|
||||
</div>
|
||||
)
|
||||
)}
|
||||
</div>
|
||||
)}
|
||||
|
||||
{/* CLI Generate Mode Content */}
|
||||
{mode === 'cli-generate' && (
|
||||
<div className="space-y-4">
|
||||
<div className="space-y-2">
|
||||
<Label htmlFor="commandName">
|
||||
{formatMessage({ id: 'commands.create.commandName' })} <span className="text-destructive">*</span>
|
||||
</Label>
|
||||
<Input
|
||||
id="commandName"
|
||||
value={commandName}
|
||||
onChange={(e) => setCommandName(e.target.value)}
|
||||
placeholder={formatMessage({ id: 'commands.create.commandNamePlaceholder' })}
|
||||
/>
|
||||
<p className="text-xs text-muted-foreground">{formatMessage({ id: 'commands.create.commandNameHint' })}</p>
|
||||
</div>
|
||||
|
||||
<div className="space-y-2">
|
||||
<Label htmlFor="description">
|
||||
{formatMessage({ id: 'commands.create.descriptionLabel' })} <span className="text-destructive">*</span>
|
||||
</Label>
|
||||
<Textarea
|
||||
id="description"
|
||||
value={description}
|
||||
onChange={(e) => setDescription(e.target.value)}
|
||||
placeholder={formatMessage({ id: 'commands.create.descriptionPlaceholder' })}
|
||||
rows={6}
|
||||
/>
|
||||
<p className="text-xs text-muted-foreground">{formatMessage({ id: 'commands.create.descriptionHint' })}</p>
|
||||
</div>
|
||||
|
||||
<div className="p-3 bg-blue-500/10 border border-blue-500/20 rounded-lg">
|
||||
<div className="flex items-start gap-2">
|
||||
<Info className="w-4 h-4 text-blue-600 mt-0.5" />
|
||||
<div className="text-sm text-blue-600">
|
||||
<p className="font-medium">{formatMessage({ id: 'commands.create.generateInfo' })}</p>
|
||||
<p className="text-xs mt-1">{formatMessage({ id: 'commands.create.generateTimeHint' })}</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
|
||||
<DialogFooter className="gap-2">
|
||||
<Button variant="outline" onClick={() => handleOpenChange(false)} disabled={isCreating}>
|
||||
{formatMessage({ id: 'commands.actions.cancel' })}
|
||||
</Button>
|
||||
{mode === 'import' && (
|
||||
<Button
|
||||
variant="outline"
|
||||
onClick={handleValidate}
|
||||
disabled={!sourcePath.trim() || isValidating || isCreating}
|
||||
>
|
||||
{isValidating && <Loader2 className="w-4 h-4 mr-2 animate-spin" />}
|
||||
{formatMessage({ id: 'commands.create.validate' })}
|
||||
</Button>
|
||||
)}
|
||||
<Button
|
||||
onClick={handleCreate}
|
||||
disabled={!canCreate}
|
||||
>
|
||||
{isCreating && <Loader2 className="w-4 h-4 mr-2 animate-spin" />}
|
||||
{isCreating
|
||||
? formatMessage({ id: 'commands.create.creating' })
|
||||
: mode === 'import'
|
||||
? formatMessage({ id: 'commands.create.import' })
|
||||
: formatMessage({ id: 'commands.create.generate' })
|
||||
}
|
||||
</Button>
|
||||
</DialogFooter>
|
||||
</DialogContent>
|
||||
</Dialog>
|
||||
);
|
||||
}
|
||||
|
||||
export default CommandCreateDialog;
|
||||
@@ -10,9 +10,13 @@ import {
  triggerConsolidation,
  getConsolidationStatus,
  getV2Jobs,
  previewExtractionQueue,
  triggerSelectiveExtraction,
  type ExtractionStatus,
  type ConsolidationStatus,
  type V2JobsResponse,
  type ExtractionPreviewResponse,
  type SelectiveExtractionResponse,
} from '../lib/api';
import { useWorkflowStore, selectProjectPath } from '@/stores/workflowStore';

@@ -23,6 +27,8 @@ export const memoryV2Keys = {
  consolidationStatus: (path?: string) => [...memoryV2Keys.all, 'consolidation', path] as const,
  jobs: (path?: string, filters?: { kind?: string; status_filter?: string }) =>
    [...memoryV2Keys.all, 'jobs', path, filters] as const,
  preview: (path?: string, includeNative?: boolean) =>
    [...memoryV2Keys.all, 'preview', path, includeNative] as const,
};

// Default stale time: 30 seconds (V2 status changes frequently)
@@ -97,5 +103,35 @@ export function useTriggerConsolidation() {
  });
}

// Hook: Preview sessions for extraction
export function usePreviewSessions(includeNative: boolean = false) {
  const projectPath = useWorkflowStore(selectProjectPath);

  return useQuery({
    queryKey: memoryV2Keys.preview(projectPath, includeNative),
    queryFn: () => previewExtractionQueue(includeNative, undefined, projectPath),
    enabled: !!projectPath,
    staleTime: 10 * 1000, // 10 seconds
  });
}

// Hook: Trigger selective extraction
export function useTriggerSelectiveExtraction() {
  const queryClient = useQueryClient();
  const projectPath = useWorkflowStore(selectProjectPath);

  return useMutation({
    mutationFn: (params: { sessionIds: string[]; includeNative?: boolean }) =>
      triggerSelectiveExtraction({
        sessionIds: params.sessionIds,
        includeNative: params.includeNative,
        path: projectPath,
      }),
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: memoryV2Keys.all });
    },
  });
}

// Export types
export type { ExtractionStatus, ConsolidationStatus, V2JobsResponse, ExtractionPreviewResponse, SelectiveExtractionResponse };

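The `preview` entry added to `memoryV2Keys` follows the usual TanStack Query key-factory pattern: each key extends a shared base so `invalidateQueries({ queryKey: memoryV2Keys.all })` sweeps every derived cache entry. A minimal standalone sketch — the real `all` base key is defined outside this hunk, so `['memory-v2']` below is an assumed placeholder:

```typescript
// Hypothetical base key: the actual memoryV2Keys.all is not shown in this hunk.
const memoryV2Keys = {
  all: ['memory-v2'] as const,
  // Keys are built lazily, so the self-reference is safe at call time.
  preview: (path?: string, includeNative?: boolean) =>
    [...memoryV2Keys.all, 'preview', path, includeNative] as const,
};

// Distinct inputs yield distinct, structurally stable cache keys.
const key = memoryV2Keys.preview('/repo', true);
// key is ['memory-v2', 'preview', '/repo', true]
```

Because the key embeds `projectPath` and `includeNative`, switching projects or toggling native sessions naturally creates a separate cache entry rather than clobbering the previous one.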
@@ -1492,6 +1492,39 @@ export async function getCommandsGroupsConfig(
  return fetchApi<{ groups: Record<string, any>; assignments: Record<string, string> }>(`/api/commands/groups/config?${params}`);
}

/**
 * Validate a command file for import
 */
export async function validateCommandImport(sourcePath: string): Promise<{
  valid: boolean;
  errors?: string[];
  commandInfo?: { name: string; description: string; version?: string };
}> {
  return fetchApi('/api/commands/validate-import', {
    method: 'POST',
    body: JSON.stringify({ sourcePath }),
  });
}

/**
 * Create/import a command
 */
export async function createCommand(params: {
  mode: 'import' | 'cli-generate';
  location: 'project' | 'user';
  sourcePath?: string;
  commandName?: string;
  description?: string;
  generationType?: 'description' | 'template';
  projectPath?: string;
  cliType?: 'claude' | 'codex';
}): Promise<{ commandName: string; path: string }> {
  return fetchApi('/api/commands/create', {
    method: 'POST',
    body: JSON.stringify(params),
  });
}

// ========== Memory API ==========

export interface CoreMemory {
@@ -1744,6 +1777,79 @@ export async function getV2Jobs(
  return fetchApi<V2JobsResponse>(`/api/core-memory/jobs?${params}`);
}

// ========== Memory V2 Preview API ==========

export interface SessionPreviewItem {
  sessionId: string;
  source: 'ccw' | 'native';
  tool: string;
  timestamp: number;
  eligible: boolean;
  extracted: boolean;
  bytes: number;
  turns: number;
}

export interface ExtractionPreviewResponse {
  success: boolean;
  sessions: SessionPreviewItem[];
  summary: {
    total: number;
    eligible: number;
    alreadyExtracted: number;
    readyForExtraction: number;
  };
}

export interface SelectiveExtractionRequest {
  sessionIds: string[];
  includeNative?: boolean;
  path?: string;
}

export interface SelectiveExtractionResponse {
  success: boolean;
  jobId: string;
  queued: number;
  skipped: number;
  invalidIds: string[];
}

/**
 * Preview extraction queue - get list of sessions available for extraction
 */
export async function previewExtractionQueue(
  includeNative: boolean = false,
  maxSessions?: number,
  projectPath?: string
): Promise<ExtractionPreviewResponse> {
  const params = new URLSearchParams();
  if (projectPath) params.set('path', projectPath);
  if (includeNative) params.set('include_native', 'true');
  if (maxSessions) params.set('max_sessions', String(maxSessions));
  return fetchApi<ExtractionPreviewResponse>(`/api/core-memory/extract/preview?${params}`);
}

/**
 * Trigger selective extraction for specific sessions
 */
export async function triggerSelectiveExtraction(
  request: SelectiveExtractionRequest
): Promise<SelectiveExtractionResponse> {
  const params = new URLSearchParams();
  if (request.path) params.set('path', request.path);
  return fetchApi<SelectiveExtractionResponse>(
    `/api/core-memory/extract/selective?${params}`,
    {
      method: 'POST',
      body: JSON.stringify({
        session_ids: request.sessionIds,
        include_native: request.includeNative,
      }),
    }
  );
}

// ========== Project Overview API ==========

export interface TechnologyStack {

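`previewExtractionQueue` only sets query parameters that are actually provided, so the server sees no spurious `include_native=false` or empty `path`. A self-contained sketch of that URL assembly — the hypothetical `buildPreviewUrl` helper below mirrors the function's param logic, with the project-specific `fetchApi` call omitted:

```typescript
// Hypothetical helper mirroring previewExtractionQueue's query-string assembly.
function buildPreviewUrl(
  includeNative: boolean = false,
  maxSessions?: number,
  projectPath?: string
): string {
  const params = new URLSearchParams();
  if (projectPath) params.set('path', projectPath);        // optional project scoping
  if (includeNative) params.set('include_native', 'true'); // omitted when false
  if (maxSessions) params.set('max_sessions', String(maxSessions));
  return `/api/core-memory/extract/preview?${params}`;     // params.toString() percent-encodes values
}

// With no arguments the query string is empty:
// buildPreviewUrl() === '/api/core-memory/extract/preview?'
```

Note that `URLSearchParams` percent-encodes path separators, so a `projectPath` of `/tmp/p` arrives as `path=%2Ftmp%2Fp`.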
@@ -74,7 +74,8 @@
  "categories": {
    "notification": "Notification",
    "indexing": "Indexing",
    "automation": "Automation",
    "utility": "Utility"
  },
  "templates": {
    "session-start-notify": {
@@ -116,6 +117,30 @@
    "project-state-inject": {
      "name": "Project State Inject",
      "description": "Inject project guidelines and recent dev history at session start"
    },
    "memory-v2-extract": {
      "name": "Memory V2 Extract",
      "description": "Trigger Phase 1 extraction when session ends (after idle period)"
    },
    "memory-v2-auto-consolidate": {
      "name": "Memory V2 Auto Consolidate",
      "description": "Trigger Phase 2 consolidation after extraction jobs complete"
    },
    "memory-sync-dashboard": {
      "name": "Memory Sync Dashboard",
      "description": "Sync memory V2 status to dashboard on changes"
    },
    "memory-auto-compress": {
      "name": "Auto Memory Compress",
      "description": "Automatically compress memory when entries exceed threshold"
    },
    "memory-preview-extract": {
      "name": "Memory Preview & Extract",
      "description": "Preview extraction queue and extract eligible sessions"
    },
    "memory-status-check": {
      "name": "Memory Status Check",
      "description": "Check memory extraction and consolidation status"
    }
  },
  "actions": {

@@ -10,7 +10,8 @@
    "collapseAll": "Collapse All",
    "copy": "Copy",
    "showDisabled": "Show Disabled",
    "hideDisabled": "Hide Disabled",
    "cancel": "Cancel"
  },
  "source": {
    "builtin": "Built-in",
@@ -57,5 +58,45 @@
    "clickToDisableAll": "Click to disable all",
    "noCommands": "No commands in this group",
    "noEnabledCommands": "No enabled commands in this group"
  },
  "create": {
    "title": "Create Command",
    "location": "Location",
    "locationProject": "Project Commands",
    "locationProjectHint": ".claude/commands/",
    "locationUser": "Global Commands",
    "locationUserHint": "~/.claude/commands/",
    "mode": "Creation Mode",
    "modeImport": "Import File",
    "modeImportHint": "Import command from existing file",
    "modeGenerate": "AI Generate",
    "modeGenerateHint": "Generate command using AI",
    "sourcePath": "Source File Path",
    "sourcePathPlaceholder": "Enter absolute path to command file",
    "sourcePathHint": "File must be a valid command markdown file",
    "customName": "Custom Name",
    "customNamePlaceholder": "Leave empty to use original name",
    "customNameHint": "Optional, overrides default command name",
    "commandName": "Command Name",
    "commandNamePlaceholder": "Enter command name",
    "commandNameHint": "Used as the command file name",
    "descriptionLabel": "Command Description",
    "descriptionPlaceholder": "Describe what this command should do...",
    "descriptionHint": "AI will generate command content based on this description",
    "generateInfo": "AI will use CLI tools to generate the command",
    "generateTimeHint": "Generation may take some time",
    "validate": "Validate",
    "import": "Import",
    "generate": "Generate",
    "validating": "Validating...",
    "validCommand": "Validation passed",
    "invalidCommand": "Validation failed",
    "creating": "Creating...",
    "created": "Command \"{name}\" created successfully",
    "createError": "Failed to create command",
    "sourcePathRequired": "Please enter source file path",
    "commandNameRequired": "Please enter command name",
    "descriptionRequired": "Please enter command description",
    "validateFirst": "Please validate the command file first"
  }
}

@@ -169,6 +169,10 @@
    "saveConfig": "Save Configuration",
    "saving": "Saving..."
  },
  "feedback": {
    "saveSuccess": "Configuration saved",
    "saveError": "Failed to save configuration"
  },
  "scope": {
    "global": "Global",
    "project": "Project",

@@ -180,6 +180,28 @@
    "statusBanner": {
      "running": "Pipeline Running - {count} job(s) in progress",
      "hasErrors": "Pipeline Idle - {count} job(s) failed"
    },
    "preview": {
      "title": "Extraction Queue Preview",
      "selectSessions": "Search sessions...",
      "sourceCcw": "CCW",
      "sourceNative": "Native",
      "selectAll": "Select All",
      "selectNone": "Select None",
      "extractSelected": "Extract Selected ({count})",
      "noSessions": "No sessions found",
      "total": "Total",
      "eligible": "Eligible",
      "extracted": "Already Extracted",
      "ready": "Ready",
      "previewQueue": "Preview Queue",
      "includeNative": "Include Native Sessions",
      "selected": "{count} sessions selected",
      "selectHint": "Select sessions to extract",
      "ineligible": "Ineligible"
    },
    "extraction": {
      "selectiveTriggered": "Selective extraction triggered"
    }
  }
}

@@ -74,7 +74,8 @@
  "categories": {
    "notification": "通知",
    "indexing": "索引",
    "automation": "自动化",
    "utility": "实用工具"
  },
  "templates": {
    "session-start-notify": {
@@ -116,6 +117,30 @@
    "project-state-inject": {
      "name": "项目状态注入",
      "description": "会话启动时注入项目约束和最近开发历史"
    },
    "memory-v2-extract": {
      "name": "Memory V2 提取",
      "description": "会话结束时触发第一阶段提取(空闲期后)"
    },
    "memory-v2-auto-consolidate": {
      "name": "Memory V2 自动合并",
      "description": "提取作业完成后触发第二阶段合并"
    },
    "memory-sync-dashboard": {
      "name": "Memory 同步仪表盘",
      "description": "变更时同步 Memory V2 状态到仪表盘"
    },
    "memory-auto-compress": {
      "name": "自动内存压缩",
      "description": "当条目超过阈值时自动压缩内存"
    },
    "memory-preview-extract": {
      "name": "内存预览与提取",
      "description": "预览提取队列并提取符合条件的会话"
    },
    "memory-status-check": {
      "name": "内存状态检查",
      "description": "检查内存提取和合并状态"
    }
  },
  "actions": {

@@ -10,7 +10,8 @@
    "collapseAll": "全部收起",
    "copy": "复制",
    "showDisabled": "显示已禁用",
    "hideDisabled": "隐藏已禁用",
    "cancel": "取消"
  },
  "source": {
    "builtin": "内置",
@@ -57,5 +58,45 @@
    "clickToDisableAll": "点击全部禁用",
    "noCommands": "此分组中没有命令",
    "noEnabledCommands": "此分组中没有已启用的命令"
  },
  "create": {
    "title": "创建命令",
    "location": "存储位置",
    "locationProject": "项目命令",
    "locationProjectHint": ".claude/commands/",
    "locationUser": "全局命令",
    "locationUserHint": "~/.claude/commands/",
    "mode": "创建方式",
    "modeImport": "导入文件",
    "modeImportHint": "从现有文件导入命令",
    "modeGenerate": "AI 生成",
    "modeGenerateHint": "使用 AI 生成命令",
    "sourcePath": "源文件路径",
    "sourcePathPlaceholder": "输入命令文件的绝对路径",
    "sourcePathHint": "文件必须是有效的命令 Markdown 文件",
    "customName": "自定义名称",
    "customNamePlaceholder": "留空则使用原始名称",
    "customNameHint": "可选,覆盖默认命令名称",
    "commandName": "命令名称",
    "commandNamePlaceholder": "输入命令名称",
    "commandNameHint": "用作命令文件名称",
    "descriptionLabel": "命令描述",
    "descriptionPlaceholder": "描述这个命令应该做什么...",
    "descriptionHint": "AI 将根据描述生成命令内容",
    "generateInfo": "AI 将使用 CLI 工具生成命令",
    "generateTimeHint": "生成过程可能需要一些时间",
    "validate": "验证",
    "import": "导入",
    "generate": "生成",
    "validating": "验证中...",
    "validCommand": "验证通过",
    "invalidCommand": "验证失败",
    "creating": "创建中...",
    "created": "命令 \"{name}\" 创建成功",
    "createError": "创建命令失败",
    "sourcePathRequired": "请输入源文件路径",
    "commandNameRequired": "请输入命令名称",
    "descriptionRequired": "请输入命令描述",
    "validateFirst": "请先验证命令文件"
  }
}

@@ -169,6 +169,10 @@
    "saveConfig": "保存配置",
    "saving": "保存中..."
  },
  "feedback": {
    "saveSuccess": "配置已保存",
    "saveError": "保存配置失败"
  },
  "scope": {
    "global": "全局",
    "project": "项目",

@@ -180,6 +180,28 @@
    "statusBanner": {
      "running": "Pipeline 运行中 - {count} 个作业正在执行",
      "hasErrors": "Pipeline 空闲 - {count} 个作业失败"
    },
    "preview": {
      "title": "提取队列预览",
      "selectSessions": "搜索会话...",
      "sourceCcw": "CCW",
      "sourceNative": "原生",
      "selectAll": "全选",
      "selectNone": "取消全选",
      "extractSelected": "提取选中 ({count})",
      "noSessions": "未找到会话",
      "total": "总计",
      "eligible": "符合条件",
      "extracted": "已提取",
      "ready": "就绪",
      "previewQueue": "预览队列",
      "includeNative": "包含原生会话",
      "selected": "已选择 {count} 个会话",
      "selectHint": "选择要提取的会话",
      "ineligible": "不符合条件"
    },
    "extraction": {
      "selectiveTriggered": "选择性提取已触发"
    }
  }
}

@@ -252,6 +252,11 @@ export function run(argv: string[]): void {
    .option('--batch-size <n>', 'Batch size for embedding', '8')
    .option('--top-k <n>', 'Number of semantic search results', '10')
    .option('--min-score <f>', 'Minimum similarity score for semantic search', '0.5')
    // Pipeline V2 options
    .option('--include-native', 'Include native sessions (preview)')
    .option('--path <path>', 'Project path (pipeline commands)')
    .option('--max-sessions <n>', 'Max sessions to extract (extract)')
    .option('--session-ids <ids>', 'Comma-separated session IDs (extract)')
    .action((subcommand, args, options) => memoryCommand(subcommand, args, options));

  // Core Memory command

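`--session-ids` arrives from commander as a single comma-separated string; the memory command splits, trims, and drops empty segments before validating each ID against the preview. A minimal sketch of that parsing — `parseSessionIds` is a hypothetical name, the real logic lives inline in the command's extract action:

```typescript
// Hypothetical helper: the same split/trim/filter chain the selective-extraction
// path applies to the raw --session-ids value.
function parseSessionIds(raw: string): string[] {
  return raw.split(',').map((id) => id.trim()).filter(Boolean);
}

// Stray whitespace and empty segments are discarded:
// parseSessionIds(' a, b,,c ') → ['a', 'b', 'c']
```

Filtering with `Boolean` means an all-whitespace input yields an empty array, which is what lets the command reject `--session-ids ","` with its "No valid session IDs provided" error.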
@@ -20,6 +20,9 @@ import {
|
||||
} from '../core/memory-embedder-bridge.js';
|
||||
import { getCoreMemoryStore } from '../core/core-memory-store.js';
|
||||
import { CliHistoryStore } from '../tools/cli-history-store.js';
|
||||
import { MemoryExtractionPipeline, type PreviewResult, type SessionPreviewItem } from '../core/memory-extraction-pipeline.js';
|
||||
import { MemoryConsolidationPipeline } from '../core/memory-consolidation-pipeline.js';
|
||||
import { MemoryJobScheduler } from '../core/memory-job-scheduler.js';
|
||||
|
||||
interface TrackOptions {
|
||||
type?: string;
|
||||
@@ -74,6 +77,28 @@ interface EmbedStatusOptions {
|
||||
json?: boolean;
|
||||
}
|
||||
|
||||
// Memory Pipeline V2 subcommand options
|
||||
interface PipelinePreviewOptions {
|
||||
includeNative?: boolean;
|
||||
path?: string;
|
||||
json?: boolean;
|
||||
}
|
||||
|
||||
interface PipelineExtractOptions {
|
||||
maxSessions?: string;
|
||||
sessionIds?: string;
|
||||
path?: string;
|
||||
}
|
||||
|
||||
interface PipelineConsolidateOptions {
|
||||
path?: string;
|
||||
}
|
||||
|
||||
interface PipelineStatusOptions {
|
||||
path?: string;
|
||||
json?: boolean;
|
||||
}
|
||||
|
||||
/**
|
||||
* Read JSON data from stdin (for Claude Code hooks)
|
||||
*/
|
||||
@@ -967,9 +992,388 @@ async function embedStatusAction(options: EmbedStatusOptions): Promise<void> {
|
||||
}
|
||||
}
|
||||
|
||||
// ============================================================
|
||||
// Memory Pipeline V2 Subcommands
|
||||
// ============================================================
|
||||
|
||||
/**
|
||||
* Preview eligible sessions for extraction
|
||||
*/
|
||||
async function pipelinePreviewAction(options: PipelinePreviewOptions): Promise<void> {
|
||||
const { includeNative, path: projectPath, json } = options;
|
||||
const basePath = projectPath || process.cwd();
|
||||
|
||||
try {
|
||||
const pipeline = new MemoryExtractionPipeline(basePath);
|
||||
const preview = pipeline.previewEligibleSessions({
|
||||
includeNative: includeNative || false,
|
||||
});
|
||||
|
||||
if (json) {
|
||||
console.log(JSON.stringify(preview, null, 2));
|
||||
return;
|
||||
}
|
||||
|
||||
console.log(chalk.bold.cyan('\n Extraction Queue Preview\n'));
|
||||
console.log(chalk.gray(` Project: ${basePath}`));
|
||||
console.log(chalk.gray(` Include Native: ${includeNative ? 'Yes' : 'No'}\n`));
|
||||
|
||||
// Summary
|
||||
const { summary } = preview;
|
||||
console.log(chalk.bold.white(' Summary:'));
|
||||
console.log(chalk.white(` Total Sessions: ${summary.total}`));
|
||||
console.log(chalk.white(` Eligible: ${summary.eligible}`));
|
||||
console.log(chalk.white(` Already Extracted: ${summary.alreadyExtracted}`));
|
||||
console.log(chalk.green(` Ready for Extraction: ${summary.readyForExtraction}`));
|
||||
|
||||
if (preview.sessions.length === 0) {
|
||||
console.log(chalk.yellow('\n No eligible sessions found.\n'));
|
||||
return;
|
||||
}
|
||||
|
||||
// Sessions table
|
||||
console.log(chalk.bold.white('\n Sessions:\n'));
|
||||
console.log(chalk.gray(' ID Source Tool Turns Bytes Status'));
|
||||
console.log(chalk.gray(' ' + '-'.repeat(76)));
|
||||
|
||||
for (const session of preview.sessions) {
|
||||
const id = session.sessionId.padEnd(20);
|
||||
const source = session.source.padEnd(11);
|
||||
const tool = (session.tool || '-').padEnd(11);
|
||||
const turns = String(session.turns).padStart(5);
|
||||
const bytes = String(session.bytes).padStart(9);
|
||||
const status = session.extracted
|
||||
? chalk.green('extracted')
|
||||
: session.eligible
|
||||
? chalk.cyan('ready')
|
||||
: chalk.gray('skipped');
|
||||
|
||||
console.log(` ${chalk.dim(id)} ${source} ${tool} ${turns} ${bytes} ${status}`);
|
||||
}
|
||||
|
||||
console.log(chalk.gray('\n ' + '-'.repeat(76)));
|
||||
console.log(chalk.gray(` Showing ${preview.sessions.length} sessions\n`));
|
||||
|
||||
} catch (error) {
|
||||
if (json) {
|
||||
console.log(JSON.stringify({ error: (error as Error).message }, null, 2));
|
||||
} else {
|
||||
console.error(chalk.red(`\n Error: ${(error as Error).message}\n`));
|
||||
}
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Trigger extraction for sessions
|
||||
*/
|
||||
async function pipelineExtractAction(options: PipelineExtractOptions): Promise<void> {
|
||||
const { maxSessions, sessionIds, path: projectPath } = options;
|
||||
const basePath = projectPath || process.cwd();
|
||||
|
||||
try {
|
||||
const store = getCoreMemoryStore(basePath);
|
||||
const scheduler = new MemoryJobScheduler(store.getDb());
|
||||
const pipeline = new MemoryExtractionPipeline(basePath);
|
||||
|
||||
// Selective extraction with specific session IDs
|
||||
if (sessionIds) {
|
||||
const ids = sessionIds.split(',').map(id => id.trim()).filter(Boolean);
|
||||
|
||||
if (ids.length === 0) {
|
||||
console.error(chalk.red('Error: No valid session IDs provided'));
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
console.log(chalk.bold.cyan('\n Selective Extraction\n'));
|
||||
console.log(chalk.gray(` Project: ${basePath}`));
|
||||
console.log(chalk.gray(` Session IDs: ${ids.join(', ')}\n`));
|
||||
|
||||
// Validate sessions
|
||||
const preview = pipeline.previewEligibleSessions({ includeNative: false });
|
||||
const validSessionIds = new Set(preview.sessions.map(s => s.sessionId));
|
||||
|
||||
const queued: string[] = [];
|
||||
const skipped: string[] = [];
|
||||
const invalid: string[] = [];
|
||||
|
||||
for (const sessionId of ids) {
|
||||
if (!validSessionIds.has(sessionId)) {
|
||||
invalid.push(sessionId);
|
||||
continue;
|
||||
}
|
||||
|
||||
// Check if already extracted
|
||||
const existingOutput = store.getStage1Output(sessionId);
|
||||
if (existingOutput) {
|
||||
skipped.push(sessionId);
|
||||
continue;
|
||||
}
|
||||
|
||||
// Enqueue job
|
||||
scheduler.enqueueJob('phase1_extraction', sessionId, Math.floor(Date.now() / 1000));
|
||||
queued.push(sessionId);
|
||||
}
|
||||
|
||||
console.log(chalk.green(` Queued: ${queued.length} sessions`));
|
||||
console.log(chalk.yellow(` Skipped (already extracted): ${skipped.length}`));
|
||||
|
||||
if (invalid.length > 0) {
|
||||
console.log(chalk.red(` Invalid: ${invalid.length}`));
|
||||
console.log(chalk.gray(` ${invalid.join(', ')}`));
|
||||
}
|
||||
|
||||
// Process queued sessions
|
||||
if (queued.length > 0) {
|
||||
        console.log(chalk.cyan('\n Processing extraction jobs...\n'));

        let succeeded = 0;
        let failed = 0;

        for (const sessionId of queued) {
          try {
            await pipeline.runExtractionJob(sessionId);
            succeeded++;
            console.log(chalk.green(` [OK] ${sessionId}`));
          } catch (err) {
            failed++;
            console.log(chalk.red(` [FAIL] ${sessionId}: ${(err as Error).message}`));
          }
        }

        console.log(chalk.bold.white(`\n Completed: ${succeeded} succeeded, ${failed} failed\n`));
      } else {
        console.log();
      }

      return;
    }

    // Batch extraction
    const max = maxSessions ? parseInt(maxSessions, 10) : 10;

    console.log(chalk.bold.cyan('\n Batch Extraction\n'));
    console.log(chalk.gray(` Project: ${basePath}`));
    console.log(chalk.gray(` Max Sessions: ${max}\n`));

    // Get eligible sessions
    const eligible = pipeline.scanEligibleSessions(max);
    const preview = pipeline.previewEligibleSessions({ maxSessions: max });

    console.log(chalk.white(` Found ${eligible.length} eligible sessions`));
    console.log(chalk.white(` Ready for extraction: ${preview.summary.readyForExtraction}\n`));

    if (eligible.length === 0) {
      console.log(chalk.yellow(' No eligible sessions to extract.\n'));
      return;
    }

    // Queue jobs
    const jobId = `batch-${Date.now()}`;
    const queued: string[] = [];

    for (const session of eligible) {
      const existingOutput = store.getStage1Output(session.id);
      if (!existingOutput) {
        const watermark = Math.floor(new Date(session.updated_at).getTime() / 1000);
        scheduler.enqueueJob('phase1_extraction', session.id, watermark);
        queued.push(session.id);
      }
    }

    console.log(chalk.cyan(` Job ID: ${jobId}`));
    console.log(chalk.cyan(` Queued: ${queued.length} sessions\n`));

    // Process queued sessions
    if (queued.length > 0) {
      console.log(chalk.cyan(' Processing extraction jobs...\n'));

      let succeeded = 0;
      let failed = 0;

      for (const sessionId of queued) {
        try {
          await pipeline.runExtractionJob(sessionId);
          succeeded++;
          console.log(chalk.green(` [OK] ${sessionId}`));
        } catch (err) {
          failed++;
          console.log(chalk.red(` [FAIL] ${sessionId}: ${(err as Error).message}`));
        }
      }

      console.log(chalk.bold.white(`\n Completed: ${succeeded} succeeded, ${failed} failed\n`));
    } else {
      console.log(chalk.yellow(' No new sessions to extract.\n'));
    }

  } catch (error) {
    console.error(chalk.red(`\n Error: ${(error as Error).message}\n`));
    process.exit(1);
  }
}
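The queuing loop above derives each job's watermark by converting the session's `updated_at` ISO timestamp to Unix seconds. A minimal self-contained sketch of that conversion (the function name is illustrative, not part of the CLI):

```typescript
// Convert an ISO-8601 timestamp to a Unix-seconds watermark,
// mirroring `Math.floor(new Date(...).getTime() / 1000)` used when queuing jobs.
function toWatermark(isoTimestamp: string): number {
  const ms = new Date(isoTimestamp).getTime();
  if (Number.isNaN(ms)) {
    throw new Error(`Invalid timestamp: ${isoTimestamp}`);
  }
  // getTime() is milliseconds since the epoch; floor to whole seconds
  return Math.floor(ms / 1000);
}

const watermark = toWatermark('1970-01-01T00:00:01.999Z');
// watermark === 1 (sub-second precision is truncated, not rounded)
```

Flooring (rather than rounding) keeps the watermark strictly not-after the session's last update, so re-running the scan cannot skip a session updated in the same second.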

/**
 * Trigger consolidation pipeline
 */
async function pipelineConsolidateAction(options: PipelineConsolidateOptions): Promise<void> {
  const { path: projectPath } = options;
  const basePath = projectPath || process.cwd();

  try {
    const pipeline = new MemoryConsolidationPipeline(basePath);

    console.log(chalk.bold.cyan('\n Memory Consolidation\n'));
    console.log(chalk.gray(` Project: ${basePath}\n`));

    // Get current status
    const status = pipeline.getStatus();

    if (status) {
      console.log(chalk.white(` Current Status: ${status.status}`));
    }

    console.log(chalk.cyan('\n Triggering consolidation...\n'));

    // Run consolidation
    await pipeline.runConsolidation();

    console.log(chalk.green(' Consolidation completed successfully.\n'));

    // Show result
    const memoryMd = pipeline.getMemoryMdContent();
    if (memoryMd) {
      console.log(chalk.white(' Memory.md Preview:'));
      console.log(chalk.gray(' ' + '-'.repeat(60)));
      const preview = memoryMd.substring(0, 500);
      console.log(chalk.dim(preview.split('\n').map(line => ' ' + line).join('\n')));
      if (memoryMd.length > 500) {
        console.log(chalk.gray(' ...'));
      }
      console.log(chalk.gray(' ' + '-'.repeat(60)));
      console.log(chalk.gray(` (${memoryMd.length} bytes total)\n`));
    }

  } catch (error) {
    console.error(chalk.red(`\n Error: ${(error as Error).message}\n`));
    process.exit(1);
  }
}

/**
 * Show pipeline status
 */
async function pipelineStatusAction(options: PipelineStatusOptions): Promise<void> {
  const { path: projectPath, json } = options;
  const basePath = projectPath || process.cwd();

  try {
    const store = getCoreMemoryStore(basePath);
    const scheduler = new MemoryJobScheduler(store.getDb());

    // Extraction status
    const stage1Count = store.countStage1Outputs();
    const extractionJobs = scheduler.listJobs('phase1_extraction');

    // Consolidation status
    let consolidationStatus = 'unavailable';
    let memoryMdAvailable = false;

    try {
      const consolidationPipeline = new MemoryConsolidationPipeline(basePath);
      const status = consolidationPipeline.getStatus();
      consolidationStatus = status?.status || 'unknown';
      memoryMdAvailable = !!consolidationPipeline.getMemoryMdContent();
    } catch {
      // Consolidation pipeline may not be initialized
    }

    // Job counts by status
    const jobCounts: Record<string, number> = {};
    for (const job of extractionJobs) {
      jobCounts[job.status] = (jobCounts[job.status] || 0) + 1;
    }

    const result = {
      extraction: {
        stage1Count,
        totalJobs: extractionJobs.length,
        jobCounts,
        recentJobs: extractionJobs.slice(0, 10).map(j => ({
          job_key: j.job_key,
          status: j.status,
          started_at: j.started_at,
          finished_at: j.finished_at,
          last_error: j.last_error,
        })),
      },
      consolidation: {
        status: consolidationStatus,
        memoryMdAvailable,
      },
    };

    if (json) {
      console.log(JSON.stringify(result, null, 2));
      return;
    }

    console.log(chalk.bold.cyan('\n Memory Pipeline Status\n'));
    console.log(chalk.gray(` Project: ${basePath}\n`));

    // Extraction status
    console.log(chalk.bold.white(' Extraction Pipeline:'));
    console.log(chalk.white(` Stage 1 Outputs: ${stage1Count}`));
    console.log(chalk.white(` Total Jobs: ${extractionJobs.length}`));

    if (Object.keys(jobCounts).length > 0) {
      console.log(chalk.white(' Job Status:'));
      for (const [status, count] of Object.entries(jobCounts)) {
        const statusColor = status === 'completed' ? chalk.green :
          status === 'running' ? chalk.yellow : chalk.gray;
        console.log(` ${statusColor(status)}: ${count}`);
      }
    }

    // Consolidation status
    console.log(chalk.bold.white('\n Consolidation Pipeline:'));
    console.log(chalk.white(` Status: ${consolidationStatus}`));
    console.log(chalk.white(` Memory.md Available: ${memoryMdAvailable ? 'Yes' : 'No'}`));

    // Recent jobs
    if (extractionJobs.length > 0) {
      console.log(chalk.bold.white('\n Recent Extraction Jobs:\n'));
      console.log(chalk.gray(' Status Job Key'));
      console.log(chalk.gray(' ' + '-'.repeat(60)));

      for (const job of extractionJobs.slice(0, 10)) {
        const statusIcon = job.status === 'done' ? chalk.green('done ') :
          job.status === 'running' ? chalk.yellow('running ') :
          job.status === 'pending' ? chalk.gray('pending ') :
          chalk.red('error ');
        console.log(` ${statusIcon} ${chalk.dim(job.job_key)}`);
      }

      if (extractionJobs.length > 10) {
        console.log(chalk.gray(` ... and ${extractionJobs.length - 10} more`));
      }
    }

    console.log();

  } catch (error) {
    if (json) {
      console.log(JSON.stringify({ error: (error as Error).message }, null, 2));
    } else {
      console.error(chalk.red(`\n Error: ${(error as Error).message}\n`));
    }
    process.exit(1);
  }
}

/**
 * Memory command entry point
 * @param {string} subcommand - Subcommand (track, import, stats, search, suggest, prune, embed, embed-status)
 * @param {string} subcommand - Subcommand (track, import, stats, search, suggest, prune, embed, embed-status, preview, extract, consolidate, status)
 * @param {string|string[]} args - Arguments array
 * @param {Object} options - CLI options
 */
@@ -1018,6 +1422,23 @@ export async function memoryCommand(
      await embedStatusAction(options as EmbedStatusOptions);
      break;

    // Memory Pipeline V2 subcommands
    case 'preview':
      await pipelinePreviewAction(options as PipelinePreviewOptions);
      break;

    case 'extract':
      await pipelineExtractAction(options as PipelineExtractOptions);
      break;

    case 'consolidate':
      await pipelineConsolidateAction(options as PipelineConsolidateOptions);
      break;

    case 'status':
      await pipelineStatusAction(options as PipelineStatusOptions);
      break;

    default:
      console.log(chalk.bold.cyan('\n CCW Memory Module\n'));
      console.log(' Context tracking and prompt optimization.\n');
@@ -1031,6 +1452,12 @@ export async function memoryCommand(
      console.log(chalk.gray(' embed Generate embeddings for semantic search'));
      console.log(chalk.gray(' embed-status Show embedding generation status'));
      console.log();
      console.log(chalk.bold.cyan(' Memory Pipeline V2:'));
      console.log(chalk.gray(' preview Preview eligible sessions for extraction'));
      console.log(chalk.gray(' extract Trigger extraction for sessions'));
      console.log(chalk.gray(' consolidate Trigger consolidation pipeline'));
      console.log(chalk.gray(' status Show pipeline status'));
      console.log();
      console.log(' Track Options:');
      console.log(chalk.gray(' --type <type> Entity type: file, module, topic'));
      console.log(chalk.gray(' --action <action> Action: read, write, mention'));
@@ -1074,6 +1501,25 @@ export async function memoryCommand(
      console.log(chalk.gray(' --older-than <age> Age threshold (default: 30d)'));
      console.log(chalk.gray(' --dry-run Preview without deleting'));
      console.log();
      console.log(chalk.bold.cyan(' Pipeline V2 Options:'));
      console.log();
      console.log(' Preview Options:');
      console.log(chalk.gray(' --include-native Include native sessions in preview'));
      console.log(chalk.gray(' --path <path> Project path (default: current directory)'));
      console.log(chalk.gray(' --json Output as JSON'));
      console.log();
      console.log(' Extract Options:');
      console.log(chalk.gray(' --max-sessions <n> Max sessions to extract (default: 10)'));
      console.log(chalk.gray(' --session-ids <ids> Comma-separated session IDs for selective extraction'));
      console.log(chalk.gray(' --path <path> Project path (default: current directory)'));
      console.log();
      console.log(' Consolidate Options:');
      console.log(chalk.gray(' --path <path> Project path (default: current directory)'));
      console.log();
      console.log(' Pipeline Status Options:');
      console.log(chalk.gray(' --path <path> Project path (default: current directory)'));
      console.log(chalk.gray(' --json Output as JSON'));
      console.log();
      console.log(' Examples:');
      console.log(chalk.gray(' ccw memory track --type file --action read --value "src/auth.ts"'));
      console.log(chalk.gray(' ccw memory import --source history --project "my-app"'));
@@ -1086,5 +1532,13 @@ export async function memoryCommand(
      console.log(chalk.gray(' ccw memory suggest --context "implementing JWT auth"'));
      console.log(chalk.gray(' ccw memory prune --older-than 60d --dry-run'));
      console.log();
      console.log(chalk.cyan(' Pipeline V2 Examples:'));
      console.log(chalk.gray(' ccw memory preview # Preview extraction queue'));
      console.log(chalk.gray(' ccw memory preview --include-native # Include native sessions'));
      console.log(chalk.gray(' ccw memory extract --max-sessions 10 # Batch extract up to 10'));
      console.log(chalk.gray(' ccw memory extract --session-ids sess-1,sess-2 # Selective extraction'));
      console.log(chalk.gray(' ccw memory consolidate # Run consolidation'));
      console.log(chalk.gray(' ccw memory status # Check pipeline status'));
      console.log();
  }
}

@@ -3,12 +3,12 @@
 * Delegates to team-msg.ts handler for JSONL-based persistent messaging
 *
 * Commands:
 *   ccw team log --team <name> --from <role> --to <role> --type <type> --summary "..."
 *   ccw team read --team <name> --id <MSG-NNN>
 *   ccw team list --team <name> [--from <role>] [--to <role>] [--type <type>] [--last <n>]
 *   ccw team status --team <name>
 *   ccw team delete --team <name> --id <MSG-NNN>
 *   ccw team clear --team <name>
 *   ccw team log --team <session-id> --from <role> --to <role> --type <type> --summary "..."
 *   ccw team read --team <session-id> --id <MSG-NNN>
 *   ccw team list --team <session-id> [--from <role>] [--to <role>] [--type <type>] [--last <n>]
 *   ccw team status --team <session-id>
 *   ccw team delete --team <session-id> --id <MSG-NNN>
 *   ccw team clear --team <session-id>
 */

import chalk from 'chalk';
@@ -145,7 +145,7 @@ function printHelp(): void {
  console.log(chalk.gray(' clear Clear all messages for a team'));
  console.log();
  console.log(' Required:');
  console.log(chalk.gray(' --team <name> Team name'));
  console.log(chalk.gray(' --team <session-id> Session ID (e.g., TLS-my-project-2026-02-27), NOT team name'));
  console.log();
  console.log(' Log Options:');
  console.log(chalk.gray(' --from <role> Sender role name'));
@@ -168,12 +168,12 @@ function printHelp(): void {
  console.log(chalk.gray(' --json Output as JSON'));
  console.log();
  console.log(' Examples:');
  console.log(chalk.gray(' ccw team log --team my-team --from executor --to coordinator --type impl_complete --summary "Task done"'));
  console.log(chalk.gray(' ccw team list --team my-team --last 5'));
  console.log(chalk.gray(' ccw team read --team my-team --id MSG-003'));
  console.log(chalk.gray(' ccw team status --team my-team'));
  console.log(chalk.gray(' ccw team delete --team my-team --id MSG-003'));
  console.log(chalk.gray(' ccw team clear --team my-team'));
  console.log(chalk.gray(' ccw team log --team my-team --from planner --to coordinator --type plan_ready --summary "Plan ready" --json'));
  console.log(chalk.gray(' ccw team log --team TLS-my-project-2026-02-27 --from executor --to coordinator --type impl_complete --summary "Task done"'));
  console.log(chalk.gray(' ccw team list --team TLS-my-project-2026-02-27 --last 5'));
  console.log(chalk.gray(' ccw team read --team TLS-my-project-2026-02-27 --id MSG-003'));
  console.log(chalk.gray(' ccw team status --team TLS-my-project-2026-02-27'));
  console.log(chalk.gray(' ccw team delete --team TLS-my-project-2026-02-27 --id MSG-003'));
  console.log(chalk.gray(' ccw team clear --team TLS-my-project-2026-02-27'));
  console.log(chalk.gray(' ccw team log --team TLS-my-project-2026-02-27 --from planner --to coordinator --type plan_ready --summary "Plan ready" --json'));
  console.log();
}

@@ -28,6 +28,8 @@ import {
} from './memory-v2-config.js';
import { EXTRACTION_SYSTEM_PROMPT, buildExtractionUserPrompt } from './memory-extraction-prompts.js';
import { redactSecrets } from '../utils/secret-redactor.js';
import { getNativeSessions, type NativeSession } from '../tools/native-session-discovery.js';
import { existsSync, readFileSync, statSync } from 'fs';

// -- Types --

@@ -58,6 +60,27 @@ export interface BatchExtractionResult {
  errors: Array<{ sessionId: string; error: string }>;
}

export interface SessionPreviewItem {
  sessionId: string;
  source: 'ccw' | 'native';
  tool: string;
  timestamp: number;
  eligible: boolean;
  extracted: boolean;
  bytes: number;
  turns: number;
}

export interface PreviewResult {
  sessions: SessionPreviewItem[];
  summary: {
    total: number;
    eligible: number;
    alreadyExtracted: number;
    readyForExtraction: number;
  };
}

// -- Turn type bitmask constants --

/** All turn types included */
@@ -77,6 +100,15 @@ const TRUNCATION_MARKER = '\n\n[... CONTENT TRUNCATED ...]\n\n';

const JOB_KIND_EXTRACTION = 'phase1_extraction';

// -- Authorization error for session access --

export class SessionAccessDeniedError extends Error {
  constructor(sessionId: string, projectPath: string) {
    super(`Session '${sessionId}' does not belong to project '${projectPath}'`);
    this.name = 'SessionAccessDeniedError';
  }
}

// -- Pipeline --

export class MemoryExtractionPipeline {
@@ -92,6 +124,58 @@ export class MemoryExtractionPipeline {
    this.currentSessionId = options?.currentSessionId;
  }

  // ========================================================================
  // Authorization
  // ========================================================================

  /**
   * Verify that a session belongs to the current project path.
   *
   * This is a security-critical authorization check to prevent cross-project
   * session access. Sessions are scoped to projects, and accessing a session
   * from another project should be denied.
   *
   * @param sessionId - The session ID to verify
   * @returns true if the session belongs to this project, false otherwise
   */
  verifySessionBelongsToProject(sessionId: string): boolean {
    const historyStore = getHistoryStore(this.projectPath);
    const session = historyStore.getConversation(sessionId);

    // If session exists in this project's history store, it's authorized
    if (session) {
      return true;
    }

    // Check native sessions - verify the session file is within project directory
    const nativeTools = ['gemini', 'qwen', 'codex', 'claude', 'opencode'] as const;
    for (const tool of nativeTools) {
      try {
        const nativeSessions = getNativeSessions(tool, { workingDir: this.projectPath });
        const found = nativeSessions.some(s => s.sessionId === sessionId);
        if (found) {
          return true;
        }
      } catch {
        // Skip tools with discovery errors
      }
    }

    return false;
  }

  /**
   * Verify session access and throw if unauthorized.
   *
   * @param sessionId - The session ID to verify
   * @throws SessionAccessDeniedError if session doesn't belong to project
   */
  private ensureSessionAccess(sessionId: string): void {
    if (!this.verifySessionBelongsToProject(sessionId)) {
      throw new SessionAccessDeniedError(sessionId, this.projectPath);
    }
  }
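The `ensureSessionAccess` guard above follows a verify-or-throw pattern: a public boolean check backed by a typed error for callers. A self-contained sketch of the same pattern under simplified assumptions (the names and the `Set`-based allowlist are illustrative, not the pipeline's API):

```typescript
// Typed error so callers can distinguish authorization failures from other errors
class AccessDeniedError extends Error {
  constructor(resource: string, scope: string) {
    super(`Resource '${resource}' does not belong to scope '${scope}'`);
    this.name = 'AccessDeniedError';
  }
}

// Verify-or-throw guard: returns silently when authorized, throws otherwise
function ensureAccess(resource: string, allowed: Set<string>, scope: string): void {
  if (!allowed.has(resource)) {
    throw new AccessDeniedError(resource, scope);
  }
}

const allowed = new Set(['sess-1', 'sess-2']);
ensureAccess('sess-1', allowed, '/my/project'); // authorized: no throw
```

Keeping the boolean check (`verifySessionBelongsToProject`) public while the throwing guard stays private lets callers probe without exception handling, while internal code paths fail loudly.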

  // ========================================================================
  // Eligibility scanning
  // ========================================================================
@@ -148,6 +232,122 @@ export class MemoryExtractionPipeline {
    return eligible;
  }

  /**
   * Preview eligible sessions with detailed information for selective extraction.
   *
   * Returns session metadata including byte size, turn count, and extraction status.
   * Native sessions are included when `includeNative` is set.
   *
   * @param options - Preview options
   * @param options.includeNative - Whether to include native sessions
   * @param options.maxSessions - Maximum number of sessions to return
   * @returns PreviewResult with sessions and summary counts
   */
  previewEligibleSessions(options?: { includeNative?: boolean; maxSessions?: number }): PreviewResult {
    const store = getCoreMemoryStore(this.projectPath);
    const maxSessions = options?.maxSessions || MAX_SESSIONS_PER_STARTUP;

    // Scan CCW sessions using existing logic
    const ccwSessions = this.scanEligibleSessions(maxSessions);

    const sessions: SessionPreviewItem[] = [];

    // Process CCW sessions
    for (const session of ccwSessions) {
      const transcript = this.filterTranscript(session);
      const bytes = Buffer.byteLength(transcript, 'utf-8');
      const turns = session.turns?.length || 0;
      const timestamp = new Date(session.created_at).getTime();

      // Check if already extracted
      const existingOutput = store.getStage1Output(session.id);
      const extracted = existingOutput !== null;

      sessions.push({
        sessionId: session.id,
        source: 'ccw',
        tool: session.tool || 'unknown',
        timestamp,
        eligible: true,
        extracted,
        bytes,
        turns,
      });
    }

    // Native sessions integration
    if (options?.includeNative) {
      const nativeTools = ['gemini', 'qwen', 'codex', 'claude', 'opencode'] as const;
      const now = Date.now();
      const maxAgeMs = MAX_SESSION_AGE_DAYS * 24 * 60 * 60 * 1000;
      const minIdleMs = MIN_IDLE_HOURS * 60 * 60 * 1000;

      for (const tool of nativeTools) {
        try {
          const nativeSessions = getNativeSessions(tool, { workingDir: this.projectPath });

          for (const session of nativeSessions) {
            // Age check: created within MAX_SESSION_AGE_DAYS
            if (now - session.createdAt.getTime() > maxAgeMs) continue;

            // Idle check: last updated at least MIN_IDLE_HOURS ago
            if (now - session.updatedAt.getTime() < minIdleMs) continue;

            // Skip current session
            if (this.currentSessionId && session.sessionId === this.currentSessionId) continue;

            // Get file stats for bytes
            let bytes = 0;
            let turns = 0;
            try {
              if (existsSync(session.filePath)) {
                const stats = statSync(session.filePath);
                bytes = stats.size;

                // Parse session file to count turns
                turns = this.countNativeSessionTurns(session);
              }
            } catch {
              // Skip sessions with file access errors
              continue;
            }

            // Check if already extracted
            const existingOutput = store.getStage1Output(session.sessionId);
            const extracted = existingOutput !== null;

            sessions.push({
              sessionId: session.sessionId,
              source: 'native',
              tool: session.tool,
              timestamp: session.updatedAt.getTime(),
              eligible: true,
              extracted,
              bytes,
              turns,
            });
          }
        } catch {
          // Skip tools with discovery errors
        }
      }
    }

    // Compute summary
    const eligible = sessions.filter(s => s.eligible && !s.extracted);
    const alreadyExtracted = sessions.filter(s => s.extracted);

    return {
      sessions,
      summary: {
        total: sessions.length,
        eligible: sessions.filter(s => s.eligible).length,
        alreadyExtracted: alreadyExtracted.length,
        readyForExtraction: eligible.length,
      },
    };
  }
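The summary block above reduces the preview list to four counts with repeated `filter` passes. A self-contained sketch of that reduction (the item shape mirrors `SessionPreviewItem`, trimmed to the two fields the summary actually reads):

```typescript
interface PreviewItem {
  eligible: boolean;
  extracted: boolean;
}

// Reduce a preview list to the four summary counts shown by `ccw memory preview`
function summarize(items: PreviewItem[]) {
  return {
    total: items.length,
    eligible: items.filter(i => i.eligible).length,
    alreadyExtracted: items.filter(i => i.extracted).length,
    // "Ready" means eligible but not yet extracted
    readyForExtraction: items.filter(i => i.eligible && !i.extracted).length,
  };
}

const summary = summarize([
  { eligible: true, extracted: false },
  { eligible: true, extracted: true },
  { eligible: false, extracted: false },
]);
// summary: { total: 3, eligible: 2, alreadyExtracted: 1, readyForExtraction: 1 }
```

Note that `readyForExtraction` is always at most `eligible`; the gap between the two is exactly the already-extracted sessions that remain eligible.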

  // ========================================================================
  // Transcript filtering
  // ========================================================================
@@ -202,6 +402,291 @@ export class MemoryExtractionPipeline {
    return parts.join('\n\n');
  }

  // ========================================================================
  // Native session handling
  // ========================================================================

  /**
   * Count the number of turns in a native session file.
   *
   * Parses the session file based on tool-specific format:
   * - Gemini: { messages: [{ type, content }] }
   * - Qwen: JSONL with { type, message: { parts: [{ text }] } }
   * - Codex: JSONL with session events
   * - Claude: JSONL with { type, message } entries
   * - OpenCode: Message files in message/<session-id>/ directory
   *
   * @param session - The native session to count turns for
   * @returns Number of turns (user/assistant exchanges)
   */
  countNativeSessionTurns(session: NativeSession): number {
    try {
      const content = readFileSync(session.filePath, 'utf8');

      switch (session.tool) {
        case 'gemini': {
          // Gemini format: JSON with messages array
          const data = JSON.parse(content);
          if (data.messages && Array.isArray(data.messages)) {
            // Count user messages as turns
            return data.messages.filter((m: { type: string }) => m.type === 'user').length;
          }
          return 0;
        }

        case 'qwen': {
          // Qwen format: JSONL
          const lines = content.split('\n').filter(l => l.trim());
          let turnCount = 0;
          for (const line of lines) {
            try {
              const entry = JSON.parse(line);
              // Count user messages
              if (entry.type === 'user' || entry.role === 'user') {
                turnCount++;
              }
            } catch {
              // Skip invalid lines
            }
          }
          return turnCount;
        }

        case 'codex': {
          // Codex format: JSONL with session events
          const lines = content.split('\n').filter(l => l.trim());
          let turnCount = 0;
          for (const line of lines) {
            try {
              const entry = JSON.parse(line);
              // Count user_message events
              if (entry.type === 'event_msg' && entry.payload?.type === 'user_message') {
                turnCount++;
              }
            } catch {
              // Skip invalid lines
            }
          }
          return turnCount;
        }

        case 'claude': {
          // Claude format: JSONL
          const lines = content.split('\n').filter(l => l.trim());
          let turnCount = 0;
          for (const line of lines) {
            try {
              const entry = JSON.parse(line);
              // Count user messages (skip meta and command messages)
              if (entry.type === 'user' &&
                  entry.message?.role === 'user' &&
                  !entry.isMeta) {
                turnCount++;
              }
            } catch {
              // Skip invalid lines
            }
          }
          return turnCount;
        }

        case 'opencode': {
          // OpenCode uses separate message files, count from session data.
          // For now, return a reasonable estimate based on file size;
          // actual message counting would require reading message files.
          const stats = statSync(session.filePath);
          // Rough estimate: 1 turn per 2KB of session file
          return Math.max(1, Math.floor(stats.size / 2048));
        }

        default:
          return 0;
      }
    } catch {
      return 0;
    }
  }
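The per-tool branches above share one JSONL pattern: split on newlines, parse each line defensively, and count the entries matching a user-turn predicate. A self-contained sketch of that shared pattern (the predicate mirrors the Qwen branch, which accepts both the `type` and legacy `role` fields; the helper name is illustrative):

```typescript
// Count JSONL entries matching a predicate, skipping blank and unparseable lines
function countJsonlTurns(
  content: string,
  isUserTurn: (entry: Record<string, unknown>) => boolean,
): number {
  let count = 0;
  for (const line of content.split('\n')) {
    if (!line.trim()) continue;
    try {
      const entry = JSON.parse(line) as Record<string, unknown>;
      if (isUserTurn(entry)) count++;
    } catch {
      // Skip invalid lines rather than failing the whole count
    }
  }
  return count;
}

const sample = [
  '{"type":"user","message":{"parts":[{"text":"hi"}]}}',
  '{"type":"assistant","message":{"parts":[{"text":"hello"}]}}',
  'not json',
  '{"role":"user","content":"legacy"}',
].join('\n');

const turns = countJsonlTurns(sample, e => e.type === 'user' || e.role === 'user');
// turns === 2 (the assistant line and the corrupt line are not counted)
```

Swallowing per-line parse errors matters here: one truncated trailing line in a live session file should not zero out the whole count.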

  /**
   * Load and format transcript from a native session file.
   *
   * Extracts text content from the session file and formats it
   * consistently with CCW session transcripts.
   *
   * @param session - The native session to load
   * @returns Formatted transcript string
   */
  loadNativeSessionTranscript(session: NativeSession): string {
    try {
      const content = readFileSync(session.filePath, 'utf8');
      const parts: string[] = [];
      let turnNum = 1;

      switch (session.tool) {
        case 'gemini': {
          // Gemini format: { messages: [{ type, content }] }
          const data = JSON.parse(content);
          if (data.messages && Array.isArray(data.messages)) {
            for (const msg of data.messages) {
              if (msg.type === 'user' && msg.content) {
                parts.push(`--- Turn ${turnNum} ---\n[USER] ${msg.content}`);
              } else if (msg.type === 'assistant' && msg.content) {
                parts.push(`[ASSISTANT] ${msg.content}`);
                turnNum++;
              }
            }
          }
          break;
        }

        case 'qwen': {
          // Qwen format: JSONL with { type, message: { parts: [{ text }] } }
          const lines = content.split('\n').filter(l => l.trim());
          for (const line of lines) {
            try {
              const entry = JSON.parse(line);

              // User message
              if (entry.type === 'user' && entry.message?.parts) {
                const text = entry.message.parts
                  .filter((p: { text?: string }) => p.text)
                  .map((p: { text?: string }) => p.text)
                  .join('\n');
                if (text) {
                  parts.push(`--- Turn ${turnNum} ---\n[USER] ${text}`);
                }
              }
              // Assistant response
              else if (entry.type === 'assistant' && entry.message?.parts) {
                const text = entry.message.parts
                  .filter((p: { text?: string }) => p.text)
                  .map((p: { text?: string }) => p.text)
                  .join('\n');
                if (text) {
                  parts.push(`[ASSISTANT] ${text}`);
                  turnNum++;
                }
              }
              // Legacy format
              else if (entry.role === 'user' && entry.content) {
                parts.push(`--- Turn ${turnNum} ---\n[USER] ${entry.content}`);
              } else if (entry.role === 'assistant' && entry.content) {
                parts.push(`[ASSISTANT] ${entry.content}`);
                turnNum++;
              }
            } catch {
              // Skip invalid lines
            }
          }
          break;
        }

        case 'codex': {
          // Codex format: JSONL with { type, payload }
          const lines = content.split('\n').filter(l => l.trim());
          for (const line of lines) {
            try {
              const entry = JSON.parse(line);

              // User message
              if (entry.type === 'event_msg' &&
                  entry.payload?.type === 'user_message' &&
                  entry.payload.message) {
                parts.push(`--- Turn ${turnNum} ---\n[USER] ${entry.payload.message}`);
              }
              // Assistant response
              else if (entry.type === 'event_msg' &&
                  entry.payload?.type === 'assistant_message' &&
                  entry.payload.message) {
                parts.push(`[ASSISTANT] ${entry.payload.message}`);
                turnNum++;
              }
            } catch {
              // Skip invalid lines
            }
          }
          break;
        }

        case 'claude': {
          // Claude format: JSONL with { type, message }
          const lines = content.split('\n').filter(l => l.trim());
          for (const line of lines) {
            try {
              const entry = JSON.parse(line);

              if (entry.type === 'user' && entry.message?.role === 'user' && !entry.isMeta) {
                const msgContent = entry.message.content;

                // Handle string content
                if (typeof msgContent === 'string' &&
                    !msgContent.startsWith('<command-') &&
                    !msgContent.includes('<local-command')) {
                  parts.push(`--- Turn ${turnNum} ---\n[USER] ${msgContent}`);
                }
                // Handle array content
                else if (Array.isArray(msgContent)) {
                  for (const item of msgContent) {
                    if (item.type === 'text' && item.text) {
                      parts.push(`--- Turn ${turnNum} ---\n[USER] ${item.text}`);
                      break;
                    }
                  }
                }
              }
              // Assistant response
              else if (entry.type === 'assistant' && entry.message?.content) {
                const msgContent = entry.message.content;
                if (typeof msgContent === 'string') {
                  parts.push(`[ASSISTANT] ${msgContent}`);
                  turnNum++;
                } else if (Array.isArray(msgContent)) {
                  const textParts = msgContent
                    .filter((item: { type?: string; text?: string }) => item.type === 'text' && item.text)
                    .map((item: { text?: string }) => item.text)
                    .join('\n');
                  if (textParts) {
                    parts.push(`[ASSISTANT] ${textParts}`);
                    turnNum++;
                  }
                }
              }
            } catch {
              // Skip invalid lines
            }
          }
          break;
        }

        case 'opencode': {
          // OpenCode stores messages in separate files.
          // For transcript extraction, read session metadata and messages.
          // This is a simplified extraction - full implementation would
          // traverse message/part directories.
          try {
            const sessionData = JSON.parse(content);
            if (sessionData.title) {
              parts.push(`--- Session ---\n[SESSION] ${sessionData.title}`);
            }
            if (sessionData.summary) {
              parts.push(`[SUMMARY] ${sessionData.summary}`);
            }
          } catch {
            // Return empty if parsing fails
          }
          break;
        }

        default:
          break;
      }

      return parts.join('\n\n');
    } catch {
      return '';
    }
  }
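The Claude branch above has to handle `message.content` as either a plain string or an array of typed blocks. A self-contained sketch of that normalization (the block shape follows the branch's filters; the helper name is illustrative):

```typescript
type ContentBlock = { type?: string; text?: string };

// Normalize string-or-array message content to plain text,
// keeping only text blocks when the content is an array
function contentToText(content: string | ContentBlock[]): string {
  if (typeof content === 'string') {
    return content;
  }
  return content
    .filter(block => block.type === 'text' && block.text)
    .map(block => block.text)
    .join('\n');
}

const fromString = contentToText('plain answer');
const fromBlocks = contentToText([
  { type: 'text', text: 'first' },
  { type: 'tool_use' }, // non-text blocks are dropped
  { type: 'text', text: 'second' },
]);
// fromString === 'plain answer', fromBlocks === 'first\nsecond'
```

Filtering on both `type === 'text'` and a truthy `text` field guards against blocks that declare the text type but carry no payload.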

  // ========================================================================
  // Truncation
  // ========================================================================
@@ -354,20 +839,55 @@ export class MemoryExtractionPipeline {
  /**
   * Run the full extraction pipeline for a single session.
   *
   * Pipeline stages: Filter -> Truncate -> LLM Extract -> PostProcess -> Store
   * Pipeline stages: Authorize -> Filter -> Truncate -> LLM Extract -> PostProcess -> Store
   *
   * SECURITY: This method includes authorization verification to ensure the session
   * belongs to the current project path before processing.
   *
   * @param sessionId - The session to extract from
   * @param options - Optional configuration
   * @param options.source - 'ccw' for CCW history or 'native' for native CLI sessions
   * @param options.nativeSession - Native session data (required when source is 'native')
   * @param options.skipAuthorization - Internal use only: skip authorization (already validated)
   * @returns The stored Stage1Output, or null if extraction failed
   * @throws SessionAccessDeniedError if session doesn't belong to the project
   */
  async runExtractionJob(sessionId: string): Promise<Stage1Output | null> {
    const historyStore = getHistoryStore(this.projectPath);
    const record = historyStore.getConversation(sessionId);
    if (!record) {
      throw new Error(`Session not found: ${sessionId}`);
  async runExtractionJob(
    sessionId: string,
    options?: {
      source?: 'ccw' | 'native';
      nativeSession?: NativeSession;
      skipAuthorization?: boolean;
    }
  ): Promise<Stage1Output | null> {
    // SECURITY: Authorization check - verify session belongs to this project
    // Skip only if explicitly requested (for internal batch processing where already validated)
    if (!options?.skipAuthorization) {
      this.ensureSessionAccess(sessionId);
    }

    const source = options?.source || 'ccw';
    let transcript: string;
    let sourceUpdatedAt: number;

    if (source === 'native' && options?.nativeSession) {
      // Native session extraction
      const nativeSession = options.nativeSession;
      transcript = this.loadNativeSessionTranscript(nativeSession);
      sourceUpdatedAt = Math.floor(nativeSession.updatedAt.getTime() / 1000);
    } else {
      // CCW session extraction (default)
      const historyStore = getHistoryStore(this.projectPath);
      const record = historyStore.getConversation(sessionId);
      if (!record) {
        throw new Error(`Session not found: ${sessionId}`);
      }

      // Stage 1: Filter transcript
      transcript = this.filterTranscript(record);
      sourceUpdatedAt = Math.floor(new Date(record.updated_at).getTime() / 1000);
    }

    // Stage 1: Filter transcript
    const transcript = this.filterTranscript(record);
    if (!transcript.trim()) {
      return null; // Empty transcript, nothing to extract
    }
@@ -385,7 +905,6 @@ export class MemoryExtractionPipeline {
    const extracted = this.postProcess(llmOutput);

    // Stage 5: Store result
    const sourceUpdatedAt = Math.floor(new Date(record.updated_at).getTime() / 1000);
    const generatedAt = Math.floor(Date.now() / 1000);

    const output: Stage1Output = {
@@ -492,7 +1011,8 @@ export class MemoryExtractionPipeline {
    const token = claim.ownership_token!;

    try {
      const output = await this.runExtractionJob(session.id);
      // Batch extraction: sessions already validated by scanEligibleSessions(), skip auth check
      const output = await this.runExtractionJob(session.id, { skipAuthorization: true });
      if (output) {
        const watermark = output.source_updated_at;
        scheduler.markSucceeded(JOB_KIND_EXTRACTION, session.id, token, watermark);

@@ -7,10 +7,13 @@
 * - POST /api/commands/:name/toggle - Enable/disable single command
 * - POST /api/commands/group/:groupName/toggle - Batch toggle commands by group
 */
import { existsSync, readdirSync, readFileSync, mkdirSync, renameSync } from 'fs';
import { existsSync, readdirSync, readFileSync, mkdirSync, renameSync, copyFileSync } from 'fs';
import { promises as fsPromises } from 'fs';
import { join, relative, dirname, basename } from 'path';
import { homedir } from 'os';
import { validatePath as validateAllowedPath } from '../../utils/path-validator.js';
import { executeCliTool } from '../../tools/cli-executor.js';
import { SmartContentFormatter } from '../../tools/cli-output-converter.js';
import type { RouteContext } from './types.js';

// ========== Types ==========
@@ -62,6 +65,38 @@ interface CommandGroupsConfig {
  assignments: Record<string, string>; // commandName -> groupId mapping
}

/**
 * Command creation mode type
 */
type CommandCreationMode = 'upload' | 'generate';

/**
 * Parameters for creating a command
 */
interface CreateCommandParams {
  mode: CommandCreationMode;
  location: CommandLocation;
  sourcePath?: string; // Required for 'upload' mode - path to uploaded file
  skillName?: string; // Required for 'generate' mode - skill to generate from
  description?: string; // Optional description for generated commands
  projectPath: string;
  cliType?: string; // CLI tool type for generation
}

/**
 * Result of command creation operation
 */
interface CommandCreationResult extends CommandOperationResult {
  commandInfo?: CommandMetadata | null;
}

/**
 * Validation result for command file
 */
type CommandFileValidation =
  | { valid: true; errors: string[]; commandInfo: CommandMetadata }
  | { valid: false; errors: string[]; commandInfo: null };

// ========== Helper Functions ==========

function isRecord(value: unknown): value is Record<string, unknown> {
@@ -126,6 +161,388 @@ function parseCommandFrontmatter(content: string): CommandMetadata {
  return result;
}

/**
 * Validate a command file for creation
 * Checks file existence, reads content, parses frontmatter, validates required fields
 */
function validateCommandFile(filePath: string): CommandFileValidation {
  const errors: string[] = [];

  // Check file exists
  if (!existsSync(filePath)) {
    return { valid: false, errors: ['Command file does not exist'], commandInfo: null };
  }

  // Check file extension
  if (!filePath.endsWith('.md')) {
    return { valid: false, errors: ['Command file must be a .md file'], commandInfo: null };
  }

  // Read file content
  let content: string;
  try {
    content = readFileSync(filePath, 'utf8');
  } catch (err) {
    return { valid: false, errors: [`Failed to read file: ${(err as Error).message}`], commandInfo: null };
  }

  // Parse frontmatter
  const commandInfo = parseCommandFrontmatter(content);

  // Validate required fields
  if (!commandInfo.name || commandInfo.name.trim() === '') {
    errors.push('Command name is required in frontmatter');
  }

  // Check for valid frontmatter structure
  if (!content.startsWith('---')) {
    errors.push('Command file must have YAML frontmatter (starting with ---)');
  } else {
    const endIndex = content.indexOf('---', 3);
    if (endIndex < 0) {
      errors.push('Command file has invalid frontmatter (missing closing ---)');
    }
  }

  if (errors.length > 0) {
    return { valid: false, errors, commandInfo: null };
  }

  return { valid: true, errors: [], commandInfo };
}
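`validateCommandFile` accepts frontmatter only when the file starts with `---` and a closing `---` appears later in the content. That structural check can be isolated as a tiny predicate (the helper name is hypothetical, used only for illustration):

```typescript
// Sketch of the frontmatter-structure check used in validateCommandFile above.
function hasClosedFrontmatter(content: string): boolean {
  // Must open with --- at the very start of the file.
  if (!content.startsWith('---')) return false;
  // A second --- must occur somewhere after the opening delimiter.
  return content.indexOf('---', 3) >= 0;
}
```

Searching from index 3 skips the opening delimiter itself, so a file consisting of a lone `---` is correctly rejected.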

/**
 * Upload (copy) a command file to the commands directory
 * Handles group subdirectory creation and path security validation
 * @param sourcePath - Source command file path
 * @param targetGroup - Target group subdirectory (e.g., 'workflow/review')
 * @param location - 'project' or 'user'
 * @param projectPath - Project root path
 * @param customName - Optional custom filename (without .md extension)
 * @returns CommandCreationResult with success status and command info
 */
async function uploadCommand(
  sourcePath: string,
  targetGroup: string,
  location: CommandLocation,
  projectPath: string,
  customName?: string
): Promise<CommandCreationResult> {
  try {
    // Validate source file exists and is .md
    if (!existsSync(sourcePath)) {
      return { success: false, message: 'Source command file does not exist', status: 404 };
    }

    if (!sourcePath.endsWith('.md')) {
      return { success: false, message: 'Source file must be a .md file', status: 400 };
    }

    // Validate source file content
    const validation = validateCommandFile(sourcePath);
    if (!validation.valid) {
      return { success: false, message: validation.errors.join(', '), status: 400 };
    }

    // Get target commands directory
    const commandsDir = getCommandsDir(location, projectPath);

    // Build target path with optional group subdirectory
    let targetDir = commandsDir;
    if (targetGroup && targetGroup.trim() !== '') {
      // Sanitize group path - prevent path traversal
      const sanitizedGroup = targetGroup
        .replace(/\.\./g, '') // Remove path traversal attempts
        .replace(/[<>:"|?*]/g, '') // Remove invalid characters
        .replace(/\/+/g, '/') // Collapse multiple slashes
        .replace(/^\/|\/$/g, ''); // Remove leading/trailing slashes

      if (sanitizedGroup) {
        targetDir = join(commandsDir, sanitizedGroup);
      }
    }

    // Create target directory if needed
    if (!existsSync(targetDir)) {
      mkdirSync(targetDir, { recursive: true });
    }

    // Determine target filename
    const sourceBasename = basename(sourcePath, '.md');
    const targetFilename = (customName && customName.trim() !== '')
      ? `${customName.replace(/\.md$/, '')}.md`
      : `${sourceBasename}.md`;

    // Sanitize filename - prevent path traversal
    const sanitizedFilename = targetFilename
      .replace(/\.\./g, '')
      .replace(/[<>:"|?*]/g, '')
      .replace(/\//g, '');

    const targetPath = join(targetDir, sanitizedFilename);

    // Security check: ensure target path is within commands directory
    const resolvedTarget = targetPath; // Already resolved by join
    const resolvedCommandsDir = commandsDir;

    if (!resolvedTarget.startsWith(resolvedCommandsDir)) {
      return { success: false, message: 'Invalid target path - path traversal detected', status: 400 };
    }

    // Check if target already exists
    if (existsSync(targetPath)) {
      return { success: false, message: `Command '${sanitizedFilename}' already exists in target location`, status: 409 };
    }

    // Copy file to target path
    copyFileSync(sourcePath, targetPath);

    return {
      success: true,
      message: 'Command uploaded successfully',
      commandName: validation.commandInfo.name,
      location,
      commandInfo: {
        name: validation.commandInfo.name,
        description: validation.commandInfo.description,
        group: targetGroup || 'other'
      }
    };
  } catch (error) {
    return {
      success: false,
      message: (error as Error).message,
      status: 500
    };
  }
}
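The group-path sanitization in `uploadCommand` is a chain of four `String.replace` calls. Pulled out on its own (the `sanitizeGroup` name is illustrative, not from the diff), its behavior looks like this:

```typescript
// Standalone sketch of the group-path sanitization chain above.
function sanitizeGroup(group: string): string {
  return group
    .replace(/\.\./g, '')      // drop path-traversal segments
    .replace(/[<>:"|?*]/g, '') // drop characters invalid on common filesystems
    .replace(/\/+/g, '/')      // collapse repeated slashes
    .replace(/^\/|\/$/g, '');  // trim leading/trailing slashes
}
```

Removing `..` first can leave doubled slashes behind, which is why the collapse and trim steps come afterward; a traversal attempt like `../../etc` degrades to the harmless relative segment `etc`.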

/**
 * Generation parameters for command generation via CLI
 */
interface CommandGenerationParams {
  commandName: string;
  description: string;
  location: CommandLocation;
  projectPath: string;
  group?: string;
  argumentHint?: string;
  broadcastToClients?: (data: unknown) => void;
  cliType?: string;
}

/**
 * Generate command via CLI tool using command-generator skill
 * Follows the pattern from skills-routes.ts generateSkillViaCLI
 * @param params - Generation parameters including name, description, location, etc.
 * @returns CommandCreationResult with success status and generated command info
 */
async function generateCommandViaCLI({
  commandName,
  description,
  location,
  projectPath,
  group,
  argumentHint,
  broadcastToClients,
  cliType = 'claude'
}: CommandGenerationParams): Promise<CommandCreationResult> {
  // Generate unique execution ID for tracking
  const executionId = `cmd-gen-${commandName}-${Date.now()}`;

  try {
    // Validate required inputs
    if (!commandName || commandName.trim() === '') {
      return { success: false, message: 'Command name is required', status: 400 };
    }

    if (!description || description.trim() === '') {
      return { success: false, message: 'Description is required for command generation', status: 400 };
    }

    // Sanitize command name - prevent path traversal
    if (commandName.includes('..') || commandName.includes('/') || commandName.includes('\\')) {
      return { success: false, message: 'Invalid command name - path characters not allowed', status: 400 };
    }

    // Get target commands directory
    const commandsDir = getCommandsDir(location, projectPath);

    // Build target path with optional group subdirectory
    let targetDir = commandsDir;

    if (group && group.trim() !== '') {
      const sanitizedGroup = group
        .replace(/\.\./g, '')
        .replace(/[<>:"|?*]/g, '')
        .replace(/\/+/g, '/')
        .replace(/^\/|\/$/g, '');

      if (sanitizedGroup) {
        targetDir = join(commandsDir, sanitizedGroup);
      }
    }

    const targetPath = join(targetDir, `${commandName}.md`);

    // Check if command already exists
    if (existsSync(targetPath)) {
      return {
        success: false,
        message: `Command '${commandName}' already exists in ${location} location${group ? ` (group: ${group})` : ''}`,
        status: 409
      };
    }

    // Ensure target directory exists
    if (!existsSync(targetDir)) {
      await fsPromises.mkdir(targetDir, { recursive: true });
    }

    // Build target location display for prompt
    const targetLocationDisplay = location === 'project'
      ? '.claude/commands/'
      : '~/.claude/commands/';

    // Build structured command parameters for /command-generator skill
    const commandParams = {
      skillName: commandName,
      description,
      location,
      group: group || '',
      argumentHint: argumentHint || ''
    };

    // Prompt that invokes /command-generator skill with structured parameters
    const prompt = `/command-generator

## Command Parameters (Structured Input)

\`\`\`json
${JSON.stringify(commandParams, null, 2)}
\`\`\`

## User Request

Create a new Claude Code command with the following specifications:

- **Command Name**: ${commandName}
- **Description**: ${description}
- **Target Location**: ${targetLocationDisplay}${group ? `${group}/` : ''}${commandName}.md
- **Location Type**: ${location === 'project' ? 'Project-level (.claude/commands/)' : 'User-level (~/.claude/commands/)'}
${group ? `- **Group**: ${group}` : ''}
${argumentHint ? `- **Argument Hint**: ${argumentHint}` : ''}

## Instructions

1. Use the command-generator skill to create a command file with proper YAML frontmatter
2. Include name, description in frontmatter${group ? '\n3. Include group in frontmatter' : ''}${argumentHint ? '\n4. Include argument-hint in frontmatter' : ''}
3. Generate useful command content and usage examples
4. Output the file to: ${targetPath}`;

    // Broadcast CLI_EXECUTION_STARTED event
    if (broadcastToClients) {
      broadcastToClients({
        type: 'CLI_EXECUTION_STARTED',
        payload: {
          executionId,
          tool: cliType,
          mode: 'write',
          category: 'internal',
          context: 'command-generation',
          commandName
        }
      });
    }

    // Create onOutput callback for real-time streaming
    const onOutput = broadcastToClients
      ? (unit: import('../../tools/cli-output-converter.js').CliOutputUnit) => {
          const content = SmartContentFormatter.format(unit.content, unit.type);
          broadcastToClients({
            type: 'CLI_OUTPUT',
            payload: {
              executionId,
              chunkType: unit.type,
              data: content
            }
          });
        }
      : undefined;

    // Execute CLI tool with write mode
    const startTime = Date.now();
    const result = await executeCliTool({
      tool: cliType,
      prompt,
      mode: 'write',
      cd: projectPath,
      timeout: 600000, // 10 minutes
      category: 'internal',
      id: executionId
    }, onOutput);

    // Broadcast CLI_EXECUTION_COMPLETED event
    if (broadcastToClients) {
      broadcastToClients({
        type: 'CLI_EXECUTION_COMPLETED',
        payload: {
          executionId,
          success: result.success,
          status: result.execution?.status || (result.success ? 'success' : 'error'),
          duration_ms: Date.now() - startTime
        }
      });
    }

    // Check if execution was successful
    if (!result.success) {
      return {
        success: false,
        message: `CLI generation failed: ${result.stderr || 'Unknown error'}`,
        status: 500
      };
    }

    // Validate the generated command file exists
    if (!existsSync(targetPath)) {
      return {
        success: false,
        message: 'Generated command file not found at expected location',
        status: 500
      };
    }

    // Validate the generated command file content
    const validation = validateCommandFile(targetPath);
    if (!validation.valid) {
      return {
        success: false,
        message: `Generated command is invalid: ${validation.errors.join(', ')}`,
        status: 500
      };
    }

    return {
      success: true,
      message: 'Command generated successfully',
      commandName: validation.commandInfo.name,
      location,
      commandInfo: {
        name: validation.commandInfo.name,
        description: validation.commandInfo.description,
        group: validation.commandInfo.group
      }
    };
  } catch (error) {
    return {
      success: false,
      message: (error as Error).message,
      status: 500
    };
  }
}

/**
 * Get command groups config file path
 */
@@ -616,5 +1033,103 @@ export async function handleCommandsRoutes(ctx: RouteContext): Promise<boolean>
    return true;
  }

  // POST /api/commands/create - Create command (upload or generate)
  if (pathname === '/api/commands/create' && req.method === 'POST') {
    handlePostRequest(req, res, async (body) => {
      if (!isRecord(body)) {
        return { success: false, message: 'Invalid request body', status: 400 };
      }

      const mode = body.mode;
      const locationValue = body.location;
      const sourcePath = typeof body.sourcePath === 'string' ? body.sourcePath : undefined;
      const skillName = typeof body.skillName === 'string' ? body.skillName : undefined;
      const description = typeof body.description === 'string' ? body.description : undefined;
      const group = typeof body.group === 'string' ? body.group : undefined;
      const argumentHint = typeof body.argumentHint === 'string' ? body.argumentHint : undefined;
      const projectPathParam = typeof body.projectPath === 'string' ? body.projectPath : undefined;
      const cliType = typeof body.cliType === 'string' ? body.cliType : 'claude';

      // Validate mode
      if (mode !== 'upload' && mode !== 'generate') {
        return { success: false, message: 'Mode is required and must be "upload" or "generate"', status: 400 };
      }

      // Validate location
      if (locationValue !== 'project' && locationValue !== 'user') {
        return { success: false, message: 'Location is required (project or user)', status: 400 };
      }

      const location: CommandLocation = locationValue;
      const projectPath = projectPathParam || initialPath;

      // Validate project path for security
      let validatedProjectPath = projectPath;
      if (location === 'project') {
        try {
          validatedProjectPath = await validateAllowedPath(projectPath, { mustExist: true, allowedDirectories: [initialPath] });
        } catch (err) {
          const message = err instanceof Error ? err.message : String(err);
          const status = message.includes('Access denied') ? 403 : 400;
          console.error(`[Commands] Project path validation failed: ${message}`);
          return { success: false, message: status === 403 ? 'Access denied' : 'Invalid path', status };
        }
      }

      if (mode === 'upload') {
        // Upload mode: copy existing command file
        if (!sourcePath) {
          return { success: false, message: 'Source path is required for upload mode', status: 400 };
        }

        // Validate source path for security
        let validatedSourcePath: string;
        try {
          validatedSourcePath = await validateAllowedPath(sourcePath, { mustExist: true });
        } catch (err) {
          const message = err instanceof Error ? err.message : String(err);
          const status = message.includes('Access denied') ? 403 : 400;
          console.error(`[Commands] Source path validation failed: ${message}`);
          return { success: false, message: status === 403 ? 'Access denied' : 'Invalid source path', status };
        }

        return await uploadCommand(
          validatedSourcePath,
          group || '',
          location,
          validatedProjectPath
        );
      } else if (mode === 'generate') {
        // Generate mode: use CLI to generate command
        if (!skillName) {
          return { success: false, message: 'Skill name is required for generate mode', status: 400 };
        }
        if (!description) {
          return { success: false, message: 'Description is required for generate mode', status: 400 };
        }

        // Validate skill name for security
        if (skillName.includes('..') || skillName.includes('/') || skillName.includes('\\')) {
          return { success: false, message: 'Invalid skill name - path characters not allowed', status: 400 };
        }

        return await generateCommandViaCLI({
          commandName: skillName,
          description,
          location,
          projectPath: validatedProjectPath,
          group,
          argumentHint,
          broadcastToClients: ctx.broadcastToClients,
          cliType
        });
      }

      // This should never be reached due to mode validation above
      return { success: false, message: 'Invalid mode', status: 400 };
    });
    return true;
  }

  return false;
}

@@ -10,6 +10,54 @@ import { StoragePaths } from '../../config/storage-paths.js';
import { join } from 'path';
import { getDefaultTool } from '../../tools/claude-cli-tools.js';

// ========================================
// Error Handling Utilities
// ========================================

/**
 * Sanitize error message for client response
 * Logs full error server-side, returns user-friendly message to client
 */
function sanitizeErrorMessage(error: unknown, context: string): string {
  const errorMessage = error instanceof Error ? error.message : String(error);

  // Log full error for debugging (server-side only)
  if (process.env.DEBUG || process.env.NODE_ENV === 'development') {
    console.error(`[CoreMemoryRoutes] ${context}:`, error);
  }

  // Map common internal errors to user-friendly messages
  const lowerMessage = errorMessage.toLowerCase();

  if (lowerMessage.includes('enoent') || lowerMessage.includes('no such file')) {
    return 'Resource not found';
  }
  if (lowerMessage.includes('eacces') || lowerMessage.includes('permission denied')) {
    return 'Access denied';
  }
  if (lowerMessage.includes('sqlite') || lowerMessage.includes('database')) {
    return 'Database operation failed';
  }
  if (lowerMessage.includes('json') || lowerMessage.includes('parse')) {
    return 'Invalid data format';
  }

  // Return generic message for unexpected errors (don't expose internals)
  return 'An unexpected error occurred';
}

/**
 * Write error response with sanitized message
 */
function writeErrorResponse(
  res: http.ServerResponse,
  statusCode: number,
  message: string
): void {
  res.writeHead(statusCode, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ error: message }));
}
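`sanitizeErrorMessage` reduces raw errors to four keyword buckets plus a generic fallback. The same mapping can be expressed as a lookup table; this is a condensed sketch, and `clientMessage`/`ERROR_MAP` are illustrative names, not part of the diff:

```typescript
// Table-driven sketch of the keyword-to-message mapping in sanitizeErrorMessage.
const ERROR_MAP: Array<[string[], string]> = [
  [['enoent', 'no such file'], 'Resource not found'],
  [['eacces', 'permission denied'], 'Access denied'],
  [['sqlite', 'database'], 'Database operation failed'],
  [['json', 'parse'], 'Invalid data format'],
];

function clientMessage(raw: string): string {
  const lower = raw.toLowerCase();
  // First bucket whose keywords match wins; otherwise return the generic message.
  const hit = ERROR_MAP.find(([keys]) => keys.some((k) => lower.includes(k)));
  return hit ? hit[1] : 'An unexpected error occurred';
}
```

Either form keeps internal details (paths, SQL, stack fragments) out of HTTP responses while the full error is still logged server-side.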

/**
 * Route context interface
 */
@@ -303,6 +351,190 @@ export async function handleCoreMemoryRoutes(ctx: RouteContext): Promise<boolean
    return true;
  }

  // API: Preview eligible sessions for selective extraction
  if (pathname === '/api/core-memory/extract/preview' && req.method === 'GET') {
    const projectPath = url.searchParams.get('path') || initialPath;
    const includeNative = url.searchParams.get('includeNative') === 'true';
    const maxSessionsParam = url.searchParams.get('maxSessions');
    const maxSessions = maxSessionsParam ? parseInt(maxSessionsParam, 10) : undefined;

    // Validate maxSessions parameter
    if (maxSessionsParam && (isNaN(maxSessions as number) || (maxSessions as number) < 1)) {
      writeErrorResponse(res, 400, 'Invalid maxSessions parameter: must be a positive integer');
      return true;
    }

    try {
      const { MemoryExtractionPipeline } = await import('../memory-extraction-pipeline.js');
      const pipeline = new MemoryExtractionPipeline(projectPath);

      const preview = pipeline.previewEligibleSessions({
        includeNative,
        maxSessions,
      });

      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({
        success: true,
        sessions: preview.sessions,
        summary: preview.summary,
      }));
    } catch (error: unknown) {
      // Log full error server-side, return sanitized message to client
      writeErrorResponse(res, 500, sanitizeErrorMessage(error, 'extract/preview'));
    }
    return true;
  }

  // API: Selective extraction for specific sessions
  if (pathname === '/api/core-memory/extract/selected' && req.method === 'POST') {
    handlePostRequest(req, res, async (body) => {
      const { sessionIds, includeNative, path: projectPath } = body;
      const basePath = projectPath || initialPath;

      // Validate sessionIds - return 400 for invalid input
      if (!Array.isArray(sessionIds)) {
        return { error: 'sessionIds must be an array', status: 400 };
      }
      if (sessionIds.length === 0) {
        return { error: 'sessionIds cannot be empty', status: 400 };
      }
      // Validate each sessionId is a non-empty string
      for (const id of sessionIds) {
        if (typeof id !== 'string' || id.trim() === '') {
          return { error: 'Each sessionId must be a non-empty string', status: 400 };
        }
      }

      try {
        const store = getCoreMemoryStore(basePath);
        const scheduler = new MemoryJobScheduler(store.getDb());

        const { MemoryExtractionPipeline, SessionAccessDeniedError } = await import('../memory-extraction-pipeline.js');
        const pipeline = new MemoryExtractionPipeline(basePath);

        // Get preview to validate sessions (project-scoped)
        const preview = pipeline.previewEligibleSessions({ includeNative });
        const validSessionIds = new Set(preview.sessions.map(s => s.sessionId));

        // Return 404 if no eligible sessions exist at all
        if (validSessionIds.size === 0) {
          return { error: 'No eligible sessions found for extraction', status: 404 };
        }

        const queued: string[] = [];
        const skipped: string[] = [];
        const invalidIds: string[] = [];
        const unauthorizedIds: string[] = [];

        for (const sessionId of sessionIds) {
          // SECURITY: Verify session belongs to this project
          // This double-checks that the sessionId is from the project-scoped preview
          if (!validSessionIds.has(sessionId)) {
            // Check if it's unauthorized (exists but not in this project)
            if (!pipeline.verifySessionBelongsToProject(sessionId)) {
              unauthorizedIds.push(sessionId);
            } else {
              invalidIds.push(sessionId);
            }
            continue;
          }

          // Check if already extracted
          const existingOutput = store.getStage1Output(sessionId);
          if (existingOutput) {
            skipped.push(sessionId);
            continue;
          }

          // Get session info for watermark
          const historyStore = (await import('../../tools/cli-history-store.js')).getHistoryStore(basePath);
          const session = historyStore.getConversation(sessionId);
          if (!session) {
            invalidIds.push(sessionId);
            continue;
          }

          // Enqueue job
          const watermark = Math.floor(new Date(session.updated_at).getTime() / 1000);
          scheduler.enqueueJob('phase1_extraction', sessionId, watermark);
          queued.push(sessionId);
        }

        // Return 409 Conflict if all sessions were already extracted
        if (queued.length === 0 && skipped.length === sessionIds.length) {
          return {
            error: 'All specified sessions have already been extracted',
            status: 409,
            skipped
          };
        }

        // Return 404 if no valid sessions were found (all were invalid or unauthorized)
        if (queued.length === 0 && skipped.length === 0) {
          return { error: 'No valid sessions found among the provided IDs', status: 404 };
        }

        // Generate batch job ID
        const jobId = `batch-${Date.now()}`;

        // Broadcast start event
        broadcastToClients({
          type: 'MEMORY_EXTRACTION_STARTED',
          payload: {
            timestamp: new Date().toISOString(),
            jobId,
            queuedCount: queued.length,
            selective: true,
          }
        });

        // Fire-and-forget: process queued sessions
        // Sessions already validated above, skip auth check for efficiency
        (async () => {
          try {
            for (const sessionId of queued) {
              try {
                await pipeline.runExtractionJob(sessionId, { skipAuthorization: true });
              } catch (err) {
                if (process.env.DEBUG) {
                  console.warn(`[SelectiveExtraction] Failed for ${sessionId}:`, (err as Error).message);
                }
              }
            }
            broadcastToClients({
              type: 'MEMORY_EXTRACTION_COMPLETED',
              payload: { timestamp: new Date().toISOString(), jobId }
            });
          } catch (err) {
            broadcastToClients({
              type: 'MEMORY_EXTRACTION_FAILED',
              payload: {
                timestamp: new Date().toISOString(),
                jobId,
                error: (err as Error).message,
              }
            });
          }
        })();

        // Include unauthorizedIds in response for security transparency
        return {
          success: true,
          jobId,
          queued: queued.length,
          skipped: skipped.length,
          invalidIds,
          ...(unauthorizedIds.length > 0 && { unauthorizedIds }),
        };
      } catch (error: unknown) {
        // Log full error server-side, return sanitized message to client
        return { error: sanitizeErrorMessage(error, 'extract/selected'), status: 500 };
      }
    });
    return true;
  }
|
||||
|
||||
// API: Get extraction pipeline status
|
||||
if (pathname === '/api/core-memory/extract/status' && req.method === 'GET') {
|
||||
const projectPath = url.searchParams.get('path') || initialPath;
|
||||
|
||||
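The handler above answers immediately with a job ID and finishes the work in a detached async IIFE, emitting lifecycle events as it goes. This is a standalone sketch of that pattern, not part of the commit; `startBatch`, `processItem`, and `broadcast` are hypothetical names standing in for the server's real plumbing.

```typescript
// Fire-and-forget batch pattern: return a job ID synchronously, process
// items asynchronously, and broadcast STARTED / COMPLETED / FAILED events.
type BusEvent = { type: string; payload: Record<string, unknown> };

function startBatch(
  items: string[],
  processItem: (item: string) => Promise<void>,
  broadcast: (e: BusEvent) => void
): { jobId: string } {
  const jobId = `batch-${Date.now()}`;
  broadcast({ type: 'STARTED', payload: { jobId, queuedCount: items.length } });

  // Intentionally not awaited: the caller gets the jobId right away.
  (async () => {
    try {
      for (const item of items) {
        // Per-item failures are non-fatal; the batch keeps going.
        await processItem(item).catch(() => {});
      }
      broadcast({ type: 'COMPLETED', payload: { jobId } });
    } catch (err) {
      broadcast({ type: 'FAILED', payload: { jobId, error: (err as Error).message } });
    }
  })();

  return { jobId };
}

const events: string[] = [];
startBatch(['s1', 's2'], async () => {}, e => events.push(e.type));
setTimeout(() => console.log(events.join(',')), 10);
// STARTED,COMPLETED
```

The caller never blocks on extraction; clients that care about progress subscribe to the broadcast events instead.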
@@ -7,6 +7,7 @@ import Database from 'better-sqlite3';
 import { existsSync, mkdirSync, readdirSync, readFileSync, statSync, unlinkSync, rmdirSync } from 'fs';
 import { join, dirname, resolve } from 'path';
 import { parseSessionFile, formatConversation, extractConversationPairs, type ParsedSession, type ParsedTurn } from './session-content-parser.js';
+import { getDiscoverer, getNativeSessions } from './native-session-discovery.js';
 import { StoragePaths, ensureStorageDir, getProjectId, getCCWHome } from '../config/storage-paths.js';
 import type { CliOutputUnit } from './cli-output-converter.js';
@@ -1065,11 +1066,94 @@ export class CliHistoryStore
    */
   async getNativeSessionContent(ccwId: string): Promise<ParsedSession | null> {
     const mapping = this.getNativeSessionMapping(ccwId);
-    if (!mapping || !mapping.native_session_path) {
-      return null;
+    if (mapping?.native_session_path) {
+      const parsed = await parseSessionFile(mapping.native_session_path, mapping.tool);
+      if (parsed) {
+        return parsed;
+      }
+      // If mapping exists but file is missing/invalid, fall through to re-discovery.
     }
 
-    return parseSessionFile(mapping.native_session_path, mapping.tool);
+    // On-demand discovery/backfill: attempt to locate native session file from conversation metadata.
+    try {
+      const conversation = this.getConversation(ccwId);
+      if (!conversation) return null;
+
+      const tool = conversation.tool;
+      const discoverer = getDiscoverer(tool);
+      if (!discoverer) return null;
+
+      const createdMs = Date.parse(conversation.created_at);
+      const updatedMs = Date.parse(conversation.updated_at || conversation.created_at);
+      const durationMs = conversation.total_duration_ms || 0;
+
+      const endMs = Number.isFinite(updatedMs)
+        ? updatedMs
+        : (Number.isFinite(createdMs) ? createdMs + durationMs : NaN);
+      if (!Number.isFinite(endMs)) return null;
+
+      const afterTimestamp = Number.isFinite(createdMs) ? new Date(createdMs - 60_000) : undefined;
+      const sessions = getNativeSessions(tool, { workingDir: this.projectPath, afterTimestamp });
+      if (sessions.length === 0) return null;
+
+      // Prefer sessions whose updatedAt is close to execution end time.
+      const timeWindowMs = Math.max(5 * 60_000, durationMs + 2 * 60_000);
+      const timeCandidates = sessions.filter(s => Math.abs(s.updatedAt.getTime() - endMs) <= timeWindowMs);
+      const candidates = timeCandidates.length > 0
+        ? timeCandidates
+        : sessions
+            .map(session => ({ session, timeDiffMs: Math.abs(session.updatedAt.getTime() - endMs) }))
+            .sort((a, b) => a.timeDiffMs - b.timeDiffMs)
+            .slice(0, 50)
+            .map(x => x.session);
+
+      const prompt = conversation.turns[0]?.prompt || '';
+      const promptPrefix = prompt.substring(0, 200).trim();
+
+      const scored = candidates
+        .map(session => {
+          let promptMatch = false;
+          if (promptPrefix) {
+            try {
+              const firstUserMessage = discoverer.extractFirstUserMessage(session.filePath);
+              promptMatch = !!firstUserMessage && firstUserMessage.includes(promptPrefix);
+            } catch {
+              // Ignore extraction errors (still allow time-based match)
+            }
+          }
+
+          return {
+            session,
+            promptMatch,
+            timeDiffMs: Math.abs(session.updatedAt.getTime() - endMs)
+          };
+        })
+        .sort((a, b) => {
+          if (a.promptMatch !== b.promptMatch) return a.promptMatch ? -1 : 1;
+          return a.timeDiffMs - b.timeDiffMs;
+        });
+
+      const best = scored[0]?.session;
+      if (!best) return null;
+
+      // Persist mapping for future loads (best-effort).
+      try {
+        this.saveNativeSessionMapping({
+          ccw_id: ccwId,
+          tool,
+          native_session_id: best.sessionId,
+          native_session_path: best.filePath,
+          project_hash: best.projectHash,
+          created_at: new Date().toISOString()
+        });
+      } catch {
+        // Ignore persistence errors; still attempt to return content.
+      }
+
+      return await parseSessionFile(best.filePath, tool);
+    } catch {
+      return null;
+    }
   }
 
   /**
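The backfill hunk above ranks candidate sessions by prompt-prefix match first, then by proximity to the execution end time. That heuristic can be exercised in isolation; this sketch uses a simplified `Candidate` shape (hypothetical, not the repository's `NativeSession` type) but the same comparator logic.

```typescript
// A candidate native session, reduced to the fields the ranking needs.
interface Candidate {
  sessionId: string;
  updatedAtMs: number;  // file's last-updated time, epoch ms
  promptMatch: boolean; // did its first user message match the prompt prefix?
}

// Prompt matches win; ties broken by smallest |updatedAt - endMs|.
function rankCandidates(candidates: Candidate[], endMs: number): Candidate[] {
  return [...candidates]
    .map(c => ({ c, timeDiffMs: Math.abs(c.updatedAtMs - endMs) }))
    .sort((a, b) => {
      if (a.c.promptMatch !== b.c.promptMatch) return a.c.promptMatch ? -1 : 1;
      return a.timeDiffMs - b.timeDiffMs;
    })
    .map(x => x.c);
}

const endMs = 1_000_000;
const ranked = rankCandidates(
  [
    { sessionId: 'far-no-match', updatedAtMs: endMs + 500_000, promptMatch: false },
    { sessionId: 'near-no-match', updatedAtMs: endMs + 1_000, promptMatch: false },
    { sessionId: 'far-match', updatedAtMs: endMs + 400_000, promptMatch: true },
  ],
  endMs
);
console.log(ranked.map(c => c.sessionId).join(','));
// far-match,near-no-match,far-no-match
```

Note that a prompt match beats even a much closer timestamp, which is why the real code only falls back to pure time ordering when no prefix is available.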
@@ -4,7 +4,7 @@
  */
 
 import { existsSync, readdirSync, readFileSync, statSync } from 'fs';
-import { join, basename, resolve } from 'path';
+import { join, basename, dirname, resolve } from 'path';
 // basename is used for extracting session ID from filename
 import { createHash } from 'crypto';
 import { homedir } from 'os';
@@ -43,6 +43,48 @@ function getHomePath(): string {
   return homedir().replace(/\\/g, '/');
 }
 
+/**
+ * Normalize a project root path for comparing against Gemini's `projects.json` keys
+ * and `.project_root` marker file contents.
+ *
+ * On Windows Gemini uses lowercased absolute paths with backslashes.
+ */
+function normalizeGeminiProjectRootPath(projectDir: string): string {
+  const absolutePath = resolve(projectDir);
+  if (process.platform !== 'win32') return absolutePath;
+  return absolutePath.replace(/\//g, '\\').toLowerCase();
+}
+
+let geminiProjectsCache:
+  | { configPath: string; mtimeMs: number; projects: Record<string, string> }
+  | null = null;
+
+/**
+ * Load Gemini project mapping from `~/.gemini/projects.json` (best-effort).
+ * Format: { "projects": { "<projectRoot>": "<projectName>" } }
+ */
+function getGeminiProjectsMap(): Record<string, string> | null {
+  const configPath = join(getHomePath(), '.gemini', 'projects.json');
+
+  try {
+    const stat = statSync(configPath);
+    if (geminiProjectsCache?.configPath === configPath && geminiProjectsCache.mtimeMs === stat.mtimeMs) {
+      return geminiProjectsCache.projects;
+    }
+
+    const raw = readFileSync(configPath, 'utf8');
+    const parsed = JSON.parse(raw) as { projects?: Record<string, string> };
+    if (!parsed.projects || typeof parsed.projects !== 'object') {
+      return null;
+    }
+
+    geminiProjectsCache = { configPath, mtimeMs: stat.mtimeMs, projects: parsed.projects };
+    return parsed.projects;
+  } catch {
+    return null;
+  }
+}
+
 /**
  * Base session discoverer interface
  */
@@ -177,12 +219,76 @@ abstract class SessionDiscoverer {
 
 /**
  * Gemini Session Discoverer
- * Path: ~/.gemini/tmp/<projectHash>/chats/session-*.json
+ * Legacy path: ~/.gemini/tmp/<projectHash>/chats/session-*.json
+ * Current path (Gemini CLI): ~/.gemini/tmp/<projectName>/chats/session-*.json
  */
 class GeminiSessionDiscoverer extends SessionDiscoverer {
   tool = 'gemini';
   basePath = join(getHomePath(), '.gemini', 'tmp');
 
+  private getProjectFoldersForWorkingDir(workingDir: string): string[] {
+    const folders = new Set<string>();
+
+    // Legacy: hashed folder
+    const projectHash = calculateProjectHash(workingDir);
+    if (existsSync(join(this.basePath, projectHash))) {
+      folders.add(projectHash);
+    }
+
+    // Current: project-name folder resolved via ~/.gemini/projects.json
+    let hasProjectNameFolder = false;
+    const projectsMap = getGeminiProjectsMap();
+    if (projectsMap) {
+      const normalized = normalizeGeminiProjectRootPath(workingDir);
+
+      // Prefer exact match first, then walk up parents (Gemini can map nested roots)
+      let cursor: string | null = normalized;
+      while (cursor) {
+        const mapped = projectsMap[cursor];
+        if (mapped) {
+          const mappedPath = join(this.basePath, mapped);
+          if (existsSync(mappedPath)) {
+            folders.add(mapped);
+            hasProjectNameFolder = true;
+          }
+          break;
+        }
+        const parent = dirname(cursor);
+        cursor = parent !== cursor ? parent : null;
+      }
+    }
+
+    // Fallback: scan for `.project_root` marker (best-effort; avoids missing mappings)
+    if (!hasProjectNameFolder) {
+      const normalized = normalizeGeminiProjectRootPath(workingDir);
+      try {
+        if (existsSync(this.basePath)) {
+          for (const dirName of readdirSync(this.basePath)) {
+            const fullPath = join(this.basePath, dirName);
+            try {
+              if (!statSync(fullPath).isDirectory()) continue;
+
+              const markerPath = join(fullPath, '.project_root');
+              if (!existsSync(markerPath)) continue;
+
+              const marker = readFileSync(markerPath, 'utf8').trim();
+              if (normalizeGeminiProjectRootPath(marker) === normalized) {
+                folders.add(dirName);
+                break;
+              }
+            } catch {
+              // Ignore invalid entries
+            }
+          }
+        }
+      } catch {
+        // Ignore scan failures
+      }
+    }
+
+    return Array.from(folders);
+  }
+
   getSessions(options: SessionDiscoveryOptions = {}): NativeSession[] {
     const { workingDir, limit, afterTimestamp } = options;
     const sessions: NativeSession[] = [];
@@ -193,9 +299,7 @@ class GeminiSessionDiscoverer extends SessionDiscoverer {
     // If workingDir provided, only look in that project's folder
     let projectDirs: string[];
     if (workingDir) {
-      const projectHash = calculateProjectHash(workingDir);
-      const projectPath = join(this.basePath, projectHash);
-      projectDirs = existsSync(projectPath) ? [projectHash] : [];
+      projectDirs = this.getProjectFoldersForWorkingDir(workingDir);
     } else {
       projectDirs = readdirSync(this.basePath).filter(d => {
         const fullPath = join(this.basePath, d);
@@ -203,8 +307,8 @@ class GeminiSessionDiscoverer extends SessionDiscoverer {
       });
     }
 
-    for (const projectHash of projectDirs) {
-      const chatsDir = join(this.basePath, projectHash, 'chats');
+    for (const projectFolder of projectDirs) {
+      const chatsDir = join(this.basePath, projectFolder, 'chats');
       if (!existsSync(chatsDir)) continue;
 
       const sessionFiles = readdirSync(chatsDir)
@@ -217,7 +321,10 @@ class GeminiSessionDiscoverer extends SessionDiscoverer {
         .sort((a, b) => b.stat.mtimeMs - a.stat.mtimeMs);
 
       for (const file of sessionFiles) {
-        if (afterTimestamp && file.stat.mtime <= afterTimestamp) continue;
+        if (afterTimestamp && file.stat.mtime <= afterTimestamp) {
+          // sessionFiles are sorted descending by mtime, we can stop early
+          break;
+        }
 
         try {
           const content = JSON.parse(readFileSync(file.path, 'utf8'));
@@ -225,7 +332,7 @@ class GeminiSessionDiscoverer extends SessionDiscoverer {
             sessionId: content.sessionId,
             tool: this.tool,
             filePath: file.path,
-            projectHash,
+            projectHash: content.projectHash,
             createdAt: new Date(content.startTime || file.stat.birthtime),
             updatedAt: new Date(content.lastUpdated || file.stat.mtime)
           });
@@ -238,7 +345,14 @@ class GeminiSessionDiscoverer extends SessionDiscoverer {
       // Sort by updatedAt descending
       sessions.sort((a, b) => b.updatedAt.getTime() - a.updatedAt.getTime());
 
-      return limit ? sessions.slice(0, limit) : sessions;
+      const seen = new Set<string>();
+      const uniqueSessions = sessions.filter(s => {
+        if (seen.has(s.sessionId)) return false;
+        seen.add(s.sessionId);
+        return true;
+      });
+
+      return limit ? uniqueSessions.slice(0, limit) : uniqueSessions;
     } catch {
       return [];
     }
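The de-duplication hunk above guards against the same session appearing twice when both the legacy hash folder and the project-name folder are scanned: because the list is already sorted newest-first, keeping the first occurrence per `sessionId` keeps the freshest copy. A standalone sketch of that step (the `SessionLike` shape is a hypothetical simplification of `NativeSession`):

```typescript
// Minimal shape for the dedupe step; the real type carries more fields.
interface SessionLike {
  sessionId: string;
  filePath: string;
}

// Keep only the first occurrence of each sessionId, preserving input order.
function dedupeBySessionId<T extends SessionLike>(sessions: T[]): T[] {
  const seen = new Set<string>();
  return sessions.filter(s => {
    if (seen.has(s.sessionId)) return false;
    seen.add(s.sessionId);
    return true;
  });
}

const deduped = dedupeBySessionId([
  { sessionId: 'a', filePath: '/hash-folder/chats/session-a.json' },
  { sessionId: 'a', filePath: '/name-folder/chats/session-a.json' },
  { sessionId: 'b', filePath: '/name-folder/chats/session-b.json' },
]);
console.log(deduped.length, deduped[0].filePath);
// 2 /hash-folder/chats/session-a.json
```

Since `Array.prototype.filter` walks the array in order, the entry that sorted first (the most recently updated one) always wins.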
@@ -162,7 +162,7 @@ Message types: plan_ready, plan_approved, plan_revision, task_unblocked, impl_co
       },
       team: {
         type: 'string',
-        description: 'Team name',
+        description: 'Session ID (e.g., TLS-my-project-2026-02-27). Maps to .workflow/.team/{session-id}/.msg/. Use session ID, NOT team name.',
       },
       from: { type: 'string', description: '[log/list] Sender role' },
       to: { type: 'string', description: '[log/list] Recipient role' },
@@ -18,9 +18,10 @@ import { after, afterEach, before, beforeEach, describe, it, mock } from 'node:t
 import assert from 'node:assert/strict';
 import { existsSync, mkdtempSync, mkdirSync, rmSync, readFileSync, writeFileSync } from 'node:fs';
 import { tmpdir } from 'node:os';
-import { join } from 'node:path';
+import { join, resolve } from 'node:path';
 
 const TEST_CCW_HOME = mkdtempSync(join(tmpdir(), 'ccw-session-discovery-home-'));
 const TEST_USER_HOME = mkdtempSync(join(tmpdir(), 'ccw-session-discovery-user-home-'));
+const TEST_PROJECT_ROOT = mkdtempSync(join(tmpdir(), 'ccw-session-discovery-project-'));
 
 const sessionDiscoveryUrl = new URL('../dist/tools/native-session-discovery.js', import.meta.url);
@@ -34,7 +35,17 @@ let mod: any;
 // eslint-disable-next-line @typescript-eslint/no-explicit-any
 let cliExecutorMod: any;
 
-const originalEnv = { CCW_DATA_DIR: process.env.CCW_DATA_DIR };
+const originalEnv = {
+  CCW_DATA_DIR: process.env.CCW_DATA_DIR,
+  HOME: process.env.HOME,
+  USERPROFILE: process.env.USERPROFILE
+};
+
+function normalizeGeminiProjectRootPath(projectDir: string): string {
+  const absolutePath = resolve(projectDir);
+  if (process.platform !== 'win32') return absolutePath;
+  return absolutePath.replace(/\//g, '\\').toLowerCase();
+}
 
 function resetDir(dirPath: string): void {
   if (existsSync(dirPath)) {
@@ -49,11 +60,13 @@ function resetDir(dirPath: string): void {
 function createMockGeminiSession(filePath: string, options: {
   sessionId: string;
   startTime: string;
+  projectHash?: string;
   transactionId?: string;
   firstPrompt?: string;
 }): void {
   const sessionData = {
     sessionId: options.sessionId,
+    projectHash: options.projectHash,
     startTime: options.startTime,
     lastUpdated: new Date().toISOString(),
     messages: [
@@ -121,6 +134,8 @@ function createMockCodexSession(filePath: string, options: {
 describe('Native Session Discovery - Resume Mechanism Fixes (L0-L2)', async () => {
   before(async () => {
     process.env.CCW_DATA_DIR = TEST_CCW_HOME;
+    process.env.HOME = TEST_USER_HOME;
+    process.env.USERPROFILE = TEST_USER_HOME;
     mod = await import(sessionDiscoveryUrl.href);
     cliExecutorMod = await import(cliExecutorUrl.href);
   });
@@ -132,6 +147,7 @@ describe('Native Session Discovery - Resume Mechanism Fixes (L0-L2)', async () =
     mock.method(console, 'log', () => {});
 
     resetDir(TEST_CCW_HOME);
+    resetDir(TEST_USER_HOME);
   });
 
   afterEach(() => {
@@ -140,7 +156,10 @@ describe('Native Session Discovery - Resume Mechanism Fixes (L0-L2)', async () =
 
   after(() => {
     process.env.CCW_DATA_DIR = originalEnv.CCW_DATA_DIR;
+    process.env.HOME = originalEnv.HOME;
+    process.env.USERPROFILE = originalEnv.USERPROFILE;
     rmSync(TEST_CCW_HOME, { recursive: true, force: true });
     rmSync(TEST_USER_HOME, { recursive: true, force: true });
+    rmSync(TEST_PROJECT_ROOT, { recursive: true, force: true });
   });
 
@@ -361,6 +380,39 @@ describe('Native Session Discovery - Resume Mechanism Fixes (L0-L2)', async () =
     });
   });
 
+  describe('L1: Gemini discovery - project-name folder layout', () => {
+    it('discovers sessions under ~/.gemini/tmp/<projectName>/chats via projects.json mapping', () => {
+      const projectName = `proj-${Date.now()}`;
+      const projectRootKey = normalizeGeminiProjectRootPath(TEST_PROJECT_ROOT);
+
+      const geminiHome = join(TEST_USER_HOME, '.gemini');
+      const tmpDir = join(geminiHome, 'tmp');
+      const projectDir = join(tmpDir, projectName);
+      const chatsDir = join(projectDir, 'chats');
+      mkdirSync(chatsDir, { recursive: true });
+
+      // Gemini uses both projects.json mapping and a `.project_root` marker.
+      mkdirSync(geminiHome, { recursive: true });
+      writeFileSync(
+        join(geminiHome, 'projects.json'),
+        JSON.stringify({ projects: { [projectRootKey]: projectName } }),
+        'utf8'
+      );
+      writeFileSync(join(projectDir, '.project_root'), projectRootKey, 'utf8');
+
+      const sessionPath = join(chatsDir, `session-test-${Date.now()}.json`);
+      createMockGeminiSession(sessionPath, {
+        sessionId: `uuid-${Date.now()}`,
+        startTime: new Date().toISOString(),
+        projectHash: 'abc123',
+        firstPrompt: 'Test Gemini prompt'
+      });
+
+      const sessions = mod.getNativeSessions('gemini', { workingDir: TEST_PROJECT_ROOT });
+      assert.ok(sessions.some((s: { filePath: string }) => s.filePath === sessionPath));
+    });
+  });
+
   describe('L1: Prompt-based fallback matching', () => {
     it('matches sessions by prompt prefix when transaction ID not available', () => {
       const prompt = 'Implement authentication feature with JWT tokens';