Compare commits

...

9 Commits

Author SHA1 Message Date
copilot-swe-agent[bot]
7b21069bda fix: update docs/package-lock.json to include missing dependencies
Co-authored-by: catlog22 <28037070+catlog22@users.noreply.github.com>
2026-03-03 02:16:12 +00:00
copilot-swe-agent[bot]
11bd1ab57e Initial plan 2026-03-03 02:13:48 +00:00
catlog22
59787dc9be feat: enhance responsive design for documentation layout; adjust margins and paddings for better content scaling 2026-03-03 09:30:42 +08:00
catlog22
d7169029ee feat: implement CSRF token helper and update fetch headers; adjust layout styles for responsiveness 2026-03-02 23:27:42 +08:00
catlog22
1bf9006d65 Refactor Chinese documentation for team skills and commands
- Removed outdated table of contents from commands-skills.md
- Updated skills overview in claude-collaboration.md with new skill names and descriptions
- Enhanced clarity and structure of skills details, including roles and pipelines
- Added new team skills: team-arch-opt, team-perf-opt, team-brainstorm, team-frontend, team-uidesign, team-issue, team-iterdev, team-quality-assurance, team-roadmap-dev, team-tech-debt, team-ultra-analyze
- Improved user command section for better usability
- Streamlined best practices for team skills usage
2026-03-02 22:49:52 +08:00
catlog22
99d6438303 feat: add documentation for Checkbox, Input, and Select components; enhance Queue and Terminal features
- Introduced Checkbox component documentation in Chinese, covering usage, properties, and examples.
- Added Input component documentation in Chinese, detailing its attributes and various states.
- Created Select component documentation in Chinese, including subcomponents and usage examples.
- Developed Queue management documentation, outlining its core functionalities and component structure.
- Added Terminal dashboard documentation, describing its layout, core features, and usage examples.
- Documented team workflows, detailing various team skills and their applications in project management.
2026-03-02 19:38:30 +08:00
catlog22
a58aa26a30 fix(uninstall): add manifest tracking for skill hub installations
Fixes #126: ccw uninstall was not cleaning up skills and commands
installed via Skill Hub because cpSync() bypassed manifest tracking.

Changes:
- Add copyDirectoryWithManifest() helper to install.ts and skill-hub-routes.ts
- Track all skill files in manifest during Skill Hub installation (CLI and API)
- Add orphan cleanup logic to uninstall.ts for defense in depth
- Fix installSkillFromRemote() and installSkillFromRemotePath() to track files

Root cause: Skill Hub installation methods used cpSync() which did not
track files in manifest, causing skills/commands to remain after uninstall.
2026-03-02 19:30:34 +08:00
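The root cause described in this commit message can be sketched in a few lines: `cpSync` copies a whole tree in one call but records nothing, while a manual walk can register each file as it is copied. This is a minimal sketch, using a `Set` as a stand-in for the real manifest API in `core/manifest.ts`:

```typescript
import { cpSync, mkdirSync, mkdtempSync, writeFileSync, readdirSync, statSync, copyFileSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

const manifest = new Set<string>(); // stand-in for addFileEntry() tracking

function copyTracked(src: string, dest: string): void {
  mkdirSync(dest, { recursive: true });
  for (const entry of readdirSync(src)) {
    const s = join(src, entry);
    const d = join(dest, entry);
    if (statSync(s).isDirectory()) {
      copyTracked(s, d);
    } else {
      copyFileSync(s, d);
      manifest.add(d); // the bookkeeping step cpSync skips
    }
  }
}

// Throwaway skill directory for demonstration
const root = mkdtempSync(join(tmpdir(), 'skillhub-demo-'));
const src = join(root, 'src');
mkdirSync(src);
writeFileSync(join(src, 'SKILL.md'), '# demo');

cpSync(src, join(root, 'untracked'), { recursive: true }); // copied, but manifest stays empty
copyTracked(src, join(root, 'tracked'));                   // copied and recorded
```

On uninstall, only paths present in the manifest are removed, which is why the `cpSync` path left skills and commands behind.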
catlog22
2dce4b3e8f fix(docs): resolve hydration mismatch and favicon 404 issues
- Add minimal theme preload script to prevent FOUC
- Fix favicon path to use base variable for GitHub Pages
- Theme script only sets data-theme attribute (no color mode)
- Maintains SSR/client consistency while eliminating flash

Fixes:
- Hydration mismatch error from localStorage access before Vue mount
- Favicon 404 on GitHub Pages deployment
- FOUC when theme applies after hydration

The new script is minimal and safe:
- Runs synchronously in head (before render)
- Only reads localStorage and sets attribute
- Matches what ThemeSwitcher.vue will do after mount
- No DOM manipulation that could cause mismatch
2026-03-02 17:47:27 +08:00
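The "minimal and safe" script this commit describes boils down to resolving a theme from whatever is in storage before first paint. A hedged sketch, assuming a light/dark pair; names like `resolveTheme` are illustrative, not from the commit:

```typescript
// Illustrative helper; the commit only states that the inline script
// reads localStorage and sets the data-theme attribute.
function resolveTheme(stored: string | null, fallback: string = 'light'): string {
  const allowed = new Set(['light', 'dark']);
  return stored !== null && allowed.has(stored) ? stored : fallback;
}

// In the <head> preload script this would run synchronously, before render:
//   document.documentElement.setAttribute(
//     'data-theme',
//     resolveTheme(localStorage.getItem('theme'))
//   );
// ThemeSwitcher.vue then performs the same resolution after mount,
// so server-rendered and hydrated markup agree and no flash occurs.
```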
catlog22
b780734649 Add comprehensive command and skill reference documentation in Chinese
- Created a new document for command and skill references, detailing orchestrator commands, workflow session commands, issue workflow commands, IDAW commands, with-file workflows, cycle workflows, CLI commands, memory commands, team skills, workflow skills, utility skills, and Codex capabilities.
- Added a comparison table for workflows, outlining their best uses, levels, self-containment, and automatic chaining behavior.
2026-03-02 17:41:40 +08:00
50 changed files with 5552 additions and 1729 deletions

View File

@@ -20,14 +20,15 @@ Available CLI endpoints are dynamically defined by the config file
 - **TaskOutput usage**: Only use `TaskOutput({ task_id: "xxx", block: false })` + sleep loop to poll completion status. NEVER read intermediate output during agent/CLI execution - wait for final result only
 ### CLI Tool Calls (ccw cli)
-- **Default: Use Bash `run_in_background: true`** - Unless otherwise specified, always execute CLI calls in background using Bash tool's background mode:
+- **Default**: CLI calls (`ccw cli`) default to background execution (`run_in_background: true`):
   ```
   Bash({
     command: "ccw cli -p '...' --tool gemini",
     run_in_background: true  // Bash tool parameter, not ccw cli parameter
   })
   ```
-- **After CLI call**: Stop output immediately - let CLI execute in background. **DO NOT use TaskOutput polling** - wait for hook callback to receive results
+- **CRITICAL — Agent-specific instructions ALWAYS override this default.** If an agent's definition file (`.claude/agents/*.md`) specifies `run_in_background: false`, that instruction takes highest priority. Subagents (Task tool agents) CANNOT receive hook callbacks, so they MUST use `run_in_background: false` for CLI calls that produce required results.
+- **After CLI call (main conversation only)**: Stop output immediately - let CLI execute in background. **DO NOT use TaskOutput polling** - wait for hook callback to receive results
 ### CLI Analysis Calls
 - **Wait for results**: MUST wait for CLI analysis to complete before taking any write action. Do NOT proceed with fixes while analysis is running
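Stated as code, the precedence the new rules describe might look like this; a sketch of the rule only, not a real API:

```typescript
// Sketch: agent-file overrides win, then the subagent restriction,
// then the background default. Names here are illustrative.
interface CliCallContext {
  isSubagent: boolean;     // Task tool agents cannot receive hook callbacks
  agentOverride?: boolean; // run_in_background from .claude/agents/*.md, if any
}

function resolveRunInBackground(ctx: CliCallContext): boolean {
  if (ctx.agentOverride !== undefined) return ctx.agentOverride; // highest priority
  if (ctx.isSubagent) return false; // must block: no hook callback path
  return true; // main-conversation default
}
```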

View File

@@ -8,32 +8,14 @@ import 'xterm/css/xterm.css'
 import { loadMessagesForLocale, getInitialLocale } from './lib/i18n'
 import { logWebVitals } from './lib/webVitals'
-/**
- * Initialize CSRF token by fetching from backend
- * This ensures the CSRF cookie is set before any mutating API calls
- */
-async function initCsrfToken() {
-  try {
-    // Fetch CSRF token from backend - this sets the XSRF-TOKEN cookie
-    await fetch('/api/csrf-token', {
-      method: 'GET',
-      credentials: 'same-origin',
-    })
-  } catch (error) {
-    // Log error but don't block app initialization
-    console.error('Failed to initialize CSRF token:', error)
-  }
-}
 async function bootstrapApplication() {
   const rootElement = document.getElementById('root')
   if (!rootElement) throw new Error('Failed to find the root element')
-  // Parallelize CSRF token fetch and locale detection (independent operations)
-  const [, locale] = await Promise.all([
-    initCsrfToken(),
-    getInitialLocale()
-  ])
+  // CSRF token initialization is deferred to first mutating request
+  // This eliminates network RTT from app startup path
+  // See: ccw/frontend/src/lib/api.ts - fetchApi handles lazy token fetch
+  const locale = await getInitialLocale()
   // Load only the active locale's messages (lazy load secondary on demand)
   const messages = await loadMessagesForLocale(locale)
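The deferred initialization the new comment describes is essentially a memoized promise: the first mutating request pays the round trip and concurrent callers share it. A hedged sketch under that assumption; `makeLazyCsrfInit` is a hypothetical name, the real logic lives in `ccw/frontend/src/lib/api.ts`:

```typescript
type TokenFetcher = () => Promise<void>;

// Hypothetical sketch of lazy CSRF initialization.
function makeLazyCsrfInit(fetchToken: TokenFetcher): () => Promise<void> {
  let inflight: Promise<void> | null = null;
  return () => {
    if (!inflight) {
      // First mutating request triggers the fetch; later callers share it
      inflight = fetchToken().catch((err) => {
        inflight = null; // allow a retry after failure
        throw err;
      });
    }
    return inflight;
  };
}

// Startup no longer waits on the token endpoint:
let fetches = 0;
const ensureCsrf = makeLazyCsrfInit(() => { fetches++; return Promise.resolve(); });
void ensureCsrf(); // first mutating call triggers the fetch
void ensureCsrf(); // second call reuses the cached promise
```

Because the promise is cached synchronously on first call, back-to-back requests never issue duplicate token fetches.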

View File

@@ -63,6 +63,12 @@ import type { ExportedSettings } from '@/lib/api';
 import { RemoteNotificationSection } from '@/components/settings/RemoteNotificationSection';
 import { A2UIPreferencesSection } from '@/components/settings/A2UIPreferencesSection';
+// ========== CSRF Token Helper ==========
+function getCsrfToken(): string | null {
+  const match = document.cookie.match(/XSRF-TOKEN=([^;]+)/);
+  return match ? decodeURIComponent(match[1]) : null;
+}
 // ========== File Path Input with Native File Picker ==========
 interface FilePathInputProps {
@@ -1282,10 +1288,17 @@ export function SettingsPage() {
   body.effort = config.effort || null;
 }
+const csrfToken = getCsrfToken();
+const headers: Record<string, string> = { 'Content-Type': 'application/json' };
+if (csrfToken) {
+  headers['X-CSRF-Token'] = csrfToken;
+}
 const res = await fetch(`/api/cli/config/${toolId}`, {
   method: 'PUT',
-  headers: { 'Content-Type': 'application/json' },
+  headers,
   body: JSON.stringify(body),
+  credentials: 'same-origin',
 });
 if (!res.ok) {
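The helper added in this diff reads `document.cookie` directly; the same parsing can be expressed as a pure function for illustration (with the pattern anchored so that a cookie like `NOT-XSRF-TOKEN` cannot match):

```typescript
// Pure variant of getCsrfToken() for illustration; takes the cookie
// string as input instead of reading document.cookie.
function parseCsrfCookie(cookies: string): string | null {
  const match = cookies.match(/(?:^|;\s*)XSRF-TOKEN=([^;]+)/);
  return match ? decodeURIComponent(match[1]) : null;
}
```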

View File

@@ -1,4 +1,4 @@
-import { existsSync, mkdirSync, readdirSync, statSync, copyFileSync, readFileSync, writeFileSync, unlinkSync, rmdirSync, appendFileSync, renameSync, cpSync } from 'fs';
+import { existsSync, mkdirSync, readdirSync, statSync, copyFileSync, readFileSync, writeFileSync, unlinkSync, rmdirSync, appendFileSync, renameSync } from 'fs';
 import { join, dirname, basename } from 'path';
 import { homedir, platform } from 'os';
 import { fileURLToPath } from 'url';
@@ -6,7 +6,7 @@ import { execSync } from 'child_process';
 import inquirer from 'inquirer';
 import chalk from 'chalk';
 import { showHeader, createSpinner, info, warning, error, summaryBox, divider } from '../utils/ui.js';
-import { createManifest, addFileEntry, addDirectoryEntry, saveManifest, findManifest, getAllManifests } from '../core/manifest.js';
+import { createManifest, addFileEntry, addDirectoryEntry, saveManifest, findManifest, getAllManifests, type Manifest, type ManifestWithMetadata } from '../core/manifest.js';
 import { validatePath } from '../utils/path-resolver.js';
 import type { Ora } from 'ora';
@@ -1081,13 +1081,58 @@ function listLocalSkillHubSkills(): Array<{ id: string; name: string; descriptio
   return result;
 }
+/**
+ * Copy directory recursively with manifest tracking
+ * Similar to copyDirectory() but returns file count for skill-hub installation
+ * @param src - Source directory
+ * @param dest - Destination directory
+ * @param manifest - Manifest to track files
+ * @returns Count of files and directories copied
+ */
+function copyDirectoryWithManifest(
+  src: string,
+  dest: string,
+  manifest: Manifest
+): { files: number; directories: number } {
+  let files = 0;
+  let directories = 0;
+  // Create destination directory
+  if (!existsSync(dest)) {
+    mkdirSync(dest, { recursive: true });
+    directories++;
+    addDirectoryEntry(manifest, dest);
+  }
+  const entries = readdirSync(src);
+  for (const entry of entries) {
+    const srcPath = join(src, entry);
+    const destPath = join(dest, entry);
+    const stat = statSync(srcPath);
+    if (stat.isDirectory()) {
+      const result = copyDirectoryWithManifest(srcPath, destPath, manifest);
+      files += result.files;
+      directories += result.directories;
+    } else {
+      copyFileSync(srcPath, destPath);
+      files++;
+      addFileEntry(manifest, destPath);
+    }
+  }
+  return { files, directories };
+}
 /**
  * Install a skill from skill-hub to CLI skills directory
+ * Now tracks installed files in manifest for proper uninstall cleanup
  */
 async function installSkillFromHub(
   skillId: string,
   cliType: 'claude' | 'codex'
-): Promise<{ success: boolean; message: string }> {
+): Promise<{ success: boolean; message: string; filesTracked?: number }> {
   // Only support local skills for now
   if (!skillId.startsWith('local-')) {
     return {
@@ -1119,10 +1164,26 @@ async function installSkillFromHub(
     mkdirSync(targetParent, { recursive: true });
   }
-  // Copy skill directory
+  // Get or create manifest for global installation (skill-hub always installs to home directory)
+  const installPath = homedir();
+  const existingManifest = findManifest(installPath, 'Global');
+  // Use existing manifest or create new one for tracking skill-hub installations
+  // Note: ManifestWithMetadata extends Manifest, so we can use it directly
+  const manifest = existingManifest || createManifest('Global', installPath);
+  // Copy skill directory with manifest tracking
   try {
-    cpSync(skillDir, targetDir, { recursive: true });
-    return { success: true, message: `Skill '${skillName}' installed to ${cliType}` };
+    const { files } = copyDirectoryWithManifest(skillDir, targetDir, manifest);
+    // Save manifest with tracked files
+    saveManifest(manifest);
+    return {
+      success: true,
+      message: `Skill '${skillName}' installed to ${cliType}`,
+      filesTracked: files,
+    };
   } catch (error) {
     return { success: false, message: `Failed to install: ${(error as Error).message}` };
   }

View File

@@ -4,7 +4,7 @@ import { homedir, platform } from 'os';
 import inquirer from 'inquirer';
 import chalk from 'chalk';
 import { showBanner, createSpinner, success, info, warning, error, summaryBox, divider } from '../utils/ui.js';
-import { getAllManifests, deleteManifest } from '../core/manifest.js';
+import { getAllManifests, deleteManifest, getFileReferenceCounts } from '../core/manifest.js';
 import { removeGitBashFix } from './install.js';
 // Global subdirectories that should be protected when Global installation exists
@@ -126,6 +126,7 @@ export async function uninstallCommand(options: UninstallOptions): Promise<void>
   let removedFiles = 0;
   let removedDirs = 0;
   let failedFiles: FileEntry[] = [];
+  let orphanStats = { removed: 0, scanned: 0 };
   try {
     // Remove files first (in reverse order to handle nested files)
@@ -202,6 +203,10 @@ export async function uninstallCommand(options: UninstallOptions): Promise<void>
       }
     }
+    // Orphan cleanup: Scan for skills/commands not tracked in any manifest
+    // This handles files installed by skill-hub that weren't tracked properly
+    orphanStats = await cleanupOrphanFiles(selectedManifest.manifest_id);
     spinner.succeed('Uninstall complete!');
   } catch (err) {
@@ -233,6 +238,10 @@ export async function uninstallCommand(options: UninstallOptions): Promise<void>
     summaryLines.push(chalk.white(`Global files preserved: ${chalk.cyan(skippedFiles.toString())}`));
   }
+  if (orphanStats.removed > 0) {
+    summaryLines.push(chalk.white(`Orphan files cleaned: ${chalk.magenta(orphanStats.removed.toString())}`));
+  }
   if (failedFiles.length > 0) {
     summaryLines.push(chalk.white(`Failed: ${chalk.red(failedFiles.length.toString())}`));
     summaryLines.push('');
@@ -281,6 +290,93 @@ export async function uninstallCommand(options: UninstallOptions): Promise<void>
   console.log('');
 }
+/**
+ * Clean up orphan files in skills/commands directories that weren't tracked in manifest
+ * This handles files installed by skill-hub that bypassed manifest tracking
+ * @param excludeManifestId - Manifest ID being uninstalled (to exclude from reference check)
+ * @returns Count of removed orphan files and total scanned
+ */
+async function cleanupOrphanFiles(excludeManifestId: string): Promise<{ removed: number; scanned: number }> {
+  let removed = 0;
+  let scanned = 0;
+  const home = homedir();
+  // Directories to scan for orphan files
+  const scanDirs = [
+    { base: join(home, '.claude', 'skills'), type: 'skill' },
+    { base: join(home, '.claude', 'commands'), type: 'command' },
+    { base: join(home, '.codex', 'skills'), type: 'skill' },
+    { base: join(home, '.codex', 'commands'), type: 'command' },
+  ];
+  // Get file reference counts excluding the manifest being uninstalled
+  const fileRefs = getFileReferenceCounts(excludeManifestId);
+  for (const { base } of scanDirs) {
+    if (!existsSync(base)) continue;
+    try {
+      // Recursively scan directory for files
+      const files = scanDirectoryForFiles(base);
+      for (const filePath of files) {
+        scanned++;
+        // Check if file is referenced by any remaining manifest
+        const normalizedPath = filePath.toLowerCase().replace(/\\/g, '/');
+        const refs = fileRefs.get(normalizedPath) || [];
+        // If not referenced, it's an orphan - remove it
+        if (refs.length === 0) {
+          try {
+            unlinkSync(filePath);
+            removed++;
+          } catch {
+            // Ignore removal errors (file may be in use)
+          }
+        }
+      }
+      // Clean up empty directories after orphan removal
+      await removeEmptyDirs(base);
+    } catch {
+      // Ignore scan errors
+    }
+  }
+  return { removed, scanned };
+}
+/**
+ * Recursively scan directory for all files
+ * @param dirPath - Directory to scan
+ * @returns Array of file paths
+ */
+function scanDirectoryForFiles(dirPath: string): string[] {
+  const files: string[] = [];
+  if (!existsSync(dirPath)) return files;
+  try {
+    const entries = readdirSync(dirPath);
+    for (const entry of entries) {
+      const fullPath = join(dirPath, entry);
+      const stat = statSync(fullPath);
+      if (stat.isDirectory()) {
+        files.push(...scanDirectoryForFiles(fullPath));
+      } else {
+        files.push(fullPath);
+      }
+    }
+  } catch {
+    // Ignore scan errors
+  }
+  return files;
+}
 /**
  * Recursively remove empty directories
  * @param {string} dirPath - Directory path
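The orphan check in `cleanupOrphanFiles()` reduces to a reference-count lookup over normalized paths. A self-contained sketch; the map shape mirrors what `getFileReferenceCounts()` plausibly returns, which is an assumption:

```typescript
// Sketch: a file is an orphan when no manifest other than the one being
// uninstalled references its normalized (lowercase, forward-slash) path.
function findOrphans(
  files: string[],
  refs: Map<string, string[]>, // normalized path -> owning manifest ids
  excludeManifestId: string
): string[] {
  return files.filter((file) => {
    const key = file.toLowerCase().replace(/\\/g, '/');
    const owners = (refs.get(key) ?? []).filter((id) => id !== excludeManifestId);
    return owners.length === 0; // unreferenced by any remaining manifest
  });
}
```

Normalizing before the lookup is what lets Windows-style paths in the scan match the manifest's stored entries.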

View File

@@ -1093,7 +1093,7 @@ export async function handleCliRoutes(ctx: RouteContext): Promise<boolean> {
     return { error: 'Execution not found', status: 404 };
   }
-  const review = historyStore.saveReview({
+  const review = await historyStore.saveReview({
     execution_id: executionId,
     status,
     rating,

View File

@@ -589,7 +589,7 @@ Return ONLY valid JSON in this exact format (no markdown, no code blocks, just p
 const storeModule = await import('../../tools/cli-history-store.js');
 const store = storeModule.getHistoryStore(projectPath);
 const insightId = `insight-${Date.now()}`;
-store.saveInsight({
+await store.saveInsight({
   id: insightId,
   tool,
   promptCount: prompts.length,
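Both hunks above add a missing `await` after the history store's save methods became async. The failure mode is easy to reproduce: without `await`, the caller receives a pending promise rather than the saved record, and any rejection escapes the surrounding try/catch. A minimal sketch with a stand-in store method:

```typescript
// Stand-in for a store method that was changed to be async.
async function saveReview(review: { status: string }): Promise<{ id: string; status: string }> {
  return { id: 'review-1', status: review.status };
}

// Without await: `pending` is a Promise, not the review row the caller
// expects to serialize into the route response.
const pending = saveReview({ status: 'approved' });
```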

View File

@@ -11,11 +11,12 @@
  * - GET /api/skill-hub/updates - Check for available updates
  */
-import { readFileSync, existsSync, readdirSync, statSync, mkdirSync, cpSync, rmSync, writeFileSync } from 'fs';
+import { readFileSync, existsSync, readdirSync, statSync, mkdirSync, cpSync, rmSync, writeFileSync, copyFileSync } from 'fs';
 import { join, dirname } from 'path';
 import { homedir } from 'os';
 import { fileURLToPath } from 'url';
 import { validatePath as validateAllowedPath } from '../../utils/path-validator.js';
+import { createManifest, addFileEntry, addDirectoryEntry, saveManifest, findManifest, type Manifest } from '../manifest.js';
 import type { RouteContext } from './types.js';
 // ES Module __dirname equivalent
@@ -711,6 +712,49 @@ function removeInstalledSkill(skillId: string, cliType: CliType): void {
 // Skill Installation Helpers
 // ============================================================================
+/**
+ * Copy directory recursively with manifest tracking
+ * @param src - Source directory
+ * @param dest - Destination directory
+ * @param manifest - Manifest to track files
+ * @returns Count of files and directories copied
+ */
+function copyDirectoryWithManifest(
+  src: string,
+  dest: string,
+  manifest: Manifest
+): { files: number; directories: number } {
+  let files = 0;
+  let directories = 0;
+  // Create destination directory
+  if (!existsSync(dest)) {
+    mkdirSync(dest, { recursive: true });
+    directories++;
+    addDirectoryEntry(manifest, dest);
+  }
+  const entries = readdirSync(src);
+  for (const entry of entries) {
+    const srcPath = join(src, entry);
+    const destPath = join(dest, entry);
+    const stat = statSync(srcPath);
+    if (stat.isDirectory()) {
+      const result = copyDirectoryWithManifest(srcPath, destPath, manifest);
+      files += result.files;
+      directories += result.directories;
+    } else {
+      copyFileSync(srcPath, destPath);
+      files++;
+      addFileEntry(manifest, destPath);
+    }
+  }
+  return { files, directories };
+}
 /**
  * Install skill from local path
  */
@@ -767,12 +811,22 @@ async function installSkillFromLocal(
     return { success: false, message: `Skill '${skillName}' already exists in ${cliType}` };
   }
-  // Copy skill directory
-  cpSync(localPath, targetSkillDir, { recursive: true });
+  // Get or create manifest for global installation (skill-hub always installs to home directory)
+  const installPath = homedir();
+  const existingManifest = findManifest(installPath, 'Global');
+  // Use existing manifest or create new one for tracking skill-hub installations
+  const manifest = existingManifest || createManifest('Global', installPath);
+  // Copy skill directory with manifest tracking
+  const { files } = copyDirectoryWithManifest(localPath, targetSkillDir, manifest);
+  // Save manifest with tracked files
+  saveManifest(manifest);
   return {
     success: true,
-    message: `Skill '${skillName}' installed to ${cliType}`,
+    message: `Skill '${skillName}' installed to ${cliType} (${files} files tracked)`,
     installedPath: targetSkillDir,
   };
 } catch (error) {
@@ -836,9 +890,21 @@ async function installSkillFromRemote(
     return { success: false, message: `Skill '${skillName}' already exists in ${cliType}` };
   }
+  // Get or create manifest for global installation
+  const installPath = homedir();
+  const existingManifest = findManifest(installPath, 'Global');
+  const manifest = existingManifest || createManifest('Global', installPath);
   // Create skill directory and write SKILL.md
   mkdirSync(targetSkillDir, { recursive: true });
-  writeFileSync(join(targetSkillDir, 'SKILL.md'), skillContent, 'utf8');
+  addDirectoryEntry(manifest, targetSkillDir);
+  const skillMdPath = join(targetSkillDir, 'SKILL.md');
+  writeFileSync(skillMdPath, skillContent, 'utf8');
+  addFileEntry(manifest, skillMdPath);
+  // Save manifest with tracked files
+  saveManifest(manifest);
   // Cache the skill locally
   try {
@@ -855,7 +921,7 @@ async function installSkillFromRemote(
   return {
     success: true,
-    message: `Skill '${skillName}' installed to ${cliType}`,
+    message: `Skill '${skillName}' installed to ${cliType} (1 file tracked)`,
     installedPath: targetSkillDir,
   };
 } catch (error) {
@@ -897,21 +963,40 @@ async function installSkillFromRemotePath(
     return { success: false, message: `Skill '${skillName}' already exists in ${cliType}` };
   }
+  // Get or create manifest for global installation
+  const installPath = homedir();
+  const existingManifest = findManifest(installPath, 'Global');
+  const manifest = existingManifest || createManifest('Global', installPath);
   // Create skill directory
   mkdirSync(targetSkillDir, { recursive: true });
+  addDirectoryEntry(manifest, targetSkillDir);
   // Download entire skill directory
   console.log(`[SkillHub] Downloading skill directory: ${skillPath}`);
   const result = await downloadSkillDirectory(skillPath, targetSkillDir);
-  if (!result.success || result.files.length === 0) {
+  // Track downloaded files in manifest
+  let trackedFiles = 0;
+  if (result.success && result.files.length > 0) {
+    for (const file of result.files) {
+      addFileEntry(manifest, file);
+      trackedFiles++;
+    }
+  } else {
     // Fallback: download only SKILL.md
     console.log('[SkillHub] Directory download failed, falling back to SKILL.md only');
     const skillMdUrl = buildDownloadUrlFromPath(skillPath);
     const skillContent = await fetchRemoteSkill(skillMdUrl);
-    writeFileSync(join(targetSkillDir, 'SKILL.md'), skillContent, 'utf8');
+    const skillMdPath = join(targetSkillDir, 'SKILL.md');
+    writeFileSync(skillMdPath, skillContent, 'utf8');
+    addFileEntry(manifest, skillMdPath);
+    trackedFiles = 1;
   }
+  // Save manifest with tracked files
+  saveManifest(manifest);
   // Cache the skill locally
   try {
     ensureSkillHubDirs();
@@ -927,7 +1012,7 @@ async function installSkillFromRemotePath(
   return {
     success: true,
-    message: `Skill '${skillName}' installed to ${cliType} (${result.files.length} files)`,
+    message: `Skill '${skillName}' installed to ${cliType} (${trackedFiles} files tracked)`,
     installedPath: targetSkillDir,
   };
 } catch (error) {
} catch (error) { } catch (error) {

View File

@@ -24,6 +24,208 @@ export const wsClients = new Set<Duplex>();
// Track message counts per client for rate limiting // Track message counts per client for rate limiting
export const wsClientMessageCounts = new Map<Duplex, { count: number; resetTime: number }>(); export const wsClientMessageCounts = new Map<Duplex, { count: number; resetTime: number }>();
/**
* Universal broadcast throttling system
* Reduces WebSocket traffic by deduplicating and rate-limiting broadcast messages
*/
interface ThrottleEntry {
lastSend: number;
pendingData: unknown;
}
type ThrottleCategory = 'state_update' | 'memory_cpu' | 'log_output' | 'immediate';
/** Map of message type to throttle configuration */
const THROTTLE_CONFIG = new Map<string, { interval: number; category: ThrottleCategory }>(
[
// State updates - high frequency, low value when duplicated
['LOOP_STATE_UPDATE', { interval: 1000, category: 'state_update' }],
['ORCHESTRATOR_STATE_UPDATE', { interval: 1000, category: 'state_update' }],
['COORDINATOR_STATE_UPDATE', { interval: 1000, category: 'state_update' }],
['QUEUE_SCHEDULER_STATE_UPDATE', { interval: 1000, category: 'state_update' }],
// Memory/CPU updates - medium frequency
['LOOP_STEP_COMPLETED', { interval: 500, category: 'memory_cpu' }],
['ORCHESTRATOR_NODE_COMPLETED', { interval: 500, category: 'memory_cpu' }],
['COORDINATOR_COMMAND_COMPLETED', { interval: 500, category: 'memory_cpu' }],
['QUEUE_ITEM_UPDATED', { interval: 500, category: 'memory_cpu' }],
// Log/output - higher frequency allowed for real-time streaming
['LOOP_LOG_ENTRY', { interval: 200, category: 'log_output' }],
['ORCHESTRATOR_LOG', { interval: 200, category: 'log_output' }],
['COORDINATOR_LOG_ENTRY', { interval: 200, category: 'log_output' }],
// Item added/removed - send immediately
['QUEUE_ITEM_ADDED', { interval: 0, category: 'immediate' }],
['QUEUE_ITEM_REMOVED', { interval: 0, category: 'immediate' }],
['QUEUE_SCHEDULER_CONFIG_UPDATED', { interval: 0, category: 'immediate' }],
['ORCHESTRATOR_NODE_STARTED', { interval: 0, category: 'immediate' }],
['ORCHESTRATOR_NODE_FAILED', { interval: 0, category: 'immediate' }],
['COORDINATOR_COMMAND_STARTED', { interval: 0, category: 'immediate' }],
['COORDINATOR_COMMAND_FAILED', { interval: 0, category: 'immediate' }],
['COORDINATOR_QUESTION_ASKED', { interval: 0, category: 'immediate' }],
['COORDINATOR_ANSWER_RECEIVED', { interval: 0, category: 'immediate' }],
['LOOP_COMPLETED', { interval: 0, category: 'immediate' }],
] as const
);
/** Per-message-type throttle tracking */
const throttleState = new Map<string, ThrottleEntry>();
/** Metrics for broadcast optimization */
export const broadcastMetrics = {
sent: 0,
throttled: 0,
deduped: 0,
};
/**
* Get throttle configuration for a message type
*/
function getThrottleConfig(messageType: string): { interval: number; category: ThrottleCategory } {
return THROTTLE_CONFIG.get(messageType) || { interval: 0, category: 'immediate' };
}
/**
* Serialize message data for comparison
*/
function serializeMessage(data: unknown): string {
if (typeof data === 'string') return data;
if (typeof data === 'object' && data !== null) {
return JSON.stringify(data, Object.keys(data).sort());
}
return String(data);
}
/**
* Create WebSocket frame
*/
export function createWebSocketFrame(data: unknown): Buffer {
const payload = Buffer.from(JSON.stringify(data), 'utf8');
const length = payload.length;
let frame;
if (length <= 125) {
frame = Buffer.alloc(2 + length);
frame[0] = 0x81; // Text frame, FIN
frame[1] = length;
payload.copy(frame, 2);
} else if (length <= 65535) {
frame = Buffer.alloc(4 + length);
frame[0] = 0x81;
frame[1] = 126;
frame.writeUInt16BE(length, 2);
payload.copy(frame, 4);
} else {
frame = Buffer.alloc(10 + length);
frame[0] = 0x81;
frame[1] = 127;
frame.writeBigUInt64BE(BigInt(length), 2);
payload.copy(frame, 10);
}
return frame;
}
/**
* Broadcast message to all connected WebSocket clients with universal throttling
* - Deduplicates identical messages within throttle window
* - Rate-limits by message type with adaptive intervals
* - Preserves message ordering within each type
*/
export function broadcastToClients(data: unknown): void {
const eventType =
typeof data === 'object' && data !== null && 'type' in data ? (data as { type?: unknown }).type : undefined;
if (!eventType || typeof eventType !== 'string') {
// Unknown message type - send immediately
const frame = createWebSocketFrame(data);
for (const client of wsClients) {
try {
client.write(frame);
} catch (e) {
wsClients.delete(client);
}
}
console.log(`[WS] Broadcast to ${wsClients.size} clients: unknown type`);
return;
}
const config = getThrottleConfig(eventType);
const now = Date.now();
const state = throttleState.get(eventType);
if (config.interval === 0) {
// Immediate - send without throttling
const frame = createWebSocketFrame(data);
for (const client of wsClients) {
try {
client.write(frame);
} catch (e) {
wsClients.delete(client);
}
}
broadcastMetrics.sent++;
throttleState.set(eventType, { lastSend: now, pendingData: data });
console.log(`[WS] Broadcast to ${wsClients.size} clients: ${eventType} (immediate)`);
return;
}
// Check if we should throttle
const currentDataHash = serializeMessage(data);
if (state) {
const timeSinceLastSend = now - state.lastSend;
// Check for duplicate data
if (timeSinceLastSend < config.interval) {
const pendingDataHash = serializeMessage(state.pendingData);
if (currentDataHash === pendingDataHash) {
// Duplicate message - drop it
broadcastMetrics.deduped++;
console.log(`[WS] Throttled duplicate ${eventType} (${timeSinceLastSend}ms since last)`);
return;
}
// Different data but within throttle window - update pending
throttleState.set(eventType, { lastSend: state.lastSend, pendingData: data });
broadcastMetrics.throttled++;
console.log(`[WS] Throttled ${eventType} (${timeSinceLastSend}ms since last, pending updated)`);
return;
}
}
// Send the message
const frame = createWebSocketFrame(data);
for (const client of wsClients) {
try {
client.write(frame);
} catch (e) {
wsClients.delete(client);
}
}
broadcastMetrics.sent++;
throttleState.set(eventType, { lastSend: now, pendingData: data });
console.log(`[WS] Broadcast to ${wsClients.size} clients: ${eventType}`);
}
/**
* Get broadcast throttling metrics
*/
export function getBroadcastMetrics(): Readonly<typeof broadcastMetrics> {
return { ...broadcastMetrics };
}
/**
* Reset broadcast throttling metrics (for testing/monitoring)
*/
export function resetBroadcastMetrics(): void {
broadcastMetrics.sent = 0;
broadcastMetrics.throttled = 0;
broadcastMetrics.deduped = 0;
}
 /**
  * Check if a new WebSocket connection should be accepted
  * Returns true if connection allowed, false if limit reached

@@ -357,55 +559,6 @@ export function parseWebSocketFrame(buffer: Buffer): { opcode: number; payload:
   return { opcode, payload: payload.toString('utf8'), frameLength };
 }

-/**
- * Create WebSocket frame
- */
-export function createWebSocketFrame(data: unknown): Buffer {
-  const payload = Buffer.from(JSON.stringify(data), 'utf8');
-  const length = payload.length;
-  let frame;
-  if (length <= 125) {
-    frame = Buffer.alloc(2 + length);
-    frame[0] = 0x81; // Text frame, FIN
-    frame[1] = length;
-    payload.copy(frame, 2);
-  } else if (length <= 65535) {
-    frame = Buffer.alloc(4 + length);
-    frame[0] = 0x81;
-    frame[1] = 126;
-    frame.writeUInt16BE(length, 2);
-    payload.copy(frame, 4);
-  } else {
-    frame = Buffer.alloc(10 + length);
-    frame[0] = 0x81;
-    frame[1] = 127;
-    frame.writeBigUInt64BE(BigInt(length), 2);
-    payload.copy(frame, 10);
-  }
-  return frame;
-}
-/**
- * Broadcast message to all connected WebSocket clients
- */
-export function broadcastToClients(data: unknown): void {
-  const frame = createWebSocketFrame(data);
-  for (const client of wsClients) {
-    try {
-      client.write(frame);
-    } catch (e) {
-      wsClients.delete(client);
-    }
-  }
-  const eventType =
-    typeof data === 'object' && data !== null && 'type' in data ? (data as { type?: unknown }).type : undefined;
-  console.log(`[WS] Broadcast to ${wsClients.size} clients:`, eventType);
-}

 /**
  * Extract session ID from file path
  */

@@ -435,12 +588,9 @@ export function extractSessionIdFromPath(filePath: string): string | null {
 }

 /**
- * Loop-specific broadcast with throttling
- * Throttles LOOP_STATE_UPDATE messages to avoid flooding clients
+ * Loop broadcast types (without timestamp - added automatically)
+ * Throttling is handled universally in broadcastToClients
  */
-let lastLoopBroadcast = 0;
-const LOOP_BROADCAST_THROTTLE = 1000; // 1 second
 export type LoopMessage =
   | Omit<LoopStateUpdateMessage, 'timestamp'>
   | Omit<LoopStepCompletedMessage, 'timestamp'>

@@ -448,18 +598,10 @@ export type LoopMessage =
   | Omit<LoopLogEntryMessage, 'timestamp'>;

 /**
- * Broadcast loop state update with throttling
+ * Broadcast loop update with automatic throttling
+ * Note: Throttling is now handled universally in broadcastToClients
  */
 export function broadcastLoopUpdate(message: LoopMessage): void {
-  const now = Date.now();
-  // Throttle LOOP_STATE_UPDATE to reduce WebSocket traffic
-  if (message.type === 'LOOP_STATE_UPDATE' && now - lastLoopBroadcast < LOOP_BROADCAST_THROTTLE) {
-    return;
-  }
-  lastLoopBroadcast = now;
   broadcastToClients({
     ...message,
     timestamp: new Date().toISOString()

@@ -467,8 +609,8 @@ export function broadcastLoopUpdate(message: LoopMessage): void {
 }

 /**
- * Broadcast loop log entry (no throttling)
- * Used for streaming real-time logs to Dashboard
+ * Broadcast loop log entry
+ * Note: Throttling is now handled universally in broadcastToClients
  */
 export function broadcastLoopLog(loop_id: string, step_id: string, line: string): void {
   broadcastToClients({

@@ -482,6 +624,7 @@ export function broadcastLoopLog(loop_id: string, step_id: string, line: string)
 /**
  * Union type for Orchestrator messages (without timestamp - added automatically)
+ * Throttling is handled universally in broadcastToClients
  */
 export type OrchestratorMessage =
   | Omit<OrchestratorStateUpdateMessage, 'timestamp'>

@@ -491,29 +634,10 @@ export type OrchestratorMessage =
   | Omit<OrchestratorLogMessage, 'timestamp'>;

 /**
- * Orchestrator-specific broadcast with throttling
- * Throttles ORCHESTRATOR_STATE_UPDATE messages to avoid flooding clients
- */
-let lastOrchestratorBroadcast = 0;
-const ORCHESTRATOR_BROADCAST_THROTTLE = 1000; // 1 second
-/**
- * Broadcast orchestrator update with throttling
- * STATE_UPDATE messages are throttled to 1 per second
- * Other message types are sent immediately
+ * Broadcast orchestrator update with automatic throttling
+ * Note: Throttling is now handled universally in broadcastToClients
  */
 export function broadcastOrchestratorUpdate(message: OrchestratorMessage): void {
-  const now = Date.now();
-  // Throttle ORCHESTRATOR_STATE_UPDATE to reduce WebSocket traffic
-  if (message.type === 'ORCHESTRATOR_STATE_UPDATE' && now - lastOrchestratorBroadcast < ORCHESTRATOR_BROADCAST_THROTTLE) {
-    return;
-  }
-  if (message.type === 'ORCHESTRATOR_STATE_UPDATE') {
-    lastOrchestratorBroadcast = now;
-  }
   broadcastToClients({
     ...message,
     timestamp: new Date().toISOString()

@@ -521,8 +645,8 @@ export function broadcastOrchestratorUpdate(message: OrchestratorMessage): void
 }

 /**
- * Broadcast orchestrator log entry (no throttling)
- * Used for streaming real-time execution logs to Dashboard
+ * Broadcast orchestrator log entry
+ * Note: Throttling is now handled universally in broadcastToClients
  */
 export function broadcastOrchestratorLog(execId: string, log: Omit<ExecutionLog, 'timestamp'>): void {
   broadcastToClients({

@@ -639,6 +763,7 @@ export interface CoordinatorAnswerReceivedMessage {
 /**
  * Union type for Coordinator messages (without timestamp - added automatically)
+ * Throttling is handled universally in broadcastToClients
  */
 export type CoordinatorMessage =
   | Omit<CoordinatorStateUpdateMessage, 'timestamp'>

@@ -650,29 +775,10 @@ export type CoordinatorMessage =
   | Omit<CoordinatorAnswerReceivedMessage, 'timestamp'>;

 /**
- * Coordinator-specific broadcast with throttling
- * Throttles COORDINATOR_STATE_UPDATE messages to avoid flooding clients
- */
-let lastCoordinatorBroadcast = 0;
-const COORDINATOR_BROADCAST_THROTTLE = 1000; // 1 second
-/**
- * Broadcast coordinator update with throttling
- * STATE_UPDATE messages are throttled to 1 per second
- * Other message types are sent immediately
+ * Broadcast coordinator update with automatic throttling
+ * Note: Throttling is now handled universally in broadcastToClients
  */
 export function broadcastCoordinatorUpdate(message: CoordinatorMessage): void {
-  const now = Date.now();
-  // Throttle COORDINATOR_STATE_UPDATE to reduce WebSocket traffic
-  if (message.type === 'COORDINATOR_STATE_UPDATE' && now - lastCoordinatorBroadcast < COORDINATOR_BROADCAST_THROTTLE) {
-    return;
-  }
-  if (message.type === 'COORDINATOR_STATE_UPDATE') {
-    lastCoordinatorBroadcast = now;
-  }
   broadcastToClients({
     ...message,
     timestamp: new Date().toISOString()

@@ -680,8 +786,8 @@ export function broadcastCoordinatorUpdate(message: CoordinatorMessage): void {
 }

 /**
- * Broadcast coordinator log entry (no throttling)
- * Used for streaming real-time coordinator logs to Dashboard
+ * Broadcast coordinator log entry
+ * Note: Throttling is now handled universally in broadcastToClients
  */
 export function broadcastCoordinatorLog(
   executionId: string,

@@ -715,6 +821,7 @@ export type {
 /**
  * Union type for Queue messages (without timestamp - added automatically)
+ * Throttling is handled universally in broadcastToClients
  */
 export type QueueMessage =
   | Omit<QueueSchedulerStateUpdateMessage, 'timestamp'>

@@ -724,29 +831,10 @@ export type QueueMessage =
   | Omit<QueueSchedulerConfigUpdatedMessage, 'timestamp'>;

 /**
- * Queue-specific broadcast with throttling
- * Throttles QUEUE_SCHEDULER_STATE_UPDATE messages to avoid flooding clients
- */
-let lastQueueBroadcast = 0;
-const QUEUE_BROADCAST_THROTTLE = 1000; // 1 second
-/**
- * Broadcast queue update with throttling
- * STATE_UPDATE messages are throttled to 1 per second
- * Other message types are sent immediately
+ * Broadcast queue update with automatic throttling
+ * Note: Throttling is now handled universally in broadcastToClients
  */
 export function broadcastQueueUpdate(message: QueueMessage): void {
-  const now = Date.now();
-  // Throttle QUEUE_SCHEDULER_STATE_UPDATE to reduce WebSocket traffic
-  if (message.type === 'QUEUE_SCHEDULER_STATE_UPDATE' && now - lastQueueBroadcast < QUEUE_BROADCAST_THROTTLE) {
-    return;
-  }
-  if (message.type === 'QUEUE_SCHEDULER_STATE_UPDATE') {
-    lastQueueBroadcast = now;
-  }
   broadcastToClients({
     ...message,
     timestamp: new Date().toISOString()

View File

@@ -128,7 +128,7 @@ export function ensureHistoryDir(baseDir: string): string {
  */
 async function saveConversationAsync(baseDir: string, conversation: ConversationRecord): Promise<void> {
   const store = await getSqliteStore(baseDir);
-  store.saveConversation(conversation);
+  await store.saveConversation(conversation);
 }

 /**
@@ -138,7 +138,10 @@ async function saveConversationAsync(baseDir: string, conversation: Conversation
 export function saveConversation(baseDir: string, conversation: ConversationRecord): void {
   try {
     const store = getSqliteStoreSync(baseDir);
-    store.saveConversation(conversation);
+    // Fire and forget - don't block on async save in sync context
+    store.saveConversation(conversation).catch(err => {
+      console.error('[CLI Executor] Failed to save conversation:', err.message);
+    });
   } catch {
     // If sync not available, queue for async save
     saveConversationAsync(baseDir, conversation).catch(err => {

@@ -399,7 +402,7 @@ export function getExecutionDetail(baseDir: string, executionId: string): Execut
  */
 export async function deleteExecutionAsync(baseDir: string, executionId: string): Promise<{ success: boolean; error?: string }> {
   const store = await getSqliteStore(baseDir);
-  return store.deleteConversation(executionId);
+  return await store.deleteConversation(executionId);
 }

 /**
@@ -408,7 +411,11 @@ export async function deleteExecutionAsync(baseDir: string, executionId: string)
 export function deleteExecution(baseDir: string, executionId: string): { success: boolean; error?: string } {
   try {
     const store = getSqliteStoreSync(baseDir);
-    return store.deleteConversation(executionId);
+    // Fire and forget - don't block on async delete in sync context
+    store.deleteConversation(executionId).catch(err => {
+      console.error('[CLI Executor] Failed to delete execution:', err.message);
+    });
+    return { success: true }; // Optimistic response
   } catch {
     return { success: false, error: 'SQLite store not initialized' };
   }

@@ -424,7 +431,7 @@ export async function batchDeleteExecutionsAsync(baseDir: string, ids: string[])
   errors?: string[];
 }> {
   const store = await getSqliteStore(baseDir);
-  const result = store.batchDelete(ids);
+  const result = await store.batchDelete(ids);
   return { ...result, total: ids.length };
 }

View File

@@ -121,7 +121,8 @@ export class CliHistoryStore {
     this.db = new Database(this.dbPath);
     this.db.pragma('journal_mode = WAL');
     this.db.pragma('synchronous = NORMAL');
-    this.db.pragma('busy_timeout = 5000'); // Wait up to 5 seconds for locks
+    this.db.pragma('busy_timeout = 10000'); // Wait up to 10 seconds for locks (increased for write-heavy scenarios)
+    this.db.pragma('wal_autocheckpoint = 1000'); // Optimize WAL checkpointing

     this.initSchema();
     this.migrateFromJson(historyDir);

@@ -266,53 +267,37 @@ export class CliHistoryStore {
       const hasProjectRoot = tableInfo.some(col => col.name === 'project_root');
       const hasRelativePath = tableInfo.some(col => col.name === 'relative_path');

+      // Silent migrations - only log warnings/errors
       if (!hasCategory) {
-        console.log('[CLI History] Migrating database: adding category column...');
-        this.db.exec(`
-          ALTER TABLE conversations ADD COLUMN category TEXT DEFAULT 'user';
-        `);
-        // Create index separately to handle potential errors
+        this.db.exec(`ALTER TABLE conversations ADD COLUMN category TEXT DEFAULT 'user';`);
         try {
           this.db.exec(`CREATE INDEX IF NOT EXISTS idx_conversations_category ON conversations(category);`);
         } catch (indexErr) {
           console.warn('[CLI History] Category index creation warning:', (indexErr as Error).message);
         }
-        console.log('[CLI History] Migration complete: category column added');
       }

       if (!hasParentExecutionId) {
-        console.log('[CLI History] Migrating database: adding parent_execution_id column...');
-        this.db.exec(`
-          ALTER TABLE conversations ADD COLUMN parent_execution_id TEXT;
-        `);
+        this.db.exec(`ALTER TABLE conversations ADD COLUMN parent_execution_id TEXT;`);
         try {
           this.db.exec(`CREATE INDEX IF NOT EXISTS idx_conversations_parent ON conversations(parent_execution_id);`);
         } catch (indexErr) {
           console.warn('[CLI History] Parent execution index creation warning:', (indexErr as Error).message);
         }
-        console.log('[CLI History] Migration complete: parent_execution_id column added');
       }

       // Add hierarchical storage support columns
       if (!hasProjectRoot) {
-        console.log('[CLI History] Migrating database: adding project_root column for hierarchical storage...');
-        this.db.exec(`
-          ALTER TABLE conversations ADD COLUMN project_root TEXT;
-        `);
+        this.db.exec(`ALTER TABLE conversations ADD COLUMN project_root TEXT;`);
         try {
           this.db.exec(`CREATE INDEX IF NOT EXISTS idx_conversations_project_root ON conversations(project_root);`);
         } catch (indexErr) {
           console.warn('[CLI History] Project root index creation warning:', (indexErr as Error).message);
         }
-        console.log('[CLI History] Migration complete: project_root column added');
       }

       if (!hasRelativePath) {
-        console.log('[CLI History] Migrating database: adding relative_path column for hierarchical storage...');
-        this.db.exec(`
-          ALTER TABLE conversations ADD COLUMN relative_path TEXT;
-        `);
-        console.log('[CLI History] Migration complete: relative_path column added');
+        this.db.exec(`ALTER TABLE conversations ADD COLUMN relative_path TEXT;`);
       }

       // Add missing timestamp index for turns table (for time-based queries)

@@ -323,9 +308,7 @@ export class CliHistoryStore {
         `).get();

         if (!indexExists) {
-          console.log('[CLI History] Adding missing timestamp index to turns table...');
           this.db.exec(`CREATE INDEX IF NOT EXISTS idx_turns_timestamp ON turns(timestamp DESC);`);
-          console.log('[CLI History] Migration complete: turns timestamp index added');
         }
       } catch (indexErr) {
         console.warn('[CLI History] Turns timestamp index creation warning:', (indexErr as Error).message);

@@ -352,15 +335,11 @@ export class CliHistoryStore {
         }
       }

-      // Batch migration - only output log if there are columns to migrate
+      // Batch migration - silent
       if (missingTurnsColumns.length > 0) {
-        console.log(`[CLI History] Migrating turns table: adding ${missingTurnsColumns.length} columns (${missingTurnsColumns.join(', ')})...`);
         for (const col of missingTurnsColumns) {
           this.db.exec(`ALTER TABLE turns ADD COLUMN ${col} ${turnsColumnDefs[col]};`);
         }
-        console.log('[CLI History] Migration complete: turns table updated');
       }

       // Add transaction_id column to native_session_mapping table for concurrent session disambiguation

@@ -368,16 +347,12 @@ export class CliHistoryStore {
       const hasTransactionId = mappingInfo.some(col => col.name === 'transaction_id');

       if (!hasTransactionId) {
-        console.log('[CLI History] Migrating database: adding transaction_id column to native_session_mapping...');
-        this.db.exec(`
-          ALTER TABLE native_session_mapping ADD COLUMN transaction_id TEXT;
-        `);
+        this.db.exec(`ALTER TABLE native_session_mapping ADD COLUMN transaction_id TEXT;`);
         try {
           this.db.exec(`CREATE INDEX IF NOT EXISTS idx_native_transaction_id ON native_session_mapping(transaction_id);`);
         } catch (indexErr) {
           console.warn('[CLI History] Transaction ID index creation warning:', (indexErr as Error).message);
         }
-        console.log('[CLI History] Migration complete: transaction_id column added');
       }
     } catch (err) {
       console.error('[CLI History] Migration error:', (err as Error).message);

@@ -386,12 +361,13 @@ export class CliHistoryStore {
   }

   /**
-   * Execute a database operation with retry logic for SQLITE_BUSY errors
+   * Execute a database operation with retry logic for SQLITE_BUSY errors (async, non-blocking)
    * @param operation - Function to execute
    * @param maxRetries - Maximum retry attempts (default: 3)
    * @param baseDelay - Base delay in ms for exponential backoff (default: 100)
    */
-  private withRetry<T>(operation: () => T, maxRetries = 3, baseDelay = 100): T {
+  private async withRetry<T>(operation: () => T, maxRetries = 3, baseDelay = 100): Promise<T> {
+    const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));
     let lastError: Error | null = null;
     for (let attempt = 0; attempt <= maxRetries; attempt++) {

@@ -405,10 +381,8 @@ export class CliHistoryStore {
         if (attempt < maxRetries) {
           // Exponential backoff: 100ms, 200ms, 400ms
           const delay = baseDelay * Math.pow(2, attempt);
-          // Sync sleep using Atomics (works in Node.js)
-          const sharedBuffer = new SharedArrayBuffer(4);
-          const sharedArray = new Int32Array(sharedBuffer);
-          Atomics.wait(sharedArray, 0, 0, delay);
+          // Async non-blocking sleep
+          await sleep(delay);
         }
       } else {
         // Non-BUSY error, throw immediately

@@ -421,7 +395,7 @@ export class CliHistoryStore {
   }

   /**
-   * Migrate existing JSON files to SQLite
+   * Migrate existing JSON files to SQLite (async, non-blocking)
    */
   private migrateFromJson(historyDir: string): void {
     const migrationMarker = join(historyDir, '.migrated');

@@ -429,41 +403,50 @@ export class CliHistoryStore {
       return; // Already migrated
     }

-    // Find all date directories
-    const dateDirs = readdirSync(historyDir).filter(d => {
-      const dirPath = join(historyDir, d);
-      return statSync(dirPath).isDirectory() && /^\d{4}-\d{2}-\d{2}$/.test(d);
-    });
-
-    let migratedCount = 0;
-    for (const dateDir of dateDirs) {
-      const dirPath = join(historyDir, dateDir);
-      const files = readdirSync(dirPath).filter(f => f.endsWith('.json'));
-      for (const file of files) {
-        try {
-          const filePath = join(dirPath, file);
-          const data = JSON.parse(readFileSync(filePath, 'utf8'));
-          // Convert to conversation record if legacy format
-          const conversation = this.normalizeRecord(data);
-          this.saveConversation(conversation);
-          migratedCount++;
-          // Optionally delete the JSON file after migration
-          // unlinkSync(filePath);
-        } catch (err) {
-          console.error(`Failed to migrate ${file}:`, (err as Error).message);
-        }
-      }
-    }
-
-    // Create migration marker
-    if (migratedCount > 0) {
-      require('fs').writeFileSync(migrationMarker, new Date().toISOString());
-      console.log(`[CLI History] Migrated ${migratedCount} records to SQLite`);
-    }
+    // Fire-and-forget async migration
+    (async () => {
+      try {
+        // Find all date directories
+        const dateDirs = readdirSync(historyDir).filter(d => {
+          const dirPath = join(historyDir, d);
+          return statSync(dirPath).isDirectory() && /^\d{4}-\d{2}-\d{2}$/.test(d);
+        });
+
+        let migratedCount = 0;
+        for (const dateDir of dateDirs) {
+          const dirPath = join(historyDir, dateDir);
+          const files = readdirSync(dirPath).filter(f => f.endsWith('.json'));
+          for (const file of files) {
+            try {
+              const filePath = join(dirPath, file);
+              const data = JSON.parse(readFileSync(filePath, 'utf8'));
+              // Convert to conversation record if legacy format
+              const conversation = this.normalizeRecord(data);
+              await this.saveConversation(conversation);
+              migratedCount++;
+              // Optionally delete the JSON file after migration
+              // unlinkSync(filePath);
+            } catch (err) {
+              console.error(`Failed to migrate ${file}:`, (err as Error).message);
+            }
+          }
+        }
+
+        // Create migration marker
+        if (migratedCount > 0) {
+          require('fs').writeFileSync(migrationMarker, new Date().toISOString());
+          console.log(`[CLI History] Migrated ${migratedCount} records to SQLite`);
+        }
+      } catch (err) {
+        console.error('[CLI History] Migration failed:', (err as Error).message);
+      }
+    })().catch(err => {
+      console.error('[CLI History] Migration error:', (err as Error).message);
    });
   }

@@ -499,9 +482,9 @@ export class CliHistoryStore {
   }

   /**
-   * Save or update a conversation
+   * Save or update a conversation (async for non-blocking retry)
    */
-  saveConversation(conversation: ConversationRecord): void {
+  async saveConversation(conversation: ConversationRecord): Promise<void> {
     // Ensure prompt is a string before calling substring
     const lastTurn = conversation.turns.length > 0 ? conversation.turns[conversation.turns.length - 1] : null;
     const rawPrompt = lastTurn?.prompt ?? '';

@@ -579,7 +562,7 @@ export class CliHistoryStore {
       }
     });

-    this.withRetry(() => transaction());
+    await this.withRetry(() => transaction());
   }

   /**
@@ -852,11 +835,11 @@ export class CliHistoryStore {
   }

   /**
-   * Delete a conversation
+   * Delete a conversation (async for non-blocking retry)
    */
-  deleteConversation(id: string): { success: boolean; error?: string } {
+  async deleteConversation(id: string): Promise<{ success: boolean; error?: string }> {
     try {
-      const result = this.withRetry(() =>
+      const result = await this.withRetry(() =>
         this.db.prepare('DELETE FROM conversations WHERE id = ?').run(id)
       );
       return { success: result.changes > 0 };

@@ -866,9 +849,9 @@ export class CliHistoryStore {
   }

   /**
-   * Batch delete conversations
+   * Batch delete conversations (async for non-blocking retry)
    */
-  batchDelete(ids: string[]): { success: boolean; deleted: number; errors?: string[] } {
+  async batchDelete(ids: string[]): Promise<{ success: boolean; deleted: number; errors?: string[] }> {
     const deleteStmt = this.db.prepare('DELETE FROM conversations WHERE id = ?');
     const errors: string[] = [];
     let deleted = 0;

@@ -884,7 +867,7 @@ export class CliHistoryStore {
       }
     });

-    this.withRetry(() => transaction());
+    await this.withRetry(() => transaction());

     return {
       success: true,

@@ -947,9 +930,9 @@ export class CliHistoryStore {
   // ========== Native Session Mapping Methods ==========

   /**
-   * Save or update native session mapping
+   * Save or update native session mapping (async for non-blocking retry)
    */
-  saveNativeSessionMapping(mapping: NativeSessionMapping): void {
+  async saveNativeSessionMapping(mapping: NativeSessionMapping): Promise<void> {
     const stmt = this.db.prepare(`
       INSERT INTO native_session_mapping (ccw_id, tool, native_session_id, native_session_path, project_hash, transaction_id, created_at)
       VALUES (@ccw_id, @tool, @native_session_id, @native_session_path, @project_hash, @transaction_id, @created_at)

@@ -960,7 +943,7 @@ export class CliHistoryStore {
         transaction_id = @transaction_id
     `);

-    this.withRetry(() => stmt.run({
+    await this.withRetry(() => stmt.run({
       ccw_id: mapping.ccw_id,
       tool: mapping.tool,
       native_session_id: mapping.native_session_id,

@@ -1144,7 +1127,7 @@ export class CliHistoryStore {
       // Persist mapping for future loads (best-effort).
       try {
-        this.saveNativeSessionMapping({
+        await this.saveNativeSessionMapping({
           ccw_id: ccwId,
           tool,
           native_session_id: best.sessionId,

@@ -1290,9 +1273,9 @@ export class CliHistoryStore {
   // ========== Insights Methods ==========

   /**
-   * Save an insights analysis result
+   * Save an insights analysis result (async for non-blocking retry)
    */
-  saveInsight(insight: {
+  async saveInsight(insight: {
     id: string;
     tool: string;
     promptCount: number;

@@ -1301,13 +1284,13 @@ export class CliHistoryStore {
     rawOutput?: string;
     executionId?: string;
     lang?: string;
-  }): void {
+  }): Promise<void> {
     const stmt = this.db.prepare(`
       INSERT OR REPLACE INTO insights (id, created_at, tool, prompt_count, patterns, suggestions, raw_output, execution_id, lang)
       VALUES (@id, @created_at, @tool, @prompt_count, @patterns, @suggestions, @raw_output, @execution_id, @lang)
     `);

-    this.withRetry(() => stmt.run({
+    await this.withRetry(() => stmt.run({
       id: insight.id,
       created_at: new Date().toISOString(),
       tool: insight.tool,

@@ -1391,9 +1374,9 @@ export class CliHistoryStore {
   }

   /**
-   * Save or update a review for an execution
+   * Save or update a review for an execution (async for non-blocking retry)
    */
-  saveReview(review: Omit<ReviewRecord, 'id' | 'created_at' | 'updated_at'> & { created_at?: string; updated_at?: string }): ReviewRecord {
+  async saveReview(review: Omit<ReviewRecord, 'id' | 'created_at' | 'updated_at'> & { created_at?: string; updated_at?: string }): Promise<ReviewRecord> {
     const now = new Date().toISOString();
     const created_at = review.created_at || now;
     const updated_at = review.updated_at || now;

@@ -1409,7 +1392,7 @@ export class CliHistoryStore {
         updated_at = @updated_at
     `);

-    const result = this.withRetry(() => stmt.run({
+    const result = await this.withRetry(() => stmt.run({
       execution_id: review.execution_id,
       status: review.status,
       rating: review.rating ?? null,


@@ -516,6 +516,36 @@ async function ensureLiteLLMEmbedderReady(): Promise<BootstrapResult> {
   // Fallback: Use pip for installation
   const pipPath = getCodexLensPip();
+  // UV-created venvs may not ship with pip.exe. Ensure pip exists before using pip fallback.
+  if (!existsSync(pipPath)) {
+    const venvPython = getCodexLensPython();
+    console.warn(`[CodexLens] pip not found at: ${pipPath}. Attempting to bootstrap pip with ensurepip...`);
+    try {
+      execSync(`"${venvPython}" -m ensurepip --upgrade`, {
+        stdio: 'inherit',
+        timeout: EXEC_TIMEOUTS.PACKAGE_INSTALL,
+      });
+    } catch (err) {
+      console.warn(`[CodexLens] ensurepip failed: ${(err as Error).message}`);
+    }
+  }
+  if (!existsSync(pipPath)) {
+    return {
+      success: false,
+      error: `pip not found at ${pipPath}. Delete ${getCodexLensVenvDir()} and retry, or reinstall using UV.`,
+      diagnostics: {
+        packagePath: localPath || undefined,
+        venvPath: getCodexLensVenvDir(),
+        installer: 'pip',
+        editable,
+        searchedPaths: !localPath ? discovery.searchedPaths : undefined,
+      },
+      warnings: warnings.length > 0 ? warnings : undefined,
+    };
+  }
   try {
     if (localPath) {
       const pipFlag = editable ? '-e' : '';
@@ -1011,7 +1041,45 @@ async function bootstrapVenv(): Promise<BootstrapResult> {
   // Install codex-lens
   try {
     console.log('[CodexLens] Installing codex-lens package...');
     const pipPath = getCodexLensPip();
+    // UV-created venvs may not ship with pip.exe. Ensure pip exists before using pip fallback.
+    if (!existsSync(pipPath)) {
+      const venvPython = getCodexLensPython();
+      console.warn(`[CodexLens] pip not found at: ${pipPath}. Attempting to bootstrap pip with ensurepip...`);
+      try {
+        execSync(`"${venvPython}" -m ensurepip --upgrade`, {
+          stdio: 'inherit',
+          timeout: EXEC_TIMEOUTS.PACKAGE_INSTALL,
+        });
+      } catch (err) {
+        console.warn(`[CodexLens] ensurepip failed: ${(err as Error).message}`);
+      }
+    }
+    // If pip is still missing, recreate the venv using system Python (guarantees pip).
+    if (!existsSync(pipPath)) {
+      console.warn('[CodexLens] pip still missing after ensurepip; recreating venv with system Python...');
+      try {
+        rmSync(venvDir, { recursive: true, force: true });
+        const pythonCmd = getSystemPython();
+        execSync(`${pythonCmd} -m venv "${venvDir}"`, { stdio: 'inherit', timeout: EXEC_TIMEOUTS.PROCESS_SPAWN });
+      } catch (err) {
+        return {
+          success: false,
+          error: `Failed to recreate venv for pip fallback: ${(err as Error).message}`,
+          warnings: warnings.length > 0 ? warnings : undefined,
+        };
+      }
+      if (!existsSync(pipPath)) {
+        return {
+          success: false,
+          error: `pip not found at ${pipPath} after venv recreation. Ensure your Python installation includes ensurepip/pip.`,
+          warnings: warnings.length > 0 ? warnings : undefined,
+        };
+      }
+    }
     // Try local path using unified discovery
     const discovery = findCodexLensPath();


@@ -0,0 +1,120 @@
/**
* Regression test: CodexLens bootstrap should recover when UV bootstrap fails
* and the existing venv is missing pip (common with UV-created venvs).
*
* We simulate "UV available but broken" by pointing CCW_UV_PATH to the current Node
* executable. `node --version` exits 0 so isUvAvailable() returns true, but any
* `node pip install ...` calls fail, forcing bootstrapVenv() to fall back to pip.
*
* Before running bootstrapVenv(), we pre-create the venv and delete its pip entrypoint
* to mimic a venv that has Python but no pip executable. bootstrapVenv() should
* re-bootstrap pip (ensurepip) or recreate the venv, then succeed.
*/
import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import { spawn } from 'node:child_process';
import { mkdtempSync, rmSync } from 'node:fs';
import { dirname, join } from 'node:path';
import { tmpdir } from 'node:os';
import { fileURLToPath } from 'node:url';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
// repo root: <repo>/ccw/tests -> <repo>
const REPO_ROOT = join(__dirname, '..', '..');
function runNodeEvalModule(script, env) {
return new Promise((resolve, reject) => {
const child = spawn(process.execPath, ['--input-type=module', '-e', script], {
cwd: REPO_ROOT,
env,
stdio: ['ignore', 'pipe', 'pipe'],
windowsHide: true,
});
let stdout = '';
let stderr = '';
child.stdout.on('data', (d) => { stdout += d.toString(); });
child.stderr.on('data', (d) => { stderr += d.toString(); });
child.on('error', (err) => reject(err));
child.on('close', (code) => resolve({ code, stdout, stderr }));
});
}
describe('CodexLens bootstrap pip repair', () => {
it('repairs missing pip in existing venv during pip fallback', { timeout: 10 * 60 * 1000 }, async () => {
const dataDir = mkdtempSync(join(tmpdir(), 'codexlens-bootstrap-pip-missing-'));
try {
const script = `
import { execSync } from 'node:child_process';
import { existsSync, rmSync, mkdirSync } from 'node:fs';
import { join } from 'node:path';
import { getSystemPython } from './ccw/dist/utils/python-utils.js';
import { bootstrapVenv } from './ccw/dist/tools/codex-lens.js';
const dataDir = process.env.CODEXLENS_DATA_DIR;
if (!dataDir) throw new Error('Missing CODEXLENS_DATA_DIR');
mkdirSync(dataDir, { recursive: true });
const venvDir = join(dataDir, 'venv');
// Create a venv up-front so UV bootstrap will skip venv creation and fail on install.
const pythonCmd = getSystemPython();
execSync(pythonCmd + ' -m venv "' + venvDir + '"', { stdio: 'inherit' });
// Simulate a "pip-less" venv by deleting the pip entrypoint.
const pipPath = process.platform === 'win32'
? join(venvDir, 'Scripts', 'pip.exe')
: join(venvDir, 'bin', 'pip');
if (existsSync(pipPath)) {
rmSync(pipPath, { force: true });
}
const result = await bootstrapVenv();
const pipRestored = existsSync(pipPath);
console.log('@@RESULT@@' + JSON.stringify({ result, pipRestored }));
`.trim();
const env = {
...process.env,
// Isolate test venv + dependencies from user/global CodexLens state.
CODEXLENS_DATA_DIR: dataDir,
// Make isUvAvailable() return true, but installFromProject() fail.
CCW_UV_PATH: process.execPath,
};
const { code, stdout, stderr } = await runNodeEvalModule(script, env);
assert.equal(code, 0, `bootstrapVenv child process failed:\nSTDOUT:\n${stdout}\nSTDERR:\n${stderr}`);
const marker = '@@RESULT@@';
const idx = stdout.lastIndexOf(marker);
assert.ok(idx !== -1, `Missing result marker in stdout:\n${stdout}`);
const jsonText = stdout.slice(idx + marker.length).trim();
const parsed = JSON.parse(jsonText);
assert.equal(parsed?.result?.success, true, `Expected success=true, got:\n${jsonText}`);
assert.equal(parsed?.pipRestored, true, `Expected pipRestored=true, got:\n${jsonText}`);
// Best-effort: confirm we exercised the missing-pip repair path.
assert.ok(
String(stderr).includes('pip not found at:') || String(stdout).includes('pip not found at:'),
`Expected missing-pip warning in output. STDERR:\n${stderr}\nSTDOUT:\n${stdout}`
);
} finally {
try {
rmSync(dataDir, { recursive: true, force: true });
} catch {
// Best effort cleanup; leave artifacts only if Windows locks prevent removal.
}
}
});
});


@@ -30,7 +30,7 @@ dependencies = [
 [project.optional-dependencies]
 # Semantic search using fastembed (ONNX-based, lightweight ~200MB)
 semantic = [
-    "numpy~=1.24.0",
+    "numpy~=1.26.0",
     "fastembed~=0.2.0",
     "hnswlib~=0.8.0",
 ]
@@ -38,7 +38,7 @@ semantic = [
 # GPU acceleration for semantic search (NVIDIA CUDA)
 # Install with: pip install codexlens[semantic-gpu]
 semantic-gpu = [
-    "numpy~=1.24.0",
+    "numpy~=1.26.0",
     "fastembed~=0.2.0",
     "hnswlib~=0.8.0",
     "onnxruntime-gpu~=1.15.0",  # CUDA support
@@ -47,7 +47,7 @@ semantic-gpu = [
 # GPU acceleration for Windows (DirectML - supports NVIDIA/AMD/Intel)
 # Install with: pip install codexlens[semantic-directml]
 semantic-directml = [
-    "numpy~=1.24.0",
+    "numpy~=1.26.0",
     "fastembed~=0.2.0",
     "hnswlib~=0.8.0",
     "onnxruntime-directml~=1.15.0",  # DirectML support


@@ -19,16 +19,16 @@ Requires-Dist: pathspec~=0.11.0
 Requires-Dist: watchdog~=3.0.0
 Requires-Dist: ast-grep-py~=0.40.0
 Provides-Extra: semantic
-Requires-Dist: numpy~=1.24.0; extra == "semantic"
+Requires-Dist: numpy~=1.26.0; extra == "semantic"
 Requires-Dist: fastembed~=0.2.0; extra == "semantic"
 Requires-Dist: hnswlib~=0.8.0; extra == "semantic"
 Provides-Extra: semantic-gpu
-Requires-Dist: numpy~=1.24.0; extra == "semantic-gpu"
+Requires-Dist: numpy~=1.26.0; extra == "semantic-gpu"
 Requires-Dist: fastembed~=0.2.0; extra == "semantic-gpu"
 Requires-Dist: hnswlib~=0.8.0; extra == "semantic-gpu"
 Requires-Dist: onnxruntime-gpu~=1.15.0; extra == "semantic-gpu"
 Provides-Extra: semantic-directml
-Requires-Dist: numpy~=1.24.0; extra == "semantic-directml"
+Requires-Dist: numpy~=1.26.0; extra == "semantic-directml"
 Requires-Dist: fastembed~=0.2.0; extra == "semantic-directml"
 Requires-Dist: hnswlib~=0.8.0; extra == "semantic-directml"
 Requires-Dist: onnxruntime-directml~=1.15.0; extra == "semantic-directml"


@@ -42,18 +42,18 @@ onnxruntime~=1.15.0
 transformers~=4.36.0

 [semantic]
-numpy~=1.24.0
+numpy~=1.26.0
 fastembed~=0.2.0
 hnswlib~=0.8.0

 [semantic-directml]
-numpy~=1.24.0
+numpy~=1.26.0
 fastembed~=0.2.0
 hnswlib~=0.8.0
 onnxruntime-directml~=1.15.0

 [semantic-gpu]
-numpy~=1.24.0
+numpy~=1.26.0
 fastembed~=0.2.0
 hnswlib~=0.8.0
 onnxruntime-gpu~=1.15.0


@@ -19,20 +19,15 @@ export default withMermaid(defineConfig({
   // Ignore dead links for incomplete docs
   ignoreDeadLinks: true,
   head: [
-    ['link', { rel: 'icon', href: '/favicon.svg', type: 'image/svg+xml' }],
+    ['link', { rel: 'icon', href: `${base}favicon.svg`, type: 'image/svg+xml' }],
     [
       'script',
       {},
-      `(() => {
+      `(function() {
         try {
-          const theme = localStorage.getItem('ccw-theme') || 'blue'
-          document.documentElement.setAttribute('data-theme', theme)
-          const mode = localStorage.getItem('ccw-color-mode') || 'auto'
-          const prefersDark = window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches
-          const isDark = mode === 'dark' || (mode === 'auto' && prefersDark)
-          document.documentElement.classList.toggle('dark', isDark)
-        } catch {}
+          var theme = localStorage.getItem('ccw-theme') || 'blue';
+          document.documentElement.setAttribute('data-theme', theme);
+        } catch (e) {}
       })()`
     ],
     ['meta', { name: 'theme-color', content: '#3b82f6' }],
@@ -80,8 +75,7 @@ export default withMermaid(defineConfig({
       { text: 'Guide', link: '/guide/ch01-what-is-claude-dms3' },
       { text: 'Commands', link: '/commands/claude/' },
       { text: 'Skills', link: '/skills/' },
-      { text: 'Features', link: '/features/spec' },
-      { text: 'Components', link: '/components/' }
+      { text: 'Features', link: '/features/spec' }
     ],

     // Sidebar - 优化导航结构,增加二级标题和归类
@@ -107,6 +101,15 @@ export default withMermaid(defineConfig({
           { text: 'First Workflow', link: '/guide/first-workflow' },
           { text: 'CLI Tools', link: '/guide/cli-tools' }
         ]
+      },
+      {
+        text: '📝 CLAUDE.md & MCP',
+        collapsible: true,
+        items: [
+          { text: 'CLAUDE.md Guide', link: '/guide/claude-md' },
+          { text: 'CCW MCP Tools', link: '/guide/ccwmcp' },
+          { text: 'MCP Setup', link: '/guide/mcp-setup' }
+        ]
       }
     ],
     '/commands/': [
@@ -133,6 +136,13 @@ export default withMermaid(defineConfig({
           { text: 'Prep', link: '/commands/codex/prep' },
           { text: 'Review', link: '/commands/codex/review' }
         ]
+      },
+      {
+        text: '🔧 CLI Reference',
+        collapsible: true,
+        items: [
+          { text: 'CLI Commands', link: '/cli/commands' }
+        ]
       }
     ],
     '/skills/': [
@@ -180,6 +190,34 @@ export default withMermaid(defineConfig({
           { text: 'Core Skills', link: '/skills/core-skills' },
           { text: 'Reference', link: '/skills/reference' }
         ]
+      },
+      {
+        text: '📋 Skill Specifications',
+        collapsible: true,
+        items: [
+          { text: 'Document Standards', link: '/skills/specs/document-standards' },
+          { text: 'Issue Classification', link: '/skills/specs/issue-classification' },
+          { text: 'Quality Gates', link: '/skills/specs/quality-gates' },
+          { text: 'Quality Standards', link: '/skills/specs/quality-standards' },
+          { text: 'Reference Docs Spec', link: '/skills/specs/reference-docs-spec' },
+          { text: 'Review Dimensions', link: '/skills/specs/review-dimensions' }
+        ]
+      },
+      {
+        text: '📄 Skill Templates',
+        collapsible: true,
+        items: [
+          { text: 'Architecture Doc', link: '/skills/templates/architecture-doc' },
+          { text: 'Autonomous Action', link: '/skills/templates/autonomous-action' },
+          { text: 'Autonomous Orchestrator', link: '/skills/templates/autonomous-orchestrator' },
+          { text: 'Epics Template', link: '/skills/templates/epics-template' },
+          { text: 'Issue Template', link: '/skills/templates/issue-template' },
+          { text: 'Product Brief', link: '/skills/templates/product-brief' },
+          { text: 'Requirements PRD', link: '/skills/templates/requirements-prd' },
+          { text: 'Review Report', link: '/skills/templates/review-report' },
+          { text: 'Sequential Phase', link: '/skills/templates/sequential-phase' },
+          { text: 'Skill Markdown', link: '/skills/templates/skill-md' }
+        ]
       }
     ],
     '/features/': [
@@ -199,7 +237,35 @@ export default withMermaid(defineConfig({
         collapsible: true,
         items: [
           { text: 'API Settings', link: '/features/api-settings' },
-          { text: 'System Settings', link: '/features/system-settings' }
+          { text: 'System Settings', link: '/features/system-settings' },
+          { text: 'Settings', link: '/features/settings' }
+        ]
+      },
+      {
+        text: '🔍 Discovery & Explorer',
+        collapsible: true,
+        items: [
+          { text: 'Discovery', link: '/features/discovery' },
+          { text: 'Explorer', link: '/features/explorer' },
+          { text: 'Extensions', link: '/features/extensions' }
+        ]
+      },
+      {
+        text: '📋 Issues & Tasks',
+        collapsible: true,
+        items: [
+          { text: 'Issue Hub', link: '/features/issue-hub' },
+          { text: 'Orchestrator', link: '/features/orchestrator' },
+          { text: 'Queue', link: '/features/queue' },
+          { text: 'Sessions', link: '/features/sessions' },
+          { text: 'Tasks History', link: '/features/tasks-history' }
+        ]
+      },
+      {
+        text: '🖥️ Terminal',
+        collapsible: true,
+        items: [
+          { text: 'Terminal Dashboard', link: '/features/terminal' }
         ]
       }
     ],
@@ -227,6 +293,17 @@ export default withMermaid(defineConfig({
         ]
       }
     ],
+    '/reference/': [
+      {
+        text: '📚 Reference',
+        collapsible: true,
+        items: [
+          { text: 'Commands & Skills', link: '/reference/commands-skills' },
+          { text: 'Claude Code Hooks', link: '/reference/claude-code-hooks-guide' },
+          { text: 'Hook Templates Analysis', link: '/reference/hook-templates-analysis' }
+        ]
+      }
+    ],
     '/agents/': [
       {
         text: '🤖 Agents',
@@ -244,6 +321,7 @@ export default withMermaid(defineConfig({
         collapsible: true,
         items: [
           { text: 'Overview', link: '/workflows/' },
+          { text: 'Comparison Table', link: '/workflows/comparison-table' },
           { text: '4-Level System', link: '/workflows/4-level' },
           { text: 'Examples', link: '/workflows/examples' },
           { text: 'Best Practices', link: '/workflows/best-practices' },
@@ -343,7 +421,8 @@ export default withMermaid(defineConfig({
         { text: '指南', link: '/zh/guide/ch01-what-is-claude-dms3' },
         { text: '命令', link: '/zh/commands/claude/' },
         { text: '技能', link: '/zh/skills/claude-index' },
-        { text: '功能', link: '/zh/features/spec' }
+        { text: '功能', link: '/zh/features/spec' },
+        { text: '参考', link: '/zh/reference/commands-skills' }
       ],

       sidebar: {
         '/zh/guide/': [
@@ -451,6 +530,8 @@ export default withMermaid(defineConfig({
             { text: 'Memory 记忆系统', link: '/zh/features/memory' },
             { text: 'CLI 调用', link: '/zh/features/cli' },
             { text: 'Dashboard 面板', link: '/zh/features/dashboard' },
+            { text: 'Terminal 终端仪表板', link: '/zh/features/terminal' },
+            { text: 'Queue 队列管理', link: '/zh/features/queue' },
             { text: 'CodexLens', link: '/zh/features/codexlens' }
           ]
         },
@@ -463,48 +544,45 @@ export default withMermaid(defineConfig({
             ]
           }
         ],
-        '/zh/components/': [
-          {
-            text: 'UI 组件',
-            collapsible: true,
-            items: [
-              { text: '概述', link: '/zh/components/index' },
-              { text: 'Button 按钮', link: '/zh/components/ui/button' },
-              { text: 'Card 卡片', link: '/zh/components/ui/card' },
-              { text: 'Input 输入框', link: '/zh/components/ui/input' },
-              { text: 'Select 选择器', link: '/zh/components/ui/select' },
-              { text: 'Checkbox 复选框', link: '/zh/components/ui/checkbox' },
-              { text: 'Badge 徽章', link: '/zh/components/ui/badge' }
-            ]
-          }
-        ],
         '/zh/workflows/': [
           {
             text: '🔄 工作流系统',
             collapsible: true,
             items: [
               { text: '概述', link: '/zh/workflows/' },
+              { text: '工作流对比', link: '/zh/workflows/comparison-table' },
               { text: '四级体系', link: '/zh/workflows/4-level' },
               { text: '最佳实践', link: '/zh/workflows/best-practices' },
               { text: '团队协作', link: '/zh/workflows/teams' }
             ]
           }
+        ],
+        '/zh/reference/': [
+          {
+            text: '📚 参考',
+            collapsible: true,
+            items: [
+              { text: '命令与技能参考', link: '/zh/reference/commands-skills' }
+            ]
+          }
         ]
       }
     }
   },
-  'zh-CN': {
-    label: '简体中文',
-    lang: 'zh-CN',
-    title: 'Claude Code Workflow 文档',
-    description: 'Claude Code Workspace - 高级 AI 驱动开发环境',
-    themeConfig: {
-      outline: {
-        level: [2, 3],
-        label: '本页目录'
-      },
-      nav: [
-        { text: '功能', link: '/zh-CN/features/dashboard' }
-      ],
-      sidebar: {
-        '/zh-CN/features/': [
-          {
-            text: '⚙️ 核心功能',
-            collapsible: false,
-            items: [
-              { text: 'Dashboard 面板', link: '/zh-CN/features/dashboard' },
-              { text: 'Terminal 终端监控', link: '/zh-CN/features/terminal' },
-              { text: 'Queue 队列管理', link: '/zh-CN/features/queue' }
-            ]
-          }
-        ]
-      }
-    }
-  }
 }
 }))


@@ -80,7 +80,7 @@ onMounted(async () => {
     }
   } else {
     // Dynamically import demo component from file
-    const demoModule = await import(`../demos/${props.name}.tsx`)
+    const demoModule = await import(`../../demos/${props.name}.tsx`)
     const DemoComponent = demoModule.default || demoModule[props.name]

     if (!DemoComponent) {
@@ -94,7 +94,7 @@ onMounted(async () => {
   // Extract source code
   try {
-    const rawModule = await import(`../demos/${props.name}.tsx?raw`)
+    const rawModule = await import(`../../demos/${props.name}.tsx?raw`)
     sourceCode.value = rawModule.default || rawModule
   } catch {
     sourceCode.value = '// Source code not available'


@@ -27,13 +27,20 @@ const currentLocale = computed(() => {
 // Get alternate language link for current page
 const getAltLink = (localeCode: string) => {
-  if (localeCode === 'root') localeCode = ''
-
-  // Get current page path without locale prefix
-  const currentPath = page.value.relativePath
-  const altPath = localeCode ? `/${localeCode}/${currentPath}` : `/${currentPath}`
-  return altPath
+  // Get current path and strip any existing locale prefix
+  let currentPath = page.value.relativePath
+  // Strip locale prefixes (zh/, zh-CN/) from path
+  currentPath = currentPath.replace(/^(zh-CN|zh)\//, '')
+  // Remove .md extension for clean URL (VitePress uses .html)
+  currentPath = currentPath.replace(/\.md$/, '')
+  // Construct target path with locale prefix
+  if (localeCode === 'root' || localeCode === '') {
+    return `/${currentPath}`
+  }
+  return `/${localeCode}/${currentPath}`
 }

 const switchLanguage = (localeCode: string) => {
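The locale-link mapping above can be exercised in isolation (a minimal sketch; `altLink` is a hypothetical standalone copy of the logic, with the page path passed in instead of read from VitePress state):

```javascript
// Strip any zh/ or zh-CN/ prefix and the .md extension, then
// re-prefix with the target locale (root locale gets no prefix).
function altLink(relativePath, localeCode) {
  let p = relativePath.replace(/^(zh-CN|zh)\//, '').replace(/\.md$/, '');
  return localeCode === 'root' || localeCode === '' ? `/${p}` : `/${localeCode}/${p}`;
}

console.log(altLink('zh/guide/cli-tools.md', 'root')); // → /guide/cli-tools
console.log(altLink('guide/cli-tools.md', 'zh'));      // → /zh/guide/cli-tools
```

Stripping the existing prefix first is what makes the switch idempotent: switching zh → zh no longer stacks a second `/zh/` onto the path.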


@@ -196,7 +196,7 @@ onBeforeUnmount(() => {
 /* Container queries in mobile.css provide additional responsiveness */

 /* Mobile-specific styles */
-@media (max-width: var(--bp-sm)) {
+@media (max-width: 767px) {
   .hero-extensions {
     margin-top: 1rem;
     padding: 0 12px;


@@ -1,7 +1,7 @@
/** /**
* VitePress Custom Styles * VitePress Custom Styles
* Overrides and extensions for default VitePress theme * Overrides and extensions for default VitePress theme
* Design System: ui-ux-pro-max — dark-mode-first, developer-focused * Design System: ui-ux-pro-max — flat design, developer-focused, documentation-optimized
*/ */
/* ============================================ /* ============================================
@@ -33,10 +33,14 @@
--vp-custom-block-danger-border: #fecaca; --vp-custom-block-danger-border: #fecaca;
--vp-custom-block-danger-text: #b91c1c; --vp-custom-block-danger-text: #b91c1c;
/* Layout Width Adjustments */ /* Layout Width - Wider content area (using rem for zoom responsiveness) */
--vp-layout-max-width: 1600px; --vp-layout-max-width: 100rem; /* 1600px / 16 */
--vp-content-width: 1000px; --vp-content-width: 57.5rem; /* 920px / 16 */
--vp-sidebar-width: 272px; --vp-sidebar-width: 17.5rem; /* 280px / 16 */
--vp-toc-width: 12.5rem; /* 200px / 16 */
/* Prose width for optimal readability */
--vp-prose-width: 51.25rem; /* 820px / 16 */
} }
.dark { .dark {
@@ -55,12 +59,9 @@
/* ============================================ /* ============================================
* Layout Container Adjustments * Layout Container Adjustments
* Use CSS variables in :root to control layout widths
* Do not override VitePress layout calculations with !important
* ============================================ */ * ============================================ */
.VPDoc .content-container,
.VPDoc.has-aside .content-container {
max-width: 90% !important;
padding: 0 32px;
}
/* Adjust sidebar and content layout */ /* Adjust sidebar and content layout */
/* NOTE: Removed duplicate padding-left - VitePress already handles sidebar layout */ /* NOTE: Removed duplicate padding-left - VitePress already handles sidebar layout */
@@ -77,6 +78,18 @@
padding: 4px 8px; padding: 4px 8px;
} }
/* ============================================
* Navigation Bar Border
* ============================================ */
.VPNav {
border-bottom: 1px solid var(--vp-c-divider);
}
/* ============================================
* Desktop Layout Adjustments
* Grid-based layout is defined in mobile.css @media (min-width: 1024px)
* ============================================ */
/* ============================================ /* ============================================
* Home Page Override * Home Page Override
* ============================================ */ * ============================================ */
@@ -97,6 +110,12 @@
max-width: 100vw; max-width: 100vw;
} }
/* Home page navbar - reduce title left margin since no sidebar */
.Layout:has(.pro-home) .VPNavBar .content-body,
.Layout:has(.pro-home) .VPNavBar .title {
padding-left: 0 !important;
}
/* Ensure VPFooter doesn't cause overflow */ /* Ensure VPFooter doesn't cause overflow */
.Layout:has(.pro-home) .VPFooter { .Layout:has(.pro-home) .VPFooter {
max-width: 100vw; max-width: 100vw;
@@ -136,54 +155,111 @@
/* ============================================ /* ============================================
* Documentation Content Typography * Documentation Content Typography
* Optimized for readability (65-75 chars per line)
* ============================================ */ * ============================================ */
.vp-doc {
font-family: var(--vp-font-family-base);
line-height: var(--vp-line-height-relaxed);
color: var(--vp-c-text-1);
}
/* Heading hierarchy with clear visual distinction */
.vp-doc h1 { .vp-doc h1 {
font-family: var(--vp-font-family-heading);
font-size: var(--vp-font-size-4xl);
font-weight: 800; font-weight: 800;
letter-spacing: -0.02em; letter-spacing: -0.025em;
margin-bottom: 1.5rem; line-height: var(--vp-line-height-tight);
margin-bottom: var(--vp-spacing-8);
color: var(--vp-c-text-1);
} }
.vp-doc h2 { .vp-doc h2 {
font-family: var(--vp-font-family-heading);
font-size: var(--vp-font-size-2xl);
font-weight: 700; font-weight: 700;
margin-top: 3rem;
padding-top: 2rem;
letter-spacing: -0.01em; letter-spacing: -0.01em;
line-height: var(--vp-line-height-tight);
margin-top: var(--vp-spacing-12);
padding-top: var(--vp-spacing-8);
border-top: 1px solid var(--vp-c-divider); border-top: 1px solid var(--vp-c-divider);
color: var(--vp-c-text-1);
} }
.vp-doc h2:first-of-type { .vp-doc h2:first-of-type {
margin-top: 1.5rem; margin-top: var(--vp-spacing-6);
border-top: none; border-top: none;
} }
.vp-doc h3 { .vp-doc h3 {
font-family: var(--vp-font-family-heading);
font-size: var(--vp-font-size-xl);
font-weight: 600; font-weight: 600;
margin-top: 2.5rem; line-height: var(--vp-line-height-snug);
margin-top: var(--vp-spacing-10);
color: var(--vp-c-text-1);
} }
.vp-doc h4 { .vp-doc h4 {
font-family: var(--vp-font-family-heading);
font-size: var(--vp-font-size-lg);
font-weight: 600; font-weight: 600;
margin-top: 2rem; line-height: var(--vp-line-height-snug);
margin-top: var(--vp-spacing-8);
color: var(--vp-c-text-2);
} }
.vp-doc h5 {
font-size: var(--vp-font-size-base);
font-weight: 600;
line-height: var(--vp-line-height-snug);
margin-top: var(--vp-spacing-6);
color: var(--vp-c-text-2);
}
.vp-doc h6 {
font-size: var(--vp-font-size-sm);
font-weight: 600;
line-height: var(--vp-line-height-snug);
margin-top: var(--vp-spacing-4);
color: var(--vp-c-text-3);
text-transform: uppercase;
letter-spacing: 0.05em;
}
/* Body text with optimal line height */
.vp-doc p { .vp-doc p {
line-height: 1.8; font-size: var(--vp-font-size-base);
margin: 1.25rem 0; line-height: var(--vp-line-height-relaxed);
margin: var(--vp-spacing-5) 0;
max-width: 100%;
} }
/* Lists with proper spacing */
.vp-doc ul, .vp-doc ul,
.vp-doc ol { .vp-doc ol {
margin: 1.25rem 0; font-size: var(--vp-font-size-base);
padding-left: 1.5rem; line-height: var(--vp-line-height-relaxed);
margin: var(--vp-spacing-5) 0;
padding-left: var(--vp-spacing-6);
max-width: 100%;
} }
.vp-doc li { .vp-doc li {
line-height: 1.8; line-height: var(--vp-line-height-relaxed);
margin: 0.5rem 0; margin: var(--vp-spacing-2) 0;
} }
.vp-doc li + li { .vp-doc li + li {
margin-top: 0.5rem; margin-top: var(--vp-spacing-2);
}
/* Nested lists */
.vp-doc ul ul,
.vp-doc ol ol,
.vp-doc ul ol,
.vp-doc ol ul {
margin: var(--vp-spacing-2) 0;
} }
/* Better spacing for code blocks in lists */ /* Better spacing for code blocks in lists */
@@ -191,6 +267,31 @@
margin: 0 2px; margin: 0 2px;
} }
/* Blockquotes */
.vp-doc blockquote {
margin: var(--vp-spacing-6) 0;
padding: var(--vp-spacing-4) var(--vp-spacing-6);
border-left: 4px solid var(--vp-c-primary);
background: var(--vp-c-bg-soft);
border-radius: 0 var(--vp-radius-lg) var(--vp-radius-lg) 0;
color: var(--vp-c-text-2);
font-style: italic;
}
.vp-doc blockquote p {
margin: var(--vp-spacing-2) 0;
}
/* Strong and emphasis */
.vp-doc strong {
font-weight: 600;
color: var(--vp-c-text-1);
}
.vp-doc em {
font-style: italic;
}
/* ============================================ /* ============================================
* Command Reference Specific Styles * Command Reference Specific Styles
* ============================================ */ * ============================================ */
@@ -212,12 +313,15 @@
/* ============================================ /* ============================================
* Custom Container Blocks * Custom Container Blocks
* Flat design with left accent border
* ============================================ */ * ============================================ */
.custom-container { .custom-container {
margin: 20px 0; margin: var(--vp-spacing-6) 0;
padding: 16px 20px; padding: var(--vp-spacing-4) var(--vp-spacing-5);
border-radius: 12px; border-radius: var(--vp-radius-xl);
border-left: 4px solid; border-left: 4px solid;
background: var(--vp-c-bg-soft);
box-shadow: var(--vp-shadow-xs);
} }
.custom-container.info { .custom-container.info {
@@ -231,103 +335,318 @@
} }
.dark .custom-container.success { .dark .custom-container.success {
background: rgba(16, 185, 129, 0.1); background: rgba(16, 185, 129, 0.08);
} }
.custom-container.tip { .custom-container.tip {
border-radius: 12px; border-radius: var(--vp-radius-xl);
background: rgba(59, 130, 246, 0.05);
} }
.custom-container.warning { .custom-container.warning {
border-radius: 12px; border-radius: var(--vp-radius-xl);
background: rgba(245, 158, 11, 0.05);
} }
.custom-container.danger { .custom-container.danger {
border-radius: 12px; border-radius: var(--vp-radius-xl);
background: rgba(239, 68, 68, 0.05);
}
/* Custom container titles */
.custom-container .custom-container-title {
font-family: var(--vp-font-family-heading);
font-weight: 600;
font-size: var(--vp-font-size-sm);
text-transform: uppercase;
letter-spacing: 0.05em;
margin-bottom: var(--vp-spacing-2);
}
/* VitePress default custom blocks */
.vp-doc .tip,
.vp-doc .warning,
.vp-doc .danger,
.vp-doc .info {
border-radius: var(--vp-radius-xl);
margin: var(--vp-spacing-6) 0;
padding: var(--vp-spacing-4) var(--vp-spacing-5);
box-shadow: var(--vp-shadow-xs);
}
.vp-doc .tip {
background: rgba(59, 130, 246, 0.05);
border-color: var(--vp-c-primary);
}
.vp-doc .warning {
background: rgba(245, 158, 11, 0.05);
border-color: var(--vp-c-accent);
}
.vp-doc .danger {
background: rgba(239, 68, 68, 0.05);
border-color: #ef4444;
}
.vp-doc .info {
background: var(--vp-c-bg-soft);
border-color: var(--vp-c-primary);
}
/* ============================================
 * Code Block Improvements
 * Clean, flat design with subtle borders
 * ============================================ */
.vp-code-group {
margin: var(--vp-spacing-6) 0;
border-radius: var(--vp-radius-xl);
overflow: hidden;
border: 1px solid var(--vp-c-divider);
box-shadow: var(--vp-shadow-sm);
}
.vp-code-group .tabs {
background: var(--vp-c-bg-soft);
border-bottom: 1px solid var(--vp-c-divider);
padding: 0 var(--vp-spacing-2);
}
.vp-code-group div[class*='language-'] {
margin: 0;
border-radius: 0;
border: none;
}
div[class*='language-'] {
border-radius: var(--vp-radius-xl);
margin: var(--vp-spacing-6) 0;
border: 1px solid var(--vp-c-divider);
background: var(--vp-c-bg-soft) !important;
box-shadow: var(--vp-shadow-sm);
}
div[class*='language-'] pre {
line-height: var(--vp-line-height-relaxed);
padding: var(--vp-spacing-5) var(--vp-spacing-6);
}
div[class*='language-'] code {
font-family: var(--vp-font-family-mono);
font-size: var(--vp-font-size-sm);
font-weight: 400;
letter-spacing: -0.01em;
}
/* Code block header with language label */
div[class*='language-'] > span.lang {
font-size: var(--vp-font-size-xs);
font-weight: 500;
text-transform: uppercase;
letter-spacing: 0.05em;
color: var(--vp-c-text-3);
padding: var(--vp-spacing-2) var(--vp-spacing-4);
background: var(--vp-c-bg-soft);
border-radius: var(--vp-radius-md);
}
/* Inline code - subtle styling */
.vp-doc :not(pre) > code {
font-family: var(--vp-font-family-mono);
border-radius: var(--vp-radius-md);
padding: 2px 6px;
font-size: 0.875em;
font-weight: 500;
background: var(--vp-c-bg-soft);
border: 1px solid var(--vp-c-divider);
color: var(--vp-c-primary);
}
/* Copy button styling */
.vp-code-group .vp-copy,
div[class*='language-'] .vp-copy {
border-radius: var(--vp-radius-md);
transition: all var(--vp-transition-color);
}
.vp-code-group .vp-copy:hover,
div[class*='language-'] .vp-copy:hover {
background: var(--vp-c-bg-mute);
}
/* ============================================
 * Table Styling
 * Clean, flat design with subtle borders
 * ============================================ */
.vp-doc table {
border-collapse: collapse;
width: 100%;
margin: var(--vp-spacing-6) 0;
}
.vp-doc table th,
.vp-doc table td {
padding: var(--vp-spacing-3) var(--vp-spacing-4);
border: 1px solid var(--vp-c-divider);
text-align: left;
font-size: var(--vp-font-size-sm);
line-height: var(--vp-line-height-normal);
}
.vp-doc table th {
background: var(--vp-c-bg-soft);
font-family: var(--vp-font-family-heading);
font-weight: 600;
font-size: var(--vp-font-size-xs);
text-transform: uppercase;
letter-spacing: 0.05em;
color: var(--vp-c-text-2);
border-bottom-width: 2px;
}
.vp-doc table td {
color: var(--vp-c-text-1);
vertical-align: top;
}
.vp-doc table tr:hover {
background: var(--vp-c-bg-soft);
}
.vp-doc table tr:hover td {
color: var(--vp-c-text-1);
}
/* Responsive tables */
@media (max-width: 767px) {
.vp-doc table {
display: block;
overflow-x: auto;
-webkit-overflow-scrolling: touch;
}
.vp-doc table th,
.vp-doc table td {
padding: var(--vp-spacing-2) var(--vp-spacing-3);
font-size: var(--vp-font-size-xs);
white-space: nowrap;
}
}
/* ============================================
 * Sidebar Polish
 * Clean, organized navigation
 * ============================================ */
.VPSidebar {
background: var(--vp-c-bg);
border-right: 1px solid var(--vp-c-divider);
}
.VPSidebar .group + .group {
margin-top: var(--vp-spacing-4);
padding-top: var(--vp-spacing-4);
border-top: 1px solid var(--vp-c-divider);
}
/* Sidebar item styling */
.VPSidebar .VPSidebarItem .text {
font-size: var(--vp-font-size-sm);
line-height: var(--vp-line-height-normal);
transition: color var(--vp-transition-color);
}
.VPSidebar .VPSidebarItem .link:hover .text {
color: var(--vp-c-primary);
}
.VPSidebar .VPSidebarItem.is-active .link .text {
color: var(--vp-c-primary);
font-weight: 600;
}
/* Sidebar group titles */
.VPSidebar .VPSidebarGroup .title {
font-family: var(--vp-font-family-heading);
font-size: var(--vp-font-size-xs);
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.05em;
color: var(--vp-c-text-3);
padding: var(--vp-spacing-2) var(--vp-spacing-4);
}
/* Sidebar numbering and indentation */
/* Initialize counter on the sidebar container */
.VPSidebar .VPSidebarGroup,
.VPSidebar nav {
counter-reset: sidebar-item;
}
/* Increment counter for level-1 items only */
.VPSidebar .VPSidebarItem.level-1 {
counter-increment: sidebar-item;
}
/* Add numbers to sidebar items */
.VPSidebar .VPSidebarItem.level-1 .link::before {
content: counter(sidebar-item) ". ";
color: var(--vp-c-text-3);
font-weight: 500;
margin-right: 8px;
}
/* Indentation for different levels */
.VPSidebar .VPSidebarItem.level-1 {
padding-left: 16px;
}
.VPSidebar .VPSidebarItem.level-2 {
padding-left: 32px;
}
.VPSidebar .VPSidebarItem.level-3 {
padding-left: 48px;
}
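/* Illustration: the numbering above is the standard pure-CSS counter pattern.
   A minimal standalone sketch of the same technique follows; the .demo-menu
   class is hypothetical and inert unless something in the markup uses it. */
.demo-menu { counter-reset: demo-item; }          /* start the counter at 0 per list */
.demo-menu li { counter-increment: demo-item; }   /* advance it by 1 for each item */
.demo-menu li::before { content: counter(demo-item) ". "; } /* render "1. ", "2. ", ... */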
/* ============================================
* Navigation Polish
* ============================================ */
.VPNav {
background: var(--vp-c-bg);
border-bottom: 1px solid var(--vp-c-divider);
box-shadow: var(--vp-shadow-xs);
}
.VPNavBar .title {
font-family: var(--vp-font-family-heading);
font-weight: 700;
font-size: var(--vp-font-size-lg);
color: var(--vp-c-text-1);
}
.VPNavBarMenu .VPNavBarMenuLink {
font-family: var(--vp-font-family-heading);
font-weight: 500;
font-size: var(--vp-font-size-sm);
transition: color var(--vp-transition-color);
}
.VPNavBarMenu .VPNavBarMenuLink:hover {
color: var(--vp-c-primary);
}
.VPNavBarMenu .VPNavBarMenuLink.active {
color: var(--vp-c-primary);
font-weight: 600;
}
/* ============================================
 * Scrollbar Styling
 * Subtle, flat design
 * ============================================ */
::-webkit-scrollbar {
width: 8px;
height: 8px;
}
::-webkit-scrollbar-track {
@@ -337,30 +656,153 @@ table tr:hover {
::-webkit-scrollbar-thumb {
background: var(--vp-c-surface-2);
border-radius: var(--vp-radius-full);
border: 2px solid transparent;
background-clip: padding-box;
}
::-webkit-scrollbar-thumb:hover {
background: var(--vp-c-surface-3);
border: 2px solid transparent;
background-clip: padding-box;
}
/* Firefox scrollbar */
* {
scrollbar-width: thin;
scrollbar-color: var(--vp-c-surface-2) transparent;
}
/* ============================================
* TOC (Table of Contents) Polish
* Sticky positioning handled by grid layout in mobile.css
* ============================================ */
.VPDocOutline {
padding-left: 24px;
}
.VPDocOutline .outline-link {
font-family: var(--vp-font-family-base);
font-size: var(--vp-font-size-sm);
line-height: var(--vp-line-height-normal);
padding: var(--vp-spacing-1) var(--vp-spacing-2);
color: var(--vp-c-text-3);
transition: color var(--vp-transition-color);
border-radius: var(--vp-radius-md);
}
.VPDocOutline .outline-link:hover {
color: var(--vp-c-primary);
background: var(--vp-c-bg-soft);
}
/* Match VitePress actual DOM structure: <li><a class="outline-link active">...</a></li> */
/* Override VitePress scoped styles with higher specificity */
.VPDocAside .VPDocAsideOutline .outline-link.active {
color: var(--vp-c-brand) !important;
font-weight: 600 !important;
background: var(--vp-c-bg-soft) !important;
padding: 4px 8px !important;
border-radius: 4px !important;
transition: all 0.25s !important;
}
.VPDocOutline .outline-marker {
width: 3px;
background: var(--vp-c-brand);
border-radius: var(--vp-radius-full);
}
/* ============================================
 * Link Improvements
 * Clean, accessible link styling
 * ============================================ */
a {
text-decoration: none;
color: var(--vp-c-primary);
transition: color var(--vp-transition-color);
cursor: pointer;
}
a:hover {
text-decoration: underline;
color: var(--vp-c-primary-600);
}
a:visited {
color: var(--vp-c-primary-700);
}
.dark a:visited {
color: var(--vp-c-primary-300);
}
/* External link indicator */
.vp-doc a[target="_blank"]::after {
content: " ↗";
font-size: 0.75em;
opacity: 0.7;
}
/* ============================================
* Button Improvements
* ============================================ */
.VPButton {
font-family: var(--vp-font-family-heading);
font-weight: 600;
font-size: var(--vp-font-size-sm);
padding: var(--vp-spacing-2-5) var(--vp-spacing-5);
border-radius: var(--vp-radius-lg);
transition: all var(--vp-transition-color);
cursor: pointer;
}
.VPButton.brand {
background: var(--vp-c-primary);
color: white;
border: none;
}
.VPButton.brand:hover {
background: var(--vp-c-primary-600);
transform: translateY(-1px);
box-shadow: var(--vp-shadow-md);
}
.VPButton.alt {
background: var(--vp-c-bg-soft);
color: var(--vp-c-text-1);
border: 1px solid var(--vp-c-divider);
}
.VPButton.alt:hover {
background: var(--vp-c-bg-mute);
border-color: var(--vp-c-primary);
}
/* ============================================
 * Focus States — Accessibility
 * Clear visible focus for keyboard navigation
 * ============================================ */
:focus-visible {
outline: 2px solid var(--vp-c-primary);
outline-offset: 2px;
border-radius: var(--vp-radius-md);
}
/* Skip focus outline for mouse users */
:focus:not(:focus-visible) {
outline: none;
}
/* Better focus for interactive elements */
a:focus-visible,
button:focus-visible,
input:focus-visible,
select:focus-visible,
textarea:focus-visible {
outline: 2px solid var(--vp-c-primary);
outline-offset: 2px;
box-shadow: 0 0 0 4px rgba(59, 130, 246, 0.15);
}
/* ============================================
@@ -395,3 +837,86 @@ a:hover {
padding: 0 !important;
}
}
/*
 * ===================================================================
 * Intelligent Responsive Content Width
 * ===================================================================
 * Goal: scale the content area's share of the viewport to make better
 * use of available space.
 * How: rem-based padding supplies appropriate whitespace at each viewport
 * width: smaller padding on narrow screens, larger on ultra-wide screens.
 */
/* Step 1: on all desktop views, let .container fill its parent */
@media (min-width: 1024px) {
.VPDoc.has-aside .container {
max-width: none !important;
width: 100% !important;
margin: 0 !important;
padding: 0 var(--vp-spacing-8) !important;
}
.VPContent.has-sidebar {
margin-left: var(--vp-sidebar-width) !important;
margin-right: var(--vp-toc-width) !important;
width: calc(100vw - var(--vp-sidebar-width) - var(--vp-toc-width)) !important;
box-sizing: border-box;
}
}
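/* Worked example of the width rule above, assuming the desktop values set
   elsewhere in this theme (--vp-sidebar-width: 17.5rem = 280px,
   --vp-toc-width: 13.75rem = 220px): at a 1440px viewport,
   calc(100vw - 280px - 220px) leaves 940px for .VPContent.has-sidebar,
   before the responsive padding below is subtracted. */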
/* Narrow desktop (1024px - 1439px): 2rem padding */
@media (min-width: 1024px) and (max-width: 1439px) {
.VPContent.has-sidebar {
padding-left: 2rem !important;
padding-right: 2rem !important;
}
}
/* Standard wide screens (1440px - 1919px): 3rem padding */
@media (min-width: 1440px) and (max-width: 1919px) {
.VPContent.has-sidebar {
padding-left: 3rem !important;
padding-right: 3rem !important;
}
}
/* Ultra-wide screens (>= 1920px): 5rem padding */
@media (min-width: 1920px) {
.VPContent.has-sidebar {
padding-left: 5rem !important;
padding-right: 5rem !important;
}
}
/* ============================================
* Widen Doc Content to Fill Container
* ============================================ */
/*
* Overrides VitePress's default readability width limit for .vp-doc
* and .content-container on desktop layouts, allowing content to use
* the full available space defined by the responsive padding in the
* "Intelligent Responsive Content Width" section.
*/
@media (min-width: 1024px) {
/* Expand .content to fill available space */
.VPDoc.has-aside .container .content {
flex-grow: 1 !important;
max-width: none !important;
}
/* Use multiple selectors to increase specificity and override scoped styles */
.VPDoc.has-aside .container .content-container,
.VPDoc.has-aside .content-container[class],
.content-container {
max-width: none !important;
width: 100% !important;
min-width: 100% !important;
flex-grow: 1 !important;
flex-basis: 100% !important;
}
.vp-doc {
max-width: 100% !important;
width: 100% !important;
}
}
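/* Why the stacked selectors above win: specificity is compared as an
   (id, class, type) triple, for example
     .content-container                     (0,1,0)
     .content-container[class]              (0,2,0)  attribute counts as a class
     .VPDoc.has-aside .container .content   (0,4,0)
   VitePress's scoped styles carry a [data-v-...] attribute selector, so
   matching or exceeding that class count, plus !important, is what lets
   these overrides take effect. */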

View File

@@ -35,7 +35,7 @@
}
/* Responsive design */
@media (max-width: 767px) {
.demo-header {
flex-direction: column;
align-items: flex-start;

View File

@@ -1,8 +1,11 @@
/**
 * Mobile-Responsive Styles
 * Design System: ui-ux-pro-max — flat design, mobile-first
 * Breakpoints: < 480px (xs), < 768px (sm/mobile), 768px-1024px (md/tablet), > 1024px (lg/desktop)
 * WCAG 2.1 AA compliant
 *
 * NOTE: Media/container queries MUST use literal pixel values (CSS spec limitation)
 * --bp-xs: 480px, --bp-sm: 768px, --bp-md: 1024px, --bp-lg: 1440px
 */
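/* Illustration of the NOTE above: var() substitution happens per element,
   and a media query prelude has no element to resolve against, so
     @media (max-width: var(--bp-sm)) { ... }
   is invalid and never matches. The literal form is required:
     @media (max-width: 768px) { ... }
   (Container *style* queries can read custom properties, but the size
   queries used in this file cannot.) */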
/* ============================================
@@ -19,7 +22,7 @@
max-width: 320px;
}
@media (min-width: 768px) {
.VPSidebar {
width: var(--vp-sidebar-width, 272px);
max-width: none;
@@ -31,13 +34,13 @@
padding: 16px;
}
@media (min-width: 768px) {
.VPContent {
padding: 24px;
}
}
@media (min-width: 1024px) {
.VPContent {
padding: 32px 48px;
}
@@ -48,14 +51,14 @@
display: none;
}
@media (min-width: 768px) {
.VPDocOutline {
display: block;
width: 200px;
}
}
@media (min-width: 1024px) {
.VPDocOutline {
width: 256px;
}
@@ -65,7 +68,7 @@
/* Container Query Rules (modern browsers) */
@supports (container-type: inline-size) {
/* Sidebar Container Queries */
@container sidebar (max-width: 480px) {
.VPSidebar .group {
padding: 12px 16px;
}
@@ -75,7 +78,7 @@
}
}
@container sidebar (min-width: 480px) and (max-width: 768px) {
.VPSidebar .group {
padding: 16px 20px;
}
@@ -85,7 +88,7 @@
}
}
@container sidebar (min-width: 768px) {
.VPSidebar .group {
padding: 16px 24px;
}
@@ -221,25 +224,25 @@
}
/* Generic Container-Responsive Utility Class */
@container (max-width: 480px) {
.container-responsive {
padding: 0 var(--spacing-fluid-xs);
}
}
@container (min-width: 480px) and (max-width: 768px) {
.container-responsive {
padding: 0 var(--spacing-fluid-sm);
}
}
@container (min-width: 768px) and (max-width: 1024px) {
.container-responsive {
padding: 0 var(--spacing-fluid-md);
}
}
@container (min-width: 1024px) {
.container-responsive {
padding: 0 var(--spacing-fluid-lg);
}
@@ -250,16 +253,17 @@
 * ============================================ */
/* Base Mobile Styles (320px+) */
@media (max-width: 767px) {
/* Typography - smaller base for mobile */
:root {
--vp-font-size-base: 14px;
--vp-content-width: 100%;
--vp-prose-width: 100%;
}
/* Container */
.container {
padding: 0 var(--vp-spacing-3);
}
/* Navigation - ensure hamburger menu is visible */
@@ -269,10 +273,15 @@
}
.VPNavBar {
padding: 0;
overflow: visible !important;
}
/* VPNavBar content padding for alignment */
.VPNavBar .content {
padding: 0 var(--vp-spacing-3);
}
/* Navigation bar content wrapper */
.VPNavBar .content {
overflow: visible !important;
@@ -295,8 +304,8 @@
/* Reduce nav-extensions gap on mobile */
.VPNavBar .nav-extensions {
gap: var(--vp-spacing-1);
padding-left: var(--vp-spacing-1);
overflow: visible !important;
}
@@ -331,10 +340,10 @@
overflow-y: auto;
background: var(--vp-c-bg);
border: 1px solid var(--vp-c-divider);
border-radius: var(--vp-radius-lg);
box-shadow: var(--vp-shadow-lg);
z-index: 100;
padding: var(--vp-spacing-2) 0;
}
/* Language switcher dropdown fix */
@@ -352,7 +361,7 @@
min-width: 200px !important;
max-width: calc(100vw - 24px) !important;
z-index: 1000 !important;
box-shadow: var(--vp-shadow-xl) !important;
}
/* Sidebar - fix display issues */
@@ -363,12 +372,19 @@
top: 56px !important;
height: calc(100vh - 56px) !important;
max-height: calc(100vh - 56px) !important;
overflow-y: auto !important;
overflow-x: hidden !important;
position: fixed !important;
left: 0 !important;
z-index: 40 !important;
background: var(--vp-c-bg) !important;
transition: transform 0.25s ease !important;
border-right: 1px solid var(--vp-c-divider);
}
/* Add padding to sidebar nav to prevent scrollbar overlap with navbar */
.VPSidebar .VPSidebarNav {
padding-top: var(--vp-spacing-4) !important;
}
/* Sidebar when open */
@@ -384,7 +400,7 @@
/* Sidebar nav container */
.VPSidebar .VPSidebarNav {
padding: var(--vp-spacing-3) 0;
height: 100%;
min-height: auto;
overflow-y: auto;
@@ -393,17 +409,17 @@
/* Sidebar groups */
.VPSidebar .VPSidebarGroup {
padding: var(--vp-spacing-2) var(--vp-spacing-4);
}
/* Sidebar items */
.VPSidebar .VPSidebarItem {
padding: var(--vp-spacing-1-5) 0;
}
/* Ensure sidebar links are properly sized */
.VPSidebar .link {
padding: var(--vp-spacing-2) var(--vp-spacing-3);
display: block;
}
@@ -432,21 +448,21 @@
/* Make sure all sidebar content is visible */
.VPSidebar .group {
margin: 0;
padding: var(--vp-spacing-3) var(--vp-spacing-4);
}
.VPSidebar .title {
font-size: var(--vp-font-size-sm);
font-weight: 600;
padding: var(--vp-spacing-1) 0;
color: var(--vp-c-text-1);
}
/* Sidebar text styling */
.VPSidebar .text {
font-size: var(--vp-font-size-sm);
line-height: var(--vp-line-height-normal);
padding: var(--vp-spacing-1-5) var(--vp-spacing-3);
}
/* Ensure nested items are visible */
@@ -465,12 +481,12 @@
/* Content - reduce padding for better space usage */
.VPContent {
padding: var(--vp-spacing-3);
}
/* Doc content adjustments - reduce padding */
.VPDoc .content-container {
padding: 0 var(--vp-spacing-3);
}
/* Hide outline on mobile */
@@ -480,27 +496,27 @@
/* Hero Section */
.VPHomeHero {
padding: var(--vp-spacing-10) var(--vp-spacing-3);
}
.VPHomeHero h1 {
font-size: var(--vp-font-size-2xl);
line-height: var(--vp-line-height-tight);
}
.VPHomeHero p {
font-size: var(--vp-font-size-sm);
}
/* Code Blocks - reduce margins */
div[class*='language-'] {
margin: var(--vp-spacing-3) calc(var(--vp-spacing-3) * -1);
border-radius: 0;
}
div[class*='language-'] pre {
padding: var(--vp-spacing-3);
font-size: var(--vp-font-size-xs);
}
/* Tables - make them scrollable */
@@ -512,23 +528,23 @@
}
table {
font-size: var(--vp-font-size-xs);
}
table th,
table td {
padding: var(--vp-spacing-2) var(--vp-spacing-3);
}
/* Buttons */
.VPButton {
padding: var(--vp-spacing-2) var(--vp-spacing-4);
font-size: var(--vp-font-size-sm);
}
/* Cards */
.VPFeature {
padding: var(--vp-spacing-4);
}
/* Touch-friendly tap targets (min 44x44px per WCAG) */
@@ -548,13 +564,13 @@
/* Theme Switcher */
.theme-switcher {
padding: var(--vp-spacing-3);
}
/* Breadcrumbs */
.breadcrumb {
padding: var(--vp-spacing-2) 0;
font-size: var(--vp-font-size-xs);
}
/* Table of Contents - hidden on mobile */
@@ -564,147 +580,218 @@
/* Typography adjustments for mobile */
.vp-doc h1 {
font-size: var(--vp-font-size-2xl);
margin-bottom: var(--vp-spacing-4);
}
.vp-doc h2 {
font-size: var(--vp-font-size-xl);
margin-top: var(--vp-spacing-8);
padding-top: var(--vp-spacing-6);
}
.vp-doc h3 {
font-size: var(--vp-font-size-lg);
margin-top: var(--vp-spacing-6);
}
.vp-doc p {
line-height: var(--vp-line-height-relaxed);
margin: var(--vp-spacing-4) 0;
}
.vp-doc ul,
.vp-doc ol {
margin: var(--vp-spacing-4) 0;
padding-left: var(--vp-spacing-5);
}
.vp-doc li {
margin: var(--vp-spacing-1-5) 0;
}
}
/* ============================================
 * Tablet Styles (768px - 1024px)
 * ============================================ */
@media (min-width: 768px) and (max-width: 1023px) {
:root {
--vp-content-width: 720px;
--vp-sidebar-width: 240px;
--vp-prose-width: 640px;
}
.VPContent {
padding: var(--vp-spacing-6);
}
.VPDoc .content-container {
padding: 0 var(--vp-spacing-6);
max-width: 98% !important;
}
.VPHomeHero {
padding: var(--vp-spacing-16) var(--vp-spacing-6);
}
.VPHomeHero h1 {
font-size: var(--vp-font-size-3xl);
}
div[class*='language-'] {
margin: var(--vp-spacing-3) 0;
}
/* Outline visible but narrower */
.VPDocOutline {
width: 200px;
padding-left: var(--vp-spacing-4);
}
.VPDocOutline .outline-link {
font-size: var(--vp-font-size-xs);
}
}
/* ============================================
 * Desktop Styles (1024px+)
 * ============================================ */
@media (min-width: 1024px) {
:root {
--vp-layout-max-width: 90rem; /* 1440px / 16 */
--vp-content-width: 53.75rem; /* 860px / 16 */
--vp-sidebar-width: 17.5rem; /* 280px / 16 */
--vp-prose-width: 45rem; /* 720px / 16 */
--vp-toc-width: 13.75rem; /* 220px / 16 */
}
.VPContent {
padding: var(--vp-spacing-8) var(--vp-spacing-12);
/* Remove max-width to allow full viewport width */
}
/* Desktop sidebar - restore fixed positioning but with proper width */
.VPSidebar {
position: fixed !important;
left: 0 !important;
top: var(--vp-nav-height, 56px) !important;
width: var(--vp-sidebar-width, 280px) !important;
height: calc(100vh - var(--vp-nav-height, 56px)) !important;
padding: 0 !important;
overflow-y: auto !important;
border-right: 1px solid var(--vp-c-divider) !important;
z-index: 10 !important;
}
/* Desktop sidebar - add padding to inner nav */
.VPSidebar nav.nav {
padding: var(--vp-spacing-4) !important;
height: auto;
}
/* Ensure content has proper margin-left to clear the sidebar */
.VPContent.has-sidebar {
margin-left: var(--vp-sidebar-width) !important;
margin-right: var(--vp-toc-width) !important;
/* padding moved to custom.css for responsive width control */
}
/* Adjust doc container - allow content to scale with zoom */
.VPDoc.has-aside .content-container {
width: 100%;
padding: 0 var(--vp-spacing-10);
}
/* Right TOC - fixed position with right margin */
.VPDocAside {
position: fixed;
right: 24px;
top: calc(var(--vp-nav-height, 56px) + 16px);
width: var(--vp-toc-width, 220px);
height: auto;
max-height: calc(100vh - var(--vp-nav-height, 56px) - 64px);
overflow-y: auto;
}
.aside-container {
position: fixed;
right: 24px;
top: calc(var(--vp-nav-height, 56px) + 16px);
width: var(--vp-toc-width, 220px);
height: auto;
max-height: calc(100vh - var(--vp-nav-height, 56px) - 64px);
overflow-y: auto;
}
.VPDocOutline {
position: relative;
height: auto;
max-height: 100%;
overflow-y: auto;
padding: var(--vp-spacing-4);
}
/* Navbar title - anchor to left edge */
.VPNavBarTitle {
position: relative !important;
margin-left: 0 !important;
padding-left: 0 !important;
}
/* Ensure navbar content-body has proper left padding */
.VPNavBar .content-body {
padding-left: var(--vp-spacing-4) !important;
}
/* Fix title position */
.VPNavBar .title {
padding-left: var(--vp-spacing-4) !important;
left: 0 !important;
}
/* Home page navbar - reduce title left margin since no sidebar */
.Layout:has(.pro-home) .VPNavBar .content-body,
.Layout:has(.pro-home) .VPNavBar .title {
padding-left: 0 !important;
}
/* Home page - no sidebar margin */
.Layout:has(.pro-home) .VPContent {
margin-left: 0 !important;
}
.VPDocOutline .outline-marker {
display: block;
}
.VPDocOutline .outline-link {
font-size: var(--vp-font-size-sm);
line-height: var(--vp-line-height-normal);
padding: var(--vp-spacing-1) var(--vp-spacing-3);
transition: color var(--vp-transition-color);
}
.VPDocOutline .outline-link:hover {
color: var(--vp-c-primary);
}
}
/* ============================================
 * Large Desktop (1440px+)
 * ============================================ */
@media (min-width: var(--bp-lg)) { @media (min-width: 1440px) {
:root { :root {
--vp-content-width: 1040px; --vp-content-width: 57.5rem; /* 920px / 16 */
--vp-sidebar-width: 280px; --vp-sidebar-width: 18.75rem; /* 300px / 16 */
--vp-prose-width: 47.5rem; /* 760px / 16 */
--vp-toc-width: 16.25rem; /* 260px / 16 */
} }
.VPDoc .content-container { .VPDoc.has-aside .content-container {
max-width: 90% !important; width: 100%;
padding: 0 48px; padding: 0 var(--vp-spacing-12);
} margin-left: 0;
margin-right: 0;
.VPDocOutline {
width: 280px;
} }
} }
@@ -749,7 +836,7 @@
/* ============================================
 * ProfessionalHome Component - Mobile Optimizations
 * ============================================ */
-@media (max-width: var(--bp-sm)) {
+@media (max-width: 767px) {
  /* Root level overflow prevention */
  html, body {
    max-width: 100vw;
@@ -921,7 +1008,7 @@
}
/* ProfessionalHome - Tablet Optimizations */
-@media (min-width: var(--bp-sm)) and (max-width: var(--bp-md)) {
+@media (min-width: 768px) and (max-width: 1023px) {
  .pro-home .hero-section {
    padding: 3rem 0 2.5rem;
  }
@@ -941,7 +1028,7 @@
}
/* ProfessionalHome - Small Mobile (< 480px) */
-@media (max-width: var(--bp-xs)) {
+@media (max-width: 479px) {
  .pro-home .hero-badge {
    font-size: 0.7rem;
    padding: 0.2rem 0.5rem;
@@ -965,7 +1052,7 @@
/* ============================================
 * Dark Mode Specific
 * ============================================ */
-@media (max-width: var(--bp-sm)) {
+@media (max-width: 767px) {
  .dark {
    --vp-c-bg: #0f172a;
    --vp-c-text-1: #f1f5f9;
@@ -975,7 +1062,7 @@
/* ============================================
 * Print Styles for Mobile
 * ============================================ */
-@media print and (max-width: var(--bp-sm)) {
+@media print and (max-width: 767px) {
  .VPContent {
    font-size: 10pt;
  }
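The switch from `var(--bp-*)` to literal values in these media queries is a functional fix, not a cosmetic one: custom properties are resolved against an element, and a media query condition has no element context, so `var()` is invalid there and the rule never matches. A minimal before/after sketch (selector and values illustrative):

```css
/* Broken: var() is invalid inside a media query condition,
 * so this block never applies. */
@media (max-width: var(--bp-sm)) {
  .pro-home .hero-section { padding: 2rem 0; }
}

/* Working: media query conditions require literal values. */
@media (max-width: 767px) {
  .pro-home .hero-section { padding: 2rem 0; }
}
```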

View File

@@ -79,23 +79,58 @@
  --vp-c-text-4: #9ca3af;
  --vp-c-text-code: #ef4444;
-  /* Typography */
-  --vp-font-family-base: 'Inter', -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Fira Sans', 'Droid Sans', 'Helvetica Neue', sans-serif;
-  --vp-font-family-mono: 'Fira Code', 'Cascadia Code', 'JetBrains Mono', Consolas, 'Courier New', monospace;
+  /* Typography - Developer-focused font pairing */
+  --vp-font-family-base: 'Inter', 'IBM Plex Sans', -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Helvetica Neue', sans-serif;
+  --vp-font-family-mono: 'JetBrains Mono', 'Fira Code', 'Cascadia Code', Consolas, 'Courier New', monospace;
+  --vp-font-family-heading: 'Inter', 'IBM Plex Sans', -apple-system, BlinkMacSystemFont, sans-serif;
-  /* Font Sizes */
-  --vp-font-size-base: 16px;
-  --vp-font-size-sm: 14px;
-  --vp-font-size-lg: 18px;
-  --vp-font-size-xl: 20px;
+  /* Font Sizes - Modular scale (1.25 ratio) */
+  --vp-font-size-xs: 0.75rem;   /* 12px */
+  --vp-font-size-sm: 0.875rem;  /* 14px */
+  --vp-font-size-base: 1rem;    /* 16px */
+  --vp-font-size-lg: 1.125rem;  /* 18px */
+  --vp-font-size-xl: 1.25rem;   /* 20px */
+  --vp-font-size-2xl: 1.5rem;   /* 24px */
+  --vp-font-size-3xl: 1.875rem; /* 30px */
+  --vp-font-size-4xl: 2.25rem;  /* 36px */
+  /* Line Heights - Optimized for readability */
+  --vp-line-height-tight: 1.25;
+  --vp-line-height-snug: 1.4;
+  --vp-line-height-normal: 1.6;
+  --vp-line-height-relaxed: 1.75;
+  --vp-line-height-loose: 2;
-  /* Spacing (Fixed) */
-  --vp-spacing-xs: 0.25rem; /* 4px */
-  --vp-spacing-sm: 0.5rem; /* 8px */
-  --vp-spacing-md: 1rem; /* 16px */
-  --vp-spacing-lg: 1.5rem; /* 24px */
-  --vp-spacing-xl: 2rem; /* 32px */
-  --vp-spacing-2xl: 3rem; /* 48px */
+  /* Spacing (Fixed) - 8px base grid */
+  --vp-spacing-0: 0;
+  --vp-spacing-px: 1px;
+  --vp-spacing-0-5: 0.125rem; /* 2px */
+  --vp-spacing-1: 0.25rem;    /* 4px */
+  --vp-spacing-1-5: 0.375rem; /* 6px */
+  --vp-spacing-2: 0.5rem;     /* 8px */
+  --vp-spacing-2-5: 0.625rem; /* 10px */
+  --vp-spacing-3: 0.75rem;    /* 12px */
+  --vp-spacing-3-5: 0.875rem; /* 14px */
+  --vp-spacing-4: 1rem;       /* 16px */
+  --vp-spacing-5: 1.25rem;    /* 20px */
+  --vp-spacing-6: 1.5rem;     /* 24px */
+  --vp-spacing-7: 1.75rem;    /* 28px */
+  --vp-spacing-8: 2rem;       /* 32px */
+  --vp-spacing-9: 2.25rem;    /* 36px */
+  --vp-spacing-10: 2.5rem;    /* 40px */
+  --vp-spacing-12: 3rem;      /* 48px */
+  --vp-spacing-14: 3.5rem;    /* 56px */
+  --vp-spacing-16: 4rem;      /* 64px */
+  --vp-spacing-20: 5rem;      /* 80px */
+  --vp-spacing-24: 6rem;      /* 96px */
+  /* Legacy aliases for backward compatibility */
+  --vp-spacing-xs: var(--vp-spacing-1);
+  --vp-spacing-sm: var(--vp-spacing-2);
+  --vp-spacing-md: var(--vp-spacing-4);
+  --vp-spacing-lg: var(--vp-spacing-6);
+  --vp-spacing-xl: var(--vp-spacing-8);
+  --vp-spacing-2xl: var(--vp-spacing-12);
  /* Fluid Spacing (Responsive with clamp())
   * Scales smoothly between viewport widths
@@ -116,18 +151,24 @@
  --container-outline: outline;
  --container-nav: nav;
-  /* Border Radius */
+  /* Border Radius - Subtle, flat design friendly */
+  --vp-radius-none: 0;
  --vp-radius-sm: 0.25rem;  /* 4px */
  --vp-radius-md: 0.375rem; /* 6px */
  --vp-radius-lg: 0.5rem;   /* 8px */
  --vp-radius-xl: 0.75rem;  /* 12px */
+  --vp-radius-2xl: 1rem;    /* 16px */
+  --vp-radius-3xl: 1.5rem;  /* 24px */
  --vp-radius-full: 9999px;
-  /* Shadows */
-  --vp-shadow-sm: 0 1px 2px 0 rgb(0 0 0 / 0.05);
-  --vp-shadow-md: 0 4px 6px -1px rgb(0 0 0 / 0.1);
-  --vp-shadow-lg: 0 10px 15px -3 rgb(0 0 0 / 0.1);
-  --vp-shadow-xl: 0 20px 25px -5 rgb(0 0 0 / 0.1);
+  /* Shadows - Subtle for flat design */
+  --vp-shadow-xs: 0 1px 2px 0 rgb(0 0 0 / 0.03);
+  --vp-shadow-sm: 0 1px 3px 0 rgb(0 0 0 / 0.05), 0 1px 2px -1px rgb(0 0 0 / 0.05);
+  --vp-shadow-md: 0 4px 6px -1px rgb(0 0 0 / 0.07), 0 2px 4px -2px rgb(0 0 0 / 0.05);
+  --vp-shadow-lg: 0 10px 15px -3px rgb(0 0 0 / 0.08), 0 4px 6px -4px rgb(0 0 0 / 0.05);
+  --vp-shadow-xl: 0 20px 25px -5px rgb(0 0 0 / 0.08), 0 8px 10px -6px rgb(0 0 0 / 0.05);
+  --vp-shadow-inner: inset 0 2px 4px 0 rgb(0 0 0 / 0.05);
+  --vp-shadow-none: 0 0 #0000;
  /* Transitions */
  --vp-transition-color: 0.2s ease;
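Because the legacy spacing names now alias the new scale, existing rules keep resolving to the same values while new rules can target the finer 8px grid directly. For example (selectors illustrative):

```css
/* Both declarations resolve to 1rem (16px) under the new token scale. */
.card-legacy { padding: var(--vp-spacing-md); } /* legacy alias */
.card-grid   { padding: var(--vp-spacing-4); }  /* 8px-grid token */
```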

View File

@@ -2,255 +2,180 @@
## One-Line Positioning
-**Advanced Tips are the key to efficiency improvement** — Deep CLI toolchain usage, multi-model collaboration optimization, memory management best practices.
+**Drive AI tool orchestration with natural language** — Semantic CLI invocation, multi-model collaboration, intelligent memory management.
---
-## 5.1 CLI Toolchain Usage
-### 5.1.1 CLI Configuration
-CLI tool configuration file: `~/.claude/cli-tools.json`
+## 5.1 Semantic Tool Orchestration
+### 5.1.1 Core Concept
+CCW's CLI tools are **AI-automated capability extensions**. Users simply describe needs in natural language, and AI automatically selects and invokes the appropriate tools.
::: tip Key Understanding
- User says: "Use Gemini to analyze this code"
- AI automatically: Invokes Gemini CLI + applies analysis rules + returns results
- Users don't need to know `ccw cli` command details
:::
### 5.1.2 Available Tools & Capabilities
| Tool | Strengths | Typical Trigger Words |
| --- | --- | --- |
| **Gemini** | Deep analysis, architecture design, bug diagnosis | "use Gemini", "deep understanding" |
| **Qwen** | Code generation, feature implementation | "let Qwen implement", "code generation" |
| **Codex** | Code review, Git operations | "use Codex", "code review" |
| **OpenCode** | Open-source multi-model | "use OpenCode" |
### 5.1.3 Semantic Trigger Examples
Simply express your intent naturally in conversation, and AI will automatically invoke the corresponding tool:
| Goal | User Semantic Description | AI Auto-Executes |
| :--- | :--- | :--- |
| **Security Assessment** | "Use Gemini to scan auth module for security vulnerabilities" | Gemini + Security analysis rule |
| **Code Implementation** | "Let Qwen implement a rate limiting middleware" | Qwen + Feature implementation rule |
| **Code Review** | "Use Codex to review this PR's changes" | Codex + Review rule |
| **Bug Diagnosis** | "Use Gemini to analyze the root cause of this memory leak" | Gemini + Diagnosis rule |
### 5.1.4 Underlying Configuration (Optional)
The AI tool invocation configuration file lives at `~/.claude/cli-tools.json`:
```json
{
-  "version": "3.3.0",
  "tools": {
    "gemini": {
      "enabled": true,
      "primaryModel": "gemini-2.5-flash",
-      "secondaryModel": "gemini-2.5-flash",
-      "tags": ["analysis", "Debug"],
-      "type": "builtin"
+      "tags": ["analysis", "Debug"]
    },
    "qwen": {
      "enabled": true,
      "primaryModel": "coder-model",
-      "secondaryModel": "coder-model",
-      "tags": [],
-      "type": "builtin"
-    },
-    "codex": {
-      "enabled": true,
-      "primaryModel": "gpt-5.2",
-      "secondaryModel": "gpt-5.2",
-      "tags": [],
-      "type": "builtin"
+      "tags": ["implementation"]
    }
  }
}
```
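If you do want to inspect this configuration from a shell, a `jq` one-liner works (assuming `jq` is installed; the inline sample mirrors the schema above, and pointing the query at `~/.claude/cli-tools.json` queries the real file):

```shell
# List enabled tools and their primary models from a CCW-style config.
# Sample JSON is inline so the snippet is self-contained; replace the
# printf with `cat ~/.claude/cli-tools.json` for real use.
printf '%s' '{"tools":{"gemini":{"enabled":true,"primaryModel":"gemini-2.5-flash"},"qwen":{"enabled":false,"primaryModel":"coder-model"}}}' |
  jq -r '.tools | to_entries[] | select(.value.enabled) | "\(.key): \(.value.primaryModel)"'
```

For the sample input this prints `gemini: gemini-2.5-flash`, since only the gemini entry is enabled.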
+::: info Note
+Tags help AI automatically select the most suitable tool based on task type. Users typically don't need to modify this configuration.
+:::
-### 5.1.2 Tag Routing
-Automatically select models based on task type:
-| Tag | Applicable Model | Task Type |
-| --- | --- | --- |
-| **analysis** | Gemini | Code analysis, architecture design |
-| **Debug** | Gemini | Root cause analysis, problem diagnosis |
-| **implementation** | Qwen | Feature development, code generation |
-| **review** | Codex | Code review, Git operations |
### 5.1.3 CLI Command Templates
#### Analysis Task
```bash
ccw cli -p "PURPOSE: Identify security vulnerabilities
TASK: • Scan for SQL injection • Check XSS • Verify CSRF
MODE: analysis
CONTEXT: @src/auth/**/*
EXPECTED: Security report with severity grading and fix recommendations
CONSTRAINTS: Focus on auth module" --tool gemini --mode analysis --rule analysis-assess-security-risks
```
#### Implementation Task
```bash
ccw cli -p "PURPOSE: Implement rate limiting
TASK: • Create middleware • Configure routes • Redis backend
MODE: write
CONTEXT: @src/middleware/**/* @src/config/**/*
EXPECTED: Production code + unit tests + integration tests
CONSTRAINTS: Follow existing middleware patterns" --tool qwen --mode write --rule development-implement-feature
```
### 5.1.4 Rule Templates
| Rule | Purpose |
| --- | --- |
| **analysis-diagnose-bug-root-cause** | Bug root cause analysis |
| **analysis-analyze-code-patterns** | Code pattern analysis |
| **analysis-review-architecture** | Architecture review |
| **analysis-assess-security-risks** | Security assessment |
| **development-implement-feature** | Feature implementation |
| **development-refactor-codebase** | Code refactoring |
| **development-generate-tests** | Test generation |
--- ---
## 5.2 Multi-Model Collaboration
-### 5.2.1 Model Selection Guide
-| Task | Recommended Model | Reason |
-| --- | --- | --- |
-| **Code Analysis** | Gemini | Strong at deep code understanding and pattern recognition |
+### 5.2.1 Collaboration Patterns
+Through semantic descriptions, multiple AI models can work together:
+| Pattern | Description Style | Use Case |
+| --- | --- | --- |
+| **Collaborative** | "Let Gemini and Codex jointly analyze architecture issues" | Multi-perspective analysis of the same problem |
| **Pipeline** | "Gemini designs, Qwen implements, Codex reviews" | Stage-by-stage complex task completion |
| **Iterative** | "Use Gemini to diagnose, Codex to fix, iterate until tests pass" | Bug fix loop |
| **Parallel** | "Let Gemini and Qwen each provide optimization suggestions" | Compare different approaches |
### 5.2.2 Semantic Examples
**Collaborative Analysis**
```
User: Let Gemini and Codex jointly analyze security and performance of src/auth module
AI: [Automatically invokes both models, synthesizes analysis results]
```
**Pipeline Development**
```
User: I need to implement a WebSocket real-time notification feature.
Please have Gemini design the architecture, Qwen implement the code, and Codex review.
AI: [Sequentially invokes three models, completing design→implement→review flow]
```
**Iterative Fix**
```
User: Tests failed. Use Gemini to diagnose the issue, have Qwen fix it, loop until tests pass.
AI: [Automatically iterates diagnose-fix loop until problem is resolved]
```
### 5.2.3 Model Selection Guide
| Task Type | Recommended Model | Reason |
| --- | --- | --- |
| **Architecture Analysis** | Gemini | Strong at deep understanding and pattern recognition |
| **Bug Diagnosis** | Gemini | Powerful root cause analysis capability |
-| **Feature Development** | Qwen | High code generation efficiency |
-| **Code Review** | Codex (GPT) | Good Git integration, standard review format |
-| **Long Text** | Claude | Large context window |
+| **Code Generation** | Qwen | High code generation efficiency |
+| **Code Review** | Codex | Good Git integration, standard review format |
+| **Long Text Processing** | Claude | Large context window |
### 5.2.2 Collaboration Patterns
#### Serial Collaboration
```bash
# Step 1: Gemini analysis
ccw cli -p "Analyze code architecture" --tool gemini --mode analysis
# Step 2: Qwen implementation
ccw cli -p "Implement feature based on analysis" --tool qwen --mode write
# Step 3: Codex review
ccw cli -p "Review implementation code" --tool codex --mode review
```
#### Parallel Collaboration
Use `--tool gemini` and `--tool codex` to analyze the same problem simultaneously:
```bash
# Terminal 1
ccw cli -p "Analyze from performance perspective" --tool gemini --mode analysis &
# Terminal 2
ccw cli -p "Analyze from security perspective" --tool codex --mode analysis &
```
### 5.2.3 Session Resume
Cross-model session resume:
```bash
# First call
ccw cli -p "Analyze authentication module" --tool gemini --mode analysis
# Resume session to continue
ccw cli -p "Based on previous analysis, design improvement plan" --tool qwen --mode write --resume
```
---
-## 5.3 Memory Management
-### 5.3.1 Memory Categories
-| Category | Purpose | Example Content |
+## 5.3 Intelligent Memory Management
+### 5.3.1 Memory System Overview
+CCW's memory system is an **AI self-managed** knowledge base, including:
+| Category | Purpose | Example |
| --- | --- | --- |
-| **learnings** | Learning insights | New technology usage experience |
-| **decisions** | Architecture decisions | Technology selection rationale |
-| **conventions** | Coding standards | Naming conventions, patterns |
-| **issues** | Known issues | Bugs, limitations, TODOs |
+| **learnings** | Learning insights | New technology usage experience, best practices |
+| **decisions** | Architecture decisions | Technology selection rationale, design tradeoffs |
+| **conventions** | Coding standards | Naming conventions, code style |
+| **issues** | Known issues | Bug records, limitations |
-### 5.3.2 Memory Commands
-| Command | Function | Example |
+### 5.3.2 Automatic Memory Usage
+AI automatically retrieves and applies relevant memories when executing tasks:
| --- | --- | --- |
| **list** | List all memories | `ccw memory list` |
| **search** | Search memories | `ccw memory search "authentication"` |
| **export** | Export memory | `ccw memory export <id>` |
| **import** | Import memory | `ccw memory import "..."` |
| **embed** | Generate embeddings | `ccw memory embed <id>` |
### 5.3.3 Memory Best Practices
::: tip Tip
- **Regular cleanup**: Organize Memory weekly, delete outdated content
- **Structure**: Use standard format for easy search and reuse
- **Context**: Record decision background, not just conclusions
- **Linking**: Cross-reference related content
:::
### 5.3.4 Memory Template
```markdown
## Title
### Background
- **Problem**: ...
- **Impact**: ...
### Decision
- **Solution**: ...
- **Rationale**: ...
### Result
- **Effect**: ...
- **Lessons Learned**: ...
### Related
- [Related Memory 1](memory-id-1)
- [Related Documentation](link)
```
```
User: Help me implement the user authentication module
AI: [Automatically retrieves authentication-related decisions and conventions from memory]
Based on previous technical decisions, we use JWT + bcrypt approach...
```
### 5.3.3 How Users Guide Memory
While AI manages memory automatically, users can actively reinforce:
**Explicitly Request to Remember**
```
User: Remember this naming convention: all API routes use kebab-case
AI: [Stores this convention in conventions memory]
```
**Request to Review Decisions**
```
User: Why did we choose Redis for caching before?
AI: [Retrieves from decisions memory and responds]
```
**Correct Wrong Memory**
```
User: The previous decision changed, we now use PostgreSQL instead of MongoDB
AI: [Updates related decision memory]
```
### 5.3.4 Memory File Locations
- **Global Memory**: `~/.claude/projects/{project-name}/memory/`
- **Project Memory**: `.claude/memory/` or `MEMORY.md`
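Since project memory is stored as plain files, a quick keyword scan is one way to see what has been remembered (paths as listed above; the keyword is an example):

```shell
# Case-insensitive listing of memory files mentioning a keyword.
# Falls back to a message when the directory is absent or nothing matches.
grep -ril "redis" .claude/memory/ 2>/dev/null || echo "no matching memory entries"
```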
---
-## 5.4 CodexLens Advanced Usage
+## 5.4 Hook Automation
-### 5.4.1 Hybrid Search
+### 5.4.1 Hook Concept
-Combine vector search and keyword search:
+Hooks are automated processes that run before and after AI executes a task; users don't need to trigger them manually:
```bash
# Pure vector search
ccw search --mode vector "user authentication"
# Hybrid search (default)
ccw search --mode hybrid "user authentication"
# Pure keyword search
ccw search --mode keyword "authenticate"
```
### 5.4.2 Call Chain Tracing
Trace complete call chains of functions:
```bash
# Trace up (who called me)
ccw search --trace-up "authenticateUser"
# Trace down (who I called)
ccw search --trace-down "authenticateUser"
# Full call chain
ccw search --trace-full "authenticateUser"
```
### 5.4.3 Semantic Search Techniques
| Technique | Example | Effect |
| --- | --- | --- |
| **Functional description** | "handle user login" | Find login-related code |
| **Problem description** | "memory leak locations" | Find potential issues |
| **Pattern description** | "singleton implementation" | Find design patterns |
| **Technical description** | "using React Hooks" | Find related usage |
---
## 5.5 Hook Auto-Injection
### 5.5.1 Hook Types
| Hook Type | Trigger Time | Purpose |
| --- | --- | --- |
-| **pre-command** | Before command execution | Inject specifications, load context |
-| **post-command** | After command execution | Save Memory, update state |
+| **pre-command** | Before AI thinking | Load project specs, retrieve memory |
+| **post-command** | After AI completion | Save decisions, update index |
| **pre-commit** | Before Git commit | Code review, standard checks |
-| **file-change** | On file change | Auto-format, update index |
-### 5.5.2 Hook Configuration
+### 5.4.2 Configuration Example
Configure in `.claude/hooks.json`:
@@ -258,19 +183,14 @@ Configure in `.claude/hooks.json`:
{
  "pre-command": [
    {
-      "name": "inject-specs",
-      "description": "Inject project specifications",
+      "name": "load-project-specs",
+      "description": "Load project specifications",
      "command": "cat .workflow/specs/project-constraints.md"
-    },
-    {
-      "name": "load-memory",
-      "description": "Load related Memory",
-      "command": "ccw memory search \"{query}\""
    }
  ],
  "post-command": [
    {
-      "name": "save-memory",
+      "name": "save-decisions",
      "description": "Save important decisions",
      "command": "ccw memory import \"{content}\""
    }
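Hook `command` values are ordinary shell snippets, so they can be prototyped in a terminal before being added to `hooks.json`. A minimal sketch of the spec-loading hook above (the missing-file fallback is an assumption, not CCW behavior):

```shell
# Prototype of a pre-command hook: emit project constraints when present
# so they can be injected into the AI context.
if [ -f .workflow/specs/project-constraints.md ]; then
  cat .workflow/specs/project-constraints.md
else
  echo "no project constraints found"
fi
```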
@@ -280,49 +200,54 @@ Configure in `.claude/hooks.json`:
---
-## 5.6 Performance Optimization
-### 5.6.1 Indexing Optimization
+## 5.5 ACE Semantic Search
+### 5.5.1 What is ACE
+ACE (Augment Context Engine) is AI's **code perception capability**, enabling AI to understand the entire codebase semantically.
+### 5.5.2 How AI Uses ACE
+When users ask questions, AI automatically uses ACE to search for relevant code:
+```
+User: How is the authentication flow implemented?
+AI: [Uses ACE semantic search for auth-related code]
+Based on code analysis, the authentication flow is...
+```
+### 5.5.3 Configuration Reference
+| Configuration Method | Link |
+| --- | --- |
+| **Official Docs** | [Augment MCP Documentation](https://docs.augmentcode.com/context-services/mcp/overview) |
+| **Proxy Tool** | [ace-tool (GitHub)](https://github.com/eastxiaodong/ace-tool) |
-| Optimization | Description |
-| --- | --- |
-| **Incremental indexing** | Only index changed files |
-| **Parallel indexing** | Multi-process parallel processing |
-| **Caching strategy** | Vector embedding cache |
### 5.6.2 Search Optimization
| Optimization | Description |
| --- | --- |
| **Result caching** | Same query returns cached results |
| **Paginated loading** | Large result sets paginated |
| **Smart deduplication** | Auto-deduplicate similar results |
---
-## 5.7 Quick Reference
-### CLI Command Cheatsheet
-| Command | Function |
-| --- | --- |
-| `ccw cli -p "..." --tool gemini --mode analysis` | Analysis task |
-| `ccw cli -p "..." --tool qwen --mode write` | Implementation task |
-| `ccw cli -p "..." --tool codex --mode review` | Review task |
-| `ccw memory list` | List memories |
-| `ccw memory search "..."` | Search memories |
-| `ccw search "..."` | Semantic search |
-| `ccw search --trace "..."` | Call chain tracing |
+## 5.6 Semantic Prompt Cheatsheet
+### Common Semantic Patterns
+| Goal | Semantic Description Example |
+| --- | --- |
+| **Analyze Code** | "Use Gemini to analyze the architecture design of src/auth" |
+| **Security Audit** | "Use Gemini to scan for security vulnerabilities, focus on OWASP Top 10" |
+| **Implement Feature** | "Let Qwen implement a cached user repository" |
+| **Code Review** | "Use Codex to review recent changes" |
+| **Bug Diagnosis** | "Use Gemini to analyze the root cause of this memory leak" |
+| **Multi-Model Collaboration** | "Gemini designs, Qwen implements, Codex reviews" |
+| **Remember Convention** | "Remember: all APIs use RESTful style" |
+| **Review Decision** | "Why did we choose this tech stack before?" |
-### Rule Template Cheatsheet
-| Rule | Purpose |
-| --- | --- |
-| `analysis-diagnose-bug-root-cause` | Bug analysis |
-| `analysis-assess-security-risks` | Security assessment |
-| `development-implement-feature` | Feature implementation |
-| `development-refactor-codebase` | Code refactoring |
-| `development-generate-tests` | Test generation |
+### Collaboration Pattern Cheatsheet
+| Pattern | Semantic Example |
+| --- | --- |
+| **Collaborative** | "Let Gemini and Codex jointly analyze..." |
+| **Pipeline** | "Gemini designs, Qwen implements, Codex reviews" |
+| **Iterative** | "Diagnose and fix until tests pass" |
+| **Parallel** | "Let multiple models each provide suggestions" |
---

View File

@@ -178,26 +178,100 @@ cp -r agents/* ~/.claude/agents/
## Uninstallation
CCW provides a smart uninstall command that automatically handles installation manifests, orphan file cleanup, and global file protection.
### Using CCW Uninstall Command (Recommended)
```bash
-# Remove CCW commands
-rm ~/.claude/commands/ccw.md
-rm ~/.claude/commands/ccw-coordinator.md
+ccw uninstall
```
Uninstallation flow:
1. **Scan Installation Manifests** - Automatically detects all installed CCW instances (Global and Path modes)
2. **Interactive Selection** - Displays installation list for you to choose which to uninstall
3. **Smart Protection** - When uninstalling Path mode, if a Global installation exists, global files (workflows, scripts, templates) are automatically preserved
4. **Orphan File Cleanup** - Automatically cleans up skills and commands files no longer referenced by any installation
5. **Empty Directory Cleanup** - Removes empty directories left behind
6. **Git Bash Fix Removal** - On Windows, after the last installation is removed, asks whether to remove the Git Bash multi-line prompt fix
### Uninstall Output Example
```
Found installations:
1. Global
Path: /Users/username/my-project
Date: 2026/3/2
Version: 7.0.5
Files: 156 | Dirs: 23
──────────────────────────────────────
? Select installation to uninstall: Global - /Users/username/my-project
? Are you sure you want to uninstall Global installation? Yes
✔ Removing files...
✔ Uninstall complete!
╔══════════════════════════════════════╗
║ Uninstall Summary ║
╠══════════════════════════════════════╣
║ ✓ Successfully Uninstalled ║
║ ║
║ Files removed: 156 ║
║ Directories removed: 23 ║
║ Orphan files cleaned: 3 ║
║ ║
║ Manifest removed ║
╚══════════════════════════════════════╝
```
### Manual Uninstallation
If you need to manually remove CCW files (not recommended):
```bash
# CCW installed directories (safe to remove)
rm -rf ~/.claude/commands/ccw.md
rm -rf ~/.claude/commands/ccw-coordinator.md
rm -rf ~/.claude/commands/workflow
rm -rf ~/.claude/commands/issue
rm -rf ~/.claude/commands/cli
rm -rf ~/.claude/commands/memory
rm -rf ~/.claude/commands/idaw
# Remove CCW skills and agents
rm -rf ~/.claude/skills/workflow-*
rm -rf ~/.claude/skills/team-*
rm -rf ~/.claude/skills/review-*
rm -rf ~/.claude/agents/team-worker.md
rm -rf ~/.claude/agents/cli-*-agent.md
rm -rf ~/.claude/workflows
rm -rf ~/.claude/scripts
rm -rf ~/.claude/templates
rm -rf ~/.claude/manifests
rm -rf ~/.claude/version.json
-# Remove configuration (optional)
-rm -rf ~/.claude/cli-tools.json
-rm -rf .workflow/
+# Codex related directories
+rm -rf ~/.codex/prompts
+rm -rf ~/.codex/skills
rm -rf ~/.codex/agents
# Other CLI directories
rm -rf ~/.gemini
rm -rf ~/.qwen
# CCW core directory
rm -rf ~/.ccw
```
::: danger Danger
**Do NOT** run `rm -rf ~/.claude` - this will delete your Claude Code personal configurations:
- `~/.claude/settings.json` - Your Claude Code settings
- `~/.claude/settings.local.json` - Local override settings
- MCP server configurations, etc.
Always use `ccw uninstall` for controlled uninstallation.
:::
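Before any manual removal, it is also worth backing up the personal configuration files listed in the warning (a sketch; the backup directory name is arbitrary):

```shell
# Back up Claude Code personal settings before touching ~/.claude by hand.
backup_dir="$HOME/claude-config-backup-$(date +%Y%m%d)"
mkdir -p "$backup_dir"
for f in "$HOME/.claude/settings.json" "$HOME/.claude/settings.local.json"; do
  if [ -f "$f" ]; then cp "$f" "$backup_dir/"; fi
done
echo "backed up to $backup_dir"
```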
## Troubleshooting
### Command Not Found

docs/package-lock.json generated
View File

@@ -8,6 +8,11 @@
"name": "ccw-docs",
"version": "1.0.0",
"dependencies": {
"class-variance-authority": "^0.7.0",
"clsx": "^2.1.0",
"react": "^18.3.1",
"react-dom": "^18.3.1",
"tailwind-merge": "^2.2.0",
"vue-i18n": "^10.0.0"
},
"devDependencies": {
@@ -2248,6 +2253,27 @@
"chevrotain": "^11.0.0"
}
},
"node_modules/class-variance-authority": {
"version": "0.7.1",
"resolved": "https://registry.npmjs.org/class-variance-authority/-/class-variance-authority-0.7.1.tgz",
"integrity": "sha512-Ka+9Trutv7G8M6WT6SeiRWz792K5qEqIGEGzXKhAE6xOWAY6pPH8U+9IY3oCMv6kqTmLsv7Xh/2w2RigkePMsg==",
"license": "Apache-2.0",
"dependencies": {
"clsx": "^2.1.1"
},
"funding": {
"url": "https://polar.sh/cva"
}
},
"node_modules/clsx": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/clsx/-/clsx-2.1.1.tgz",
"integrity": "sha512-eYm0QWBtUrBWZWG0d386OGAw16Z995PiOVo2B7bjWSbHedGl5e0ZWaq65kOGgUSNesEIDkB9ISbTg/JK9dhCZA==",
"license": "MIT",
"engines": {
"node": ">=6"
}
},
"node_modules/comma-separated-tokens": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/comma-separated-tokens/-/comma-separated-tokens-2.0.3.tgz",
@@ -3105,6 +3131,12 @@
"url": "https://github.com/sponsors/mesqueeb"
}
},
"node_modules/js-tokens": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-4.0.0.tgz",
"integrity": "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==",
"license": "MIT"
},
"node_modules/katex": {
"version": "0.16.33",
"resolved": "https://registry.npmjs.org/katex/-/katex-0.16.33.tgz",
@@ -3174,6 +3206,18 @@
"license": "MIT",
"peer": true
},
"node_modules/loose-envify": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/loose-envify/-/loose-envify-1.4.0.tgz",
"integrity": "sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q==",
"license": "MIT",
"dependencies": {
"js-tokens": "^3.0.0 || ^4.0.0"
},
"bin": {
"loose-envify": "cli.js"
}
},
"node_modules/magic-string": {
"version": "0.30.21",
"resolved": "https://registry.npmjs.org/magic-string/-/magic-string-0.30.21.tgz",
@@ -3536,6 +3580,31 @@
"url": "https://github.com/sponsors/wooorm"
}
},
"node_modules/react": {
"version": "18.3.1",
"resolved": "https://registry.npmjs.org/react/-/react-18.3.1.tgz",
"integrity": "sha512-wS+hAgJShR0KhEvPJArfuPVN1+Hz1t0Y6n5jLrGQbkb4urgPE/0Rve+1kMB1v/oWgHgm4WIcV+i7F2pTVj+2iQ==",
"license": "MIT",
"dependencies": {
"loose-envify": "^1.1.0"
},
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/react-dom": {
"version": "18.3.1",
"resolved": "https://registry.npmjs.org/react-dom/-/react-dom-18.3.1.tgz",
"integrity": "sha512-5m4nQKp+rZRb09LNH59GM4BxTh9251/ylbKIbpe7TpGxfJ+9kv6BLkLBXIjjspbgbnIBNqlI23tRnTWT0snUIw==",
"license": "MIT",
"dependencies": {
"loose-envify": "^1.1.0",
"scheduler": "^0.23.2"
},
"peerDependencies": {
"react": "^18.3.1"
}
},
"node_modules/regex": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/regex/-/regex-5.1.1.tgz",
@@ -3651,6 +3720,15 @@
"dev": true,
"license": "MIT"
},
"node_modules/scheduler": {
"version": "0.23.2",
"resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.23.2.tgz",
"integrity": "sha512-UOShsPwz7NrMUqhR6t0hWjFduvOzbtv7toDH1/hIrfRNIDBnnBWd0CwJTGvTpngVlmwGCdP9/Zl/tVrDqcuYzQ==",
"license": "MIT",
"dependencies": {
"loose-envify": "^1.1.0"
}
},
"node_modules/search-insights": {
"version": "2.17.3",
"resolved": "https://registry.npmjs.org/search-insights/-/search-insights-2.17.3.tgz",
@@ -3749,6 +3827,16 @@
"dev": true,
"license": "MIT"
},
"node_modules/tailwind-merge": {
"version": "2.6.1",
"resolved": "https://registry.npmjs.org/tailwind-merge/-/tailwind-merge-2.6.1.tgz",
"integrity": "sha512-Oo6tHdpZsGpkKG88HJ8RR1rg/RdnEkQEfMoEk2x1XRI3F1AxeU+ijRXpiVUF4UbLfcxxRGw6TbUINKYdWVsQTQ==",
"license": "MIT",
"funding": {
"type": "github",
"url": "https://github.com/sponsors/dcastil"
}
},
"node_modules/tinyexec": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/tinyexec/-/tinyexec-1.0.2.tgz",


@@ -0,0 +1,736 @@
# Commands & Skills Reference
> **Quick Reference**: Complete catalog of Claude commands, skills, and Codex capabilities
---
## Quick Reference Table
### Commands Quick Reference
| Category | Command | Description | Arguments |
|----------|---------|-------------|-----------|
| **Orchestrator** | `/ccw` | Main workflow orchestrator | `"task description"` |
| **Orchestrator** | `/ccw-coordinator` | Command orchestration tool | `[task description]` |
| **Session** | `/workflow:session:start` | Start workflow session | `[--type] [--auto|--new] [task]` |
| **Session** | `/workflow:session:resume` | Resume paused session | - |
| **Session** | `/workflow:session:complete` | Complete active session | `[-y] [--detailed]` |
| **Session** | `/workflow:session:list` | List all sessions | - |
| **Session** | `/workflow:session:sync` | Sync session to specs | `[-y] ["what was done"]` |
| **Session** | `/workflow:session:solidify` | Crystallize learnings | `[--type] [--category] "rule"` |
| **Issue** | `/issue:new` | Create structured issue | `<url|text> [--priority 1-5]` |
| **Issue** | `/issue:discover` | Discover potential issues | `<path> [--perspectives=...]` |
| **Issue** | `/issue:plan` | Plan issue resolution | `--all-pending <ids>` |
| **Issue** | `/issue:queue` | Form execution queue | `[--queues <n>] [--issue <id>]` |
| **Issue** | `/issue:execute` | Execute queue | `--queue <id> [--worktree]` |
| **IDAW** | `/idaw:run` | IDAW orchestrator | `[--task <ids>] [--dry-run]` |
| **IDAW** | `/idaw:add` | Add task to queue | - |
| **IDAW** | `/idaw:resume` | Resume IDAW session | - |
| **IDAW** | `/idaw:status` | Show queue status | - |
| **With-File** | `/workflow:brainstorm-with-file` | Interactive brainstorming | `[-c] [-m creative|structured] "topic"` |
| **With-File** | `/workflow:analyze-with-file` | Collaborative analysis | `[-c] "topic"` |
| **With-File** | `/workflow:debug-with-file` | Hypothesis-driven debugging | `"bug description"` |
| **With-File** | `/workflow:collaborative-plan-with-file` | Multi-agent planning | - |
| **With-File** | `/workflow:roadmap-with-file` | Strategic roadmap | - |
| **Cycle** | `/workflow:integration-test-cycle` | Integration test cycle | - |
| **Cycle** | `/workflow:refactor-cycle` | Refactor cycle | - |
| **CLI** | `/cli:codex-review` | Codex code review | `[--uncommitted|--base|--commit]` |
| **CLI** | `/cli:cli-init` | Initialize CLI config | - |
| **Memory** | `/memory:prepare` | Prepare memory context | - |
| **Memory** | `/memory:style-skill-memory` | Style/skill memory | - |
### Skills Quick Reference
| Category | Skill | Internal Pipeline | Use Case |
|----------|-------|-------------------|----------|
| **Workflow** | workflow-lite-planex | explore → plan → confirm → execute | Quick features, bug fixes |
| **Workflow** | workflow-plan | session → context → convention → gen → verify/replan | Complex feature planning |
| **Workflow** | workflow-execute | session discovery → task processing → commit | Execute pre-generated plans |
| **Workflow** | workflow-tdd-plan | 6-phase TDD plan → verify | TDD development |
| **Workflow** | workflow-test-fix | session → context → analysis → gen → cycle | Test generation and fixes |
| **Workflow** | workflow-multi-cli-plan | ACE context → CLI discussion → plan → execute | Multi-perspective planning |
| **Workflow** | workflow-skill-designer | - | Create new skills |
| **Team** | team-lifecycle-v5 | spec pipeline → impl pipeline | Full lifecycle |
| **Team** | team-planex | planner wave → executor wave | Issue batch execution |
| **Team** | team-arch-opt | architecture analysis → optimization | Architecture optimization |
| **Utility** | brainstorm | framework → parallel analysis → synthesis | Multi-perspective ideation |
| **Utility** | review-cycle | discovery → analysis → aggregation → deep-dive | Code review |
| **Utility** | spec-generator | study → brief → PRD → architecture → epics | Specification packages |
---
## 1. Main Orchestrator Commands
### /ccw
**Description**: Main workflow orchestrator - analyze intent, select workflow, execute command chain in main process
**Arguments**: `"task description"`
**Category**: orchestrator
**5-Phase Workflow** (plus an optional clarification step):
- Phase 1: Analyze Intent (detect task type, complexity, clarity)
- Phase 1.5: Requirement Clarification (if clarity < 2)
- Phase 2: Select Workflow & Build Command Chain
- Phase 3: User Confirmation
- Phase 4: Setup TODO Tracking & Status File
- Phase 5: Execute Command Chain
**Skill Mapping**:
| Skill | Internal Pipeline |
|-------|-------------------|
| workflow-lite-planex | explore → plan → confirm → execute |
| workflow-plan | session → context → convention → gen → verify/replan |
| workflow-execute | session discovery → task processing → commit |
| workflow-tdd-plan | 6-phase TDD plan → verify |
| workflow-test-fix | session → context → analysis → gen → cycle |
| workflow-multi-cli-plan | ACE context → CLI discussion → plan → execute |
| review-cycle | session/module review → fix orchestration |
| brainstorm | auto/single-role → artifacts → analysis → synthesis |
| spec-generator | product-brief → PRD → architecture → epics |
**Auto Mode**: `-y` or `--yes` flag skips confirmations, propagates to all skills
---
### /ccw-coordinator
**Description**: Command orchestration tool - analyze requirements, recommend chain, execute sequentially with state persistence
**Arguments**: `[task description]`
**Category**: orchestrator
**3-Phase Workflow**:
1. Phase 1: Analyze Requirements
2. Phase 2: Discover Commands & Recommend Chain
3. Phase 3: Execute Sequential Command Chain
**Minimum Execution Units**:
| Unit | Commands | Purpose |
|------|----------|---------|
| Quick Implementation | lite-plan (plan → execute) | Lightweight plan and execution |
| Multi-CLI Planning | multi-cli-plan | Multi-perspective planning |
| Bug Fix | lite-plan --bugfix | Bug diagnosis and fix |
| Full Planning + Execution | plan → execute | Detailed planning |
| Verified Planning + Execution | plan → plan-verify → execute | Planning with verification |
| TDD Planning + Execution | tdd-plan → execute | TDD workflow |
| Issue Workflow | discover → plan → queue → execute | Complete issue lifecycle |
---
### /flow-create
**Description**: Flow template generator for meta-skill/flow-coordinator
**Arguments**: `[template-name] [--output <path>]`
**Category**: utility
**Execution Flow**:
1. Phase 1: Template Design (name + description + level)
2. Phase 2: Step Definition (command category → specific command → execution unit → mode)
3. Phase 3: Generate JSON
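The generated template might look like the sketch below. The field names (`name`, `description`, `level`, `steps`) are assumptions inferred from the three phases above, not a documented schema:

```json
{
  "name": "bugfix-flow",
  "description": "Diagnose and fix a reported bug",
  "level": "lite",
  "steps": [
    { "category": "workflow", "command": "lite-plan", "unit": "plan", "mode": "interactive" },
    { "category": "workflow", "command": "lite-plan", "unit": "execute", "mode": "auto" }
  ]
}
```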
---
## 2. Workflow Session Commands
### /workflow:session:start
**Description**: Discover existing sessions or start new workflow session with intelligent session management and conflict detection
**Arguments**: `[--type <workflow|review|tdd|test|docs>] [--auto|--new] [task description]`
**Category**: session-management
**Session Types**:
| Type | Description | Default For |
|------|-------------|-------------|
| workflow | Standard implementation | workflow-plan skill |
| review | Code review sessions | review-cycle skill |
| tdd | TDD-based development | workflow-tdd-plan skill |
| test | Test generation/fix | workflow-test-fix skill |
| docs | Documentation sessions | memory-manage skill |
**Modes**:
- **Discovery Mode** (default): List active sessions, prompt user
- **Auto Mode** (`--auto`): Intelligent session selection
- **Force New Mode** (`--new`): Create new session
---
### /workflow:session:resume
**Description**: Resume the most recently paused workflow session with automatic session discovery and status update
**Category**: session-management
---
### /workflow:session:complete
**Description**: Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag
**Arguments**: `[-y|--yes] [--detailed]`
**Category**: session-management
**Execution Phases**:
1. Find Session
2. Generate Manifest Entry
3. Atomic Commit (mv to archives)
4. Auto-Sync Project State
---
### /workflow:session:list
**Description**: List all workflow sessions with status filtering, shows session metadata and progress information
**Category**: session-management
---
### /workflow:session:sync
**Description**: Quick-sync session work to specs/*.md and project-tech
**Arguments**: `[-y|--yes] ["what was done"]`
**Category**: session-management
**Process**:
1. Gather Context (git diff, session, summary)
2. Extract Updates (guidelines, tech)
3. Preview & Confirm
4. Write both files
---
### /workflow:session:solidify
**Description**: Crystallize session learnings and user-defined constraints into permanent project guidelines, or compress recent memories
**Arguments**: `[-y|--yes] [--type <convention|constraint|learning|compress>] [--category <category>] [--limit <N>] "rule or insight"`
**Category**: session-management
**Type Categories**:
| Type | Subcategories |
|------|---------------|
| convention | coding_style, naming_patterns, file_structure, documentation |
| constraint | architecture, tech_stack, performance, security |
| learning | architecture, performance, security, testing, process, other |
| compress | (operates on core memories) |
---
## 3. Issue Workflow Commands
### /issue:new
**Description**: Create structured issue from GitHub URL or text description
**Arguments**: `[-y|--yes] <github-url | text-description> [--priority 1-5]`
**Category**: issue
**Execution Flow**:
1. Input Analysis & Clarity Detection
2. Data Extraction (GitHub or Text)
3. Lightweight Context Hint (ACE for medium clarity)
4. Conditional Clarification
5. GitHub Publishing Decision
6. Create Issue
---
### /issue:discover
**Description**: Discover potential issues from multiple perspectives using CLI explore. Supports Exa external research for security and best-practices perspectives.
**Arguments**: `[-y|--yes] <path-pattern> [--perspectives=bug,ux,...] [--external]`
**Category**: issue
**Available Perspectives**:
| Perspective | Focus | Categories |
|-------------|-------|------------|
| bug | Potential Bugs | edge-case, null-check, resource-leak, race-condition |
| ux | User Experience | error-message, loading-state, feedback, accessibility |
| test | Test Coverage | missing-test, edge-case-test, integration-gap |
| quality | Code Quality | complexity, duplication, naming, documentation |
| security | Security Issues | injection, auth, encryption, input-validation |
| performance | Performance | n-plus-one, memory-usage, caching, algorithm |
| maintainability | Maintainability | coupling, cohesion, tech-debt, extensibility |
| best-practices | Best Practices | convention, pattern, framework-usage, anti-pattern |
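As an illustration, parsing and validating a `--perspectives=bug,ux,...` flag against the table above could be sketched as follows. The helper name and its default-to-all behavior are assumptions, not the command's actual implementation:

```javascript
// Known perspectives, mirroring the table above.
const KNOWN = ["bug", "ux", "test", "quality", "security",
               "performance", "maintainability", "best-practices"];

// Hypothetical flag parser: no flag means "all perspectives";
// unknown names are rejected with an error.
function parsePerspectives(flag) {
  if (!flag) return KNOWN;
  const list = flag.replace(/^--perspectives=/, "").split(",").map(s => s.trim());
  const unknown = list.filter(p => !KNOWN.includes(p));
  if (unknown.length) throw new Error(`unknown perspective(s): ${unknown.join(", ")}`);
  return list;
}
```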
---
### /issue:plan
**Description**: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
**Arguments**: `[-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]`
**Category**: issue
**Execution Process**:
1. Issue Loading & Intelligent Grouping
2. Unified Explore + Plan (issue-plan-agent)
3. Solution Registration & Binding
4. Summary
---
### /issue:queue
**Description**: Form execution queue from bound solutions using issue-queue-agent (solution-level)
**Arguments**: `[-y|--yes] [--queues <n>] [--issue <id>]`
**Category**: issue
**Core Capabilities**:
- Agent-driven ordering logic
- Solution-level granularity
- Conflict clarification
- Parallel/Sequential group assignment
---
### /issue:execute
**Description**: Execute queue with DAG-based parallel orchestration (one commit per solution)
**Arguments**: `[-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]`
**Category**: issue
**Execution Flow**:
1. Validate Queue ID (REQUIRED)
2. Get DAG & User Selection
3. Dispatch Parallel Batch (DAG-driven)
4. Next Batch (repeat)
5. Worktree Completion
**Recommended Executor**: Codex (2hr timeout, full write access)
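The DAG-driven batching in steps 2-4 can be modeled as Kahn-style layering: every solution whose dependencies are all complete joins the next parallel batch. This is an illustrative sketch, not the command's actual scheduler:

```javascript
// dag: { [solutionId]: string[] } mapping each solution to its dependency ids.
// Returns batches of ids; ids within a batch can run in parallel.
function nextBatches(dag) {
  const done = new Set();
  const batches = [];
  const pending = new Set(Object.keys(dag));
  while (pending.size > 0) {
    // A solution is ready when every dependency has already completed.
    const batch = [...pending].filter(id => (dag[id] || []).every(d => done.has(d)));
    if (batch.length === 0) throw new Error("cycle detected in queue DAG");
    batch.sort();
    batch.forEach(id => { done.add(id); pending.delete(id); });
    batches.push(batch);
  }
  return batches;
}
```

For example, `nextBatches({ A: [], B: ["A"], C: ["A"], D: ["B", "C"] })` yields three batches, with B and C dispatched in parallel.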
---
### /issue:from-brainstorm
**Description**: Convert brainstorm session ideas into issue with executable solution for parallel-dev-cycle
**Arguments**: `SESSION="<session-id>" [--idea=<index>] [--auto] [-y|--yes]`
**Category**: issue
**Execution Flow**:
1. Session Loading
2. Idea Selection
3. Enrich Issue Context
4. Create Issue
5. Generate Solution Tasks
6. Bind Solution
---
## 4. IDAW Commands
### /idaw:run
**Description**: IDAW orchestrator - execute task skill chains serially with git checkpoints
**Arguments**: `[-y|--yes] [--task <id>[,<id>,...]] [--dry-run]`
**Category**: idaw
**Skill Chain Mapping**:
| Task Type | Skill Chain |
|-----------|-------------|
| bugfix | workflow-lite-planex → workflow-test-fix |
| bugfix-hotfix | workflow-lite-planex |
| feature | workflow-lite-planex → workflow-test-fix |
| feature-complex | workflow-plan → workflow-execute → workflow-test-fix |
| refactor | workflow:refactor-cycle |
| tdd | workflow-tdd-plan → workflow-execute |
| test | workflow-test-fix |
| test-fix | workflow-test-fix |
| review | review-cycle |
| docs | workflow-lite-planex |
**6-Phase Execution**:
1. Load Tasks
2. Session Setup
3. Startup Protocol
4. Main Loop (serial, one task at a time)
5. Checkpoint (per task)
6. Report
---
### /idaw:add
**Description**: Add task to IDAW queue with auto-inferred task type and skill chain
**Category**: idaw
---
### /idaw:resume
**Description**: Resume IDAW session with crash recovery
**Category**: idaw
---
### /idaw:status
**Description**: Show IDAW queue status
**Category**: idaw
---
### /idaw:run-coordinate
**Description**: Multi-agent IDAW execution with parallel task coordination
**Category**: idaw
---
## 5. With-File Workflows
### /workflow:brainstorm-with-file
**Description**: Interactive brainstorming with multi-CLI collaboration, idea expansion, and documented thought evolution
**Arguments**: `[-y|--yes] [-c|--continue] [-m|--mode creative|structured] "idea or topic"`
**Category**: with-file
**Output Directory**: `.workflow/.brainstorm/{session-id}/`
**4-Phase Workflow**:
1. Phase 1: Seed Understanding (parse topic, select roles, expand vectors)
2. Phase 2: Divergent Exploration (cli-explore-agent + Multi-CLI perspectives)
3. Phase 3: Interactive Refinement (multi-round)
4. Phase 4: Convergence & Crystallization
**Output Artifacts**:
- `brainstorm.md` - Complete thought evolution timeline
- `exploration-codebase.json` - Codebase context
- `perspectives.json` - Multi-CLI findings
- `synthesis.json` - Final synthesis
---
### /workflow:analyze-with-file
**Description**: Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding
**Arguments**: `[-y|--yes] [-c|--continue] "topic or question"`
**Category**: with-file
**Output Directory**: `.workflow/.analysis/{session-id}/`
**4-Phase Workflow**:
1. Phase 1: Topic Understanding
2. Phase 2: CLI Exploration (cli-explore-agent + perspectives)
3. Phase 3: Interactive Discussion (multi-round)
4. Phase 4: Synthesis & Conclusion
**Decision Recording Protocol**: Must record direction choices, key findings, assumption changes, user feedback
---
### /workflow:debug-with-file
**Description**: Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction
**Arguments**: `[-y|--yes] "bug description or error message"`
**Category**: with-file
**Output Directory**: `.workflow/.debug/{session-id}/`
**Core Workflow**: Explore → Document → Log → Analyze → Correct Understanding → Fix → Verify
**Output Artifacts**:
- `debug.log` - NDJSON execution evidence
- `understanding.md` - Exploration timeline + consolidated understanding
- `hypotheses.json` - Hypothesis history with verdicts
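Because `debug.log` is NDJSON (one JSON object per line), it can be post-processed with a few lines of code. The sketch below tallies hypothesis verdicts; the `verdict` field name is an assumption, not the documented log schema:

```javascript
// Count verdicts across an NDJSON debug log; blank lines are skipped.
function verdictCounts(ndjsonText) {
  const counts = {};
  for (const line of ndjsonText.split("\n")) {
    if (!line.trim()) continue;
    const entry = JSON.parse(line);
    if (entry.verdict) counts[entry.verdict] = (counts[entry.verdict] || 0) + 1;
  }
  return counts;
}
```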
---
### /workflow:collaborative-plan-with-file
**Description**: Multi-agent collaborative planning with Plan Note shared doc
**Category**: with-file
---
### /workflow:roadmap-with-file
**Description**: Strategic requirement roadmap → issue creation → execution-plan.json
**Category**: with-file
---
### /workflow:unified-execute-with-file
**Description**: Universal execution engine - consumes plan output from collaborative-plan, roadmap, brainstorm
**Category**: with-file
---
## 6. Cycle Workflows
### /workflow:integration-test-cycle
**Description**: Self-iterating integration test with reflection - explore → test dev → test-fix cycle → reflection
**Category**: cycle
**Output Directory**: `.workflow/.test-cycle/`
---
### /workflow:refactor-cycle
**Description**: Tech debt discovery → prioritize → execute → validate
**Category**: cycle
**Output Directory**: `.workflow/.refactor-cycle/`
---
## 7. CLI Commands
### /cli:codex-review
**Description**: Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions
**Arguments**: `[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]`
**Category**: cli
**Review Targets**:
| Target | Flag | Description |
|--------|------|-------------|
| Uncommitted changes | `--uncommitted` | Review staged, unstaged, and untracked changes |
| Compare to branch | `--base <BRANCH>` | Review changes against base branch |
| Specific commit | `--commit <SHA>` | Review changes introduced by a commit |
**Focus Areas**: General review, Security focus, Performance focus, Code quality
**Important**: Target flags and prompt are mutually exclusive
---
### /cli:cli-init
**Description**: Initialize CLI configuration for ccw endpoint
**Category**: cli
---
## 8. Memory Commands
### /memory:prepare
**Description**: Prepare memory context for session
**Category**: memory
---
### /memory:style-skill-memory
**Description**: Style and skill memory management
**Category**: memory
---
## 9. Team Skills
### Team Lifecycle Skills
| Skill | Description |
|-------|-------------|
| team-lifecycle-v5 | Full team lifecycle with role-spec-driven worker agents |
| team-planex | Planner + executor wave pipeline (for large issue batches or roadmap outputs) |
| team-coordinate-v2 | Team coordination and orchestration |
| team-executor-v2 | Task execution with worker agents |
| team-arch-opt | Architecture optimization skill |
### Team Domain Skills
| Skill | Description |
|-------|-------------|
| team-brainstorm | Multi-perspective brainstorming |
| team-review | Code review workflow |
| team-testing | Testing workflow |
| team-frontend | Frontend development workflow |
| team-issue | Issue management workflow |
| team-iterdev | Iterative development workflow |
| team-perf-opt | Performance optimization workflow |
| team-quality-assurance | QA workflow |
| team-roadmap-dev | Roadmap development workflow |
| team-tech-debt | Technical debt management |
| team-uidesign | UI design workflow |
| team-ultra-analyze | Deep analysis workflow |
---
## 10. Workflow Skills
| Skill | Internal Pipeline | Description |
|-------|-------------------|-------------|
| workflow-lite-planex | explore → plan → confirm → execute | Lightweight merged-mode planning |
| workflow-plan | session → context → convention → gen → verify/replan | Full planning with architecture design |
| workflow-execute | session discovery → task processing → commit | Execute from planning session |
| workflow-tdd-plan | 6-phase TDD plan → verify | TDD workflow planning |
| workflow-test-fix | session → context → analysis → gen → cycle | Test generation and fix cycle |
| workflow-multi-cli-plan | ACE context → CLI discussion → plan → execute | Multi-CLI collaborative planning |
| workflow-skill-designer | - | Workflow skill design and generation |
---
## 11. Utility Skills
| Skill | Description |
|-------|-------------|
| brainstorm | Unified brainstorming skill (auto-parallel + role analysis) |
| review-code | Code review skill |
| review-cycle | Session/module review → fix orchestration |
| spec-generator | Product-brief → PRD → architecture → epics |
| skill-generator | Generate new skills |
| skill-tuning | Tune existing skills |
| command-generator | Generate new commands |
| memory-capture | Capture session memories |
| memory-manage | Manage stored memories |
| issue-manage | Issue management utility |
| ccw-help | CCW help and documentation |
---
## 12. Codex Capabilities
### Codex Review Mode
**Command**: `ccw cli --tool codex --mode review [OPTIONS]`
| Option | Description |
|--------|-------------|
| `[PROMPT]` | Custom review instructions (positional, no target flags) |
| `-c model=<model>` | Override model via config |
| `--uncommitted` | Review staged, unstaged, and untracked changes |
| `--base <BRANCH>` | Review changes against base branch |
| `--commit <SHA>` | Review changes introduced by a commit |
| `--title <TITLE>` | Optional commit title for review summary |
**Available Models**:
- Default: gpt-5.2
- o3: OpenAI o3 reasoning model
- gpt-4.1: GPT-4.1 model
- o4-mini: OpenAI o4-mini (faster)
**Constraints**:
- Target flags (`--uncommitted`, `--base`, `--commit`) **cannot** be used with positional `[PROMPT]`
- Custom prompts only supported WITHOUT target flags (reviews uncommitted by default)
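The mutual-exclusion rule above can be expressed as a small validator. This is a hypothetical sketch of the constraint, not the CLI's actual argument handling:

```javascript
// Resolve the review target under the documented constraints:
// at most one target flag, and no prompt alongside a target flag.
function resolveReviewTarget({ uncommitted, base, commit, prompt }) {
  const targets = [
    uncommitted && "--uncommitted",
    base && "--base",
    commit && "--commit",
  ].filter(Boolean);
  if (targets.length > 1) throw new Error(`conflicting target flags: ${targets.join(", ")}`);
  if (targets.length === 1 && prompt) throw new Error("custom prompt cannot be combined with a target flag");
  return targets[0] || "--uncommitted"; // prompt-only (or bare) invocations review uncommitted changes
}
```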
### Codex Integration Points
| Integration Point | Description |
|-------------------|-------------|
| CLI endpoint | `ccw cli --tool codex --mode <analysis\|write\|review>` |
| Multi-CLI planning | Pragmatic perspective in workflow-multi-cli-plan |
| Code review | `/cli:codex-review` command |
| Issue execution | Recommended executor for `/issue:execute` |
| Devil's advocate | Challenge mode in brainstorm refinement |
### Codex Mode Summary
| Mode | Permission | Use Case |
|------|------------|----------|
| analysis | Read-only | Code analysis, architecture review |
| write | Full access | Implementation, file modifications |
| review | Read-only output | Git-aware code review |
---
## Summary Statistics
| Category | Count |
|----------|-------|
| Main Orchestrator Commands | 3 |
| Workflow Session Commands | 6 |
| Issue Workflow Commands | 6 |
| IDAW Commands | 5 |
| With-File Workflows | 6 |
| Cycle Workflows | 2 |
| CLI Commands | 2 |
| Memory Commands | 2 |
| Team Skills | 17 |
| Workflow Skills | 7 |
| Utility Skills | 11 |
| **Total Commands** | 32 |
| **Total Skills** | 35 |
---
## Invocation Patterns
### Slash Command Invocation
```
/<namespace>:<command> [arguments] [flags]
```
Examples:
- `/ccw "Add user authentication"`
- `/workflow:session:start --auto "implement feature"`
- `/issue:new https://github.com/org/repo/issues/123`
- `/cli:codex-review --base main`
### Skill Invocation (from code)
```javascript
Skill({ skill: "workflow-lite-planex", args: '"task description"' })
Skill({ skill: "brainstorm", args: '"topic or question"' })
Skill({ skill: "review-cycle", args: '--session="WFS-xxx"' })
```
### CLI Tool Invocation
```bash
ccw cli -p "PURPOSE: ... TASK: ... MODE: analysis|write" --tool <tool> --mode <mode>
```
---
## Related Documentation
- [Workflow Comparison Table](../workflows/comparison-table.md) - Workflow selection guide
- [Workflow Overview](../workflows/index.md) - 4-Level workflow system
- [Claude Workflow Skills](../skills/claude-workflow.md) - Detailed skill documentation


@@ -1,11 +1,6 @@
---
Applicable CLI: claude
Category: team
---
# Claude Skills - Team Collaboration
## One-Line Positioning
**Team Collaboration Skills is a multi-role collaborative work orchestration system** — Through coordinator, worker roles, and message bus, it enables parallel processing and state synchronization for complex tasks.
@@ -13,32 +8,83 @@
| Pain Point | Current State | Claude Code Workflow Solution |
|------------|---------------|-------------------------------|
| **Single model limitation** | Can only call one AI model | Multi-role parallel collaboration, leveraging respective strengths |
| **Chaotic task orchestration** | Manual task dependency and state management | Automatic task discovery, dependency resolution, pipeline orchestration |
| **Fragmented collaboration** | Team members work independently | Unified message bus, shared state, progress sync |
| **Resource waste** | Repeated context loading | Wisdom accumulation, exploration cache, artifact reuse |
---
## Skills Overview
| Skill | Function | Use Case |
| --- | --- | --- |
| `team-coordinate-v2` | Universal team coordinator (dynamic role generation) | Any complex task |
| `team-lifecycle-v5` | Full lifecycle team (spec→impl→test) | Complete feature development |
| `team-planex` | Plan-execute pipeline | Issue batch processing |
| `team-review` | Code review team | Code review, vulnerability scanning |
| `team-testing` | Testing team | Test coverage, test case generation |
| `team-arch-opt` | Architecture optimization team | Refactoring, architecture analysis |
| `team-perf-opt` | Performance optimization team | Performance tuning, bottleneck analysis |
| `team-brainstorm` | Brainstorming team | Multi-angle analysis, idea generation |
| `team-frontend` | Frontend development team | UI development, design system |
| `team-uidesign` | UI design team | Design system, component specs |
| `team-issue` | Issue processing team | Issue analysis, implementation |
| `team-iterdev` | Iterative development team | Incremental delivery, agile development |
| `team-quality-assurance` | Quality assurance team | Quality scanning, defect management |
| `team-roadmap-dev` | Roadmap development team | Phased development, milestones |
| `team-tech-debt` | Tech debt team | Debt cleanup, code governance |
| `team-ultra-analyze` | Deep analysis team | Complex problem analysis, collaborative exploration |
| `team-executor-v2` | Lightweight executor | Session resume, pure execution |
---
## Core Architecture
All Team Skills share a unified **team-worker agent architecture**:
```
┌──────────────────────────────────────────────────────────┐
│ Skill(skill="team-xxx", args="task description") │
└────────────────────────┬─────────────────────────────────┘
│ Role Router
┌──── --role present? ────┐
│ NO │ YES
↓ ↓
Orchestration Mode Role Dispatch
(auto → coordinator) (route to role.md)
┌─────────┴─────────┬───────────────┬──────────────┐
↓ ↓ ↓ ↓
┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
│ coord │ │worker 1│ │worker 2│ │worker N│
│(orchestrate)│ │(execute)│ │(execute)│ │(execute)│
└────────┘ └────────┘ └────────┘ └────────┘
│ │ │ │
└───────────────────┴───────────────┴──────────────┘
                      Message Bus
```
**Core Components**:
- **Coordinator**: Built-in orchestrator for task analysis, dispatch, monitoring
- **Team-Worker Agent**: Unified agent, loads role-spec to execute role logic
- **Role Router**: `--role=xxx` parameter routes to role execution
- **Message Bus**: Inter-team member communication protocol
- **Shared Memory**: Cross-task knowledge accumulation (Wisdom)
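The Role Router step in the diagram above can be sketched in a few lines. The function name and return shape are illustrative assumptions; only the routing rule (presence of `--role=` selects role dispatch, otherwise orchestration mode) comes from the architecture described here:

```javascript
// Route a team-skill invocation: "--role=<name>" dispatches to that
// role's spec; otherwise orchestration mode starts the coordinator.
function routeInvocation(args) {
  const m = args.match(/--role=(\S+)/);
  return m
    ? { mode: "role-dispatch", role: m[1] }
    : { mode: "orchestration", role: "coordinator" };
}
```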
---
## Skills Details
### team-coordinate-v2
**One-Liner**: Universal team coordinator — Dynamically generates roles and orchestrates execution based on task analysis
**Trigger**:
```bash
team-coordinate <task-description>
team-coordinate --role=coordinator <task>
```
**Features**:
@@ -47,17 +93,6 @@
- Fast-Advance mechanism skips coordinator to directly spawn successor tasks
- Wisdom accumulates cross-task knowledge
**Role Registry**:
| Role | File | Task Prefix | Type |
|------|------|-------------|------|
| coordinator | roles/coordinator/role.md | (none) | orchestrator |
| (dynamic) | `<session>/roles/<role>.md` | (dynamic) | worker |
**Pipeline**:
```
Task Analysis → Generate Roles → Initialize Session → Create Task Chain → Spawn First Batch Workers → Loop Progress → Completion Report
```
**Session Directory**:
```
.workflow/.team/TC-<slug>-<date>/
@@ -66,20 +101,18 @@ Task Analysis → Generate Roles → Initialize Session → Create Task Chain
├── roles/            # Dynamic role definitions
├── artifacts/        # All MD deliverables
├── wisdom/           # Cross-task knowledge
├── explorations/     # Shared exploration cache
├── discussions/      # Inline discussion records
└── .msg/             # Team message bus logs
```
---
### team-lifecycle-v5
**One-Liner**: Full lifecycle team — Complete pipeline from specification to implementation to testing to review
**Trigger**:
```bash
team-lifecycle <task-description>
```
**Features**:
@@ -97,164 +130,67 @@ Task Analysis → Generate Roles → Initialize Session → Create Task Chain
| executor | role-specs/executor.md | IMPL-* | true |
| tester | role-specs/tester.md | TEST-* | false |
| reviewer | role-specs/reviewer.md | REVIEW-* | false |
| architect | role-specs/architect.md | ARCH-* | false |
| fe-developer | role-specs/fe-developer.md | DEV-FE-* | false |
| fe-qa | role-specs/fe-qa.md | QA-FE-* | false |
**Pipeline Definitions**:
```
Specification Pipeline: RESEARCH → DRAFT → QUALITY
Implementation Pipeline: PLAN → IMPL → TEST + REVIEW
Full Lifecycle: [Spec Pipeline] → [Impl Pipeline]
Frontend Pipeline: PLAN-001 → DEV-FE-001 → QA-FE-001 (GC loop, max 2 rounds)
```
**Quality Gate** (after QUALITY-001 completion):
```
═════════════════════════════════════════
SPEC PHASE COMPLETE
Quality Gate: <PASS|REVIEW|FAIL> (<score>%)
Dimension Scores:
Completeness: <bar> <n>%
Consistency: <bar> <n>%
Traceability: <bar> <n>%
Depth: <bar> <n>%
Coverage: <bar> <n>%
Available Actions:
resume -> Proceed to implementation
improve -> Auto-improve weakest dimension
improve <dimension> -> Improve specific dimension
revise <TASK-ID> -> Revise specific document
recheck -> Re-run quality check
feedback <text> -> Inject feedback, create revision
═════════════════════════════════════════
```
**User Commands** (wake paused coordinator):
| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no progress |
| `resume` / `continue` | Check worker status, advance next step |
| `revise <TASK-ID> [feedback]` | Create revision task + cascade downstream |
| `feedback <text>` | Analyze feedback impact, create targeted revision chain |
| `recheck` | Re-run QUALITY-001 quality check |
| `improve [dimension]` | Auto-improve weakest dimension in readiness-report |
---
### team-planex
**One-Liner**: Plan-and-execute team — Per-issue beat pipeline
**Trigger**:
```bash
team-planex <task-description>
team-planex --role=planner <input>
team-planex --role=executor --input <solution-file>
```
**Features**:
- 2-member team (planner + executor), planner serves as lead role
- Per-issue beat: planner creates EXEC-* task immediately after completing each issue
- Solution written to intermediate artifact file, executor loads from file
- Supports multiple execution backends (agent/codex/gemini)
**Wave Pipeline**:
```
Issue 1: planner plans → write artifact → create EXEC-* → executor executes
Issue 2: planner plans → write artifact → create EXEC-* → executor parallel consume
Final: planner sends all_planned → executor completes remaining → finish
```
---
### team-review
**One-Liner**: Code review team — Unified code scanning, vulnerability review, auto-fix
**Trigger**:
```bash
team-review <target-path>
team-review --full <target-path>  # scan + review + fix
team-review --fix <review-files>  # fix only
team-review -q <target-path>      # quick scan only
```
**Features**:
- 4-role team (coordinator, scanner, reviewer, fixer)
- Multi-dimensional review: security, correctness, performance, maintainability
- Auto-fix loop (review → fix → verify)
**Role Registry**:
| Role | Task Prefix | Type |
|------|-------------|------|
| coordinator | RC-* | orchestrator |
| scanner | SCAN-* | read-only analysis |
| reviewer | REV-* | read-only analysis |
| fixer | FIX-* | code generation |
**Pipeline**:
```
SCAN-* (scan) → REV-* (review) → [User confirmation] → FIX-* (fix)
```
**Review Dimensions**: Security, Correctness, Performance, Maintainability
---
### team-testing
**One-Liner**: Testing team — Progressive test coverage through Generator-Critic loop
**Trigger**:
```bash
team-testing <task-description>
```
**Features**:
- 5-role team (coordinator, strategist, generator, executor, analyst)
- Three pipelines: Targeted, Standard, Comprehensive
- Generator-Critic loop automatically improves test coverage
**Role Registry**:
| Role | Task Prefix | Type |
|------|-------------|------|
| coordinator | (none) | orchestrator |
| strategist | STRATEGY-* | pipeline |
| generator | TESTGEN-* | pipeline |
| executor | TESTRUN-* | pipeline |
| analyst | TESTANA-* | pipeline |
**Three Pipelines**:
```
Targeted: STRATEGY → TESTGEN(L1) → TESTRUN
Standard: STRATEGY → TESTGEN(L1) → TESTRUN → TESTGEN(L2) → TESTRUN → TESTANA
Comprehensive: STRATEGY → [TESTGEN(L1+L2) parallel] → [TESTRUN parallel] → TESTGEN(L3) → TESTRUN → TESTANA
```
**Test Layers**: L1: Unit (80%) → L2: Integration (60%) → L3: E2E (40%)
---
### team-arch-opt
**One-Liner**: Architecture optimization team — Analyze architecture issues, design refactoring strategies, implement improvements
**Trigger**:
```bash
team-arch-opt <task-description>
```
**Role Registry**:
| Role | Task Prefix | Function |
|------|-------------|----------|
| coordinator | (none) | orchestrator |
| analyzer | ANALYZE-* | architecture analysis |
| designer | DESIGN-* | refactoring design |
| refactorer | REFACT-* | implement refactoring |
| validator | VALID-* | validate improvements |
| reviewer | REVIEW-* | code review |
**Detection Scope**: Dependency cycles, coupling/cohesion, layering violations, God Classes, dead code
---
### team-perf-opt
**One-Liner**: Performance optimization team — Performance profiling, bottleneck identification, optimization implementation
**Trigger**:
```bash
team-perf-opt <task-description>
```
**Role Registry**:
| Role | Task Prefix | Function |
|------|-------------|----------|
| coordinator | (none) | orchestrator |
| profiler | PROFILE-* | performance profiling |
| strategist | STRAT-* | optimization strategy |
| optimizer | OPT-* | implement optimization |
| benchmarker | BENCH-* | benchmarking |
| reviewer | REVIEW-* | code review |
---
### team-brainstorm
**One-Liner**: Brainstorming team — Multi-angle creative analysis, Generator-Critic loop
**Trigger**:
```bash
team-brainstorm <topic>
team-brainstorm --role=ideator <topic>
```
**Role Registry**:
| Role | Task Prefix | Function |
|------|-------------|----------|
| coordinator | (none) | orchestrator |
| ideator | IDEA-* | idea generation |
| challenger | CHALLENGE-* | critical questioning |
| synthesizer | SYNTH-* | synthesis integration |
| evaluator | EVAL-* | evaluation scoring |
---
### team-frontend
**One-Liner**: Frontend development team — Built-in ui-ux-pro-max design intelligence
**Trigger**:
```bash
team-frontend <task-description>
```
**Role Registry**:
| Role | Task Prefix | Function |
|------|-------------|----------|
| coordinator | (none) | orchestrator |
| analyst | ANALYZE-* | requirement analysis |
| architect | ARCH-* | architecture design |
| developer | DEV-* | frontend implementation |
| qa | QA-* | quality assurance |
---
### team-uidesign
**One-Liner**: UI design team — Design system analysis, token definition, component specs
**Trigger**:
```bash
team-uidesign <task>
```
**Role Registry**:
| Role | Task Prefix | Function |
|------|-------------|----------|
| coordinator | (none) | orchestrator |
| researcher | RESEARCH-* | design research |
| designer | DESIGN-* | design definition |
| reviewer | AUDIT-* | accessibility audit |
| implementer | BUILD-* | code implementation |
---
### team-issue
**One-Liner**: Issue processing team — Issue processing pipeline
**Trigger**:
```bash
team-issue <issue-ids>
```
**Role Registry**:
| Role | Task Prefix | Function |
|------|-------------|----------|
| coordinator | (none) | orchestrator |
| explorer | EXPLORE-* | code exploration |
| planner | PLAN-* | solution planning |
| implementer | IMPL-* | code implementation |
| reviewer | REVIEW-* | code review |
| integrator | INTEG-* | integration validation |
---
### team-iterdev
**One-Liner**: Iterative development team — Generator-Critic loop, incremental delivery
**Trigger**:
```bash
team-iterdev <task-description>
```
**Role Registry**:
| Role | Task Prefix | Function |
|------|-------------|----------|
| coordinator | (none) | orchestrator |
| architect | ARCH-* | architecture design |
| developer | DEV-* | feature development |
| tester | TEST-* | test validation |
| reviewer | REVIEW-* | code review |
**Features**: Developer-Reviewer loop (max 3 rounds), Task Ledger real-time progress
---
### team-quality-assurance
**One-Liner**: Quality assurance team — Issue discovery + test validation closed loop
**Trigger**:
```bash
team-quality-assurance <task-description>
team-qa <task-description>
```
**Role Registry**:
| Role | Task Prefix | Function |
|------|-------------|----------|
| coordinator | (none) | orchestrator |
| scout | SCOUT-* | issue discovery |
| strategist | QASTRAT-* | strategy formulation |
| generator | QAGEN-* | test generation |
| executor | QARUN-* | test execution |
| analyst | QAANA-* | result analysis |
---
### team-roadmap-dev
**One-Liner**: Roadmap development team — Phased development, milestone management
**Trigger**:
```bash
team-roadmap-dev <task-description>
```
**Role Registry**:
| Role | Task Prefix | Function |
|------|-------------|----------|
| coordinator | (none) | human interaction |
| planner | PLAN-* | phase planning |
| executor | EXEC-* | phase execution |
| verifier | VERIFY-* | phase validation |
---
### team-tech-debt
**One-Liner**: Tech debt team — Debt scanning, assessment, cleanup, validation
**Trigger**:
```bash
team-tech-debt <task-description>
```
**Role Registry**:
| Role | Task Prefix | Function |
|------|-------------|----------|
| coordinator | (none) | orchestrator |
| scanner | TDSCAN-* | debt scanning |
| assessor | TDEVAL-* | quantitative assessment |
| planner | TDPLAN-* | governance planning |
| executor | TDFIX-* | cleanup execution |
| validator | TDVAL-* | validation regression |
---
### team-ultra-analyze
**One-Liner**: Deep analysis team — Multi-role collaborative exploration, progressive understanding
**Trigger**:
```bash
team-ultra-analyze <topic>
team-analyze <topic>
```
**Role Registry**:
| Role | Task Prefix | Function |
|------|-------------|----------|
| coordinator | (none) | orchestrator |
| explorer | EXPLORE-* | code exploration |
| analyst | ANALYZE-* | deep analysis |
| discussant | DISCUSS-* | discussion interaction |
| synthesizer | SYNTH-* | synthesis output |
**Features**: Supports Quick/Standard/Deep three depth modes
---
### team-executor-v2
**One-Liner**: Lightweight executor — Resume session, pure execution mode
**Trigger**:
```bash
team-executor --session=<path>
```
**Features**:
- No analysis, no role generation — only load and execute existing session
- Used to resume interrupted team-coordinate sessions
---
## User Commands
All Team Skills support unified user commands (wake paused coordinator):
| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no progress |
| `resume` / `continue` | Check worker status, advance next step |
| `revise <TASK-ID>` | Create revision task + cascade downstream |
| `feedback <text>` | Analyze feedback impact, create targeted revision chain |
---
## Best Practices
1. **Choose the right team type**:
- General tasks → `team-coordinate-v2`
- Complete feature development → `team-lifecycle-v5`
- Issue batch processing → `team-planex`
- Code review → `team-review`
- Test coverage → `team-testing`
- Architecture optimization → `team-arch-opt`
- Performance tuning → `team-perf-opt`
- Brainstorming → `team-brainstorm`
- Frontend development → `team-frontend`
- UI design → `team-uidesign`
- Tech debt → `team-tech-debt`
- Deep analysis → `team-ultra-analyze`
2. **Leverage inner-loop roles**: Set `inner_loop: true` to let single worker handle multiple same-prefix tasks
3. **Wisdom accumulation**: All roles in team sessions accumulate knowledge to `wisdom/` directory
4. **Fast-Advance**: Simple linear successor tasks automatically skip coordinator to spawn directly
5. **Checkpoint recovery**: All team skills support session recovery via `--resume` or `resume` command
---
## Related Commands
- [Claude Commands - Workflow](../commands/claude/workflow.md)
- [Claude Commands - Session](../commands/claude/session.md)

---
| `workflow-tdd-plan` | TDD workflow | `/workflow-tdd-plan` |
| `workflow-test-fix` | Test-fix workflow | `/workflow-test-fix` |
| `workflow-skill-designer` | Skill design workflow | `/workflow-skill-designer` |
| `team-arch-opt` | Architecture optimization | `/team-arch-opt` |
> **New in 7.2.1**: `team-arch-opt` skill added for architecture analysis and optimization. `workflow-lite-planex` renamed from `workflow-lite-plan`.
## Skills Details
---
### team-arch-opt
**One-Liner**: Architecture optimization — Analyze and optimize system architecture
**Trigger**:
```shell
/team-arch-opt
/ccw "team arch opt: analyze module structure"
```
**Features**:
- Architecture analysis and assessment
- Optimization recommendations
- Team-based architecture review
- Role-spec-driven worker agents
**Use Cases**:
- Architecture health assessment
- Module dependency analysis
- Performance bottleneck identification
- Technical debt evaluation
## Related Commands
- [Claude Commands - Workflow](../commands/claude/workflow.md)

# Naming Conventions
CCW uses consistent naming conventions across skills, commands, files, and configurations. Following these conventions ensures your custom skills integrate seamlessly with the built-in ecosystem.
## Overview
Consistent naming conventions help with:
- **Discoverability**: Skills and commands are easy to find and understand
- **Integration**: Custom skills work well with built-in ones
- **Maintenance**: Clear naming reduces cognitive load when debugging
- **Documentation**: Self-documenting code and configuration
## Skill Naming
### Built-in Skill Pattern
Built-in CCW skills follow these patterns:
| Pattern | Examples | Usage |
|---------|----------|-------|
| `team-*` | `team-lifecycle-v4`, `team-brainstorm` | Multi-agent team skills |
| `workflow-*` | `workflow-plan`, `workflow-execute` | Planning and execution workflows |
| `*-cycle` | `review-cycle`, `refactor-cycle` | Iterative process skills |
| `memory-*` | `memory-capture`, `memory-manage` | Memory-related operations |
### Custom Skill Naming
When creating custom skills, use these guidelines:
```yaml
# Good: Clear, descriptive, follows convention
name: generate-react-component
name: api-scaffolding
name: test-coverage-analyzer
# Avoid: Too generic or unclear
name: helper
name: stuff
name: my-skill-v2
```
### Naming Principles
1. **Use kebab-case**: All skill names use lowercase with hyphens
```yaml
name: team-lifecycle-v4 # Good
name: teamLifecycleV4 # Bad
```
2. **Start with action or category**: Indicate what the skill does
```yaml
name: generate-component # Action-based
name: test-coverage # Category-based
```
3. **Make version suffixes meaningful**: Use a numeric suffix only for a true iteration of the same skill; name other variants by their purpose
```yaml
name: team-lifecycle-v5 # Version iteration
name: workflow-lite # Lightweight variant
```
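The kebab-case rule above is easy to enforce mechanically. A minimal shell sketch — the `is_kebab_case` helper is illustrative, not part of CCW:

```shell
# Illustrative helper: accept only lowercase segments separated by single hyphens
is_kebab_case() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9]*(-[a-z0-9]+)*$'
}

is_kebab_case "team-lifecycle-v4" && echo "ok"        # prints: ok
is_kebab_case "teamLifecycleV4" || echo "rejected"    # prints: rejected
```

A check like this fits naturally in a pre-commit hook that validates skill directory names.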
## Command Naming
### CLI Command Pattern
Commands follow a `category:action:variant` pattern:
```bash
# Format
ccw <category>:<action>:<variant>
# Examples
ccw workflow:plan:verify
ccw workflow:replan
ccw memory:capture:session
ccw team:lifecycle
```
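Because the pattern is strictly colon-delimited, splitting a command back into its parts is mechanical. A sketch (the variable names are illustrative):

```shell
# Split <category>:<action>:<variant> into its three parts
cmd="workflow:plan:verify"
IFS=':' read -r category action variant <<EOF
$cmd
EOF
echo "category=$category action=$action variant=$variant"
# prints: category=workflow action=plan variant=verify
```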
### Command Aliases
Short aliases for common commands:
| Full Command | Short Alias |
|--------------|-------------|
| `team:lifecycle-v4` | `team lifecycle` |
| `brainstorm:auto` | `brainstorm` |
## File Naming
### Skill Files
```
~/.claude/skills/my-skill/
├── SKILL.md # Skill definition (required, uppercase)
├── index.ts # Skill logic (optional)
├── phases/ # Phase files (optional)
│ ├── phase-1.md # Numbered phases
│ └── phase-2.md
└── examples/ # Usage examples
└── basic-usage.md
```
### Documentation Files
Documentation follows clear hierarchical patterns:
```
docs/
├── skills/
│ ├── index.md # Skills overview
│ ├── core-skills.md # Built-in skills reference
│ ├── naming-conventions.md # This file
│ └── custom.md # Custom skill development
├── workflows/
│ ├── index.md
│ ├── teams.md # Team workflows
│ └── best-practices.md
└── zh/ # Chinese translations
└── skills/
└── index.md
```
### Markdown File Conventions
| Pattern | Example | Usage |
|---------|---------|-------|
| `index.md` | `skills/index.md` | Section overview |
| `kebab-case.md` | `naming-conventions.md` | Topic pages |
| `UPPERCASE.md` | `SKILL.md`, `README.md` | Special/config files |
## Configuration Keys
### Skill Frontmatter
```yaml
---
name: workflow-plan # kebab-case
description: 4-phase planning # Sentence case
version: 1.0.0 # Semantic versioning
triggers: # Array format
- workflow-plan
- workflow:replan
category: planning # Lowercase
tags: # Array of keywords
- planning
- verification
---
```
### CLI Tool Configuration
```json
{
"tools": {
"gemini": {
"enabled": true, // camelCase for JSON
"primaryModel": "gemini-2.5-flash",
"tags": ["analysis", "debug"]
}
}
}
```
## Variable Naming in Code
### TypeScript/JavaScript
```typescript
// Files and directories
import { SkillContext } from './types'
// Variables and functions
const skillName = "my-skill"
function executeSkill() {}
// Classes and interfaces
class SkillExecutor {}
interface SkillOptions {}
// Constants
const MAX_RETRIES = 3
const DEFAULT_TIMEOUT = 5000
```
### Configuration Keys
```yaml
# Use kebab-case for YAML configuration keys
skill-name: my-skill
max-retries: 3
default-timeout: 5000
# JSON uses camelCase
{
"skillName": "my-skill",
"maxRetries": 3,
"defaultTimeout": 5000
}
```
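Bridging the two conventions is a mechanical transform. A sketch of kebab-case → camelCase (the helper name is illustrative):

```shell
# Convert a kebab-case key to its camelCase JSON equivalent
kebab_to_camel() {
  printf '%s' "$1" | awk -F'-' '{
    out = $1
    for (i = 2; i <= NF; i++) out = out toupper(substr($i, 1, 1)) substr($i, 2)
    print out
  }'
}

kebab_to_camel "max-retries"        # prints: maxRetries
kebab_to_camel "default-timeout"    # prints: defaultTimeout
```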
## Trigger Keywords
### Skill Triggers
Triggers define how skills are invoked:
| Skill | Triggers (English) | Triggers (Chinese) |
|-------|-------------------|-------------------|
| brainstorm | `brainstorm`, `brainstorming` | `头脑风暴` |
| team-planex | `team planex`, `wave pipeline` | `波浪流水线` |
| review-code | `review code`, `code review` | `代码审查` |
| memory-manage | `memory manage`, `update memory` | `更新记忆` |
### Trigger Guidelines
1. **Use natural language**: Triggers should be conversational
2. **Support multiple languages**: English and Chinese for built-in skills
3. **Include variants**: Add common synonyms and abbreviations
4. **Be specific**: Avoid overly generic triggers that conflict
## Session Naming
### Workflow Sessions
Sessions use timestamp-based naming:
```
.workflow/.team/
├── TLS-my-project-2026-03-02/ # Team:Project:Date
├── WS-feature-dev-2026-03-02/ # Workflow:Feature:Date
└── review-session-2026-03-02/ # Descriptor:Date
```
### Session ID Format
```
<TYPE>-<DESCRIPTOR>-<DATE>
Types:
- TLS = Team Lifecycle Session
- WS = Workflow Session
- RS = Review Session
```
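A session directory name can be assembled directly from this format. A sketch — CCW generates these internally; the helper here is only illustrative:

```shell
# Build <TYPE>-<DESCRIPTOR>-<DATE> using today's date
make_session_id() {
  printf '%s-%s-%s\n' "$1" "$2" "$(date +%Y-%m-%d)"
}

make_session_id TLS my-project   # e.g. TLS-my-project-2026-03-02
```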
## Examples
### Example 1: Good Skill Naming
```yaml
---
name: api-scaffolding
description: Generate REST API boilerplate with routes, controllers, and tests
version: 1.0.0
triggers:
- generate api
- api scaffold
- create api
category: development
tags: [api, rest, scaffolding, generator]
---
```
### Example 2: Good File Organization
```
~/.claude/skills/api-scaffolding/
├── SKILL.md
├── index.ts
├── templates/
│ ├── controller.ts.template
│ ├── route.ts.template
│ └── test.spec.ts.template
└── examples/
├── basic-api.md
└── authenticated-api.md
```
### Example 3: Good Command Naming
```bash
# Clear and hierarchical
ccw api:scaffold:rest
ccw api:scaffold:graphql
ccw api:test:coverage
# Aliases for convenience
ccw api:scaffold # Defaults to REST
ccw api:test # Defaults to coverage
```
## Migration Checklist
When renaming skills or commands:
- [ ] Update `SKILL.md` frontmatter
- [ ] Update all trigger references
- [ ] Update documentation links
- [ ] Add migration note in old skill description
- [ ] Update session naming if applicable
- [ ] Test all command invocations
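Steps like "update all trigger references" are easy to verify with a recursive search. A self-contained sketch (the directory and skill names are illustrative):

```shell
# Find leftover references to the old skill name after a rename
old_name="workflow-lite-plan"
demo_dir="/tmp/ccw-rename-demo"
mkdir -p "$demo_dir"
echo "still uses $old_name here" > "$demo_dir/page.md"

grep -rln "$old_name" "$demo_dir"   # lists files still referencing the old name
```

Run the same `grep -rln` over your real `docs/` and skills directories until it returns nothing.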
::: info See Also
- [Custom Skill Development](./custom.md) - Creating your own skills
- [Core Skills Reference](./core-skills.md) - All built-in skills
- [Skills Library](./index.md) - Skill overview and categories
:::

View File

@@ -0,0 +1,175 @@
# Workflow Comparison Table
> **Complete reference for all CCW workflows** - Compare workflows by invocation, pipeline, use case, complexity, and auto-chain behavior.
## Quick Reference
| Workflow | Best For | Level | Self-Contained |
|----------|----------|-------|----------------|
| workflow-lite-planex | Quick tasks, bug fixes | 2 (Lightweight) | YES |
| workflow-plan → workflow-execute | Complex features | 3-4 (Standard) | NO (requires execute) |
| workflow-tdd-plan → workflow-execute | TDD development | 3 (Standard) | NO (requires execute) |
| workflow-test-fix | Test generation/fix | 3 (Standard) | YES |
| workflow-multi-cli-plan | Multi-perspective planning | 3 (Standard) | YES |
| brainstorm-with-file | Ideation, exploration | 4 (Full) | NO (chains to plan) |
| analyze-with-file | Deep analysis | 3 (Standard) | NO (chains to lite-plan) |
| debug-with-file | Hypothesis-driven debugging | 3 (Standard) | YES |
| collaborative-plan-with-file | Multi-agent planning | 3 (Standard) | NO (chains to execute) |
| roadmap-with-file | Strategic roadmaps | 4 (Strategic) | NO (chains to team-planex) |
| integration-test-cycle | Integration testing | 3 (Standard) | YES |
| refactor-cycle | Tech debt refactoring | 3 (Standard) | YES |
| review-cycle | Code review | 3 (Standard) | YES |
| spec-generator | Specification packages | 4 (Full) | NO (chains to plan) |
| team-planex | Issue batch execution | Team | YES |
| team-lifecycle-v5 | Full lifecycle | Team | YES |
| issue pipeline | Issue management | 2.5 (Bridge) | YES |
---
## Complete Comparison Table
| Workflow | Invocation | Pipeline | Use Case | Level | Self-Contained | Auto-Chains To |
|----------|------------|----------|----------|-------|----------------|----------------|
| **Plan+Execute Workflows** |
| workflow-lite-planex | `/ccw "task"` (auto for low/medium complexity) | explore → plan → confirm → execute | Quick features, bug fixes, simple tasks | 2 (Lightweight) | YES | workflow-test-fix |
| workflow-plan | `/ccw "complex feature"` (high complexity) | session → context → convention → gen → verify/replan | Complex feature planning, formal verification | 3-4 (Standard) | NO | workflow-execute |
| workflow-execute | `/workflow-execute` (after plan) | session discovery → task processing → commit | Execute pre-generated plans | 3 (Standard) | YES | review-cycle (optional) |
| workflow-multi-cli-plan | `/ccw "multi-cli plan: ..."` | ACE context → CLI discussion → plan → execute | Multi-perspective planning | 3 (Standard) | YES | (internal handoff) |
| **TDD Workflows** |
| workflow-tdd-plan | `/ccw "Implement with TDD"` | 6-phase TDD plan → verify | Test-driven development planning | 3 (Standard) | NO | workflow-execute |
| workflow-test-fix | `/ccw "generate tests"` or auto-chained | session → context → analysis → gen → cycle | Test generation, coverage improvement | 3 (Standard) | YES | (standalone) |
| **Brainstorm Workflows** |
| brainstorm | `/brainstorm "topic"` | mode detect → framework → parallel analysis → synthesis | Multi-perspective ideation | 4 (Full) | YES (ideation only) | workflow-plan |
| brainstorm-with-file | `/ccw "brainstorm: ..."` | brainstorm + documented artifacts | Documented ideation with session | 4 (Full) | NO | workflow-plan → execute |
| collaborative-plan-with-file | `/ccw "collaborative plan: ..."` | understanding → parallel agents → plan-note.md | Multi-agent collaborative planning | 3 (Standard) | NO | unified-execute-with-file |
| **Analysis Workflows** |
| analyze-with-file | `/ccw "analyze: ..."` | multi-CLI analysis → discussion.md | Deep understanding, architecture exploration | 3 (Standard) | NO | workflow-lite-planex |
| debug-with-file | `/ccw "debug: ..."` | hypothesis-driven iteration → debug.log | Systematic debugging | 3 (Standard) | YES | (standalone) |
| **Review Workflows** |
| review-cycle | `/ccw "review code"` | discovery → analysis → aggregation → deep-dive → completion | Code review, quality gates | 3 (Standard) | YES | fix mode (if findings) |
| **Specification Workflows** |
| spec-generator | `/ccw "specification: ..."` | study → discovery → brief → PRD → architecture → epics | Complete specification package | 4 (Full) | YES (docs only) | workflow-plan / team-planex |
| **Team Workflows** |
| team-planex | `/ccw "team planex: ..."` | coordinator → planner wave → executor wave | Issue-based parallel execution | Team | YES | (complete pipeline) |
| team-lifecycle-v5 | `/ccw "team lifecycle: ..."` | spec pipeline → impl pipeline | Full lifecycle specification to validation | Team | YES | (complete lifecycle) |
| team-arch-opt | (architecture optimization) | architecture analysis → optimization | Architecture optimization | Team | YES | (complete) |
| **Cycle Workflows** |
| integration-test-cycle | `/ccw "integration test: ..."` | explore → test dev → test-fix cycle → reflection | Integration testing with iteration | 3 (Standard) | YES | (self-iterating) |
| refactor-cycle | `/ccw "refactor: ..."` | discover → prioritize → execute → validate | Tech debt discovery and refactoring | 3 (Standard) | YES | (self-iterating) |
| **Issue Workflow** |
| issue pipeline | `/ccw "use issue workflow"` | discover → plan → queue → execute | Structured issue management | 2.5 (Bridge) | YES | (complete pipeline) |
| **Roadmap Workflow** |
| roadmap-with-file | `/ccw "roadmap: ..."` | strategic roadmap → issue creation → execution-plan | Strategic requirement decomposition | 4 (Strategic) | NO | team-planex |
---
## Workflow Level Classification
| Level | Workflows | Characteristics |
|-------|-----------|-----------------|
| **2 (Lightweight)** | workflow-lite-planex, docs | Quick execution, minimal phases |
| **2.5 (Bridge)** | issue pipeline, rapid-to-issue | Bridge to issue workflow |
| **3 (Standard)** | workflow-plan, workflow-execute, workflow-tdd-plan, workflow-test-fix, review-cycle, debug-with-file, analyze-with-file, workflow-multi-cli-plan | Full planning/execution, multi-phase |
| **4 (Full)** | brainstorm, spec-generator, brainstorm-with-file, roadmap-with-file | Complete exploration, specification |
| **Team** | team-planex, team-lifecycle-v5, team-arch-opt | Multi-agent parallel execution |
| **Cycle** | integration-test-cycle, refactor-cycle | Self-iterating with reflection |
---
## Auto-Chain Reference
| Source Workflow | Auto-Chains To | Condition |
|-----------------|---------------|-----------|
| workflow-lite-planex | workflow-test-fix | Default (unless skip-tests) |
| workflow-plan | workflow-execute | After plan confirmation |
| workflow-execute | review-cycle | User choice via Phase 6 |
| workflow-tdd-plan | workflow-execute | After TDD plan validation |
| brainstorm | workflow-plan | Auto-chain for formal planning |
| brainstorm-with-file | workflow-plan → workflow-execute | Auto |
| analyze-with-file | workflow-lite-planex | Auto |
| debug-with-file | (none) | Standalone |
| collaborative-plan-with-file | unified-execute-with-file | Auto |
| roadmap-with-file | team-planex | Auto |
| spec-generator | workflow-plan / team-planex | User choice |
| review-cycle | fix mode | If findings exist |
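The auto-chain table above can be sketched as a simple lookup: each workflow points at its default successor, and a chain is expanded by following links until a workflow with no successor is reached. This is only an illustration of the table's semantics; the names mirror the table, but the `expandChain` helper is hypothetical, not CCW's actual implementation.

```typescript
// Illustrative model of the auto-chain table (default paths only;
// user-choice chains like workflow-execute → review-cycle are omitted).
const autoChains: Record<string, string[]> = {
  "workflow-lite-planex": ["workflow-test-fix"],
  "workflow-plan": ["workflow-execute"],
  "workflow-tdd-plan": ["workflow-execute"],
  "brainstorm": ["workflow-plan"],
  "brainstorm-with-file": ["workflow-plan", "workflow-execute"],
  "analyze-with-file": ["workflow-lite-planex"],
  "debug-with-file": [], // standalone: chains to nothing
};

// Follow each default successor until a terminal workflow is reached.
function expandChain(start: string): string[] {
  const chain = [start];
  let next = autoChains[start]?.[0];
  while (next) {
    chain.push(next);
    next = autoChains[next]?.[0];
  }
  return chain;
}
```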
---
## Self-Contained vs Multi-Skill
| Workflow | Self-Contained | Notes |
|----------|---------------|-------|
| workflow-lite-planex | YES | Complete plan + execute |
| workflow-plan | NO | Requires workflow-execute |
| workflow-execute | YES | Complete execution |
| workflow-tdd-plan | NO | Requires workflow-execute |
| workflow-test-fix | YES | Complete generation + execution |
| brainstorm | YES (ideation) | NO for implementation |
| review-cycle | YES | Complete review + optional fix |
| spec-generator | YES (docs) | NO for implementation |
| team-planex | YES | Complete team pipeline |
| team-lifecycle-v5 | YES | Complete lifecycle |
| debug-with-file | YES | Complete debugging |
| integration-test-cycle | YES | Self-iterating |
| refactor-cycle | YES | Self-iterating |
---
## Keyword Detection Reference
| Keyword Pattern | Detected Workflow |
|-----------------|-------------------|
| `urgent`, `critical`, `hotfix` | bugfix-hotfix |
| `from scratch`, `greenfield`, `new project` | greenfield |
| `brainstorm`, `ideation`, `multi-perspective` | brainstorm |
| `debug`, `hypothesis`, `systematic` | debug-with-file |
| `analyze`, `understand`, `collaborative analysis` | analyze-with-file |
| `roadmap` | roadmap-with-file |
| `specification`, `PRD`, `产品需求` | spec-generator |
| `integration test`, `集成测试` | integration-test-cycle |
| `refactor`, `技术债务` | refactor-cycle |
| `team planex`, `wave pipeline` | team-planex |
| `multi-cli`, `多模型协作` | workflow-multi-cli-plan |
| `TDD`, `test-driven` | workflow-tdd-plan |
| `review`, `code review` | review-cycle |
| `issue workflow`, `use issue workflow` | issue pipeline |
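The detection table above amounts to a first-match keyword router: scan the request against a pattern list and return the first workflow whose pattern matches. The sketch below is an assumption about how such routing could work (pattern set abbreviated); it is not CCW's actual router, which also weighs complexity and clarity.

```typescript
// Hypothetical first-match keyword router over a subset of the table above.
const keywordRoutes: [RegExp, string][] = [
  [/\b(urgent|critical|hotfix)\b/i, "bugfix-hotfix"],
  [/\b(from scratch|greenfield|new project)\b/i, "greenfield"],
  [/\b(brainstorm|ideation)\b/i, "brainstorm"],
  [/\b(TDD|test-driven)\b/i, "workflow-tdd-plan"],
  [/\bcode review\b|\breview\b/i, "review-cycle"],
];

function detectWorkflow(request: string): string | null {
  for (const [pattern, workflow] of keywordRoutes) {
    if (pattern.test(request)) return workflow;
  }
  return null; // no keyword hit: fall through to complexity-based selection
}
```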
---
## Workflow Selection Guide
| Task Type | Recommended Workflow | Command Chain |
|-----------|---------------------|---------------|
| Quick feature | `/ccw "..."` | lite-planex → test-fix |
| Bug fix | `/ccw "fix ..."` | lite-planex --bugfix → test-fix |
| Complex feature | `/ccw "..."` (detected) | plan → execute → review → test-fix |
| Exploration | `/workflow:analyze-with-file "..."` | analysis → (optional) lite-planex |
| Ideation | `/workflow:brainstorm-with-file "..."` | brainstorm → plan → execute |
| Debugging | `/workflow:debug-with-file "..."` | hypothesis-driven debugging |
| Issue management | `/issue:new` → `/issue:plan` → `/issue:queue` → `/issue:execute` | issue workflow |
| Multi-issue batch | `/issue:discover` → `/issue:plan --all-pending` | issue batch workflow |
| Code review | `/cli:codex-review --uncommitted` | codex review |
| Team coordination | `team-lifecycle-v5` or `team-planex` | team workflow |
| TDD development | `/ccw "Implement with TDD"` | tdd-plan → execute |
| Integration testing | `/ccw "integration test: ..."` | integration-test-cycle |
| Tech debt | `/ccw "refactor: ..."` | refactor-cycle |
| Specification docs | `/ccw "specification: ..."` | spec-generator → plan |
---
## Greenfield Development Paths
| Size | Pipeline | Complexity |
|------|----------|------------|
| Small | brainstorm-with-file → workflow-plan → workflow-execute | 3 |
| Medium | brainstorm-with-file → workflow-plan → workflow-execute → workflow-test-fix | 3 |
| Large | brainstorm-with-file → workflow-plan → workflow-execute → review-cycle → workflow-test-fix | 4 |
---
## Related Documentation
- [4-Level System](./4-level.md) - Detailed workflow explanation
- [Best Practices](./best-practices.md) - Workflow optimization tips
- [Examples](./examples.md) - Workflow usage examples
- [Teams](./teams.md) - Team workflow coordination


@@ -1,242 +0,0 @@
# Dashboard
## One-Line Summary
**The dashboard provides an at-a-glance view of project workflow status, statistics, and recent activity through an intuitive widget-based interface.**
---
## Pain Points Addressed
| Pain point | Status quo | Dashboard solution |
|------|----------|----------------|
| **Poor project visibility** | No way to see overall project health | Project info banner with tech stack and development index |
| **Scattered metrics** | Statistics spread across multiple places | Centralized stats with mini trend charts |
| **Unknown workflow status** | Hard to track session progress | Pie chart with status breakdown |
| **Recent work gets lost** | No quick access to active sessions | Session carousel with task details |
| **Unclear index status** | Unknown whether code is indexed | Live index status indicator |
---
## Overview
**Location**: `ccw/frontend/src/pages/HomePage.tsx`
**Purpose**: Dashboard home page providing project overview, statistics, workflow status, and recent-activity monitoring.
**Access**: Navigation → Dashboard (default home page at `/`)
**Layout**:
```
+--------------------------------------------------------------------------+
| Dashboard header (title + refresh)                                       |
+--------------------------------------------------------------------------+
| WorkflowTaskWidget (combined card)                                       |
| +--------------------------------------------------------------------+  |
| | Project info banner (expandable)                                   |  |
| | - Project name, description, tech-stack badges                     |  |
| | - Quick stats (features, bug fixes, enhancements)                  |  |
| | - Index status indicator                                           |  |
| +-----------------+-----------------+------------------------------+  |
| | Stats section   | Workflow status | Task details (carousel)      |  |
| | - 6 mini cards  | - Pie chart     | - Session navigation         |  |
| | - Mini trends   | - Legend        | - Task list (2 columns)      |  |
| +-----------------+-----------------+------------------------------+  |
+--------------------------------------------------------------------------+
| RecentSessionsWidget                                                     |
| +--------------------------------------------------------------------+  |
| | Tabs: All tasks | Workflow | Lite tasks                            |  |
| | +----------------------------------------------------------------+ |  |
| | | Task cards with status, progress, tags, and time               | |  |
| | +----------------------------------------------------------------+ |  |
+--------------------------------------------------------------------------+
```
---
## Live Demo
:::demo DashboardOverview #DashboardOverview.tsx :::
---
## Core Features
| Feature | Description |
|------|------|
| **Project info banner** | Expandable banner showing project name, description, tech stack (languages, frameworks, architecture), development index (features/bug fixes/enhancements), and live index status |
| **Stats section** | 6 mini stat cards (active sessions, total tasks, completed, pending, failed, today's activity) with 7-day mini trend charts |
| **Workflow status pie chart** | Donut chart of session status breakdown (completed, in progress, planned, paused, archived) with percentages |
| **Session carousel** | Auto-rotating (5-second interval) session cards with task lists, progress bars, and manual navigation arrows |
| **Recent sessions widget** | Tabbed view across all task types with filtering, status badges, and progress indicators |
| **Live updates** | Statistics auto-refresh every 60 seconds; index status refreshes every 30 seconds |
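The carousel's auto-rotation described above comes down to advancing an index on a timer and wrapping around at the end. A minimal sketch of that logic, assuming the component keeps `currentSessionIndex` in state (the helper name is illustrative, not the actual source):

```typescript
// Advance the carousel index, wrapping back to 0 after the last session.
// Guard against an empty session list.
function nextSessionIndex(current: number, total: number): number {
  if (total === 0) return 0;
  return (current + 1) % total;
}
```

In the component, this would typically be driven by something like `setInterval(() => setIndex(i => nextSessionIndex(i, sessions.length)), 5000)`.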
---
## Component Hierarchy
```
HomePage
├── DashboardHeader
│   ├── Title
│   └── Refresh action button
├── WorkflowTaskWidget
│   ├── ProjectInfoBanner (expandable)
│   │   ├── Project name and description
│   │   ├── Tech-stack badges
│   │   ├── Quick stat cards
│   │   ├── Index status indicator
│   │   ├── Architecture section
│   │   ├── Key components
│   │   └── Design patterns
│   ├── Stats section
│   │   └── MiniStatCard (6 cards with mini trend charts)
│   ├── WorkflowStatusChart
│   │   └── Pie chart with legend
│   └── SessionCarousel
│       ├── Navigation arrows
│       └── Session cards (task lists)
└── RecentSessionsWidget
    ├── Tab navigation (All | Workflow | Lite tasks)
    ├── Task grid
    │   └── TaskItemCard
    └── Loading / empty states
```
---
## Props API
### HomePage component
| Prop | Type | Default | Description |
|------|------|--------|------|
| - | - | - | This page component takes no props; data is fetched via hooks |
### WorkflowTaskWidget
| Prop | Type | Default | Description |
|------|------|--------|------|
| `className` | `string` | `undefined` | Extra CSS classes for styling |
### RecentSessionsWidget
| Prop | Type | Default | Description |
|------|------|--------|------|
| `className` | `string` | `undefined` | Extra CSS classes for styling |
| `maxItems` | `number` | `6` | Maximum number of items to display |
---
## Usage Examples
### Basic dashboard
```tsx
import { HomePage } from '@/pages/HomePage'
// The dashboard renders automatically at the root route (/)
// No props needed - data is fetched via hooks
```
### Embedding WorkflowTaskWidget
```tsx
import { WorkflowTaskWidget } from '@/components/dashboard/widgets/WorkflowTaskWidget'

function CustomDashboard() {
  return (
    <div className="p-6">
      <WorkflowTaskWidget />
    </div>
  )
}
```
### Customizing the recent sessions widget
```tsx
import { RecentSessionsWidget } from '@/components/dashboard/widgets/RecentSessionsWidget'

function ActivityFeed() {
  return (
    <div className="p-6">
      <RecentSessionsWidget maxItems={10} />
    </div>
  )
}
```
---
## State Management
### Local state
| State | Type | Description |
|------|------|------|
| `hasError` | `boolean` | Error tracking for critical failures |
| `projectExpanded` | `boolean` | Expansion state of the project info banner |
| `currentSessionIndex` | `number` | Index of the active session in the carousel |
| `activeTab` | `'all' \| 'workflow' \| 'lite'` | Filter tab of the recent sessions widget |
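The `activeTab` state drives a straightforward filter over the task list. A minimal sketch of that logic, with assumed data shapes (the real task type lives in the store and is richer than this):

```typescript
// Hypothetical task shape; only the fields needed for tab filtering.
type TaskKind = "workflow" | "lite";
interface Task {
  id: string;
  kind: TaskKind;
}

// "all" shows everything; otherwise keep only tasks of the selected kind.
function filterByTab(
  tasks: Task[],
  activeTab: "all" | "workflow" | "lite"
): Task[] {
  if (activeTab === "all") return tasks;
  return tasks.filter((t) => t.kind === activeTab);
}
```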
### Store selectors (Zustand)
| Store | Selector | Purpose |
|-------|--------|------|
| `appStore` | `selectIsImmersiveMode` | Checks whether immersive mode is active |
### Custom hooks (data fetching)
| Hook | Description | Refetch interval |
|------|-------------|--------------|
| `useWorkflowStatusCounts` | Session status distribution | - |
| `useDashboardStats` | Statistics with mini trend charts | 60 s |
| `useProjectOverview` | Project info and tech stack | - |
| `useIndexStatus` | Live index status | 30 s |
| `useSessions` | Active session data | - |
| `useLiteTasks` | Lite-task data for the recent widget | - |
---
## Interactive Demos
### Stat cards demo
:::demo MiniStatCards #MiniStatCards.tsx :::
### Project info banner demo
:::demo ProjectInfoBanner #ProjectInfoBanner.tsx :::
### Session carousel demo
:::demo SessionCarousel #SessionCarousel.tsx :::
---
## Accessibility
- **Keyboard navigation**
- <kbd>Tab</kbd> - Move between interactive elements
- <kbd>Enter</kbd>/<kbd>Space</kbd> - Activate buttons and cards
- <kbd>Arrow keys</kbd> - Navigate carousel sessions
- **ARIA attributes**
- `aria-label` on navigation buttons
- `aria-expanded` on expandable sections
- `aria-live` regions for live updates
- **Screen reader support**
- All charts have text descriptions
- Status indicators include text labels
- Navigation is announced correctly
---
## Related Links
- [Sessions](/features/sessions) - View and manage all sessions
- [Terminal dashboard](/features/terminal) - Terminal-first monitoring interface
- [Queue](/features/queue) - Issue execution queue management
- [Memory](/features/memory) - Persistent memory management
- [Settings](/features/settings) - Global application settings


@@ -1,53 +0,0 @@
---
layout: home
title: Claude Code Workflow
titleTemplate: Claude Code Workflow Docs
hero:
  name: Claude Code Workflow
  text: Advanced AI-powered development environment
  tagline: Supercharge your coding workflow with intelligent development
  image:
    src: /logo.svg
    alt: Claude Code Workflow
  actions:
    - theme: brand
      text: Get Started
      link: /zh-CN/guide/ch01-what-is-claude-dms3
    - theme: alt
      text: Explore Features
      link: /zh-CN/features/spec
features:
  - title: 🤖 Intelligent Orchestration
    details: Claude AI-driven task orchestration that automatically breaks complex projects into executable steps.
  - title: ⚡ Built-in Skills
    details: 27+ built-in skills covering code review, testing, refactoring, and other common development scenarios.
  - title: 🔄 Workflow System
    details: A four-level workflow hierarchy, from basic tasks to complex agent teams, flexible enough for any need.
  - title: 💾 Persistent Memory
    details: Cross-session knowledge accumulation and context management that lets the AI learn your coding preferences.
  - title: 🎨 Visual Interface
    details: A rich web UI including a dashboard, terminal monitoring, issue tracking, and other practical tools.
  - title: 🔌 MCP Integration
    details: Supports the Model Context Protocol for easy extension of AI capabilities and tooling.
---
## What is Claude Code Workflow?
Claude Code Workflow (CCW) is an intelligent development environment built on Claude AI that helps developers complete coding tasks more efficiently through workflow orchestration, agent collaboration, and persistent memory.
### Core features
- **Intelligent task decomposition**: Automatically breaks complex requirements into executable subtasks
- **Agent collaboration system**: Multiple specialized agents work together across code, tests, documentation, and more
- **Contextual memory**: Preserves project knowledge and coding decisions across sessions
- **Real-time monitoring**: Visual tools such as terminal stream monitoring and issue tracking
- **CLI tool integration**: Flexible command-line tooling with support for multiple AI models
## Quick Links
- [Installation Guide](/zh-CN/guide/installation)
- [Your First Workflow](/zh-CN/guide/first-workflow)
- [Core Features](/zh-CN/features/spec)
- [Component Docs](/zh-CN/components/)


@@ -14,6 +14,58 @@
| **Missing error recovery** | Tool failures require manual retries | Automatic fallback to a backup model |
| **Weak context management** | Multi-turn context loses coherence | Native Resume manages it automatically |
---
## Semantic Invocation (Recommended)
::: tip Core idea
CLI tools are **capability extensions the AI invokes automatically**. Describe your need in natural language and the AI selects and calls the right tool.
:::
### Semantic trigger examples
Just phrase the request naturally in conversation; the AI invokes the matching tool:
| Goal | Natural-language request | What the AI runs |
| :--- | :--- | :--- |
| **Code analysis** | "Analyze the code structure of the auth module with Gemini" | Gemini + analysis rule |
| **Security audit** | "Scan for vulnerabilities with Gemini, focusing on the OWASP Top 10" | Gemini + security assessment rule |
| **Implementation** | "Have Qwen implement a user repository with caching" | Qwen + feature implementation rule |
| **Code review** | "Review this PR's changes with Codex" | Codex + review rule |
| **Bug diagnosis** | "Use Gemini to analyze the root cause of this memory leak" | Gemini + diagnosis rule |
### Multi-model collaboration patterns
With a semantic description, multiple AI models can work together:
| Pattern | How to phrase it | Best for |
| --- | --- | --- |
| **Collaborative** | "Have Gemini and Codex jointly analyze the architecture problem" | Multiple angles on one problem |
| **Pipeline** | "Gemini designs the solution, Qwen implements, Codex reviews" | Complex tasks in stages |
| **Iterative** | "Diagnose the problem with Gemini, fix with Codex, iterate until tests pass" | Bug-fix loops |
| **Parallel** | "Have Gemini and Qwen each propose optimization suggestions" | Comparing alternatives |
### Collaboration examples
**Pipeline development**
```
User: I need a WebSocket real-time notification feature.
Have Gemini design the architecture, Qwen implement the code, and Codex review it.
AI: [invokes the three models in turn, completing the design → implement → review flow]
```
**Iterative fixing**
```
User: The tests are failing. Diagnose the cause with Gemini, have Qwen fix it, and loop until the tests pass.
AI: [iterates the diagnose-fix loop automatically until the problem is resolved]
```
---
## Low-Level Command Reference
The commands below are what the AI runs under the hood; users rarely need to execute them manually.
## Core Concepts at a Glance
| Concept | Description | Example |


@@ -2,255 +2,180 @@
## One-Line Positioning
**Drive the AI tool chain with natural language**: semantic CLI invocation, multi-model collaboration, intelligent memory management.
---
## 5.1 Semantic Tool Dispatch
### 5.1.1 Core concept
CCW's CLI tools are **capability extensions the AI invokes automatically**: describe what you need in natural language, and the AI selects and calls the right tool.
::: tip Key insight
- The user says: "Analyze this code with Gemini"
- The AI then: calls the Gemini CLI + applies the analysis rule + returns the result
- The user never needs to know the `ccw cli` command details
:::
### 5.1.2 Available tools and strengths
| Tool | Strengths | Typical trigger phrases |
| --- | --- | --- |
| **Gemini** | Deep analysis, architecture design, bug diagnosis | "analyze with Gemini", "deep understanding" |
| **Qwen** | Code generation, feature implementation | "have Qwen implement", "code generation" |
| **Codex** | Code review, Git operations | "review with Codex", "code review" |
| **OpenCode** | Open-source multi-model | "use OpenCode" |
### 5.1.3 Semantic trigger examples
Just phrase the request naturally in conversation; the AI invokes the matching tool:
| Goal | Natural-language request | What the AI runs |
| :--- | :--- | :--- |
| **Security assessment** | "Scan the auth module for vulnerabilities with Gemini" | Gemini + security analysis rule |
| **Implementation** | "Have Qwen implement a rate-limiting middleware for me" | Qwen + feature implementation rule |
| **Code review** | "Review this PR's changes with Codex" | Codex + review rule |
| **Bug diagnosis** | "Use Gemini to analyze the root cause of this memory leak" | Gemini + diagnosis rule |
### 5.1.4 Underlying configuration (optional)
The configuration the AI uses when invoking tools lives in `~/.claude/cli-tools.json`:
```json
{
  "tools": {
    "gemini": {
      "enabled": true,
      "primaryModel": "gemini-2.5-flash",
      "tags": ["分析", "Debug"]
    },
    "qwen": {
      "enabled": true,
      "primaryModel": "coder-model",
      "tags": ["实现"]
    }
  }
}
```
::: info Note
Tags help the AI pick the most suitable tool for a given task type. Users normally never need to edit this file.
:::
---
## 5.2 Multi-Model Collaboration
### 5.2.1 Collaboration patterns
With a semantic description, multiple AI models can work together:
| Pattern | How to phrase it | Best for |
| --- | --- | --- |
| **Collaborative** | "Have Gemini and Codex jointly analyze the architecture problem" | Multiple angles on one problem |
| **Pipeline** | "Gemini designs the solution, Qwen implements, Codex reviews" | Complex tasks in stages |
| **Iterative** | "Diagnose the problem with Gemini, fix with Codex, iterate until tests pass" | Bug-fix loops |
| **Parallel** | "Have Gemini and Qwen each propose optimization suggestions" | Comparing alternatives |
### 5.2.2 Semantic examples
**Collaborative analysis**
```
User: Have Gemini and Codex jointly analyze the security and performance of the src/auth module
AI: [invokes both models and synthesizes the findings]
```
**Pipeline development**
```
User: I need a WebSocket real-time notification feature.
Have Gemini design the architecture, Qwen implement the code, and Codex review it.
AI: [invokes the three models in turn, completing the design → implement → review flow]
```
**Iterative fixing**
```
User: The tests are failing. Diagnose the cause with Gemini, have Qwen fix it, and loop until the tests pass.
AI: [iterates the diagnose-fix loop automatically until the problem is resolved]
```
### 5.2.3 Model selection advice
| Task type | Recommended model | Rationale |
| --- | --- | --- |
| **Architecture analysis** | Gemini | Strong deep understanding and pattern recognition |
| **Bug diagnosis** | Gemini | Powerful root-cause analysis |
| **Code generation** | Qwen | Efficient code generation |
| **Code review** | Codex | Good Git integration, standard review format |
| **Long-context work** | Claude | Large context window |
---
## 5.3 Intelligent Memory Management
### 5.3.1 Memory system overview
CCW's memory system is a knowledge base **managed autonomously by the AI**, comprising:
| Category | Purpose | Examples |
| --- | --- | --- |
| **learnings** | Lessons learned | Experience with new technologies, best practices |
| **decisions** | Architecture decisions | Rationale for technology choices, design trade-offs |
| **conventions** | Coding conventions | Naming conventions, code style |
| **issues** | Known issues | Bug records, documented limitations |
### 5.3.2 Automatic use of memory
The AI automatically retrieves and applies relevant memory while working:
```
User: Help me implement the user authentication module
AI: [automatically retrieves auth-related decisions and conventions from memory]
    Per our earlier technical decision, we use the JWT + bcrypt approach...
```
### 5.3.3 How users steer memory
Although the AI manages memory automatically, users can reinforce it deliberately:
**Ask it to remember explicitly**
```
User: Remember this naming convention: all API routes use kebab-case
AI: [stores the rule in conventions memory]
```
**Ask it to recall a decision**
```
User: Why did we choose Redis for caching?
AI: [retrieves the decisions memory and answers]
```
**Correct a stale memory**
```
User: That decision changed - we now use PostgreSQL instead of MongoDB
AI: [updates the related decision memory]
```
### 5.3.4 Memory file locations
- **Global memory**: `~/.claude/projects/{project-name}/memory/`
- **Project memory**: `.claude/memory/` and `MEMORY.md`
---
## 5.4 Hook Automation
### 5.4.1 Hook concept
Hooks are automated steps that run before or after the AI executes a task; users never trigger them manually:
| Hook type | Trigger | Purpose |
| --- | --- | --- |
| **pre-command** | Before the AI starts reasoning | Load project conventions, retrieve memory |
| **post-command** | After the AI finishes | Save decisions, update indexes |
| **pre-commit** | Before a Git commit | Code review, convention checks |
### 5.4.2 Configuration example
Configured in `.claude/hooks.json`:
@@ -258,19 +183,14 @@ ccw search --trace-full "authenticateUser"
```json
{
  "pre-command": [
    {
      "name": "load-project-specs",
      "description": "Load project conventions",
      "command": "cat .workflow/specs/project-constraints.md"
    }
  ],
  "post-command": [
    {
      "name": "save-decisions",
      "description": "Save important decisions",
      "command": "ccw memory import \"{content}\""
    }
  ]
}
```
@@ -280,49 +200,54 @@ ccw search --trace-full "authenticateUser"
---
## 5.5 ACE Semantic Search
### 5.5.1 What is ACE?
ACE (Augment Context Engine) is the AI's **code-awareness capability**: it lets the AI understand the semantics of the entire codebase.
### 5.5.2 How the AI uses ACE
When the user asks a question, the AI automatically searches the relevant code through ACE:
```
User: How is the authentication flow implemented?
AI: [semantically searches auth-related code via ACE]
    Based on the code analysis, the authentication flow works as follows...
```
### 5.5.3 Configuration reference
| Configuration | Link |
| --- | --- |
| **Official docs** | [Augment MCP Documentation](https://docs.augmentcode.com/context-services/mcp/overview) |
| **Proxy tool** | [ace-tool (GitHub)](https://github.com/eastxiaodong/ace-tool) |
---
## 5.6 Semantic Prompt Cheat Sheet
### Common semantic patterns
| Goal | Example phrasing |
| --- | --- |
| **Analyze code** | "Analyze the architecture of src/auth with Gemini" |
| **Security audit** | "Scan for vulnerabilities with Gemini, focusing on the OWASP Top 10" |
| **Implement a feature** | "Have Qwen implement a user repository with caching" |
| **Code review** | "Review the recent changes with Codex" |
| **Bug diagnosis** | "Use Gemini to analyze the root cause of this memory leak" |
| **Multi-model collaboration** | "Gemini designs, Qwen implements, Codex reviews" |
| **Remember a convention** | "Remember: all APIs use RESTful style" |
| **Recall a decision** | "Why did we choose this tech stack?" |
### Collaboration pattern cheat sheet
| Pattern | Example phrasing |
| --- | --- |
| **Collaborative** | "Have Gemini and Codex jointly analyze..." |
| **Pipeline** | "Gemini designs, Qwen implements, Codex reviews" |
| **Iterative** | "Diagnose and fix until the tests pass" |
| **Parallel** | "Have several models each propose suggestions" |
---


@@ -106,13 +106,109 @@ npm install -g claude-code-workflow@latest
## Uninstalling
CCW provides an intelligent uninstall command that handles installation manifests, orphan-file cleanup, and protection of global files automatically.

### Using the CCW uninstall command (recommended)

```bash
ccw uninstall
```
The uninstall flow:
1. **Scan installation manifests** - Automatically detects all installed CCW instances (Global and Path modes)
2. **Interactive selection** - Shows the installation list and lets you choose which instance to uninstall
3. **Smart protection** - When uninstalling a Path-mode instance, global files (workflows, scripts, templates) are protected automatically if a Global installation exists
4. **Orphan-file cleanup** - Automatically removes skills and commands files no longer referenced by any installation
5. **Empty-directory cleanup** - Removes empty directories left behind by the installation
6. **Git Bash fix removal** - On Windows, after the last installation is removed, asks whether to remove the Git Bash multi-line prompt fix
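Step 4's orphan detection can be modeled simply: a file is an orphan when no remaining installation manifest references it. The sketch below is an illustration of that idea with assumed data shapes; CCW's real manifests carry more metadata.

```typescript
// Illustrative orphan-file detection: given all tracked files and the
// file lists of the manifests that remain after an uninstall, return
// every file no manifest still references.
function findOrphans(allFiles: string[], manifests: string[][]): string[] {
  const referenced = new Set(manifests.flat());
  return allFiles.filter((f) => !referenced.has(f));
}
```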
### Sample uninstall output
```
Found installations:
1. Global
Path: /Users/username/my-project
Date: 2026/3/2
Version: 7.0.5
Files: 156 | Dirs: 23
──────────────────────────────────────
? Select installation to uninstall: Global - /Users/username/my-project
? Are you sure you want to uninstall Global installation? Yes
✔ Removing files...
✔ Uninstall complete!
╔══════════════════════════════════════╗
║ Uninstall Summary ║
╠══════════════════════════════════════╣
║ ✓ Successfully Uninstalled ║
║ ║
║ Files removed: 156 ║
║ Directories removed: 23 ║
║ Orphan files cleaned: 3 ║
║ ║
║ Manifest removed ║
╚══════════════════════════════════════╝
```
### Manually uninstalling the npm package
To remove the CCW npm package entirely:
```bash
# Uninstall the global npm package
npm uninstall -g claude-code-workflow
```
### Manually deleting CCW files (not recommended)
If you must delete files manually, these are the exact paths CCW installs:
```bash
# Directories installed by CCW (safe to delete)
rm -rf ~/.claude/commands/ccw.md
rm -rf ~/.claude/commands/ccw-coordinator.md
rm -rf ~/.claude/commands/workflow
rm -rf ~/.claude/commands/issue
rm -rf ~/.claude/commands/cli
rm -rf ~/.claude/commands/memory
rm -rf ~/.claude/commands/idaw
rm -rf ~/.claude/skills/workflow-*
rm -rf ~/.claude/skills/team-*
rm -rf ~/.claude/skills/review-*
rm -rf ~/.claude/agents/team-worker.md
rm -rf ~/.claude/agents/cli-*-agent.md
rm -rf ~/.claude/workflows
rm -rf ~/.claude/scripts
rm -rf ~/.claude/templates
rm -rf ~/.claude/manifests
rm -rf ~/.claude/version.json
# Codex-related directories
rm -rf ~/.codex/prompts
rm -rf ~/.codex/skills
rm -rf ~/.codex/agents
# Other CLI directories
rm -rf ~/.gemini
rm -rf ~/.qwen
# CCW core directory
rm -rf ~/.ccw
```
::: danger Danger
**Do not** run `rm -rf ~/.claude`; it would delete your personal Claude Code configuration:
- `~/.claude/settings.json` - Your Claude Code settings
- `~/.claude/settings.local.json` - Local overrides
- MCP server configurations, and more
Always prefer `ccw uninstall` for a controlled uninstall.
:::
## Troubleshooting
### Permission issues


@@ -0,0 +1,736 @@
# Command & Skill Reference
> **Quick reference**: A complete catalog of Claude commands, skills, and Codex capabilities
---
## Quick Reference Tables
### Command quick reference
| Category | Command | Description | Arguments |
|----------|---------|-------------|-----------|
| **Orchestrator** | `/ccw` | Main workflow orchestrator | `"task description"` |
| **Orchestrator** | `/ccw-coordinator` | Command orchestration tool | `[task description]` |
| **Session** | `/workflow:session:start` | Start a workflow session | `[--type] [--auto|--new] [task]` |
| **Session** | `/workflow:session:resume` | Resume a paused session | - |
| **Session** | `/workflow:session:complete` | Complete the active session | `[-y] [--detailed]` |
| **Session** | `/workflow:session:list` | List all sessions | - |
| **Session** | `/workflow:session:sync` | Sync session to specs | `[-y] ["what was completed"]` |
| **Session** | `/workflow:session:solidify` | Solidify learnings | `[--type] [--category] "rule"` |
| **Issue** | `/issue:new` | Create a structured issue | `<url|text> [--priority 1-5]` |
| **Issue** | `/issue:discover` | Discover potential issues | `<path> [--perspectives=...]` |
| **Issue** | `/issue:plan` | Plan issue resolution | `--all-pending <ids>` |
| **Issue** | `/issue:queue` | Form an execution queue | `[--queues <n>] [--issue <id>]` |
| **Issue** | `/issue:execute` | Execute a queue | `--queue <id> [--worktree]` |
| **IDAW** | `/idaw:run` | IDAW orchestrator | `[--task <ids>] [--dry-run]` |
| **IDAW** | `/idaw:add` | Add tasks to the queue | - |
| **IDAW** | `/idaw:resume` | Resume an IDAW session | - |
| **IDAW** | `/idaw:status` | Show queue status | - |
| **With-File** | `/workflow:brainstorm-with-file` | Interactive brainstorming | `[-c] [-m creative|structured] "topic"` |
| **With-File** | `/workflow:analyze-with-file` | Collaborative analysis | `[-c] "topic"` |
| **With-File** | `/workflow:debug-with-file` | Hypothesis-driven debugging | `"bug description"` |
| **With-File** | `/workflow:collaborative-plan-with-file` | Multi-agent planning | - |
| **With-File** | `/workflow:roadmap-with-file` | Strategic roadmap | - |
| **Cycle** | `/workflow:integration-test-cycle` | Integration test cycle | - |
| **Cycle** | `/workflow:refactor-cycle` | Refactor cycle | - |
| **CLI** | `/cli:codex-review` | Codex code review | `[--uncommitted|--base|--commit]` |
| **CLI** | `/cli:cli-init` | Initialize CLI configuration | - |
| **Memory** | `/memory:prepare` | Prepare memory context | - |
| **Memory** | `/memory:style-skill-memory` | Style/skill memory | - |
### Skill quick reference
| Category | Skill | Internal pipeline | Use case |
|----------|-------|-------------------|----------|
| **Workflow** | workflow-lite-planex | explore → plan → confirm → execute | Quick features, bug fixes |
| **Workflow** | workflow-plan | session → context → convention → gen → verify/replan | Complex feature planning |
| **Workflow** | workflow-execute | session discovery → task processing → commit | Executing pre-generated plans |
| **Workflow** | workflow-tdd-plan | 6-phase TDD plan → verify | TDD development |
| **Workflow** | workflow-test-fix | session → context → analysis → gen → cycle | Test generation and fixing |
| **Workflow** | workflow-multi-cli-plan | ACE context → CLI discussion → plan → execute | Multi-perspective planning |
| **Workflow** | workflow-skill-designer | - | Creating new skills |
| **Team** | team-lifecycle-v5 | spec pipeline → impl pipeline | Full lifecycle |
| **Team** | team-planex | planner wave → executor wave | Issue batch execution |
| **Team** | team-arch-opt | architecture analysis → optimization | Architecture optimization |
| **Utility** | brainstorm | framework → parallel analysis → synthesis | Multi-perspective ideation |
| **Utility** | review-cycle | discovery → analysis → aggregation → deep-dive | Code review |
| **Utility** | spec-generator | study → brief → PRD → architecture → epics | Spec documentation package |
---
## 1. Main Orchestrator Commands
### /ccw
**Description**: Main workflow orchestrator - analyzes intent, selects a workflow, and executes the command chain in the main process
**Arguments**: `"task description"`
**Category**: orchestrator
**5-phase workflow**:
1. Phase 1: Analyze intent (detect task type, complexity, clarity)
2. Phase 1.5: Clarify requirements (if clarity < 2)
3. Phase 2: Select workflow and build the command chain
4. Phase 3: User confirmation
5. Phase 4: Set up TODO tracking and state files
6. Phase 5: Execute the command chain
**Skill mapping**:
| Skill | Internal pipeline |
|-------|-------------------|
| workflow-lite-planex | explore → plan → confirm → execute |
| workflow-plan | session → context → convention → gen → verify/replan |
| workflow-execute | session discovery → task processing → commit |
| workflow-tdd-plan | 6-phase TDD plan → verify |
| workflow-test-fix | session → context → analysis → gen → cycle |
| workflow-multi-cli-plan | ACE context → CLI discussion → plan → execute |
| review-cycle | session/module review → fix orchestration |
| brainstorm | auto/single-role → artifacts → analysis → synthesis |
| spec-generator | product-brief → PRD → architecture → epics |
**Auto mode**: The `-y` or `--yes` flag skips confirmations and propagates to all skills
---
### /ccw-coordinator
**Description**: Command orchestration tool - analyzes requirements, recommends a chain, and executes it sequentially with state persistence
**Arguments**: `[task description]`
**Category**: orchestrator
**3-phase workflow**:
1. Phase 1: Analyze requirements
2. Phase 2: Discover commands and recommend a chain
3. Phase 3: Execute the command chain sequentially
**Minimal execution units**:
| Unit | Commands | Purpose |
|------|----------|---------|
| Quick implementation | lite-plan (plan → execute) | Lightweight planning and execution |
| Multi-CLI planning | multi-cli-plan | Multi-perspective planning |
| Bug fix | lite-plan --bugfix | Bug diagnosis and fixing |
| Full plan + execute | plan → execute | Detailed planning |
| Verified plan + execute | plan → plan-verify → execute | Planning with verification |
| TDD plan + execute | tdd-plan → execute | TDD workflow |
| Issue workflow | discover → plan → queue → execute | Full issue lifecycle |
---
### /flow-create
**Description**: Flow template generator for the meta-skill /flow-coordinator
**Arguments**: `[template name] [--output <path>]`
**Category**: utility
**Execution flow**:
1. Phase 1: Template design (name + description + level)
2. Phase 2: Step definition (command category → concrete command → execution unit → mode)
3. Phase 3: Generate JSON
---
## 2. Workflow Session Commands
### /workflow:session:start
**Description**: Discovers existing sessions or starts a new workflow session, with intelligent session management and conflict detection
**Arguments**: `[--type <workflow|review|tdd|test|docs>] [--auto|--new] [task description]`
**Category**: session-management
**Session types**:
| Type | Description | Default skill |
|------|-------------|-------------|
| workflow | Standard implementation | workflow-plan skill |
| review | Code review session | review-cycle skill |
| tdd | TDD development | workflow-tdd-plan skill |
| test | Test generation/fixing | workflow-test-fix skill |
| docs | Documentation session | memory-manage skill |
**Modes**:
- **Discovery mode** (default): Lists active sessions and prompts the user
- **Auto mode** (`--auto`): Intelligent session selection
- **Force-new mode** (`--new`): Creates a new session
---
### /workflow:session:resume
**Description**: Resumes the most recently paused workflow session, with automatic session discovery and status updates
**Category**: session-management
---
### /workflow:session:complete
**Description**: Marks the active workflow session as complete, archives it, records lessons learned, updates the manifest, and removes the active flag
**Arguments**: `[-y|--yes] [--detailed]`
**Category**: session-management
**Execution phases**:
1. Find the session
2. Generate the manifest entry
3. Atomic commit (move to archive)
4. Auto-sync project state
---
### /workflow:session:list
**Description**: Lists all workflow sessions with status filtering, showing session metadata and progress
**Category**: session-management
---
### /workflow:session:sync
**Description**: Quickly syncs session work into specs/*.md and project-tech
**Arguments**: `[-y|--yes] ["what was completed"]`
**Category**: session-management
**Flow**:
1. Gather context (git diff, session, summary)
2. Extract updates (guidelines, tech)
3. Preview and confirm
4. Write both files
---
### /workflow:session:solidify
**Description**: Solidifies session learnings and user-defined constraints into permanent project guidelines, or compresses recent memory
**Arguments**: `[-y|--yes] [--type <convention|constraint|learning|compress>] [--category <category>] [--limit <N>] "rule or insight"`
**Category**: session-management
**Types and categories**:
| Type | Subcategories |
|------|---------------|
| convention | coding_style, naming_patterns, file_structure, documentation |
| constraint | architecture, tech_stack, performance, security |
| learning | architecture, performance, security, testing, process, other |
| compress | (operates on core memory) |
---
## 3. Issue Workflow Commands
### /issue:new
**Description**: Creates a structured issue from a GitHub URL or text description
**Arguments**: `[-y|--yes] <github-url | text description> [--priority 1-5]`
**Category**: issue
**Execution flow**:
1. Input analysis and clarity detection
2. Data extraction (GitHub or text)
3. Lightweight context hints (ACE for medium clarity)
4. Conditional clarification
5. GitHub publishing decision
6. Issue creation
---
### /issue:discover
**Description**: Discovers potential issues from multiple perspectives using CLI explore. Supports Exa external research for the security and best-practices perspectives.
**Arguments**: `[-y|--yes] <path pattern> [--perspectives=bug,ux,...] [--external]`
**Category**: issue
**Available perspectives**:
| Perspective | Focus | Categories |
|-------------|-------|------------|
| bug | Potential bugs | edge-case, null-check, resource-leak, race-condition |
| ux | User experience | error-message, loading-state, feedback, accessibility |
| test | Test coverage | missing-test, edge-case-test, integration-gap |
| quality | Code quality | complexity, duplication, naming, documentation |
| security | Security issues | injection, auth, encryption, input-validation |
| performance | Performance | n-plus-one, memory-usage, caching, algorithm |
| maintainability | Maintainability | coupling, cohesion, tech-debt, extensibility |
| best-practices | Best practices | convention, pattern, framework-usage, anti-pattern |
---
### /issue:plan
**描述**: 使用 issue-plan-agent 批量规划 Issue 解决方案 (探索 + 计划闭环)
**参数**: `[-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]`
**类别**: issue
**执行过程**:
1. Issue 加载与智能分组
2. 统一探索 + 规划 (issue-plan-agent)
3. 解决方案注册与绑定
4. 汇总
---
### /issue:queue
**描述**: 使用 issue-queue-agent 从绑定解决方案形成执行队列 (解决方案级别)
**参数**: `[-y|--yes] [--queues <n>] [--issue <id>]`
**类别**: issue
**核心能力**:
- 代理驱动的排序逻辑
- 解决方案级别粒度
- 冲突澄清
- 并行/顺序组分配
---
### /issue:execute
**描述**: 使用基于 DAG 的并行编排执行队列 (每个解决方案一次提交)
**参数**: `[-y|--yes] --queue <queue-id> [--worktree [<现有路径>]]`
**类别**: issue
**执行流程**:
1. 验证队列 ID (必需)
2. 获取 DAG 和用户选择
3. 分发并行批次 (DAG 驱动)
4. 下一批次 (重复)
5. Worktree 完成
**推荐执行器**: Codex (2小时超时, 完整写入权限)
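一组假设的调用示例(worktree 路径仅作说明):

```
/issue:execute -y --queue <queue-id>
/issue:execute --queue <queue-id> --worktree ../repo-worktree
```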
---
### /issue:from-brainstorm
**描述**: 将头脑风暴会话想法转换为带可执行解决方案的 Issue
**参数**: `SESSION="<session-id>" [--idea=<index>] [--auto] [-y|--yes]`
**类别**: issue
**执行流程**:
1. 会话加载
2. 想法选择
3. 丰富 Issue 上下文
4. 创建 Issue
5. 生成解决方案任务
6. 绑定解决方案
---
## 4. IDAW 命令
### /idaw:run
**描述**: IDAW 编排器 - 带 Git 检查点顺序执行任务技能链
**参数**: `[-y|--yes] [--task <id>[,<id>,...]] [--dry-run]`
**类别**: idaw
**技能链映射**:
| 任务类型 | 技能链 |
|-----------|-------------|
| bugfix | workflow-lite-planex → workflow-test-fix |
| bugfix-hotfix | workflow-lite-planex |
| feature | workflow-lite-planex → workflow-test-fix |
| feature-complex | workflow-plan → workflow-execute → workflow-test-fix |
| refactor | workflow:refactor-cycle |
| tdd | workflow-tdd-plan → workflow-execute |
| test | workflow-test-fix |
| test-fix | workflow-test-fix |
| review | review-cycle |
| docs | workflow-lite-planex |
**6阶段执行**:
1. 加载任务
2. 会话设置
3. 启动协议
4. 主循环 (顺序,一次一个任务)
5. 检查点 (每个任务)
6. 报告
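两种假设的调用方式:`--dry-run` 仅预览技能链而不执行,`--task` 指定队列中的任务子集:

```
/idaw:run --dry-run
/idaw:run --task <id>,<id>
```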
---
### /idaw:add
**描述**: 添加任务到 IDAW 队列,自动推断任务类型和技能链
**类别**: idaw
---
### /idaw:resume
**描述**: 带崩溃恢复的 IDAW 会话恢复
**类别**: idaw
---
### /idaw:status
**描述**: 显示 IDAW 队列状态
**类别**: idaw
---
### /idaw:run-coordinate
**描述**: 带并行任务协调的多代理 IDAW 执行
**类别**: idaw
---
## 5. With-File 工作流
### /workflow:brainstorm-with-file
**描述**: 交互式头脑风暴,具有多 CLI 协作、想法扩展和文档化思维演进
**参数**: `[-y|--yes] [-c|--continue] [-m|--mode creative|structured] "想法或主题"`
**类别**: with-file
**输出目录**: `.workflow/.brainstorm/{session-id}/`
**4阶段工作流**:
1. Phase 1: 种子理解 (解析主题, 选择角色, 扩展向量)
2. Phase 2: 发散探索 (cli-explore-agent + 多 CLI 视角)
3. Phase 3: 交互式精炼 (多轮)
4. Phase 4: 收敛与结晶
**输出产物**:
- `brainstorm.md` - 完整思维演进时间线
- `exploration-codebase.json` - 代码库上下文
- `perspectives.json` - 多 CLI 发现
- `synthesis.json` - 最终综合
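一组假设的调用示例(主题内容仅作说明);`-c` 用于继续上一次会话:

```
/workflow:brainstorm-with-file -m structured "如何降低 CI 构建时间"
/workflow:brainstorm-with-file -c
```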
---
### /workflow:analyze-with-file
**描述**: 交互式协作分析具有文档化讨论、CLI 辅助探索和演进理解
**参数**: `[-y|--yes] [-c|--continue] "主题或问题"`
**类别**: with-file
**输出目录**: `.workflow/.analysis/{session-id}/`
**4阶段工作流**:
1. Phase 1: 主题理解
2. Phase 2: CLI 探索 (cli-explore-agent + 视角)
3. Phase 3: 交互式讨论 (多轮)
4. Phase 4: 综合与结论
**决策记录协议**: 必须记录方向选择、关键发现、假设变更、用户反馈
---
### /workflow:debug-with-file
**描述**: 交互式假设驱动调试,具有文档化探索、理解演进和 Gemini 辅助纠正
**参数**: `[-y|--yes] "Bug 描述或错误信息"`
**类别**: with-file
**输出目录**: `.workflow/.debug/{session-id}/`
**核心工作流**: 探索 → 文档 → 日志 → 分析 → 纠正理解 → 修复 → 验证
**输出产物**:
- `debug.log` - NDJSON 执行证据
- `understanding.md` - 探索时间线 + 整合理解
- `hypotheses.json` - 带结论的假设历史
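一个假设的调用示例(Bug 描述仅作说明,错误信息越具体越有利于假设生成):

```
/workflow:debug-with-file "保存设置后页面白屏,控制台报 TypeError: cannot read properties of undefined"
```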
---
### /workflow:collaborative-plan-with-file
**描述**: 带 Plan Note 共享文档的多代理协作规划
**类别**: with-file
---
### /workflow:roadmap-with-file
**描述**: 战略需求路线图 → Issue 创建 → execution-plan.json
**类别**: with-file
---
### /workflow:unified-execute-with-file
**描述**: 通用执行引擎 - 消费来自 collaborative-plan、roadmap、brainstorm 的计划输出
**类别**: with-file
---
## 6. 循环工作流
### /workflow:integration-test-cycle
**描述**: 带反思的自迭代集成测试 - 探索 → 测试开发 → 测试修复循环 → 反思
**类别**: cycle
**输出目录**: `.workflow/.test-cycle/`
---
### /workflow:refactor-cycle
**描述**: 技术债务发现 → 优先级排序 → 执行 → 验证
**类别**: cycle
**输出目录**: `.workflow/.refactor-cycle/`
---
## 7. CLI 命令
### /cli:codex-review
**描述**: 使用 Codex CLI 通过 ccw 端点进行交互式代码审查,支持可配置审查目标、模型和自定义指令
**参数**: `[--uncommitted|--base <分支>|--commit <sha>] [--model <模型>] [--title <标题>] [提示]`
**类别**: cli
**审查目标**:
| 目标 | 标志 | 描述 |
|--------|------|-------------|
| 未提交更改 | `--uncommitted` | 审查已暂存、未暂存和未跟踪的更改 |
| 与分支比较 | `--base <BRANCH>` | 审查与基础分支的差异 |
| 特定提交 | `--commit <SHA>` | 审查提交引入的更改 |
**关注领域**: 一般审查、安全重点、性能重点、代码质量
**重要**: 目标标志和提示互斥
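下面用假设的参数组合说明这一互斥约束(分支名、提示内容仅作说明):

```
/cli:codex-review --base main --model o4-mini    # 有效:目标标志,无提示
/cli:codex-review "重点检查错误处理"               # 有效:仅提示,默认审查未提交更改
/cli:codex-review --commit <sha> "检查性能"       # 无效:目标标志与提示互斥
```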
---
### /cli:cli-init
**描述**: 初始化 ccw 端点的 CLI 配置
**类别**: cli
---
## 8. 记忆命令
### /memory:prepare
**描述**: 为会话准备记忆上下文
**类别**: memory
---
### /memory:style-skill-memory
**描述**: 样式和技能记忆管理
**类别**: memory
---
## 9. 团队技能
### 团队生命周期技能
| 技能 | 描述 |
|-------|-------------|
| team-lifecycle-v5 | 带角色规格驱动工作代理的完整团队生命周期 |
| team-planex | 规划器 + 执行器波流水线 (用于大批量 Issue 或路线图输出) |
| team-coordinate-v2 | 团队协调和编排 |
| team-executor-v2 | 带工作代理的任务执行 |
| team-arch-opt | 架构优化技能 |
### 团队领域技能
| 技能 | 描述 |
|-------|-------------|
| team-brainstorm | 多视角头脑风暴 |
| team-review | 代码审查工作流 |
| team-testing | 测试工作流 |
| team-frontend | 前端开发工作流 |
| team-issue | Issue 管理工作流 |
| team-iterdev | 迭代开发工作流 |
| team-perf-opt | 性能优化工作流 |
| team-quality-assurance | QA 工作流 |
| team-roadmap-dev | 路线图开发工作流 |
| team-tech-debt | 技术债务管理 |
| team-uidesign | UI 设计工作流 |
| team-ultra-analyze | 深度分析工作流 |
---
## 10. 工作流技能
| 技能 | 内部流水线 | 描述 |
|-------|-------------------|-------------|
| workflow-lite-planex | explore → plan → confirm → execute | 轻量级合并模式规划 |
| workflow-plan | session → context → convention → gen → verify/replan | 带架构设计的完整规划 |
| workflow-execute | session discovery → task processing → commit | 从规划会话执行 |
| workflow-tdd-plan | 6阶段 TDD plan → verify | TDD 工作流规划 |
| workflow-test-fix | session → context → analysis → gen → cycle | 测试生成与修复循环 |
| workflow-multi-cli-plan | ACE context → CLI discussion → plan → execute | 多 CLI 协作规划 |
| workflow-skill-designer | - | 工作流技能设计和生成 |
---
## 11. 实用技能
| 技能 | 描述 |
|-------|-------------|
| brainstorm | 统一头脑风暴技能 (自动并行 + 角色分析) |
| review-code | 代码审查技能 |
| review-cycle | 会话/模块审查 → 修复编排 |
| spec-generator | 产品简介 → PRD → 架构 → Epics |
| skill-generator | 生成新技能 |
| skill-tuning | 调优现有技能 |
| command-generator | 生成新命令 |
| memory-capture | 捕获会话记忆 |
| memory-manage | 管理存储的记忆 |
| issue-manage | Issue 管理工具 |
| ccw-help | CCW 帮助和文档 |
---
## 12. Codex 能力
### Codex 审查模式
**命令**: `ccw cli --tool codex --mode review [选项]`
| 选项 | 描述 |
|--------|-------------|
| `[提示]` | 自定义审查指令 (位置参数, 无目标标志) |
| `-c model=<模型>` | 通过配置覆盖模型 |
| `--uncommitted` | 审查已暂存、未暂存和未跟踪的更改 |
| `--base <BRANCH>` | 审查与基础分支的差异 |
| `--commit <SHA>` | 审查提交引入的更改 |
| `--title <TITLE>` | 可选的提交标题用于审查摘要 |
**可用模型**:
- 默认: gpt-5.2
- o3: OpenAI o3 推理模型
- gpt-4.1: GPT-4.1 模型
- o4-mini: OpenAI o4-mini (更快)
**约束**:
- 目标标志 (`--uncommitted`, `--base`, `--commit`) **不能**与位置参数 `[提示]` 一起使用
- 自定义提示仅支持不带目标标志的情况 (默认审查未提交更改)
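两种合法组合的示例(模型与提示内容仅作说明):

```
ccw cli --tool codex --mode review --uncommitted -c model=o3
ccw cli --tool codex --mode review "关注并发安全问题"
```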
### Codex 集成点
| 集成点 | 描述 |
|-------------------|-------------|
| CLI 端点 | `ccw cli --tool codex --mode <analysis\|write\|review>` |
| 多 CLI 规划 | workflow-multi-cli-plan 中的务实视角 |
| 代码审查 | `/cli:codex-review` 命令 |
| Issue 执行 | `/issue:execute` 的推荐执行器 |
| 魔鬼代言人 | 头脑风暴精炼中的挑战模式 |
### Codex 模式汇总
| 模式 | 权限 | 用例 |
|------|------------|----------|
| analysis | 只读 | 代码分析、架构审查 |
| write | 完整访问 | 实现、文件修改 |
| review | 只读输出 | Git 感知的代码审查 |
---
## 统计汇总
| 类别 | 数量 |
|----------|-------|
| 主编排器命令 | 3 |
| 工作流会话命令 | 6 |
| Issue 工作流命令 | 6 |
| IDAW 命令 | 5 |
| With-File 工作流 | 6 |
| 循环工作流 | 2 |
| CLI 命令 | 2 |
| 记忆命令 | 2 |
| 团队技能 | 17 |
| 工作流技能 | 7 |
| 实用技能 | 11 |
| **命令总数** | 32 |
| **技能总数** | 35 |
---
## 调用模式
### 斜杠命令调用
```
/<命名空间>:<命令> [参数] [标志]
```
示例:
- `/ccw "添加用户认证"`
- `/workflow:session:start --auto "实现功能"`
- `/issue:new https://github.com/org/repo/issues/123`
- `/cli:codex-review --base main`
### 技能调用 (从代码)
```javascript
Skill({ skill: "workflow-lite-planex", args: '"任务描述"' })
Skill({ skill: "brainstorm", args: '"主题或问题"' })
Skill({ skill: "review-cycle", args: '--session="WFS-xxx"' })
```
### CLI 工具调用
```bash
ccw cli -p "PURPOSE: ... TASK: ... MODE: analysis|write" --tool <工具> --mode <模式>
```
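一个填充了 `-p` 模板的假设示例(PURPOSE/TASK 内容仅作说明):

```
ccw cli -p "PURPOSE: 理解认证模块 TASK: 梳理登录流程与依赖 MODE: analysis" --tool gemini --mode analysis
```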
---
## 相关文档
- [工作流对比表](../workflows/comparison-table.md) - 工作流选择指南
- [工作流概览](../workflows/index.md) - 4级工作流系统
- [Claude 工作流技能](../skills/claude-workflow.md) - 详细技能文档

| **协作流程割裂** | 团队成员各自为战 | 统一消息总线、共享状态、进度同步 |
| **资源浪费** | 重复上下文加载 | Wisdom 累积、探索缓存、产物复用 |
---
## Skills 总览
| Skill | 功能 | 适用场景 |
| --- | --- | --- |
| `team-coordinate-v2` | 通用团队协调器(动态角色生成) | 任意复杂任务 |
| `team-lifecycle-v5` | 全生命周期团队(规范→实现→测试) | 完整功能开发 |
| `team-planex` | 规划-执行流水线 | Issue 批处理 |
| `team-review` | 代码审查团队 | 代码审查、漏洞扫描 |
| `team-testing` | 测试团队 | 测试覆盖、用例生成 |
| `team-arch-opt` | 架构优化团队 | 重构、架构分析 |
| `team-perf-opt` | 性能优化团队 | 性能调优、瓶颈分析 |
| `team-brainstorm` | 头脑风暴团队 | 多角度分析、创意生成 |
| `team-frontend` | 前端开发团队 | UI 开发、设计系统 |
| `team-uidesign` | UI 设计团队 | 设计系统、组件规范 |
| `team-issue` | Issue 处理团队 | Issue 分析、实现 |
| `team-iterdev` | 迭代开发团队 | 增量交付、敏捷开发 |
| `team-quality-assurance` | 质量保证团队 | 质量扫描、缺陷管理 |
| `team-roadmap-dev` | 路线图开发团队 | 分阶段开发、里程碑 |
| `team-tech-debt` | 技术债务团队 | 债务清理、代码治理 |
| `team-ultra-analyze` | 深度分析团队 | 复杂问题分析、协作探索 |
| `team-executor-v2` | 轻量执行器 | 会话恢复、纯执行 |
---
## 核心架构
所有 Team Skills 共享统一的 **team-worker agent 架构**
```
┌──────────────────────────────────────────────────────────┐
│ Skill(skill="team-xxx", args="任务描述") │
└────────────────────────┬─────────────────────────────────┘
│ Role Router
┌──── --role present? ────┐
│ NO │ YES
↓ ↓
Orchestration Mode Role Dispatch
(auto → coordinator) (route to role.md)
┌─────────┴─────────┬───────────────┬──────────────┐
↓ ↓ ↓ ↓
┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
│ coord │ │worker 1│ │worker 2│ │worker N│
│ (编排) │ │(执行) │ │(执行) │ │(执行) │
└────────┘ └────────┘ └────────┘ └────────┘
│ │ │ │
└───────────────────┴───────────────┴──────────────┘
Message Bus (消息总线)
```
**核心组件**:
- **Coordinator**: 内置编排器,负责任务分析、派发、监控
- **Team-Worker Agent**: 统一代理,加载 role-spec 执行角色逻辑
- **Role Router**: `--role=xxx` 参数路由到角色执行
- **Message Bus**: 团队成员间通信协议
- **Shared Memory**: 跨任务知识累积 (Wisdom)
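上述 Role Router 的两种分支可以用文档前文的 `Skill()` 调用形式示意(技能名与参数组合仅作说明,会话路径为占位符):

```javascript
// 编排模式:未传 --role,自动进入 coordinator
Skill({ skill: "team-review", args: '"src/auth/"' })
// 角色派发模式:--role 路由到对应 role.md
Skill({ skill: "team-review", args: '--role=scanner --session=<session-path>' })
```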
---
## Skills 详解
### team-coordinate-v2
**一句话定位**: 通用团队协调器 — 根据任务分析动态生成角色并编排执行
**触发**:
```bash
team-coordinate <task-description>
team-coordinate --role=coordinator <task>
```
**功能**:
- Fast-Advance 机制跳过协调器直接派生后继任务
- Wisdom 累积跨任务知识
**会话目录**:
```
.workflow/.team/TC-<slug>-<date>/
├── roles/       # 动态角色定义
├── artifacts/   # 所有 MD 交付产物
├── wisdom/      # 跨任务知识
└── .msg/        # 团队消息总线日志
```
---
### team-lifecycle-v5
**一句话定位**: 全生命周期团队 — 从规范到实现到测试到审查的完整流水线
**触发**:
```bash
team-lifecycle <task-description>
```
**功能**:
**角色注册表**:
| 角色 | 文件 | 任务前缀 | inner_loop |
|------|------|----------|------|
| executor | role-specs/executor.md | IMPL-* | true |
| tester | role-specs/tester.md | TEST-* | false |
| reviewer | role-specs/reviewer.md | REVIEW-* | false |
**流水线定义**:
```
规范流水线: RESEARCH → DRAFT → QUALITY
实现流水线: PLAN → IMPL → TEST + REVIEW
全生命周期: [规范流水线] → [实现流水线]
```
---
### team-planex
**一句话定位**: 边规划边执行团队 — 逐 Issue 节拍流水线
**触发**:
```bash
team-planex <task-description>
team-planex --role=planner <input>
team-planex --role=executor --input <solution-file>
```
**功能**:
- 2 成员团队:planner + executor,planner 担任 lead 角色
- 逐 Issue 节拍:planner 完成后立即创建 EXEC-* 任务
- Solution 写入中间产物文件,executor 从文件加载
**Wave Pipeline**:
```
Issue 1: planner 规划 → 写产物 → 创建 EXEC-* → executor 执行
Issue 2: planner 规划 → 写产物 → 创建 EXEC-* → executor 并行消费
Final: planner 发送 all_planned → executor 完成剩余 → 结束
```
---
### team-review
**一句话定位**: 代码审查团队 — 统一的代码扫描、漏洞审查、自动修复
**触发**:
```bash
team-review <target-path>
team-review --full <target-path> # scan + review + fix
team-review --fix <review-files> # fix only
team-review -q <target-path> # quick scan only
``` ```
**功能**:
- 4 角色团队:coordinator, scanner, reviewer, fixer
- 多维度审查:安全性、正确性、性能、可维护性
- 自动修复循环(审查 → 修复 → 验证)
**角色注册表**:
| 角色 | 任务前缀 | 类型 |
|------|----------|------|
| coordinator | RC-* | 编排器 |
| scanner | SCAN-* | 只读分析 |
| reviewer | REV-* | 只读分析 |
| fixer | FIX-* | 代码生成 |
**流水线**:
```
SCAN-* (扫描) → REV-* (审查) → [用户确认] → FIX-* (修复)
```
**审查维度**: 安全性、正确性、性能、可维护性
---
### team-testing
**一句话定位**: 测试团队 — 通过 Generator-Critic 循环实现渐进式测试覆盖
**触发**:
```bash
team-testing <task-description>
```
**功能**:
- 5 角色团队:coordinator, strategist, generator, executor, analyst
- 三种流水线Targeted、Standard、Comprehensive
- Generator-Critic 循环自动改进测试覆盖率
**角色注册表**:
| 角色 | 任务前缀 | 类型 |
|------|----------|------|
| coordinator | (无) | 编排器 |
| strategist | STRATEGY-* | pipeline |
| generator | TESTGEN-* | pipeline |
| executor | TESTRUN-* | pipeline |
| analyst | TESTANA-* | pipeline |
**三种流水线**:
```
Targeted: STRATEGY → TESTGEN(L1) → TESTRUN
Standard: STRATEGY → TESTGEN(L1) → TESTRUN → TESTGEN(L2) → TESTRUN → TESTANA
Comprehensive: STRATEGY → [TESTGEN(L1+L2) 并行] → [TESTRUN 并行] → TESTGEN(L3) → TESTRUN → TESTANA
```
**测试层**: L1: Unit (80%) → L2: Integration (60%) → L3: E2E (40%)
---
### team-arch-opt
**一句话定位**: 架构优化团队 — 分析架构问题、设计重构策略、实施改进
**触发**:
```bash
team-arch-opt <task-description>
```
**角色注册表**:
| 角色 | 任务前缀 | 功能 |
|------|----------|------|
| coordinator | (无) | 编排器 |
| analyzer | ANALYZE-* | 架构分析 |
| designer | DESIGN-* | 重构设计 |
| refactorer | REFACT-* | 实施重构 |
| validator | VALID-* | 验证改进 |
| reviewer | REVIEW-* | 代码审查 |
**检测范围**: 依赖循环、耦合/内聚、分层违规、上帝类、死代码
---
### team-perf-opt
**一句话定位**: 性能优化团队 — 性能分析、瓶颈识别、优化实施
**触发**:
```bash
team-perf-opt <task-description>
```
**角色注册表**:
| 角色 | 任务前缀 | 功能 |
|------|----------|------|
| coordinator | (无) | 编排器 |
| profiler | PROFILE-* | 性能分析 |
| strategist | STRAT-* | 优化策略 |
| optimizer | OPT-* | 实施优化 |
| benchmarker | BENCH-* | 基准测试 |
| reviewer | REVIEW-* | 代码审查 |
---
### team-brainstorm
**一句话定位**: 头脑风暴团队 — 多角度创意分析、Generator-Critic 循环
**触发**:
```bash
team-brainstorm <topic>
team-brainstorm --role=ideator <topic>
```
**角色注册表**:
| 角色 | 任务前缀 | 功能 |
|------|----------|------|
| coordinator | (无) | 编排器 |
| ideator | IDEA-* | 创意生成 |
| challenger | CHALLENGE-* | 批判质疑 |
| synthesizer | SYNTH-* | 综合整合 |
| evaluator | EVAL-* | 评估评分 |
---
### team-frontend
**一句话定位**: 前端开发团队 — 内置 ui-ux-pro-max 设计智能
**触发**:
```bash
team-frontend <task-description>
```
**角色注册表**:
| 角色 | 任务前缀 | 功能 |
|------|----------|------|
| coordinator | (无) | 编排器 |
| analyst | ANALYZE-* | 需求分析 |
| architect | ARCH-* | 架构设计 |
| developer | DEV-* | 前端实现 |
| qa | QA-* | 质量保证 |
---
### team-uidesign
**一句话定位**: UI 设计团队 — 设计系统分析、Token 定义、组件规范
**触发**:
```bash
team-uidesign <task>
```
**角色注册表**:
| 角色 | 任务前缀 | 功能 |
|------|----------|------|
| coordinator | (无) | 编排器 |
| researcher | RESEARCH-* | 设计研究 |
| designer | DESIGN-* | 设计定义 |
| reviewer | AUDIT-* | 无障碍审计 |
| implementer | BUILD-* | 代码实现 |
---
### team-issue
**一句话定位**: Issue 处理团队 — Issue 处理流水线
**触发**:
```bash
team-issue <issue-ids>
```
**角色注册表**:
| 角色 | 任务前缀 | 功能 |
|------|----------|------|
| coordinator | (无) | 编排器 |
| explorer | EXPLORE-* | 代码探索 |
| planner | PLAN-* | 方案规划 |
| implementer | IMPL-* | 代码实现 |
| reviewer | REVIEW-* | 代码审查 |
| integrator | INTEG-* | 集成验证 |
---
### team-iterdev
**一句话定位**: 迭代开发团队 — Generator-Critic 循环、增量交付
**触发**:
```bash
team-iterdev <task-description>
```
**角色注册表**:
| 角色 | 任务前缀 | 功能 |
|------|----------|------|
| coordinator | (无) | 编排器 |
| architect | ARCH-* | 架构设计 |
| developer | DEV-* | 功能开发 |
| tester | TEST-* | 测试验证 |
| reviewer | REVIEW-* | 代码审查 |
**特点**: Developer-Reviewer 循环(最多 3 轮),Task Ledger 实时进度
---
### team-quality-assurance
**一句话定位**: 质量保证团队 — Issue 发现 + 测试验证闭环
**触发**:
```bash
team-quality-assurance <task-description>
team-qa <task-description>
```
**角色注册表**:
| 角色 | 任务前缀 | 功能 |
|------|----------|------|
| coordinator | (无) | 编排器 |
| scout | SCOUT-* | 问题发现 |
| strategist | QASTRAT-* | 策略制定 |
| generator | QAGEN-* | 测试生成 |
| executor | QARUN-* | 测试执行 |
| analyst | QAANA-* | 结果分析 |
---
### team-roadmap-dev
**一句话定位**: 路线图开发团队 — 分阶段开发、里程碑管理
**触发**:
```bash
team-roadmap-dev <task-description>
```
**角色注册表**:
| 角色 | 任务前缀 | 功能 |
|------|----------|------|
| coordinator | (无) | 人机交互 |
| planner | PLAN-* | 阶段规划 |
| executor | EXEC-* | 阶段执行 |
| verifier | VERIFY-* | 阶段验证 |
---
### team-tech-debt
**一句话定位**: 技术债务团队 — 债务扫描、评估、清理、验证
**触发**:
```bash
team-tech-debt <task-description>
```
**角色注册表**:
| 角色 | 任务前缀 | 功能 |
|------|----------|------|
| coordinator | (无) | 编排器 |
| scanner | TDSCAN-* | 债务扫描 |
| assessor | TDEVAL-* | 量化评估 |
| planner | TDPLAN-* | 治理规划 |
| executor | TDFIX-* | 清理执行 |
| validator | TDVAL-* | 验证回归 |
---
### team-ultra-analyze
**一句话定位**: 深度分析团队 — 多角色协作探索、渐进式理解
**触发**:
```bash
team-ultra-analyze <topic>
team-analyze <topic>
```
**角色注册表**:
| 角色 | 任务前缀 | 功能 |
|------|----------|------|
| coordinator | (无) | 编排器 |
| explorer | EXPLORE-* | 代码探索 |
| analyst | ANALYZE-* | 深度分析 |
| discussant | DISCUSS-* | 讨论交互 |
| synthesizer | SYNTH-* | 综合输出 |
**特点**: 支持 Quick/Standard/Deep 三种深度模式
---
### team-executor-v2
**一句话定位**: 轻量执行器 — 恢复会话、纯执行模式
**触发**:
```bash
team-executor --session=<path>
```
**功能**:
- 无分析、无角色生成 — 仅加载并执行现有会话
- 用于恢复中断的 team-coordinate 会话
---
## 用户命令
所有 Team Skills 支持统一的用户命令(唤醒暂停的协调器):
| 命令 | 动作 |
|------|------|
| `check` / `status` | 输出执行状态图,不推进 |
| `resume` / `continue` | 检查工作者状态,推进下一步 |
| `revise <TASK-ID>` | 创建修订任务 + 级联下游 |
| `feedback <text>` | 分析反馈影响,创建定向修订链 |
---
## 最佳实践
1. **选择合适的团队类型**:
- 通用任务 → `team-coordinate-v2`
- 完整功能开发 → `team-lifecycle-v5`
- Issue 批处理 → `team-planex`
- 代码审查 → `team-review`
- 测试覆盖 → `team-testing`
- 架构优化 → `team-arch-opt`
- 性能调优 → `team-perf-opt`
- 头脑风暴 → `team-brainstorm`
- 前端开发 → `team-frontend`
- UI 设计 → `team-uidesign`
- 技术债务 → `team-tech-debt`
- 深度分析 → `team-ultra-analyze`
2. **利用内循环角色**: 设置 `inner_loop: true` 让单个工作者处理多个同前缀任务
3. **Wisdom 累积**: 团队会话中的所有角色都会累积知识到 `wisdom/` 目录
4. **Fast-Advance**: 简单线性后继任务会自动跳过协调器直接派生
5. **断点恢复**: 所有团队技能支持会话恢复,通过 `--resume` 或 `resume` 命令继续
---
## 相关命令
- [Claude Commands - Workflow](../commands/claude/workflow.md)
- [Claude Commands - Session](../commands/claude/session.md)

# 工作流对比表
> **CCW 工作流完整参考** - 按调用方式、流水线、用例、复杂度和自动链式行为对比所有工作流。
## 快速参考
| 工作流 | 最佳用途 | 级别 | 自包含 |
|----------|----------|-------|----------------|
| workflow-lite-planex | 快速任务、Bug 修复 | 2 (轻量级) | 是 |
| workflow-plan → workflow-execute | 复杂功能 | 3-4 (标准) | 否 (需要 execute) |
| workflow-tdd-plan → workflow-execute | TDD 开发 | 3 (标准) | 否 (需要 execute) |
| workflow-test-fix | 测试生成/修复 | 3 (标准) | 是 |
| workflow-multi-cli-plan | 多视角规划 | 3 (标准) | 是 |
| brainstorm-with-file | 创意构思、探索 | 4 (完整) | 否 (链式到 plan) |
| analyze-with-file | 深度分析 | 3 (标准) | 否 (链式到 lite-plan) |
| debug-with-file | 假设驱动调试 | 3 (标准) | 是 |
| collaborative-plan-with-file | 多代理规划 | 3 (标准) | 否 (链式到 execute) |
| roadmap-with-file | 战略路线图 | 4 (战略) | 否 (链式到 team-planex) |
| integration-test-cycle | 集成测试 | 3 (标准) | 是 |
| refactor-cycle | 技术债务重构 | 3 (标准) | 是 |
| review-cycle | 代码审查 | 3 (标准) | 是 |
| spec-generator | 规格文档包 | 4 (完整) | 否 (链式到 plan) |
| team-planex | Issue 批量执行 | Team | 是 |
| team-lifecycle-v5 | 完整生命周期 | Team | 是 |
| issue pipeline | Issue 管理 | 2.5 (桥接) | 是 |
---
## 完整对比表
| 工作流 | 调用方式 | 流水线 | 用例 | 级别 | 自包含 | 自动链式到 |
|----------|------------|----------|----------|-------|----------------|----------------|
| **Plan+Execute 工作流** |
| workflow-lite-planex | `/ccw "任务"` (低/中复杂度自动选择) | explore → plan → confirm → execute | 快速功能、Bug 修复、简单任务 | 2 (轻量级) | 是 | workflow-test-fix |
| workflow-plan | `/ccw "复杂功能"` (高复杂度) | session → context → convention → gen → verify/replan | 复杂功能规划、正式验证 | 3-4 (标准) | 否 | workflow-execute |
| workflow-execute | `/workflow-execute` (plan 之后) | session discovery → task processing → commit | 执行预生成的计划 | 3 (标准) | 是 | review-cycle (可选) |
| workflow-multi-cli-plan | `/ccw "multi-cli plan: ..."` | ACE context → CLI discussion → plan → execute | 多视角规划 | 3 (标准) | 是 | (内部交接) |
| **TDD 工作流** |
| workflow-tdd-plan | `/ccw "Implement with TDD"` | 6阶段 TDD plan → verify | 测试驱动开发规划 | 3 (标准) | 否 | workflow-execute |
| workflow-test-fix | `/ccw "generate tests"` 或自动链式 | session → context → analysis → gen → cycle | 测试生成、覆盖率提升 | 3 (标准) | 是 | (独立) |
| **头脑风暴工作流** |
| brainstorm | `/brainstorm "主题"` | mode detect → framework → parallel analysis → synthesis | 多视角创意构思 | 4 (完整) | 是 (仅构思) | workflow-plan |
| brainstorm-with-file | `/ccw "brainstorm: ..."` | brainstorm + documented artifacts | 带会话文档的创意构思 | 4 (完整) | 否 | workflow-plan → execute |
| collaborative-plan-with-file | `/ccw "collaborative plan: ..."` | understanding → parallel agents → plan-note.md | 多代理协作规划 | 3 (标准) | 否 | unified-execute-with-file |
| **分析工作流** |
| analyze-with-file | `/ccw "analyze: ..."` | multi-CLI analysis → discussion.md | 深度理解、架构探索 | 3 (标准) | 否 | workflow-lite-planex |
| debug-with-file | `/ccw "debug: ..."` | hypothesis-driven iteration → debug.log | 系统化调试 | 3 (标准) | 是 | (独立) |
| **审查工作流** |
| review-cycle | `/ccw "review code"` | discovery → analysis → aggregation → deep-dive → completion | 代码审查、质量门禁 | 3 (标准) | 是 | fix mode (如有发现) |
| **规格工作流** |
| spec-generator | `/ccw "specification: ..."` | study → discovery → brief → PRD → architecture → epics | 完整规格文档包 | 4 (完整) | 是 (仅文档) | workflow-plan / team-planex |
| **团队工作流** |
| team-planex | `/ccw "team planex: ..."` | coordinator → planner wave → executor wave | 基于 Issue 的并行执行 | Team | 是 | (完整流水线) |
| team-lifecycle-v5 | `/ccw "team lifecycle: ..."` | spec pipeline → impl pipeline | 从规格到验证的完整生命周期 | Team | 是 | (完整生命周期) |
| team-arch-opt | (架构优化) | architecture analysis → optimization | 架构优化 | Team | 是 | (完整) |
| **循环工作流** |
| integration-test-cycle | `/ccw "integration test: ..."` | explore → test dev → test-fix cycle → reflection | 带迭代的集成测试 | 3 (标准) | 是 | (自迭代) |
| refactor-cycle | `/ccw "refactor: ..."` | discover → prioritize → execute → validate | 技术债务发现与重构 | 3 (标准) | 是 | (自迭代) |
| **Issue 工作流** |
| issue pipeline | `/ccw "use issue workflow"` | discover → plan → queue → execute | 结构化 Issue 管理 | 2.5 (桥接) | 是 | (完整流水线) |
| **路线图工作流** |
| roadmap-with-file | `/ccw "roadmap: ..."` | strategic roadmap → issue creation → execution-plan | 战略需求拆解 | 4 (战略) | 否 | team-planex |
---
## 工作流级别分类
| 级别 | 工作流 | 特点 |
|-------|-----------|-----------------|
| **2 (轻量级)** | workflow-lite-planex, docs | 快速执行、最少阶段 |
| **2.5 (桥接)** | issue pipeline, rapid-to-issue | 桥接到 Issue 工作流 |
| **3 (标准)** | workflow-plan, workflow-execute, workflow-tdd-plan, workflow-test-fix, review-cycle, debug-with-file, analyze-with-file, workflow-multi-cli-plan | 完整规划/执行、多阶段 |
| **4 (完整)** | brainstorm, spec-generator, brainstorm-with-file, roadmap-with-file | 完整探索、规格化 |
| **Team** | team-planex, team-lifecycle-v5, team-arch-opt | 多代理并行执行 |
| **Cycle** | integration-test-cycle, refactor-cycle | 带反思的自迭代 |
---
## 自动链式参考
| 源工作流 | 自动链式到 | 条件 |
|-----------------|---------------|-----------|
| workflow-lite-planex | workflow-test-fix | 默认 (除非 skip-tests) |
| workflow-plan | workflow-execute | 计划确认后 |
| workflow-execute | review-cycle | 用户通过 Phase 6 选择 |
| workflow-tdd-plan | workflow-execute | TDD 计划验证后 |
| brainstorm | workflow-plan | 自动链式到正式规划 |
| brainstorm-with-file | workflow-plan → workflow-execute | 自动 |
| analyze-with-file | workflow-lite-planex | 自动 |
| debug-with-file | (无) | 独立 |
| collaborative-plan-with-file | unified-execute-with-file | 自动 |
| roadmap-with-file | team-planex | 自动 |
| spec-generator | workflow-plan / team-planex | 用户选择 |
| review-cycle | fix mode | 如有发现 |
---
## 自包含 vs 多技能
| 工作流 | 自包含 | 说明 |
|----------|---------------|-------|
| workflow-lite-planex | 是 | 完整 plan + execute |
| workflow-plan | 否 | 需要 workflow-execute |
| workflow-execute | 是 | 完整执行 |
| workflow-tdd-plan | 否 | 需要 workflow-execute |
| workflow-test-fix | 是 | 完整生成 + 执行 |
| brainstorm | 是 (构思) | 否 (实现) |
| review-cycle | 是 | 完整审查 + 可选修复 |
| spec-generator | 是 (文档) | 否 (实现) |
| team-planex | 是 | 完整团队流水线 |
| team-lifecycle-v5 | 是 | 完整生命周期 |
| debug-with-file | 是 | 完整调试 |
| integration-test-cycle | 是 | 自迭代 |
| refactor-cycle | 是 | 自迭代 |
---
## 关键词检测参考
| 关键词模式 | 检测到的工作流 |
|-----------------|-------------------|
| `urgent`, `critical`, `hotfix`, `紧急`, `严重` | bugfix-hotfix |
| `from scratch`, `greenfield`, `new project`, `从零开始`, `全新` | greenfield |
| `brainstorm`, `ideation`, `multi-perspective`, `头脑风暴`, `创意` | brainstorm |
| `debug`, `hypothesis`, `systematic`, `调试`, `假设` | debug-with-file |
| `analyze`, `understand`, `collaborative analysis`, `分析`, `理解` | analyze-with-file |
| `roadmap`, `路线图`, `规划` | roadmap-with-file |
| `specification`, `PRD`, `产品需求`, `规格` | spec-generator |
| `integration test`, `集成测试`, `端到端` | integration-test-cycle |
| `refactor`, `技术债务`, `重构` | refactor-cycle |
| `team planex`, `wave pipeline`, `团队执行` | team-planex |
| `multi-cli`, `多模型协作`, `多CLI` | workflow-multi-cli-plan |
| `TDD`, `test-driven`, `测试驱动` | workflow-tdd-plan |
| `review`, `code review`, `代码审查` | review-cycle |
| `issue workflow`, `use issue workflow`, `Issue工作流` | issue pipeline |
---
## 工作流选择指南
| 任务类型 | 推荐工作流 | 命令链 |
|-----------|---------------------|---------------|
| 快速功能 | `/ccw "..."` | lite-planex → test-fix |
| Bug 修复 | `/ccw "fix ..."` | lite-planex --bugfix → test-fix |
| 复杂功能 | `/ccw "..."` (自动检测) | plan → execute → review → test-fix |
| 探索分析 | `/workflow:analyze-with-file "..."` | analysis → (可选) lite-planex |
| 创意构思 | `/workflow:brainstorm-with-file "..."` | brainstorm → plan → execute |
| 调试 | `/workflow:debug-with-file "..."` | hypothesis-driven debugging |
| Issue 管理 | `/issue:new` → `/issue:plan` → `/issue:queue` → `/issue:execute` | issue workflow |
| 多 Issue 批量 | `/issue:discover` → `/issue:plan --all-pending` | issue batch workflow |
| 代码审查 | `/cli:codex-review --uncommitted` | codex review |
| 团队协调 | `team-lifecycle-v5` / `team-planex` | team workflow |
| TDD 开发 | `/ccw "Implement with TDD"` | tdd-plan → execute |
| 集成测试 | `/ccw "integration test: ..."` | integration-test-cycle |
| 技术债务 | `/ccw "refactor: ..."` | refactor-cycle |
| 规格文档 | `/ccw "specification: ..."` | spec-generator → plan |
---
## Greenfield Development Paths
| Scale | Pipeline | Complexity |
|------|----------|------------|
| Small | brainstorm-with-file → workflow-plan → workflow-execute | 3 |
| Medium | brainstorm-with-file → workflow-plan → workflow-execute → workflow-test-fix | 3 |
| Large | brainstorm-with-file → workflow-plan → workflow-execute → review-cycle → workflow-test-fix | 4 |
---
## Related Documentation
- [4-Level System](./4-level.md) - Detailed workflow descriptions
- [Best Practices](./best-practices.md) - Workflow optimization tips
- [Examples](./examples.md) - Workflow usage examples
- [Teams](./teams.md) - Team workflow coordination

docs/zh/workflows/teams.md Normal file

@@ -0,0 +1,197 @@
# Team Workflows
CCW provides multiple team collaboration skills that support multi-role coordination on complex tasks.
## Team Skills Overview
| Skill | Roles | Pipeline | Use Case |
|-------|-------|----------|----------|
| **team-planex** | 3 (planner + executor) | Wave pipeline (plan while executing) | Parallel planning and execution |
| **team-iterdev** | 5 (generator → critic → integrator → validator) | Generator-critic loop | Iterative development with feedback loops |
| **team-lifecycle-v4** | 8 (spec → architect → impl → test) | 5-stage lifecycle | Full spec → implementation → test workflow |
| **team-lifecycle-v5** | Variable (team-worker) | Built-in stages | Latest team-worker architecture |
| **team-issue** | 6 (explorer → planner → implementer → reviewer → integrator) | 5-stage issue resolution | Multi-role problem solving |
| **team-testing** | 5 (strategist → generator → executor → analyst) | 4-stage testing | Comprehensive test coverage |
| **team-quality-assurance** | 6 (scout → strategist → generator → executor → analyst) | 5-stage QA | Closed-loop quality assurance |
| **team-brainstorm** | 5 (coordinator → ideator → challenger → synthesizer → evaluator) | 5-stage brainstorming | Multi-perspective idea generation |
| **team-uidesign** | 4 (designer → developer → reviewer) | CP-9 dual track | Parallel UI design and implementation |
| **team-frontend** | 6 (frontend-lead → ui-developer → ux-engineer → component-dev → qa) | Design integration | Frontend development with UI/UX integration |
| **team-review** | 4 (scanner → reviewer → fixer) | 4-stage code review | Code scanning and automated fixes |
| **team-roadmap-dev** | 4 (planner → executor → verifier) | Phased execution | Roadmap-driven development |
| **team-tech-debt** | 6 (scanner → assessor → planner → executor → validator) | 5-stage cleanup | Tech debt identification and resolution |
| **team-ultra-analyze** | 5 (explorer → analyst → discussant → synthesizer) | 4-stage analysis | Deep collaborative codebase analysis |
| **team-coordinate** | Variable | Generic coordination | Generic team coordination (legacy) |
| **team-coordinate-v2** | Variable (team-worker) | team-worker architecture | Modern team-worker coordination |
| **team-executor** | Variable | Lightweight execution | Session-based execution |
| **team-executor-v2** | Variable (team-worker) | team-worker execution | Modern team-worker execution |
## Usage
### Via the /ccw Orchestrator
```bash
# Automatic routing based on intent
/ccw "team planex: user authentication system"
/ccw "full lifecycle: notification service development"
/ccw "QA team: quality assurance for the payment flow"

# Team-based workflows
/ccw "team brainstorm: new feature ideas"
/ccw "team issue: fix login timeout"
/ccw "team testing: improve test coverage"
```
### Direct Skill Invocation
```javascript
// Programmatic invocation
Skill(skill="team-lifecycle-v5", args="Build user authentication system")
Skill(skill="team-planex", args="Implement OAuth2 with concurrent planning")
Skill(skill="team-quality-assurance", args="Quality audit of payment system")

// With mode selection
Skill(skill="workflow-plan", args="--mode replan")
```
### Via the Task Tool (for agent invocation)
```javascript
// Spawn a team-worker agent
Task({
  subagent_type: "team-worker",
  description: "Spawn executor worker",
  team_name: "my-team",
  name: "executor",
  run_in_background: true,
  prompt: `## Role Assignment
role: executor
session: D:/project/.workflow/.team/my-session
session_id: my-session
team_name: my-team
requirement: Implement user authentication
inner_loop: true`
})
```
## Detection Keywords
| Skill | Keywords (English) | Keywords (Chinese) |
|-------|-------------------|----------------|
| **team-planex** | team planex, plan execute, wave pipeline | 团队规划执行, 波浪流水线 |
| **team-iterdev** | team iterdev, iterative development | 迭代开发团队 |
| **team-lifecycle** | team lifecycle, full lifecycle, spec impl test | 全生命周期, 规范实现测试 |
| **team-issue** | team issue, resolve issue, issue team | 团队 issue, issue 解决团队 |
| **team-testing** | team test, comprehensive test, test coverage | 测试团队, 全面测试 |
| **team-quality-assurance** | team qa, qa team, quality assurance | QA 团队, 质量保障团队 |
| **team-brainstorm** | team brainstorm, collaborative brainstorming | 团队头脑风暴, 协作头脑风暴 |
| **team-uidesign** | team ui design, ui design team, dual track | UI 设计团队, 双轨设计 |
| **team-frontend** | team frontend, frontend team | 前端开发团队 |
| **team-review** | team review, code review team | 代码审查团队 |
| **team-roadmap-dev** | team roadmap, roadmap driven | 路线图驱动开发 |
| **team-tech-debt** | tech debt cleanup, technical debt | 技术债务清理, 清理技术债 |
| **team-ultra-analyze** | team analyze, deep analysis, collaborative analysis | 深度协作分析 |
## Team Skill Architecture
### Version Evolution
| Version | Architecture | Status |
|---------|-------------|--------|
| **v5** | team-worker dynamic roles | **Latest** |
| v4 | 5-stage lifecycle, inline discussions | Stable |
| v3 | 3-stage lifecycle | Legacy |
| v2 | Generic coordination | Deprecated |
### v5 Team-Worker Architecture
The latest architecture uses the `team-worker` agent with dynamic role assignment based on phase prefixes:
| Phase | Prefix | Role |
|-------|--------|------|
| Analysis | ANALYSIS | doc-analyst |
| Draft | DRAFT | doc-writer |
| Planning | PLAN | planner |
| Implementation | IMPL | executor (code-developer, tdd-developer, etc.) |
| Testing | TEST | tester (test-fix-agent, etc.) |
| Review | REVIEW | reviewer |
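The phase-prefix mapping above can be sketched as a simple lookup. This is a hypothetical illustration (the helper name `roleForTask` and the task-ID shape `PREFIX-NNN` are assumptions, not the actual team-worker implementation):

```javascript
// Hypothetical sketch: derive a team-worker role from a task ID's
// phase prefix, following the v5 table above.
const PHASE_ROLES = {
  ANALYSIS: "doc-analyst",
  DRAFT: "doc-writer",
  PLAN: "planner",
  IMPL: "executor",
  TEST: "tester",
  REVIEW: "reviewer",
};

function roleForTask(taskId) {
  // Take the segment before the first dash, e.g. "IMPL-003" -> "IMPL".
  const prefix = taskId.split("-")[0].toUpperCase();
  return PHASE_ROLES[prefix] ?? null;
}
```

With this sketch, `roleForTask("IMPL-003")` yields `"executor"` and an unknown prefix yields `null`.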
### Role Types
| Type | Prefix | Description |
|------|--------|-------------|
| **Orchestrator** | COORD | Manages the workflow and coordinates agents |
| **Lead** | SPEC, IMPL, TEST | Leads a phase and delegates to workers |
| **Worker** | Variable | Executes specific tasks |
## Workflow Patterns
### Wave Pipeline (team-planex)
```text
Wave 1: Plan ──────────────────────────────────┐
↓ │
Wave 2: Exec ←────────────────────────────────┘
Wave 3: Plan → Exec → Plan → Exec → ...
```
Planning and execution run concurrently: while executors work on wave N, planners are already planning wave N+1.
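The wave overlap can be sketched as a synchronous timeline. This is a minimal illustrative model (the function `wavePipeline` and its tick structure are assumptions, not team-planex internals):

```javascript
// Hypothetical sketch of the wave pipeline schedule: each tick
// executes wave N while planning wave N+1, so the two activities
// overlap after the initial planning step.
function wavePipeline(totalWaves) {
  const timeline = [];
  for (let wave = 1; wave <= totalWaves; wave++) {
    const tick = { exec: `exec wave ${wave}` };
    if (wave < totalWaves) tick.plan = `plan wave ${wave + 1}`;
    timeline.push(tick);
  }
  // Wave 1 must be planned before anything can execute.
  timeline.unshift({ plan: "plan wave 1" });
  return timeline;
}
```

For three waves this yields four ticks, of which the two middle ticks carry both an `exec` and a `plan` entry, mirroring the diagram above.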
### Generator-Critic Loop (team-iterdev)
```text
Generator → Output → Critic → Feedback → Generator
Integrator → Validator
```
Iterative refinement through a feedback loop.
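The loop can be sketched as a bounded refine-until-accepted cycle. This is a hypothetical model (the helper `iterate` and its feedback shape are assumptions, not the team-iterdev protocol):

```javascript
// Hypothetical sketch of the generator-critic loop: the generator
// produces a draft, the critic either accepts it or returns feedback,
// and the generator revises until acceptance or a round limit.
function iterate(generate, critique, maxRounds = 5) {
  let draft = generate(null); // initial draft, no feedback yet
  for (let round = 1; round <= maxRounds; round++) {
    const feedback = critique(draft);
    if (feedback.accepted) return { draft, rounds: round };
    draft = generate(feedback); // revise using the critic's feedback
  }
  return { draft, rounds: maxRounds }; // hand off best effort
}
```

In the real skill the accepted draft would then flow to the integrator and validator roles; the sketch stops at acceptance.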
### CP-9 Dual Track (team-uidesign)
```text
Design Track: Designer → Tokens → Style
Implementation Track: Developer → Components
Reviewer → Verify
```
Design and implementation proceed in parallel on dual tracks.
### 5-Stage Lifecycle (team-lifecycle-v4)
```text
1. Spec Planning (coordinator + spec-lead)
2. Architecture Design (architect)
3. Implementation Planning (impl-lead + dev team)
4. Test Planning (test-lead + qa-analyst)
5. Execution & Verification (all roles)
```
Linear progression through all lifecycle stages.
## When to Use Each Team Skill
| Scenario | Recommended Skill |
|----------|-------------------|
| Need parallel planning and execution | **team-planex** |
| Complex features with multiple iterations | **team-iterdev** |
| Full spec → implementation → test workflow | **team-lifecycle-v5** |
| Issue resolution | **team-issue** |
| Comprehensive testing | **team-testing** |
| Quality audits | **team-quality-assurance** |
| New feature ideation | **team-brainstorm** |
| UI design + implementation | **team-uidesign** |
| Frontend-specific development | **team-frontend** |
| Code quality review | **team-review** |
| Large projects with roadmaps | **team-roadmap-dev** |
| Tech debt cleanup | **team-tech-debt** |
| Deep codebase analysis | **team-ultra-analyze** |
::: info See Also
- [Skill Reference](../skills/reference.md) - All skill documentation
- [CLI Commands](../cli/commands.md) - Command reference
- [Agents](../agents/index.md) - Agent documentation
- [4-Level Workflows](./4-level.md) - Workflow system overview
:::