mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-30 20:21:09 +08:00
feat: enhance skill templates, hooks, CLI routes, and settings UI
- Update SKILL-DESIGN-SPEC.md and skill-generator templates
- Add hook templates and expand CLI/system routes
- Improve SettingsPage UI
- Update architecture constraints spec

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -14,3 +14,8 @@ keywords: [architecture, constraint, schema, compatibility, portability, design,

- [compatibility] When enhancing existing schemas, use optional fields and additionalProperties rather than creating new schemas. Avoid breaking changes.
- [portability] Use relative paths for cross-artifact navigation to ensure portability across different environments and installations.

## Skill Design

- [decision:skills] All skills must follow Completion Status Protocol (DONE/DONE_WITH_CONCERNS/BLOCKED/NEEDS_CONTEXT) defined in SKILL-DESIGN-SPEC.md sections 13-14. New skills created via skill-generator auto-include the protocol reference. (2026-03-29)
- [decision:hooks] Hook safety guardrails use TypeScript HookTemplate pattern (not standalone bash scripts) for integration with CCW hook endpoint system. Templates: careful-destructive-guard, freeze-edit-boundary. (2026-03-29)
@@ -18,6 +18,8 @@

10. [Quality Control Standards](#10-质量控制规范)
11. [Best Practices Checklist](#11-最佳实践清单)
12. [Example Templates](#12-示例模板)
13. [Completion Status Protocol](#13-completion-status-protocol)
14. [Escalation Protocol](#14-escalation-protocol)

---
@@ -665,6 +667,144 @@ Generate XXX through multi-phase analysis.

---

## 13. Completion Status Protocol

### 13.1 Status Definitions

Every Skill execution MUST terminate with one of the following four statuses:

| Status | Exit Code | Definition |
|--------|-----------|------------|
| **DONE** | 0 | All acceptance criteria met, outputs generated successfully |
| **DONE_WITH_CONCERNS** | 0 | Completed but with warnings or non-blocking issues |
| **BLOCKED** | 1 | Cannot proceed, requires external action or resource |
| **NEEDS_CONTEXT** | 2 | Missing information needed to make a decision |
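The status-to-exit-code mapping above can be sketched as a small helper. This is illustrative only; the spec defines the codes, not this implementation.

```typescript
// Completion statuses and their exit codes, per the table above.
type CompletionStatus = 'DONE' | 'DONE_WITH_CONCERNS' | 'BLOCKED' | 'NEEDS_CONTEXT';

const EXIT_CODES: Record<CompletionStatus, number> = {
  DONE: 0,                // success
  DONE_WITH_CONCERNS: 0,  // success, but with warnings
  BLOCKED: 1,             // needs external action
  NEEDS_CONTEXT: 2,       // needs more information
};

function exitCodeFor(status: CompletionStatus): number {
  return EXIT_CODES[status];
}

console.log(exitCodeFor('BLOCKED')); // 1
```

Note that DONE and DONE_WITH_CONCERNS share exit code 0, so callers that need to distinguish them must parse the structured `## STATUS:` output rather than rely on the exit code alone.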
### 13.2 When to Use

| Status | Use When |
|--------|----------|
| **DONE** | All phases completed, quality gates passed, outputs validated |
| **DONE_WITH_CONCERNS** | Core task completed but: deprecation warnings found, quality score 60-79%, non-critical checks failed, partial data used as fallback |
| **BLOCKED** | Required file/service unavailable, dependency not installed, permission denied, prerequisite task not completed |
| **NEEDS_CONTEXT** | Ambiguous user requirement, multiple valid interpretations, missing configuration value, unclear scope boundary |

### 13.3 Output Format

Each status MUST use the following structured output at the end of Skill execution:

```
## STATUS: {DONE|DONE_WITH_CONCERNS|BLOCKED|NEEDS_CONTEXT}

**Summary**: {one-line description of outcome}

### Details
{status-specific content — see below}

### Outputs
- {list of files created/modified, if any}
```
**DONE details**:
```
### Details
- Phases completed: {N}/{N}
- Quality score: {score}%
- Key outputs: {list of primary deliverables}
```

**DONE_WITH_CONCERNS details**:
```
### Details
- Phases completed: {N}/{N}
- Concerns:
  1. {concern description} — Impact: {low|medium} — Suggested fix: {action}
  2. ...
```

**BLOCKED details**:
```
### Details
- Blocked at: Phase {N}, Step {M}
- Blocker: {specific description of what is blocking}
- Need: {specific action or resource required to unblock}
- Attempted: {what was tried before declaring blocked}
```

**NEEDS_CONTEXT details**:
```
### Details
- Paused at: Phase {N}, Step {M}
- Questions:
  1. {specific question requiring user/caller input}
  2. ...
- Context available: {what is already known}
- Impact: {what cannot proceed without answers}
```
---

## 14. Escalation Protocol

### 14.1 Three-Strike Rule

When a Skill encounters consecutive failures on the **same step**, the following escalation applies:

| Strike | Action |
|--------|--------|
| 1st failure | Log error, retry with adjusted approach |
| 2nd failure | Log error, try alternative strategy |
| 3rd failure | **STOP execution immediately**, output diagnostic dump, request human intervention |

### 14.2 Failure Tracking

Track failures per step, not globally. A success on any step resets that step's failure counter.

```
Step failure counter:
  Phase 2, Step 3: [fail] [fail] [STOP] → escalate
  Phase 2, Step 4: [fail] [success] → counter reset, continue
```
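The per-step tracking rule can be sketched as follows. This is a minimal illustration; the class and method names are hypothetical, not part of the spec.

```typescript
// Tracks consecutive failures per step; a success resets that step's counter.
class FailureTracker {
  private counters = new Map<string, number>();

  // Returns true when the three-strike limit is reached and execution must stop.
  recordFailure(stepId: string): boolean {
    const n = (this.counters.get(stepId) ?? 0) + 1;
    this.counters.set(stepId, n);
    return n >= 3;
  }

  recordSuccess(stepId: string): void {
    this.counters.delete(stepId); // counter reset
  }

  failures(stepId: string): number {
    return this.counters.get(stepId) ?? 0;
  }
}

const t = new FailureTracker();
t.recordFailure('phase2.step3');                  // strike 1
t.recordFailure('phase2.step3');                  // strike 2
const escalate = t.recordFailure('phase2.step3'); // strike 3 → escalate
console.log(escalate); // true
t.recordFailure('phase2.step4');
t.recordSuccess('phase2.step4');                  // resets step 4 only
console.log(t.failures('phase2.step4')); // 0
```

Keying the map by step identifier is what makes the tracking per-step rather than global: a failure on one step never counts against another.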
### 14.3 Diagnostic Dump Format

On the 3rd consecutive failure, output the following diagnostic block:

```
## ESCALATION: 3-Strike Limit Reached

### Failed Step
- Phase: {phase_number} — {phase_name}
- Step: {step_number} — {step_name}

### Error History
1. Attempt 1: {error message or description}
   Strategy: {what was tried}
2. Attempt 2: {error message or description}
   Strategy: {alternative approach tried}
3. Attempt 3: {error message or description}
   Strategy: {final approach tried}

### Current State
- Last successful phase/step: {phase.step}
- Files generated so far: {list}
- Files touched in failed attempts: {list}

### Diagnosis
- Likely root cause: {assessment}
- Suggested human action: {specific recommendation}
```
### 14.4 Post-Escalation Behavior

After outputting the diagnostic dump:
1. Set Skill status to **BLOCKED** (see Section 13)
2. Do NOT attempt further retries
3. Preserve all intermediate outputs for debugging
4. Wait for human intervention before resuming

---

## Appendix A: Design Comparison

| Design Point | software-manual | copyright-docs |
@@ -80,6 +80,11 @@ Generate Phase files for Sequential execution mode, defining fixed-order executi

{{quality_checklist}}

## Completion Status

Return one of: DONE | DONE_WITH_CONCERNS | BLOCKED | NEEDS_CONTEXT with structured reason.
See [Completion Status Protocol](./../_shared/SKILL-DESIGN-SPEC.md#13) for output format.

## Next Phase

{{next_phase_link}}
@@ -456,6 +461,11 @@ Write(\`${workDir}/${phaseConfig.output}\`, JSON.stringify(result, null, 2));

- [ ] Core logic executed successfully
- [ ] Output format correct

## Completion Status

Return one of: DONE | DONE_WITH_CONCERNS | BLOCKED | NEEDS_CONTEXT with structured reason.
See [Completion Status Protocol](./../_shared/SKILL-DESIGN-SPEC.md#13) for output format.

${nextPhase ?
  `## Next Phase\n\n→ [Phase ${index + 2}: ${nextPhase.name}](${nextPhase.id}.md)` :
  '## Completion\n\nThis is the final phase.'}
@@ -80,6 +80,10 @@ Bash(\`mkdir -p "\${workDir}"\`);

{{output_structure}}
\`\`\`

## Completion Protocol

Follow [Completion Status Protocol](./../_shared/SKILL-DESIGN-SPEC.md#13) and [Escalation Protocol](./../_shared/SKILL-DESIGN-SPEC.md#14).

## Reference Documents by Phase

> **Important**: Reference documents should be organized by execution phase, clearly marking when and in what scenarios they are used. Avoid listing documents in a flat manner.
@@ -1304,9 +1304,41 @@ export function SettingsPage() {
     updateCliTool(toolId, { envFile });
   };

-  const handleUpdateSettingsFile = (toolId: string, settingsFile: string | undefined) => {
+  const handleUpdateSettingsFile = useCallback(async (toolId: string, settingsFile: string | undefined) => {
     updateCliTool(toolId, { settingsFile });
-  };
+
+    // Auto-parse models from settings file
+    if (settingsFile && SETTINGS_FILE_TOOLS.has(toolId)) {
+      try {
+        const csrfToken = getCsrfToken();
+        const headers: Record<string, string> = { 'Content-Type': 'application/json' };
+        if (csrfToken) headers['X-CSRF-Token'] = csrfToken;
+
+        const res = await fetch('/api/cli/parse-settings', {
+          method: 'POST',
+          headers,
+          body: JSON.stringify({ path: settingsFile }),
+          credentials: 'same-origin',
+        });
+
+        if (res.ok) {
+          const data = await res.json();
+          if (data.primaryModel || data.secondaryModel || data.availableModels?.length) {
+            const updates: Partial<{ primaryModel: string; secondaryModel: string; availableModels: string[] }> = {};
+            if (data.primaryModel) updates.primaryModel = data.primaryModel;
+            if (data.secondaryModel) updates.secondaryModel = data.secondaryModel;
+            if (data.availableModels?.length) updates.availableModels = data.availableModels;
+            updateCliTool(toolId, updates);
+            toast.success(`Models loaded from settings: ${data.primaryModel || 'default'}`, {
+              duration: 3000,
+            });
+          }
+        }
+      } catch {
+        // Silently fail — file parsing is best-effort
+      }
+    }
+  }, [updateCliTool]);

   const handleUpdateEffort = (toolId: string, effort: string | undefined) => {
     updateCliTool(toolId, { effort });
@@ -126,10 +126,56 @@ function isDangerousCommand(cmd: string): boolean {
    />\s*\/dev\//i,
    /wget.*\|.*sh/i,
    /curl.*\|.*bash/i,
    /DROP\s+TABLE/i,
    /TRUNCATE\s+TABLE/i,
    /kubectl\s+delete/i,
    /docker\s+(rm|rmi|system\s+prune)/i,
  ];
  return patterns.some(p => p.test(cmd));
}

/**
 * Safe deletion targets - directories commonly cleaned in dev workflows
 */
const SAFE_DELETE_TARGETS = [
  'node_modules',
  '.next',
  'dist',
  '__pycache__',
  '.cache',
  'coverage',
  '.turbo',
  'build',
];

/**
 * Check if a destructive command targets only safe directories.
 * Returns true if the command IS dangerous (not a safe exception).
 * Returns false if the command targets a safe directory (allow it through).
 */
function isDestructiveWithSafeException(cmd: string): boolean {
  if (!isDangerousCommand(cmd)) {
    return false;
  }
  // Only apply safe exceptions for rm -rf patterns
  const rmRfMatch = cmd.match(/rm\s+-rf\s+(.+)/i);
  if (rmRfMatch) {
    const args = rmRfMatch[1].trim().split(/\s+/);
    // Every target must match a safe pattern for the exception to apply
    const allSafe = args.length > 0 && args.every(arg => {
      const target = arg.replace(/^["']|["']$/g, '').replace(/[/\\]+$/, '');
      const targetBase = target.split(/[/\\]/).pop() || '';
      return SAFE_DELETE_TARGETS.some(safe =>
        targetBase === safe || target === safe
      );
    });
    if (allSafe) {
      return false; // Safe exception - not dangerous
    }
  }
  return true; // Dangerous, no safe exception applies
}
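To illustrate how the guard classifies commands, here is a self-contained sketch of the safe-exception logic with a reduced pattern list (the real isDangerousCommand covers many more patterns than the single stand-in used here):

```typescript
// Reduced re-implementation for illustration; mirrors the safe-exception logic above.
const SAFE = ['node_modules', '.next', 'dist', '__pycache__', '.cache', 'coverage', '.turbo', 'build'];

function dangerous(cmd: string): boolean {
  return /rm\s+-rf\s+/i.test(cmd); // stand-in for the full pattern list
}

function destructiveWithSafeException(cmd: string): boolean {
  if (!dangerous(cmd)) return false;
  const m = cmd.match(/rm\s+-rf\s+(.+)/i);
  if (m) {
    const args = m[1].trim().split(/\s+/);
    // Every target must be a known-safe directory for the exception to apply.
    const allSafe = args.length > 0 && args.every(arg => {
      const target = arg.replace(/^["']|["']$/g, '').replace(/[/\\]+$/, '');
      const base = target.split(/[/\\]/).pop() || '';
      return SAFE.some(s => base === s || target === s);
    });
    if (allSafe) return false;
  }
  return true;
}

console.log(destructiveWithSafeException('rm -rf node_modules dist')); // false: safe targets only
console.log(destructiveWithSafeException('rm -rf src'));               // true: not a safe target
console.log(destructiveWithSafeException('ls -la'));                   // false: not destructive
```

The "every target must be safe" rule is the conservative choice: `rm -rf node_modules src` is still flagged, because one unsafe argument poisons the whole command.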
/**
 * Check if command is a dangerous git operation
 */

@@ -552,6 +598,77 @@ export const HOOK_TEMPLATES: HookTemplate[] = [
    }
  },
  {
    id: 'careful-destructive-guard',
    name: 'Careful Destructive Guard',
    description: 'Block destructive commands but allow safe targets (node_modules, dist, .next, etc.)',
    category: 'protection',
    trigger: 'PreToolUse',
    matcher: 'Bash',
    execute: (data) => {
      const cmd = (data.tool_input?.command as string) || '';
      if (isDestructiveWithSafeException(cmd)) {
        return {
          exitCode: 0,
          jsonOutput: {
            hookSpecificOutput: {
              hookEventName: 'PreToolUse',
              permissionDecision: 'ask',
              permissionDecisionReason: `Destructive command detected: requires user confirmation`
            }
          }
        };
      }
      return { exitCode: 0 };
    }
  },
  {
    id: 'freeze-edit-boundary',
    name: 'Freeze Edit Boundary',
    description: 'Block Write/Edit to files outside locked directories defined in .claude/freeze.json',
    category: 'protection',
    trigger: 'PreToolUse',
    matcher: 'Write|Edit',
    execute: (data) => {
      const file = getStringInput(data.tool_input?.file_path);
      if (!file) {
        return { exitCode: 0 };
      }
      const projectDir = data.cwd || process.env.CLAUDE_PROJECT_DIR || process.cwd();
      const freezePath = join(projectDir, '.claude', 'freeze.json');
      if (!existsSync(freezePath)) {
        return { exitCode: 0 };
      }
      try {
        const freezeData = JSON.parse(readFileSync(freezePath, 'utf8'));
        const lockedDirs: string[] = freezeData.locked_dirs;
        if (!Array.isArray(lockedDirs) || lockedDirs.length === 0) {
          return { exitCode: 0 };
        }
        const resolvedFile = resolve(projectDir, file);
        const isInLockedDir = lockedDirs.some(dir => {
          const resolvedDir = resolve(projectDir, dir);
          return resolvedFile.startsWith(resolvedDir + '/') || resolvedFile.startsWith(resolvedDir + '\\');
        });
        if (!isInLockedDir) {
          return {
            exitCode: 2,
            jsonOutput: {
              hookSpecificOutput: {
                hookEventName: 'PreToolUse',
                permissionDecision: 'deny',
                permissionDecisionReason: `File ${file} is outside locked directories: ${lockedDirs.join(', ')}`
              }
            }
          };
        }
      } catch {
        // Ignore parse errors - if freeze.json is invalid, allow edits
      }
      return { exitCode: 0 };
    }
  },
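For reference, a `.claude/freeze.json` that the template above would read might look like this. The `locked_dirs` field name comes from the code; the directory values are illustrative:

```json
{
  "locked_dirs": [
    "src/generated",
    "docs/api"
  ]
}
```

With this config, Write/Edit calls targeting files under `src/generated` or `docs/api` are allowed through, and any edit outside those directories is denied with exit code 2, matching the `!isInLockedDir` branch above.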
  // ============ Indexing Templates ============
  {
    id: 'post-edit-index',
@@ -447,6 +447,49 @@ export async function handleCliRoutes(ctx: RouteContext): Promise<boolean> {
    return true;
  }

  // API: Parse Claude settings file and extract model configuration
  if (pathname === '/api/cli/parse-settings' && req.method === 'POST') {
    handlePostRequest(req, res, async (body: unknown) => {
      const { path: filePath } = body as { path?: string };
      if (!filePath || typeof filePath !== 'string') {
        return { error: 'File path is required', status: 400 };
      }

      const fs = await import('fs/promises');
      const resolvedPath = resolve(filePath);

      try {
        const content = await fs.readFile(resolvedPath, 'utf-8');
        const settings = JSON.parse(content);
        const env = settings.env || {};

        // Extract model values from ANTHROPIC_* env vars
        const primaryModel = env.ANTHROPIC_MODEL || '';
        const secondaryModel = env.ANTHROPIC_DEFAULT_HAIKU_MODEL || '';

        // Collect all unique model values from ANTHROPIC_* env vars
        const modelKeys = Object.keys(env).filter((k: string) =>
          k.startsWith('ANTHROPIC_') && k.includes('MODEL')
        );
        const availableModels = [...new Set(
          modelKeys.map((k: string) => env[k]).filter((v: string) => v && typeof v === 'string')
        )] as string[];

        return { primaryModel, secondaryModel, availableModels };
      } catch (err) {
        const msg = (err as Error).message;
        if (msg.includes('ENOENT')) {
          return { error: `File not found: ${resolvedPath}`, status: 404 };
        }
        if (msg.includes('JSON')) {
          return { error: `Invalid JSON in file: ${resolvedPath}`, status: 400 };
        }
        return { error: msg, status: 500 };
      }
    });
    return true;
  }
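The extraction logic in the route can be exercised against a sample `env` object. The key names below match what the route reads; the model values are placeholders, not real configuration:

```typescript
// Mirrors the route's extraction: pull model values from ANTHROPIC_* env vars.
const env: Record<string, string> = {
  ANTHROPIC_MODEL: 'claude-sonnet-4-5',
  ANTHROPIC_DEFAULT_HAIKU_MODEL: 'claude-haiku-4-5',
  ANTHROPIC_SMALL_FAST_MODEL: 'claude-haiku-4-5', // duplicate value, deduplicated below
  ANTHROPIC_BASE_URL: 'https://example.invalid',  // no MODEL in key, so ignored
};

const primaryModel = env.ANTHROPIC_MODEL || '';
const secondaryModel = env.ANTHROPIC_DEFAULT_HAIKU_MODEL || '';
const modelKeys = Object.keys(env).filter(k => k.startsWith('ANTHROPIC_') && k.includes('MODEL'));
const availableModels = [...new Set(modelKeys.map(k => env[k]).filter(v => v && typeof v === 'string'))];

console.log(primaryModel);    // claude-sonnet-4-5
console.log(availableModels); // ['claude-sonnet-4-5', 'claude-haiku-4-5']
```

The `Set` pass is what keeps `availableModels` free of duplicates when several `ANTHROPIC_*MODEL*` keys point at the same model.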
  // API: Get/Update Tool Config
  const configMatch = pathname.match(/^\/api\/cli\/config\/(gemini|qwen|codex|claude|opencode)$/);
  if (configMatch) {
@@ -822,7 +822,7 @@ export async function handleSystemRoutes(ctx: SystemRouteContext): Promise<boole
    return new Promise<Record<string, unknown>>((resolve) => {
      if (process.platform === 'win32') {
        const script = `Add-Type -AssemblyName System.Windows.Forms; $d = New-Object System.Windows.Forms.FolderBrowserDialog; $d.SelectedPath = '${startDir.replace(/'/g, "''")}'; $d.ShowNewFolderButton = $true; if ($d.ShowDialog() -eq 'OK') { $d.SelectedPath }`;
-        execFile('powershell', ['-NoProfile', '-Command', script],
+        execFile('powershell', ['-NoProfile', '-Sta', '-Command', script],
          { timeout: 120000 },
          (err, stdout) => {
            if (err || !stdout.trim()) {

@@ -879,7 +879,7 @@ export async function handleSystemRoutes(ctx: SystemRouteContext): Promise<boole
    return new Promise<Record<string, unknown>>((resolve) => {
      if (process.platform === 'win32') {
        const script = `Add-Type -AssemblyName System.Windows.Forms; $d = New-Object System.Windows.Forms.OpenFileDialog; $d.InitialDirectory = '${startDir.replace(/'/g, "''")}'; if ($d.ShowDialog() -eq 'OK') { $d.FileName }`;
-        execFile('powershell', ['-NoProfile', '-Command', script],
+        execFile('powershell', ['-NoProfile', '-Sta', '-Command', script],
          { timeout: 120000 },
          (err, stdout) => {
            if (err || !stdout.trim()) {