Mirror of https://github.com/catlog22/Claude-Code-Workflow.git, synced 2026-02-05 01:50:27 +08:00
feat: Enhance documentation diagnosis and category mapping

- Introduced an action to diagnose documentation structure, identifying redundancies and conflicts.
- Added centralized category mappings in JSON format for improved detection and strategy application.
- Updated existing functions to utilize the new mappings for taxonomy and strategy matching.
- Implemented new detection patterns for documentation redundancy and conflict.
- Expanded the state schema to include documentation diagnosis results.
- Enhanced severity criteria and the strategy selection guide to accommodate the new documentation issues.
@@ -65,10 +65,11 @@ Based on comprehensive analysis, skill-tuning addresses **core skill issues** an

| Priority | Problem | Root Cause | Solution Strategy |
|----------|---------|------------|-------------------|
-| **P0** | Data Flow Disruption | Scattered state, inconsistent formats | Centralized session store, transactional updates |
-| **P1** | Agent Coordination | Fragile call chains, merge complexity | Dedicated orchestrator, enforced data contracts |
-| **P2** | Context Explosion | Token accumulation, multi-turn bloat | Context summarization, sliding window, structured state |
-| **P3** | Long-tail Forgetting | Early constraint loss | Constraint injection, checkpointing, goal alignment |
+| **P0** | Authoring Principles Violation | Intermediate file storage, state bloat, file relay | eliminate_intermediate_files, minimize_state, context_passing |
+| **P1** | Data Flow Disruption | Scattered state, inconsistent formats | state_centralization, schema_enforcement |
+| **P2** | Agent Coordination | Fragile call chains, merge complexity | error_wrapping, result_validation |
+| **P3** | Context Explosion | Token accumulation, multi-turn bloat | sliding_window, context_summarization |
+| **P4** | Long-tail Forgetting | Early constraint loss | constraint_injection, checkpoint_restore |

### General Optimization Areas (on-demand analysis via Gemini CLI)
@@ -228,6 +229,11 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md)
│   → Detect result passing issues                                            │
│   → Output: agent-diagnosis.json                                            │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-diagnose-docs: Documentation Structure Analysis (Optional)           │
│   → Detect definition duplicates across files                               │
│   → Detect conflicting definitions                                          │
│   → Output: docs-diagnosis.json                                             │
├─────────────────────────────────────────────────────────────────────────────┤
│ action-generate-report: Consolidated Report                                 │
│   → Merge all diagnosis results                                             │
│   → Prioritize issues by severity                                           │
@@ -275,7 +281,8 @@ Bash(`mkdir -p "${workDir}/fixes"`);
│   ├── context-diagnosis.json      # Context explosion analysis
│   ├── memory-diagnosis.json       # Long-tail forgetting analysis
│   ├── dataflow-diagnosis.json     # Data flow analysis
-│   └── agent-diagnosis.json        # Agent coordination analysis
+│   ├── agent-diagnosis.json        # Agent coordination analysis
+│   └── docs-diagnosis.json         # Documentation structure analysis (optional)
├── backups/
│   └── {skill-name}-backup/        # Original skill files backup
├── fixes/
@@ -287,58 +294,14 @@ Bash(`mkdir -p "${workDir}/fixes"`);

## State Schema

-```typescript
-interface TuningState {
-  status: 'pending' | 'running' | 'completed' | 'failed';
-  target_skill: {
-    name: string;
-    path: string;
-    execution_mode: 'sequential' | 'autonomous';
-  };
-  user_issue_description: string;
-  diagnosis: {
-    context: DiagnosisResult | null;
-    memory: DiagnosisResult | null;
-    dataflow: DiagnosisResult | null;
-    agent: DiagnosisResult | null;
-  };
-  issues: Issue[];
-  proposed_fixes: Fix[];
-  applied_fixes: AppliedFix[];
-  iteration_count: number;
-  max_iterations: number;
-  quality_score: number;
-  completed_actions: string[];
-  current_action: string | null;
-  errors: Error[];
-  error_count: number;
-}
+See [phases/state-schema.md](phases/state-schema.md) for the detailed state structure definition.

-interface DiagnosisResult {
-  status: 'completed' | 'skipped';
-  issues_found: number;
-  severity: 'critical' | 'high' | 'medium' | 'low' | 'none';
-  details: any;
-}
-
-interface Issue {
-  id: string;
-  type: 'context_explosion' | 'memory_loss' | 'dataflow_break' | 'agent_failure';
-  severity: 'critical' | 'high' | 'medium' | 'low';
-  location: string;
-  description: string;
-  evidence: string[];
-}
-
-interface Fix {
-  id: string;
-  issue_id: string;
-  strategy: string;
-  description: string;
-  changes: FileChange[];
-  risk: 'low' | 'medium' | 'high';
-}
-```
+Core state fields:
+- `status`: workflow status (pending/running/completed/failed)
+- `target_skill`: target skill information
+- `diagnosis`: diagnosis results per dimension
+- `issues`: list of issues found
+- `proposed_fixes`: proposed fix plans
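
For orientation, a minimal initial state consistent with the fields above might look like the following sketch (the concrete values are illustrative assumptions, not taken from the skill):

```javascript
// Hypothetical example of a freshly initialized TuningState (illustrative only)
const initialState = {
  status: 'pending',
  target_skill: {
    name: 'my-skill',                  // assumed example name
    path: '.claude/skills/my-skill',   // assumed example path
    execution_mode: 'sequential'
  },
  user_issue_description: 'Context grows too large after a few turns', // example input
  diagnosis: { context: null, memory: null, dataflow: null, agent: null, docs: null },
  issues: [],
  proposed_fixes: [],
  applied_fixes: [],
  iteration_count: 0,
  max_iterations: 3,   // assumed default
  quality_score: 0,
  completed_actions: [],
  current_action: null,
  errors: [],
  error_count: 0
};
```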

## Reference Documents

@@ -352,6 +315,7 @@ interface Fix {
| [phases/actions/action-diagnose-memory.md](phases/actions/action-diagnose-memory.md) | Long-tail forgetting diagnosis |
| [phases/actions/action-diagnose-dataflow.md](phases/actions/action-diagnose-dataflow.md) | Data flow diagnosis |
| [phases/actions/action-diagnose-agent.md](phases/actions/action-diagnose-agent.md) | Agent coordination diagnosis |
| [phases/actions/action-diagnose-docs.md](phases/actions/action-diagnose-docs.md) | Documentation structure diagnosis |
| [phases/actions/action-generate-report.md](phases/actions/action-generate-report.md) | Report generation |
| [phases/actions/action-propose-fixes.md](phases/actions/action-propose-fixes.md) | Fix proposal |
| [phases/actions/action-apply-fix.md](phases/actions/action-apply-fix.md) | Fix application |
@@ -85,62 +85,55 @@ RULES:

### Phase 2: Spec Matching

-Match detection patterns and fix strategies for each dimension based on the rules in `specs/dimension-mapping.md`:
+Match detection patterns and fix strategies for each dimension based on the `specs/category-mappings.json` configuration:

```javascript
+// Load the centralized mapping configuration
+const mappings = JSON.parse(Read('specs/category-mappings.json'));

function matchSpecs(dimensions) {
-  // Load mapping rules
-  const mappingRules = loadMappingRules();

  return dimensions.map(dim => {
    // Match taxonomy pattern
-    const taxonomyMatch = findTaxonomyMatch(dim.inferred_category, mappingRules);
+    const taxonomyMatch = findTaxonomyMatch(dim.inferred_category);

    // Match strategy
-    const strategyMatch = findStrategyMatch(dim.inferred_category, mappingRules);
+    const strategyMatch = findStrategyMatch(dim.inferred_category);

    // Determine whether the dimension is satisfied (core criterion: a fix strategy exists)
    const hasFix = strategyMatch !== null && strategyMatch.strategies.length > 0;

    return {
      dimension_id: dim.id,
      taxonomy_match: taxonomyMatch,
      strategy_match: strategyMatch,
      has_fix: hasFix,
-      needs_gemini_analysis: taxonomyMatch === null // deep Gemini analysis needed when there is no built-in detection
+      needs_gemini_analysis: taxonomyMatch === null || mappings.categories[dim.inferred_category]?.needs_gemini_analysis
    };
  });
}

-function findTaxonomyMatch(category, rules) {
-  const patternMapping = {
-    'context_explosion': { category: 'context_explosion', pattern_ids: ['CTX-001', 'CTX-002', 'CTX-003', 'CTX-004', 'CTX-005'], severity_hint: 'high' },
-    'memory_loss': { category: 'memory_loss', pattern_ids: ['MEM-001', 'MEM-002', 'MEM-003', 'MEM-004', 'MEM-005'], severity_hint: 'high' },
-    'dataflow_break': { category: 'dataflow_break', pattern_ids: ['DF-001', 'DF-002', 'DF-003', 'DF-004', 'DF-005'], severity_hint: 'critical' },
-    'agent_failure': { category: 'agent_failure', pattern_ids: ['AGT-001', 'AGT-002', 'AGT-003', 'AGT-004', 'AGT-005', 'AGT-006'], severity_hint: 'high' },
-    'performance': { category: 'performance', pattern_ids: ['CTX-001', 'CTX-003'], severity_hint: 'medium' },
-    'error_handling': { category: 'error_handling', pattern_ids: ['AGT-001', 'AGT-002'], severity_hint: 'medium' }
+function findTaxonomyMatch(category) {
+  const config = mappings.categories[category];
+  if (!config || config.pattern_ids.length === 0) return null;
+
+  return {
+    category: category,
+    pattern_ids: config.pattern_ids,
+    severity_hint: config.severity_hint
+  };
-
-  return patternMapping[category] || null;
}

-function findStrategyMatch(category, rules) {
-  const strategyMapping = {
-    'context_explosion': { strategies: ['sliding_window', 'path_reference', 'context_summarization', 'structured_state'], risk_levels: ['low', 'low', 'low', 'medium'] },
-    'memory_loss': { strategies: ['constraint_injection', 'state_constraints_field', 'checkpoint_restore', 'goal_embedding'], risk_levels: ['low', 'low', 'low', 'medium'] },
-    'dataflow_break': { strategies: ['state_centralization', 'schema_enforcement', 'field_normalization'], risk_levels: ['medium', 'low', 'low'] },
-    'agent_failure': { strategies: ['error_wrapping', 'result_validation', 'flatten_nesting'], risk_levels: ['low', 'low', 'medium'] },
-    'prompt_quality': { strategies: ['structured_prompt', 'output_schema', 'grounding_context', 'format_enforcement'], risk_levels: ['low', 'low', 'medium', 'low'] },
-    'architecture': { strategies: ['phase_decomposition', 'interface_contracts', 'plugin_architecture'], risk_levels: ['medium', 'medium', 'high'] },
-    'performance': { strategies: ['token_budgeting', 'parallel_execution', 'result_caching', 'lazy_loading'], risk_levels: ['low', 'low', 'low', 'low'] },
-    'error_handling': { strategies: ['graceful_degradation', 'error_propagation', 'structured_logging'], risk_levels: ['low', 'low', 'low'] },
-    'output_quality': { strategies: ['quality_gates', 'output_validation', 'template_enforcement'], risk_levels: ['low', 'low', 'low'] },
-    'user_experience': { strategies: ['progress_tracking', 'status_communication', 'interactive_checkpoints'], risk_levels: ['low', 'low', 'low'] }
+function findStrategyMatch(category) {
+  const config = mappings.categories[category];
+  if (!config) {
+    // Fallback to custom from config
+    return mappings.fallback;
+  }
+
+  return {
+    strategies: config.strategies,
+    risk_levels: config.risk_levels
+  };
-
-  // Fallback to custom
-  return strategyMapping[category] || { strategies: ['custom'], risk_levels: ['medium'] };
}
```

@@ -224,11 +217,10 @@ function detectAmbiguities(dimensions, specMatches) {
}

function suggestInterpretations(dim) {
-  // Recommend possible interpretations based on keywords
-  const categories = [
-    'context_explosion', 'memory_loss', 'dataflow_break', 'agent_failure',
-    'prompt_quality', 'architecture', 'performance', 'error_handling'
-  ];
+  // Recommend possible interpretations based on the mappings config
+  const categories = Object.keys(mappings.categories).filter(
+    cat => cat !== 'authoring_principles_violation' // exclude the internal detection category
+  );
  return categories.slice(0, 4); // return the 4 most common as options
}

@@ -240,12 +232,10 @@ function hasConflictingKeywords(keywords) {
}

function getKeywordCategoryHint(keyword) {
+  // Build the lookup table from mappings.keywords (merging Chinese and English keywords)
  const keywordMap = {
-    '慢': 'performance', 'slow': 'performance',
-    '遗忘': 'memory_loss', 'forget': 'memory_loss',
-    '状态': 'dataflow_break', 'state': 'dataflow_break',
-    'agent': 'agent_failure', '失败': 'agent_failure',
-    'token': 'context_explosion', '上下文': 'context_explosion'
+    ...mappings.keywords.chinese,
+    ...mappings.keywords.english
  };
  return keywordMap[keyword.toLowerCase()];
}

@@ -281,33 +271,13 @@ async function handleAmbiguities(ambiguities, dimensions) {
}

function getCategoryLabel(category) {
-  const labels = {
-    'context_explosion': '上下文膨胀',
-    'memory_loss': '指令遗忘',
-    'dataflow_break': '数据流问题',
-    'agent_failure': 'Agent 协调问题',
-    'prompt_quality': '提示词质量',
-    'architecture': '架构问题',
-    'performance': '性能问题',
-    'error_handling': '错误处理',
-    'custom': '其他问题'
-  };
-  return labels[category] || category;
+  // Load labels from the mappings config
+  return mappings.category_labels_chinese[category] || category;
}

function getCategoryDescription(category) {
-  const descriptions = {
-    'context_explosion': 'Token 累积导致上下文过大',
-    'memory_loss': '早期指令或约束在后期丢失',
-    'dataflow_break': '状态数据在阶段间不一致',
-    'agent_failure': '子 Agent 调用失败或结果异常',
-    'prompt_quality': '提示词模糊导致输出不稳定',
-    'architecture': '阶段划分或模块结构不合理',
-    'performance': '执行慢或 Token 消耗高',
-    'error_handling': '错误恢复机制不完善',
-    'custom': '需要自定义分析的问题'
-  };
-  return descriptions[category] || '需要进一步分析';
+  // Load descriptions from the mappings config
+  return mappings.category_descriptions[category] || 'Requires further analysis';
}
```

@@ -0,0 +1,299 @@
# Action: Diagnose Documentation Structure

Detects documentation redundancy and conflict issues in the target skill.

## Purpose

- Detect duplicate definitions (State Schema, mapping tables, type definitions, etc.)
- Detect conflicting definitions (inconsistent priority definitions, implementation/documentation drift, etc.)
- Generate suggestions for merging duplicates and resolving conflicts

## Preconditions

- [ ] `state.status === 'running'`
- [ ] `state.target_skill !== null`
- [ ] `!state.diagnosis.docs`
- [ ] The user-specified focus_areas include 'docs' or 'all', or a comprehensive diagnosis is required
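
As a rough illustration, the checklist above could be expressed as a small guard evaluated before the action runs (a minimal sketch; the helper name `canDiagnoseDocs` is an assumption, not part of the skill):

```javascript
// Hypothetical guard mirroring the preconditions listed above (illustrative only)
function canDiagnoseDocs(state) {
  const focus = state.focus_areas || [];
  const focusAllowsDocs = focus.length === 0 || focus.includes('docs') || focus.includes('all');
  return (
    state.status === 'running' &&
    state.target_skill !== null &&
    !state.diagnosis.docs &&
    focusAllowsDocs
  );
}
```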

## Detection Patterns

### DOC-RED-001: Duplicate Core Definitions

Detects core definitions such as the State Schema and core interfaces being defined in multiple places:

```javascript
async function detectDefinitionDuplicates(skillPath) {
  const patterns = [
    { name: 'state_schema', regex: /interface\s+(TuningState|State)\s*\{/g },
    { name: 'fix_strategy', regex: /type\s+FixStrategy\s*=/g },
    { name: 'issue_type', regex: /type:\s*['"]?(context_explosion|memory_loss|dataflow_break)/g }
  ];

  const files = Glob('**/*.md', { cwd: skillPath });
  const duplicates = [];

  for (const pattern of patterns) {
    const matches = [];
    for (const file of files) {
      const content = Read(`${skillPath}/${file}`);
      if (pattern.regex.test(content)) {
        matches.push({ file, pattern: pattern.name });
      }
    }
    if (matches.length > 1) {
      duplicates.push({
        type: pattern.name,
        files: matches.map(m => m.file),
        severity: 'high'
      });
    }
  }

  return duplicates;
}
```

### DOC-RED-002: Duplicate Hardcoded Configuration

Detects hardcoded values in action files that duplicate the spec documents:

```javascript
async function detectHardcodedDuplicates(skillPath) {
  const actionFiles = Glob('phases/actions/*.md', { cwd: skillPath });
  const specFiles = Glob('specs/*.md', { cwd: skillPath });

  const duplicates = [];

  for (const actionFile of actionFiles) {
    const content = Read(`${skillPath}/${actionFile}`);

    // Detect hardcoded mapping objects
    const hardcodedPatterns = [
      /const\s+\w*[Mm]apping\s*=\s*\{/g,
      /patternMapping\s*=\s*\{/g,
      /strategyMapping\s*=\s*\{/g
    ];

    for (const pattern of hardcodedPatterns) {
      if (pattern.test(content)) {
        duplicates.push({
          type: 'hardcoded_mapping',
          file: actionFile,
          description: '硬编码映射可能与 specs/ 中的定义重复',
          severity: 'high'
        });
      }
    }
  }

  return duplicates;
}
```

### DOC-CON-001: Conflicting Priority Definitions

Detects priorities such as P0-P3 being defined inconsistently across files:

```javascript
async function detectPriorityConflicts(skillPath) {
  const files = Glob('**/*.md', { cwd: skillPath });
  const priorityDefs = {};

  const priorityPattern = /\*\*P(\d+)\*\*[:\s]+([^\|]+)/g;

  for (const file of files) {
    const content = Read(`${skillPath}/${file}`);
    let match;
    while ((match = priorityPattern.exec(content)) !== null) {
      const priority = `P${match[1]}`;
      const definition = match[2].trim();

      if (!priorityDefs[priority]) {
        priorityDefs[priority] = [];
      }
      priorityDefs[priority].push({ file, definition });
    }
  }

  const conflicts = [];
  for (const [priority, defs] of Object.entries(priorityDefs)) {
    const uniqueDefs = [...new Set(defs.map(d => d.definition))];
    if (uniqueDefs.length > 1) {
      conflicts.push({
        key: priority,
        definitions: defs,
        severity: 'critical'
      });
    }
  }

  return conflicts;
}
```

### DOC-CON-002: Implementation/Documentation Drift

Detects inconsistencies between hardcoded values and documentation tables:

```javascript
async function detectImplementationDrift(skillPath) {
  // Compare category-mappings.json with the tables in specs/*.md
  const mappingsFile = `${skillPath}/specs/category-mappings.json`;

  if (!fileExists(mappingsFile)) {
    return []; // No centralized config, skip
  }

  const mappings = JSON.parse(Read(mappingsFile));
  const conflicts = [];

  // Compare against dimension-mapping.md
  const dimMapping = Read(`${skillPath}/specs/dimension-mapping.md`);

  for (const [category, config] of Object.entries(mappings.categories)) {
    // Check whether each strategy is mentioned in the document
    for (const strategy of config.strategies || []) {
      if (!dimMapping.includes(strategy)) {
        conflicts.push({
          type: 'mapping',
          key: `${category}.strategies`,
          issue: `策略 ${strategy} 在 JSON 中定义但未在文档中提及`
        });
      }
    }
  }

  return conflicts;
}
```

## Execution

```javascript
async function executeDiagnosis(state, workDir) {
  console.log('=== Diagnosing Documentation Structure ===');
  const startTime = Date.now(); // used below for execution_time_ms

  const skillPath = state.target_skill.path;
  const issues = [];

  // 1. Detect redundancies
  const definitionDups = await detectDefinitionDuplicates(skillPath);
  const hardcodedDups = await detectHardcodedDuplicates(skillPath);

  for (const dup of [...definitionDups, ...hardcodedDups]) {
    issues.push({
      id: `DOC-RED-${issues.length + 1}`,
      type: 'doc_redundancy',
      severity: dup.severity,
      location: { files: dup.files || [dup.file] },
      description: dup.description || `${dup.type} 在多处定义`,
      evidence: dup.files || [dup.file],
      root_cause: '缺乏单一真相来源',
      impact: '维护困难,易产生不一致',
      suggested_fix: 'consolidate_to_ssot'
    });
  }

  // 2. Detect conflicts
  const priorityConflicts = await detectPriorityConflicts(skillPath);
  const driftConflicts = await detectImplementationDrift(skillPath);

  for (const conflict of priorityConflicts) {
    issues.push({
      id: `DOC-CON-${issues.length + 1}`,
      type: 'doc_conflict',
      severity: 'critical',
      location: { files: conflict.definitions.map(d => d.file) },
      description: `${conflict.key} 在不同文件中定义不一致`,
      evidence: conflict.definitions.map(d => `${d.file}: ${d.definition}`),
      root_cause: '定义更新后未同步',
      impact: '行为不可预测',
      suggested_fix: 'reconcile_conflicting_definitions'
    });
  }

  // 3. Generate the report
  const severity = issues.some(i => i.severity === 'critical') ? 'critical' :
                   issues.some(i => i.severity === 'high') ? 'high' :
                   issues.length > 0 ? 'medium' : 'none';

  const result = {
    status: 'completed',
    issues_found: issues.length,
    severity: severity,
    execution_time_ms: Date.now() - startTime,
    details: {
      patterns_checked: ['DOC-RED-001', 'DOC-RED-002', 'DOC-CON-001', 'DOC-CON-002'],
      patterns_matched: issues.map(i => i.id.split('-').slice(0, 2).join('-')),
      evidence: issues.flatMap(i => i.evidence),
      recommendations: generateRecommendations(issues)
    },
    redundancies: issues.filter(i => i.type === 'doc_redundancy'),
    conflicts: issues.filter(i => i.type === 'doc_conflict')
  };

  // Write the diagnosis result
  Write(`${workDir}/diagnosis/docs-diagnosis.json`, JSON.stringify(result, null, 2));

  return {
    stateUpdates: {
      'diagnosis.docs': result,
      issues: [...state.issues, ...issues]
    },
    outputFiles: [`${workDir}/diagnosis/docs-diagnosis.json`],
    summary: `文档诊断完成:发现 ${issues.length} 个问题 (${severity})`
  };
}

function generateRecommendations(issues) {
  const recommendations = [];

  if (issues.some(i => i.type === 'doc_redundancy')) {
    recommendations.push('使用 consolidate_to_ssot 策略合并重复定义');
    recommendations.push('考虑创建 specs/category-mappings.json 集中管理配置');
  }

  if (issues.some(i => i.type === 'doc_conflict')) {
    recommendations.push('使用 reconcile_conflicting_definitions 策略解决冲突');
    recommendations.push('建立文档同步检查机制');
  }

  return recommendations;
}
```

## Output

### State Updates

```javascript
{
  stateUpdates: {
    'diagnosis.docs': {
      status: 'completed',
      issues_found: N,
      severity: 'critical|high|medium|low|none',
      redundancies: [...],
      conflicts: [...]
    },
    issues: [...existingIssues, ...newIssues]
  }
}
```

### Output Files

- `${workDir}/diagnosis/docs-diagnosis.json` - Complete diagnosis result

## Error Handling

| Error | Recovery |
|-------|----------|
| File read failure | Log a warning and continue with the remaining files |
| Regex match timeout | Skip the pattern and record it as skipped |
| JSON parse failure | Skip the configuration comparison and run pattern detection only |
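
The file-read recovery row above could be realized with a small wrapper around Read, sketched below (the helper name `safeRead` and the warning format are assumptions, not part of the action spec):

```javascript
// Hypothetical wrapper implementing "log a warning and continue" (illustrative only)
function safeRead(path, warnings) {
  try {
    return Read(path);
  } catch (err) {
    warnings.push(`WARN: failed to read ${path}: ${err.message}`);
    return null; // the caller skips null content and moves on to the next file
  }
}
```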

## Next Actions

- If critical issues were found → proceed to action-propose-fixes first
- If no issues were found → continue with the next diagnosis or with action-generate-report
@@ -93,7 +93,7 @@ function selectNextAction(state) {
}

  // 4. Run diagnosis in order (only if not completed)
-  const diagnosisOrder = ['context', 'memory', 'dataflow', 'agent'];
+  const diagnosisOrder = ['context', 'memory', 'dataflow', 'agent', 'docs'];

  for (const diagType of diagnosisOrder) {
    if (state.diagnosis[diagType] === null) {
@@ -101,6 +101,10 @@ function selectNextAction(state) {
      if (!state.focus_areas.length || state.focus_areas.includes(diagType)) {
        return `action-diagnose-${diagType}`;
      }
      // For docs diagnosis, also check 'all' focus_area
      if (diagType === 'docs' && state.focus_areas.includes('all')) {
        return 'action-diagnose-docs';
      }
    }
  }

@@ -175,7 +179,7 @@ function shouldTriggerGeminiAnalysis(state) {
}

  // Standard diagnosis is complete but the issue remains unresolved; deep analysis is needed
-  const diagnosisComplete = ['context', 'memory', 'dataflow', 'agent'].every(
+  const diagnosisComplete = ['context', 'memory', 'dataflow', 'agent', 'docs'].every(
    d => state.diagnosis[d] !== null
  );
  if (diagnosisComplete &&

@@ -318,6 +322,7 @@ After completing the action:
| [action-diagnose-memory](actions/action-diagnose-memory.md) | Analyze long-tail forgetting | status === 'running' | Sets diagnosis.memory |
| [action-diagnose-dataflow](actions/action-diagnose-dataflow.md) | Analyze data flow issues | status === 'running' | Sets diagnosis.dataflow |
| [action-diagnose-agent](actions/action-diagnose-agent.md) | Analyze agent coordination | status === 'running' | Sets diagnosis.agent |
| [action-diagnose-docs](actions/action-diagnose-docs.md) | Analyze documentation structure | status === 'running', focus includes 'docs' | Sets diagnosis.docs |
| [action-gemini-analysis](actions/action-gemini-analysis.md) | Deep analysis via Gemini CLI | User request OR critical issues | Sets gemini_analysis, adds issues |
| [action-generate-report](actions/action-generate-report.md) | Generate consolidated report | All diagnoses complete | Creates tuning-report.md |
| [action-propose-fixes](actions/action-propose-fixes.md) | Generate fix proposals | Report generated, issues > 0 | Sets proposed_fixes |

@@ -30,6 +30,7 @@ interface TuningState {
    memory: DiagnosisResult | null;
    dataflow: DiagnosisResult | null;
    agent: DiagnosisResult | null;
    docs: DocsDiagnosisResult | null; // Documentation structure diagnosis
  };

  // === Issues Found ===
@@ -138,6 +139,33 @@ interface DiagnosisResult {
  };
}

interface DocsDiagnosisResult extends DiagnosisResult {
  redundancies: Redundancy[];
  conflicts: Conflict[];
}

interface Redundancy {
  id: string;                    // e.g., "DOC-RED-001"
  type: 'state_schema' | 'strategy_mapping' | 'type_definition' | 'other';
  files: string[];               // files involved
  description: string;           // description of the redundancy
  severity: 'high' | 'medium' | 'low';
  merge_suggestion: string;      // merge suggestion
}

interface Conflict {
  id: string;                    // e.g., "DOC-CON-001"
  type: 'priority' | 'mapping' | 'definition';
  files: string[];               // files involved
  key: string;                   // conflicting key/concept
  definitions: {
    file: string;
    value: string;
    location?: string;
  }[];
  resolution_suggestion: string; // resolution suggestion
}

interface Evidence {
  file: string;
  line?: number;
@@ -241,7 +269,8 @@ interface ErrorEntry {
    "context": null,
    "memory": null,
    "dataflow": null,
-    "agent": null
+    "agent": null,
+    "docs": null
  },
  "issues": [],
  "issues_by_severity": {
.claude/skills/skill-tuning/specs/category-mappings.json (new file, 263 lines)
@@ -0,0 +1,263 @@
{
  "version": "1.0.0",
  "description": "Centralized category mappings for skill-tuning analysis and fix proposal",
  "categories": {
    "authoring_principles_violation": {
      "pattern_ids": ["APV-001", "APV-002", "APV-003", "APV-004", "APV-005", "APV-006"],
      "severity_hint": "critical",
      "strategies": ["eliminate_intermediate_files", "minimize_state", "context_passing"],
      "risk_levels": ["low", "low", "low"],
      "detection_focus": "Intermediate files, state bloat, file relay patterns",
      "priority_order": [1, 2, 3]
    },
    "context_explosion": {
      "pattern_ids": ["CTX-001", "CTX-002", "CTX-003", "CTX-004", "CTX-005"],
      "severity_hint": "high",
      "strategies": ["sliding_window", "path_reference", "context_summarization", "structured_state"],
      "risk_levels": ["low", "low", "low", "medium"],
      "detection_focus": "Token accumulation, content passing patterns",
      "priority_order": [1, 2, 3, 4]
    },
    "memory_loss": {
      "pattern_ids": ["MEM-001", "MEM-002", "MEM-003", "MEM-004", "MEM-005"],
      "severity_hint": "high",
      "strategies": ["constraint_injection", "state_constraints_field", "checkpoint_restore", "goal_embedding"],
      "risk_levels": ["low", "low", "low", "medium"],
      "detection_focus": "Constraint propagation, checkpoint mechanisms",
      "priority_order": [1, 2, 3, 4]
    },
    "dataflow_break": {
      "pattern_ids": ["DF-001", "DF-002", "DF-003", "DF-004", "DF-005"],
      "severity_hint": "critical",
      "strategies": ["state_centralization", "schema_enforcement", "field_normalization"],
      "risk_levels": ["medium", "low", "low"],
      "detection_focus": "State storage, schema validation",
      "priority_order": [1, 2, 3]
    },
    "agent_failure": {
      "pattern_ids": ["AGT-001", "AGT-002", "AGT-003", "AGT-004", "AGT-005", "AGT-006"],
      "severity_hint": "high",
      "strategies": ["error_wrapping", "result_validation", "flatten_nesting"],
      "risk_levels": ["low", "low", "medium"],
      "detection_focus": "Error handling, result validation",
      "priority_order": [1, 2, 3]
    },
    "prompt_quality": {
      "pattern_ids": [],
      "severity_hint": "medium",
      "strategies": ["structured_prompt", "output_schema", "grounding_context", "format_enforcement"],
      "risk_levels": ["low", "low", "medium", "low"],
      "detection_focus": null,
      "needs_gemini_analysis": true,
      "priority_order": [1, 2, 3, 4]
    },
    "architecture": {
      "pattern_ids": [],
      "severity_hint": "medium",
      "strategies": ["phase_decomposition", "interface_contracts", "plugin_architecture", "state_machine"],
      "risk_levels": ["medium", "medium", "high", "medium"],
      "detection_focus": null,
      "needs_gemini_analysis": true,
      "priority_order": [1, 2, 3, 4]
    },
    "performance": {
      "pattern_ids": ["CTX-001", "CTX-003"],
      "severity_hint": "medium",
      "strategies": ["token_budgeting", "parallel_execution", "result_caching", "lazy_loading"],
      "risk_levels": ["low", "low", "low", "low"],
      "detection_focus": "Reuses context explosion detection",
      "priority_order": [1, 2, 3, 4]
    },
    "error_handling": {
      "pattern_ids": ["AGT-001", "AGT-002"],
      "severity_hint": "medium",
      "strategies": ["graceful_degradation", "error_propagation", "structured_logging", "error_context"],
      "risk_levels": ["low", "low", "low", "low"],
      "detection_focus": "Reuses agent failure detection",
      "priority_order": [1, 2, 3, 4]
    },
    "output_quality": {
      "pattern_ids": [],
      "severity_hint": "medium",
      "strategies": ["quality_gates", "output_validation", "template_enforcement", "completeness_check"],
      "risk_levels": ["low", "low", "low", "low"],
      "detection_focus": null,
      "needs_gemini_analysis": true,
      "priority_order": [1, 2, 3, 4]
    },
    "user_experience": {
      "pattern_ids": [],
      "severity_hint": "low",
      "strategies": ["progress_tracking", "status_communication", "interactive_checkpoints", "guided_workflow"],
      "risk_levels": ["low", "low", "low", "low"],
      "detection_focus": null,
      "needs_gemini_analysis": true,
      "priority_order": [1, 2, 3, 4]
    }
  },
  "keywords": {
    "chinese": {
      "token": "context_explosion",
      "上下文": "context_explosion",
      "爆炸": "context_explosion",
      "太长": "context_explosion",
      "超限": "context_explosion",
      "膨胀": "context_explosion",
      "遗忘": "memory_loss",
      "忘记": "memory_loss",
      "指令丢失": "memory_loss",
      "约束消失": "memory_loss",
      "目标漂移": "memory_loss",
      "状态": "dataflow_break",
      "数据": "dataflow_break",
      "格式": "dataflow_break",
      "不一致": "dataflow_break",
      "丢失": "dataflow_break",
      "损坏": "dataflow_break",
      "agent": "agent_failure",
      "子任务": "agent_failure",
      "失败": "agent_failure",
      "嵌套": "agent_failure",
      "调用": "agent_failure",
      "协调": "agent_failure",
      "慢": "performance",
      "性能": "performance",
      "效率": "performance",
      "延迟": "performance",
      "提示词": "prompt_quality",
      "输出不稳定": "prompt_quality",
      "幻觉": "prompt_quality",
      "架构": "architecture",
      "结构": "architecture",
      "模块": "architecture",
      "耦合": "architecture",
      "扩展": "architecture",
      "错误": "error_handling",
      "异常": "error_handling",
      "恢复": "error_handling",
      "降级": "error_handling",
      "崩溃": "error_handling",
      "输出": "output_quality",
      "质量": "output_quality",
      "验证": "output_quality",
      "不完整": "output_quality",
      "交互": "user_experience",
      "体验": "user_experience",
      "进度": "user_experience",
      "反馈": "user_experience",
      "不清晰": "user_experience",
      "中间文件": "authoring_principles_violation",
      "临时文件": "authoring_principles_violation",
      "文件中转": "authoring_principles_violation",
      "state膨胀": "authoring_principles_violation"
    },
    "english": {
      "token": "context_explosion",
      "context": "context_explosion",
      "explosion": "context_explosion",
      "overflow": "context_explosion",
      "bloat": "context_explosion",
      "forget": "memory_loss",
      "lost": "memory_loss",
      "drift": "memory_loss",
      "constraint": "memory_loss",
      "goal": "memory_loss",
      "state": "dataflow_break",
      "data": "dataflow_break",
      "format": "dataflow_break",
      "inconsistent": "dataflow_break",
      "corrupt": "dataflow_break",
      "agent": "agent_failure",
      "subtask": "agent_failure",
      "fail": "agent_failure",
      "nested": "agent_failure",
      "call": "agent_failure",
      "coordinate": "agent_failure",
      "slow": "performance",
      "performance": "performance",
      "efficiency": "performance",
      "latency": "performance",
      "prompt": "prompt_quality",
      "unstable": "prompt_quality",
      "hallucination": "prompt_quality",
      "architecture": "architecture",
      "structure": "architecture",
      "module": "architecture",
      "coupling": "architecture",
      "error": "error_handling",
      "exception": "error_handling",
      "recovery": "error_handling",
      "crash": "error_handling",
      "output": "output_quality",
      "quality": "output_quality",
      "validation": "output_quality",
      "incomplete": "output_quality",
      "interaction": "user_experience",
      "ux": "user_experience",
      "progress": "user_experience",
      "feedback": "user_experience",
      "intermediate": "authoring_principles_violation",
      "temp": "authoring_principles_violation",
      "relay": "authoring_principles_violation"
    }
  },
  "category_labels": {
    "context_explosion": "Context Explosion",
    "memory_loss": "Long-tail Forgetting",
    "dataflow_break": "Data Flow Disruption",
    "agent_failure": "Agent Coordination Failure",
    "prompt_quality": "Prompt Quality",
    "architecture": "Architecture",
    "performance": "Performance",
    "error_handling": "Error Handling",
    "output_quality": "Output Quality",
    "user_experience": "User Experience",
    "authoring_principles_violation": "Authoring Principles Violation",
    "custom": "Custom"
  },
  "category_labels_chinese": {
    "context_explosion": "Context Explosion",
    "memory_loss": "Long-tail Forgetting",
    "dataflow_break": "Data Flow Disruption",
    "agent_failure": "Agent Coordination Failure",
    "prompt_quality": "Prompt Quality",
    "architecture": "Architecture Issues",
    "performance": "Performance Issues",
    "error_handling": "Error Handling",
    "output_quality": "Output Quality",
    "user_experience": "User Experience",
    "authoring_principles_violation": "Authoring Principles Violation",
    "custom": "Other Issues"
  },
  "category_descriptions": {
    "context_explosion": "Token accumulation causing prompt size to grow unbounded",
    "memory_loss": "Early instructions or constraints lost in later phases",
    "dataflow_break": "State data inconsistency between phases",
    "agent_failure": "Sub-agent call failures or abnormal results",
    "prompt_quality": "Vague prompts causing unstable outputs",
    "architecture": "Improper phase division or module structure",
    "performance": "Slow execution or high token consumption",
    "error_handling": "Incomplete error recovery mechanisms",
    "output_quality": "Output validation or completeness issues",
    "user_experience": "Interaction or feedback clarity issues",
    "authoring_principles_violation": "Violation of skill authoring principles",
    "custom": "Requires custom analysis"
  },
  "fix_priority_order": {
    "P0": ["dataflow_break", "authoring_principles_violation"],
    "P1": ["agent_failure"],
    "P2": ["context_explosion"],
    "P3": ["memory_loss"]
  },
  "cross_category_dependencies": {
    "context_explosion": ["memory_loss"],
    "dataflow_break": ["agent_failure"],
    "agent_failure": ["context_explosion"]
  },
  "fallback": {
    "strategies": ["custom"],
    "risk_levels": ["medium"],
    "has_fix": true,
    "needs_gemini_analysis": true
  }
}

@@ -29,6 +29,8 @@
| 错误, 异常, 恢复, 降级, 崩溃 | error, exception, recovery, crash | error_handling | agent_failure |
| 输出, 质量, 格式, 验证, 不完整 | output, quality, validation, incomplete | output_quality | - |
| 交互, 体验, 进度, 反馈, 不清晰 | interaction, ux, progress, feedback | user_experience | - |
| 重复, 冗余, 多处定义, 相同内容 | duplicate, redundant, multiple definitions | doc_redundancy | - |
| 冲突, 不一致, 定义不同, 矛盾 | conflict, inconsistent, mismatch, contradiction | doc_conflict | - |

### Matching Algorithm

@@ -86,6 +88,8 @@ function matchCategory(keywords) {
| error_handling | AGT-001, AGT-002 | (reuses agent detection) |
| output_quality | - | (no built-in detection; requires Gemini analysis) |
| user_experience | - | (no built-in detection; requires Gemini analysis) |
| doc_redundancy | DOC-RED-001, DOC-RED-002, DOC-RED-003 | Duplicate definition detection |
| doc_conflict | DOC-CON-001, DOC-CON-002 | Conflicting definition detection |

---

@@ -101,6 +105,8 @@ function matchCategory(keywords) {
| memory_loss | constraint_injection, state_constraints_field, checkpoint_restore, goal_embedding | Low-Medium |
| dataflow_break | state_centralization, schema_enforcement, field_normalization | Low-Medium |
| agent_failure | error_wrapping, result_validation, flatten_nesting | Low-Medium |
| doc_redundancy | consolidate_to_ssot, centralize_mapping_config | Low-Medium |
| doc_conflict | reconcile_conflicting_definitions | Low |

### Extended Categories (strategies generated via Gemini)
@@ -156,6 +156,53 @@ Classification of skill execution issues with detection patterns and severity cr

---

### 5. Documentation Redundancy (P5)

**Definition**: The same definition (e.g., the State Schema, mapping tables, type definitions) appears in multiple files, making maintenance difficult and creating a risk of inconsistency.

**Root Causes**:
- No single source of truth (SSOT)
- Copy-paste instead of references
- Hardcoded configuration instead of centralized management

**Detection Patterns**:

| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| DOC-RED-001 | Cross-file semantic comparison | Find duplicate definitions of core concepts such as the State Schema |
| DOC-RED-002 | Code block vs. spec table comparison | Hardcoded values in action files duplicating the spec documents |
| DOC-RED-003 | `/interface\s+(\w+)/` same-name scan | interface/type defined in multiple places |

**Impact Levels**:
- **High**: Core definitions (State Schema, mapping tables) duplicated
- **Medium**: Type definitions duplicated
- **Low**: Example code duplicated
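
DOC-RED-003 in the table above has no implementation shown in this commit; a minimal sketch of such a same-name scan, assuming Glob and Read behave as in the other detection functions, could look like this (the function name is hypothetical):

```javascript
// Hypothetical sketch of DOC-RED-003: flag interfaces/types defined under the same name in multiple files
async function detectSameNameInterfaces(skillPath) {
  const files = Glob('**/*.md', { cwd: skillPath });
  const byName = {}; // interface name -> files where it is defined

  for (const file of files) {
    const content = Read(`${skillPath}/${file}`);
    for (const match of content.matchAll(/interface\s+(\w+)/g)) {
      const name = match[1];
      (byName[name] = byName[name] || []).push(file);
    }
  }

  // Only names that appear in more than one distinct file are reported
  return Object.entries(byName)
    .filter(([, defs]) => new Set(defs).size > 1)
    .map(([name, defs]) => ({
      type: 'type_definition',
      name,
      files: [...new Set(defs)],
      severity: 'medium'
    }));
}
```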

---

### 6. Documentation Conflict (P6)

**Definition**: The same concept is defined inconsistently across files, leading to unpredictable behavior and misleading documentation.

**Root Causes**:
- Definitions updated without synchronizing the other locations
- Implementation drifting away from documentation
- No consistency checks

**Detection Patterns**:

| Pattern ID | Regex/Check | Description |
|------------|-------------|-------------|
| DOC-CON-001 | Key-value consistency check | The same key (e.g., a priority) has different values in different files |
| DOC-CON-002 | Implementation vs. documentation comparison | Hardcoded configuration inconsistent with its documented counterpart |

**Impact Levels**:
- **Critical**: Priority/category definition conflicts
- **High**: Strategy mapping inconsistencies
- **Medium**: Examples not matching actual behavior

---

## Severity Criteria

### Global Severity Matrix

@@ -215,6 +262,8 @@ function calculateIssueSeverity(issue) {
| Long-tail Forgetting | constraint_injection, state_constraints_field, checkpoint | 1, 2, 3 |
| Data Flow Disruption | state_centralization, schema_enforcement, field_normalization | 1, 2, 3 |
| Agent Coordination | error_wrapping, result_validation, flatten_nesting | 1, 2, 3 |
| **Documentation Redundancy** | consolidate_to_ssot, centralize_mapping_config | 1, 2 |
| **Documentation Conflict** | reconcile_conflicting_definitions | 1 |

---
@@ -675,6 +675,144 @@ if (parsedA.needs_agent_b) {

---

## Documentation Strategies

Strategies for documentation deduplication and conflict resolution.

---

### Strategy: consolidate_to_ssot

**Purpose**: Merge duplicate definitions into a single source of truth (SSOT).

**Implementation**:
```javascript
// Consolidation flow
async function consolidateToSSOT(state, duplicates) {
  // 1. Identify the location with the most complete definition
  const canonical = selectCanonicalSource(duplicates);

  // 2. Make sure the canonical location contains the full definition
  const fullDefinition = mergeDefinitions(duplicates);
  Write(canonical.file, fullDefinition);

  // 3. Replace the other locations with references
  for (const dup of duplicates.filter(d => d.file !== canonical.file)) {
    const reference = generateReference(canonical.file, dup.type);
    // e.g.: "See [state-schema.md](phases/state-schema.md) for details"
    replaceWithReference(dup.file, dup.location, reference);
  }

  return { canonical: canonical.file, removed: duplicates.length - 1 };
}

function selectCanonicalSource(duplicates) {
  // Priority: specs/ > phases/ > SKILL.md
  const priority = ['specs/', 'phases/', 'SKILL.md'];
  return duplicates.sort((a, b) => {
    const aIdx = priority.findIndex(p => a.file.includes(p));
    const bIdx = priority.findIndex(p => b.file.includes(p));
    return aIdx - bIdx;
  })[0];
}
```

**Risk**: Low
**Verification**:
- Confirm that only one location contains the full definition
- The other locations contain valid reference links

---

### Strategy: centralize_mapping_config

**Purpose**: Extract hardcoded configuration into a centralized JSON file and change the code to load that configuration.

**Implementation**:
```javascript
// 1. Create the centralized configuration file
const config = {
  version: "1.0.0",
  categories: {
    context_explosion: {
      pattern_ids: ["CTX-001", "CTX-002"],
      strategies: ["sliding_window", "path_reference"]
    }
    // ... extracted from the hardcoded values
  }
};
Write('specs/category-mappings.json', JSON.stringify(config, null, 2));

// 2. Refactor the code to load the configuration
// Before:
function findTaxonomyMatch(category) {
  const patternMapping = {
    'context_explosion': { category: 'context_explosion', pattern_ids: [...] }
    // hardcoded
  };
  return patternMapping[category];
}

// After:
function findTaxonomyMatch(category) {
  const mappings = JSON.parse(Read('specs/category-mappings.json'));
  const config = mappings.categories[category];
  if (!config) return null;
  return { category, pattern_ids: config.pattern_ids, severity_hint: config.severity_hint };
}
```

**Risk**: Medium (the configuration loading logic needs to be tested)
**Verification**:
- The JSON file is syntactically valid
- All previously hardcoded values have been migrated
- Functional behavior is unchanged

---

### Strategy: reconcile_conflicting_definitions

**Purpose**: Reconcile conflicting definitions; the user selects the correct version, which is then applied everywhere.

**Implementation**:
```javascript
async function reconcileConflicts(conflicts) {
  for (const conflict of conflicts) {
    // 1. Present the conflict to the user
    const options = conflict.definitions.map(def => ({
      label: `${def.file}: ${def.value}`,
      description: `来自 ${def.file}`
    }));

    const choice = await AskUserQuestion({
      questions: [{
        question: `发现冲突定义: "${conflict.key}",请选择正确版本`,
        header: '冲突解决',
        options: options,
        multiSelect: false
      }]
    });

    // 2. Update all files to the selected version
    const selected = conflict.definitions[choice.index];
    for (const def of conflict.definitions) {
      if (def.file !== selected.file) {
        updateDefinition(def.file, conflict.key, selected.value);
      }
    }
  }

  return { resolved: conflicts.length };
}
```

**Risk**: Low (executed after user confirmation)
**Verification**:
- Definitions are consistent across all locations
- No new conflicts are introduced

---

## Strategy Selection Guide

```
@@ -735,6 +873,13 @@ Issue Type: User Experience
├── unclear status? → status_communication
├── no feedback? → interactive_checkpoints
└── confusing flow? → guided_workflow

Issue Type: Documentation Redundancy
├── core definition duplicated? → consolidate_to_ssot
└── hardcoded configuration duplicated? → centralize_mapping_config

Issue Type: Documentation Conflict
└── inconsistent definitions? → reconcile_conflicting_definitions
```
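
As a minimal illustration of how the two documentation branches above could be routed in code (the function name `selectDocStrategy` and the `subtype` field are assumptions; the strategy names come from the guide):

```javascript
// Hypothetical routing for documentation issues, mirroring the decision tree above
function selectDocStrategy(issue) {
  if (issue.type === 'doc_conflict') {
    return 'reconcile_conflicting_definitions';
  }
  if (issue.type === 'doc_redundancy') {
    // Hardcoded-mapping duplicates go to config extraction,
    // duplicated core definitions go to SSOT consolidation.
    return issue.subtype === 'hardcoded_mapping'
      ? 'centralize_mapping_config'
      : 'consolidate_to_ssot';
  }
  return null; // other issue types are handled by the existing branches of the guide
}
```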
---