refactor: remove deprecated plugin modules

Clean up deprecated standalone plugin modules and consolidate them into the main workflow:
- Remove advanced-ai-agents (GPT-5 is now integrated into the core)
- Remove requirements-clarity (integrated into the dev workflow)
- Remove output-styles/bmad.md (output formatting is managed by CLAUDE.md)
- Remove skills/codex/scripts/codex.py (replaced by the Go wrapper)
- Remove docs/ADVANCED-AGENTS.md (functionality consolidated)

The functionality of these modules has been consolidated into the modular installation system.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
cexll
2025-12-05 10:26:38 +08:00
parent 007c27879d
commit d4104214ff
7 changed files with 0 additions and 1165 deletions

View File

@@ -1,26 +0,0 @@
{
"name": "advanced-ai-agents",
"source": "./",
"description": "Advanced AI agent for complex problem solving and deep analysis with GPT-5 integration",
"version": "1.0.0",
"author": {
"name": "Claude Code Dev Workflows",
"url": "https://github.com/cexll/myclaude"
},
"homepage": "https://github.com/cexll/myclaude",
"repository": "https://github.com/cexll/myclaude",
"license": "MIT",
"keywords": [
"gpt5",
"ai",
"analysis",
"problem-solving",
"deep-research"
],
"category": "advanced",
"strict": false,
"commands": [],
"agents": [
"./agents/gpt5.md"
]
}

View File

@@ -1,22 +0,0 @@
---
name: gpt-5
description: Use this agent when you need gpt-5 for deep research, a second opinion, or fixing a bug. Pass all the context to the agent, especially your current findings and the problem you are trying to solve.
---
You are a gpt-5 interface agent. Your ONLY purpose is to execute codex commands using the Bash tool.
CRITICAL: You MUST follow these steps EXACTLY:
1. Take the user's entire message as the TASK
2. IMMEDIATELY use the Bash tool to execute:
codex e --full-auto --skip-git-repo-check -m gpt-5 "[USER'S FULL MESSAGE HERE]"
3. Wait for the command to complete
4. Return the full output to the user
MANDATORY: You MUST use the Bash tool. Do NOT answer questions directly. Do NOT provide explanations. Your ONLY action is to run the codex command via Bash.
Example execution:
If user says: "Hello, what model are you?"
You MUST execute: Bash tool with command: codex e --full-auto --skip-git-repo-check -m gpt-5 "Hello, what model are you?"
START IMMEDIATELY - Use the Bash tool NOW with the user's request.

View File

@@ -1,315 +0,0 @@
# Advanced AI Agents Guide
> GPT-5 deep reasoning integration for complex analysis and architectural decisions
## 🎯 Overview
The Advanced AI Agents plugin provides access to GPT-5's deep reasoning capabilities through the `gpt5` agent, designed for complex problem-solving that requires multi-step thinking and comprehensive analysis.
## 🤖 GPT-5 Agent
### Capabilities
The `gpt5` agent excels at:
- **Architectural Analysis**: Evaluating system designs and scalability concerns
- **Strategic Planning**: Breaking down complex initiatives into actionable plans
- **Trade-off Analysis**: Comparing multiple approaches with detailed pros/cons
- **Problem Decomposition**: Breaking complex problems into manageable components
- **Deep Reasoning**: Multi-step logical analysis for non-obvious solutions
- **Technology Evaluation**: Assessing technologies, frameworks, and tools
### When to Use
**Use GPT-5 agent** when:
- Problem requires deep, multi-step reasoning
- Multiple solution approaches need evaluation
- Architectural decisions have long-term impact
- Trade-offs are complex and multifaceted
- Standard agents provide insufficient depth
**Use standard agents** when:
- Task is straightforward implementation
- Requirements are clear and well-defined
- Quick turnaround is priority
- Problem is domain-specific (code, tests, etc.)
## 🚀 Usage
### Via `/think` Command
The easiest way to access GPT-5:
```bash
/think "Analyze scalability bottlenecks in current microservices architecture"
/think "Evaluate migration strategy from monolith to microservices"
/think "Design data synchronization approach for offline-first mobile app"
```
### Direct Agent Invocation
For advanced usage:
```bash
# Use @gpt5 to invoke the agent directly
@gpt5 "Complex architectural question or analysis request"
```
## 💡 Example Use Cases
### 1. Architecture Evaluation
```bash
/think "Current system uses REST API with polling for real-time updates.
Evaluate whether to migrate to WebSocket, Server-Sent Events, or GraphQL
subscriptions. Consider: team experience, existing infrastructure, client
support, scalability, and implementation effort."
```
**GPT-5 provides**:
- Detailed analysis of each option
- Pros and cons for your specific context
- Migration complexity assessment
- Performance implications
- Recommended approach with justification
### 2. Migration Strategy
```bash
/think "Plan migration from PostgreSQL to multi-region distributed database.
System has 50M users, 200M rows, 1000 req/sec. Must maintain 99.9% uptime.
What's the safest migration path?"
```
**GPT-5 provides**:
- Step-by-step migration plan
- Risk assessment for each phase
- Rollback strategies
- Data consistency approaches
- Timeline estimation
### 3. Problem Decomposition
```bash
/think "Design a recommendation engine that learns user preferences, handles
cold start, provides explainable results, and scales to 10M users. Break this
down into implementation phases with clear milestones."
```
**GPT-5 provides**:
- Problem breakdown into components
- Phased implementation plan
- Technical approach for each phase
- Dependencies between phases
- Success criteria and metrics
### 4. Technology Selection
```bash
/think "Choosing between Redis, Memcached, and Hazelcast for distributed
caching. System needs: persistence, pub/sub, clustering, and complex data
structures. Existing stack: Java, Kubernetes, AWS."
```
**GPT-5 provides**:
- Comparison matrix across requirements
- Integration considerations
- Operational complexity analysis
- Cost implications
- Recommendation with rationale
### 5. Performance Optimization
```bash
/think "API response time increased from 100ms to 800ms after scaling from
100 to 10,000 users. Database queries look optimized. What are the likely
bottlenecks and systematic approach to identify them?"
```
**GPT-5 provides**:
- Hypothesis generation (N+1 queries, connection pooling, etc.)
- Systematic debugging approach
- Profiling strategy
- Likely root causes ranked by probability
- Optimization recommendations
## 🎨 Integration with BMAD
### Enhanced Code Review
BMAD's `bmad-review` agent can optionally use GPT-5 for deeper analysis:
**Configuration**:
```bash
# Enable enhanced review mode (via environment or BMAD config)
BMAD_REVIEW_MODE=enhanced /bmad-pilot "feature description"
```
**What changes**:
- Standard review: Fast, focuses on code quality and obvious issues
- Enhanced review: Deep analysis including:
- Architectural impact
- Security implications
- Performance considerations
- Scalability concerns
- Design pattern appropriateness
### Architecture Phase Support
Use `/think` during BMAD architecture phase:
```bash
# Start BMAD workflow
/bmad-pilot "E-commerce platform with real-time inventory"
# During Architecture phase, get deep analysis
/think "Evaluate architecture approaches for real-time inventory
synchronization across warehouses, online store, and mobile apps"
# Continue with BMAD using insights
```
## 📋 Best Practices
### 1. Provide Complete Context
**❌ Insufficient**:
```bash
/think "Should we use microservices?"
```
**✅ Complete**:
```bash
/think "Current monolith: 100K LOC, 8 developers, 50K users, 200ms avg
response time. Pain points: slow deployments (1hr), difficult to scale
components independently. Should we migrate to microservices? What's the
ROI and risk?"
```
### 2. Ask Specific Questions
**❌ Too broad**:
```bash
/think "How to build a scalable system?"
```
**✅ Specific**:
```bash
/think "Current system handles 1K req/sec. Need to scale to 10K. Bottleneck
is database writes. Evaluate: sharding, read replicas, CQRS, or caching.
Database: PostgreSQL, stack: Node.js, deployment: Kubernetes."
```
### 3. Include Constraints
Always mention:
- Team skills and size
- Timeline and budget
- Existing infrastructure
- Business requirements
- Technical constraints
**Example**:
```bash
/think "Design real-time chat system. Constraints: team of 3 backend
developers (Node.js), 6-month timeline, AWS deployment, must integrate
with existing REST API, budget for managed services OK."
```
### 4. Request Specific Outputs
Tell GPT-5 what format you need:
```bash
/think "Compare Kafka vs RabbitMQ for event streaming.
Provide: comparison table, recommendation, migration complexity from current
RabbitMQ setup, and estimated effort in developer-weeks."
```
### 5. Iterate and Refine
Follow up for deeper analysis:
```bash
# Initial question
/think "Evaluate caching strategies for user profile API"
# Follow-up based on response
/think "You recommended Redis with write-through caching. How to handle
cache invalidation when user updates profile from mobile app?"
```
## 🔧 Technical Details
### Sequential Thinking
GPT-5 agent uses sequential thinking for complex problems:
1. **Problem Understanding**: Clarify requirements and constraints
2. **Hypothesis Generation**: Identify possible solutions
3. **Analysis**: Evaluate each option systematically
4. **Trade-off Assessment**: Compare pros/cons
5. **Recommendation**: Provide justified conclusion
### Reasoning Transparency
GPT-5 shows its thinking process:
- Assumptions made
- Factors considered
- Why certain options were eliminated
- Confidence level in recommendations
## 🎯 Comparison: GPT-5 vs Standard Agents
| Aspect | GPT-5 Agent | Standard Agents |
|--------|-------------|-----------------|
| **Depth** | Deep, multi-step reasoning | Focused, domain-specific |
| **Speed** | Slower (comprehensive analysis) | Faster (direct implementation) |
| **Use Case** | Strategic decisions, architecture | Implementation, coding, testing |
| **Output** | Analysis, recommendations, plans | Code, tests, documentation |
| **Best For** | Complex problems, trade-offs | Clear tasks, defined scope |
| **Invocation** | `/think` or `@gpt5` | `/code`, `/test`, etc. |
## 📚 Related Documentation
- **[BMAD Workflow](BMAD-WORKFLOW.md)** - Integration with full agile workflow
- **[Development Commands](DEVELOPMENT-COMMANDS.md)** - Standard command reference
- **[Quick Start Guide](QUICK-START.md)** - Get started quickly
## 💡 Advanced Patterns
### Pre-Implementation Analysis
```bash
# 1. Deep analysis with GPT-5
/think "Design approach for X with constraints Y and Z"
# 2. Use analysis in BMAD workflow
/bmad-pilot "Implement X based on approach from analysis"
```
### Architecture Validation
```bash
# 1. Get initial architecture from BMAD
/bmad-pilot "Feature X" # Generates 02-system-architecture.md
# 2. Validate with GPT-5
/think "Review architecture in .claude/specs/feature-x/02-system-architecture.md
Evaluate for scalability, security, and maintainability"
# 3. Refine architecture based on feedback
```
### Decision Documentation
```bash
# Use GPT-5 to document architectural decisions
/think "Document decision to use Event Sourcing for order management.
Include: context, options considered, decision rationale, consequences,
and format as Architecture Decision Record (ADR)"
```
---
**Advanced AI Agents** - Deep reasoning for complex problems that require comprehensive analysis.

View File

@@ -1,121 +0,0 @@
---
name: BMAD
description:
Orchestrate BMAD (PO → Architect → SM → Dev → QA).
PO/Architect/SM run locally; Dev/QA via bash Codex CLI. Explicit approval gates and repo-aware artifacts.
---
# BMAD Output Style
<role>
You are the BMAD Orchestrator coordinating a full-stack Agile workflow with five roles: Product Owner (PO), System Architect, Scrum Master (SM), Developer (Dev), and QA. You do not take over their domain work; instead, you guide the flow, ask targeted questions, enforce approval gates, and save outputs when confirmed.
PO/Architect/SM phases run locally as interactive loops (no external Codex calls). Dev/QA phases may use bash Codex CLI when implementation or execution is needed.
</role>
<important_instructions>
1. Use UltraThink: hypotheses → evidence → patterns → synthesis → validation.
2. Follow KISS, YAGNI, DRY, and SOLID principles across deliverables.
3. Enforce approval gates (Phases 1-3 only): PRD ≥ 90; Architecture ≥ 90; SM plan confirmed. At these gates, REQUIRE the user to reply with the literal "yes" (case-insensitive) to save the document AND proceed to the next phase; any other reply = do not save and do not proceed. Phase 0 has no gate.
4. Language follows the user's input language for all prompts and confirmations.
5. Retry Codex up to 5 times on transient failure; if still failing, stop and report clearly.
6. Prefer “summarize + user confirmation” for long contexts before expansion; chunk only when necessary.
7. Default saving is performed by the Orchestrator. In save phases Dev/QA may also write files. Only one task runs at a time (no concurrent writes).
8. Use kebab-case `feature_name`. If no clear title, use `feat-YYYYMMDD-<short-summary>`.
9. Store artifacts under `./.claude/specs/{feature_name}/` with canonical filenames.
</important_instructions>
<global_instructions>
- Inputs may include options: `--skip-tests`, `--direct-dev`, `--skip-scan`.
- Derive `feature_name` from the feature title; compute `spec_dir=./.claude/specs/{feature_name}/`.
- Artifacts:
- `00-repo-scan.md` (unless `--skip-scan`)
- `01-product-requirements.md` (PRD, after approval)
- `02-system-architecture.md` (Architecture, after approval)
- `03-sprint-plan.md` (SM plan, after approval; skipped if `--direct-dev`)
- Always echo saved paths after writing.
</global_instructions>
<coding_instructions>
- Dev phase must execute tasks via bash Codex CLI: `codex e --full-auto --skip-git-repo-check -m gpt-5 "<TASK with brief CONTEXT>"`.
- QA phase must execute tasks via bash Codex CLI: `codex e --full-auto --skip-git-repo-check -m gpt-5 "<TASK with brief CONTEXT>"`.
- Treat `-m gpt-5` purely as a model parameter; avoid “agent” wording.
- Keep Codex prompts concise and include necessary paths and short summaries.
- Apply the global retry policy (up to 5 attempts); if still failing, stop and report.
</coding_instructions>
<result_instructions>
- Provide concise progress updates between phases.
- Before each approval gate, present: short summary + quality score (if applicable) + clear confirmation question.
- Gates apply to Phases 1-3 (PO/Architect/SM) only. Proceed only on explicit "yes" (case-insensitive). On "yes": save to the canonical path, echo it, and advance to the next phase.
- Any non-"yes" reply: do not save and do not proceed; offer refinement, re-ask, or cancellation options.
- Phase 0 has no gate: save scan summary (unless `--skip-scan`) and continue automatically to Phase 1.
</result_instructions>
<thinking_instructions>
- Identify the lowest-confidence or lowest-scoring areas and focus questions there (2-3 at a time max).
- Make assumptions explicit and request confirmation for high-impact items.
- Cross-check consistency across PRD, Architecture, and SM plan before moving to Dev.
</thinking_instructions>
<context>
- Repository-aware behavior: If not `--skip-scan`, perform a local repository scan first and cache summary as `00-repo-scan.md` for downstream use.
- Reference internal guidance implicitly (PO/Architect/SM/Dev/QA responsibilities), but avoid copying long texts verbatim. Embed essential behaviors in prompts below.
</context>
<workflows>
1) Phase 0 — Repository Scan (optional, default on)
- Run locally if not `--skip-scan`.
- Task: Analyze project structure, stack, patterns, documentation, workflows using UltraThink.
- Output: succinct Markdown summary.
- Save and proceed automatically: write `spec_dir/00-repo-scan.md` and then continue to Phase 1 (no confirmation required).
2) Phase 1 — Product Requirements (PO)
- Goal: PRD quality ≥ 90 with category breakdown.
- Local prompt:
- Role: Sarah (BMAD PO) — meticulous, analytical, user-focused.
- Include: user request; scan summary/path if available.
- Produce: PRD draft (exec summary, business objectives, personas, functional epics/stories+AC, non-functional, constraints, scope & phasing, risks, dependencies, appendix).
- Score: 100-point breakdown (Business Value & Goals 30; Functional 25; UX 20; Technical Constraints 15; Scope & Priorities 10) + rationale.
- Ask: 2-5 focused clarification questions on lowest-scoring areas.
- No saving during drafting.
- Loop: Ask user, refine, rescore until ≥ 90.
- Gate: Ask confirmation (user language). Only if user replies "yes": save `01-product-requirements.md` and move to Phase 2; otherwise stay here and continue refinement.
3) Phase 2 — System Architecture (Architect)
- Goal: Architecture quality ≥ 90 with category breakdown.
- Local prompt:
- Role: Winston (BMAD Architect) — comprehensive, pragmatic; trade-offs; constraint-aware.
- Include: PRD content; scan summary/path.
- Produce: initial architecture (components/boundaries, data flows, security model, deployment, tech choices with justifications, diagrams guidance, implementation guidance).
- Score: 100-point breakdown (Design 30; Tech Selection 25; Scalability/Performance 20; Security/Reliability 15; Feasibility 10) + rationale.
- Ask: targeted technical questions for critical decisions.
- No saving during drafting.
- Loop: Ask user, refine, rescore until ≥ 90.
- Gate: Ask confirmation (user language). Only if user replies "yes": save `02-system-architecture.md` and move to Phase 3; otherwise stay here and continue refinement.
4) Phase 3 — Sprint Planning (SM; skipped if `--direct-dev`)
- Goal: Actionable sprint plan (stories, tasks of 4-8h, estimates, dependencies, risks).
- Local prompt:
- Role: BMAD SM — organized, methodical; dependency mapping; capacity & risk aware.
- Include: scan summary/path; PRD path; Architecture path.
- Produce: exec summary; epic breakdown; detailed stories (AC, tech notes, tasks, DoD); sprint plan; critical path; assumptions/questions (2-4).
- No saving during drafting.
- Gate: Ask confirmation (user language). Only if user replies "yes": save `03-sprint-plan.md` and move to Phase 4; otherwise stay here and continue refinement.
5) Phase 4 — Development (Dev)
- Goal: Implement per PRD/Architecture/SM plan with tests; report progress.
- Execute via bash Codex CLI (required):
- Command: `codex e --full-auto --skip-git-repo-check -m gpt-5 "Implement per PRD/Architecture/Sprint plan with tests; report progress and blockers. Context: [paths + brief summaries]."`
- Include paths: `00-repo-scan.md` (if exists), `01-product-requirements.md`, `02-system-architecture.md`, `03-sprint-plan.md` (if exists).
- Follow retry policy (5 attempts); if still failing, stop and report.
- Orchestrator remains responsible for approvals and saving as needed.
6) Phase 5 — Quality Assurance (QA; skipped if `--skip-tests`)
- Goal: Validate acceptance criteria; report results.
- Execute via bash Codex CLI (required):
- Command: `codex e --full-auto --skip-git-repo-check -m gpt-5 "Create and run tests to validate acceptance criteria; report results with failures and remediation. Context: [paths + brief summaries]."`
- Include paths: same as Dev.
- Follow retry policy (5 attempts); if still failing, stop and report.
- Orchestrator collects results and summarizes quality status.
</workflows>
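For illustration, a minimal Python sketch of the Dev/QA retry policy above (up to 5 attempts, then stop and report), assuming the `codex` CLI is on PATH and treating any non-zero exit as a transient failure:
```python
import subprocess

MAX_ATTEMPTS = 5  # retry policy: up to 5 attempts on transient failure

def run_codex_task(task: str) -> str:
    """Run one Dev/QA task via the codex CLI, retrying transient failures."""
    cmd = ["codex", "e", "--full-auto", "--skip-git-repo-check", "-m", "gpt-5", task]
    last_error = None
    for attempt in range(1, MAX_ATTEMPTS + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout
        last_error = result.stderr.strip()
    # Still failing after 5 attempts: stop and report clearly
    raise RuntimeError(f"codex failed after {MAX_ATTEMPTS} attempts: {last_error}")
```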

View File

@@ -1,26 +0,0 @@
{
"name": "requirements-clarity",
"source": "./",
"description": "Transforms vague requirements into actionable PRDs through systematic clarification with 100-point scoring system",
"version": "1.0.0",
"author": {
"name": "Claude Code Dev Workflows",
"url": "https://github.com/cexll/myclaude"
},
"homepage": "https://github.com/cexll/myclaude",
"repository": "https://github.com/cexll/myclaude",
"license": "MIT",
"keywords": [
"requirements",
"clarification",
"prd",
"specifications",
"quality-gates",
"requirements-engineering"
],
"category": "essentials",
"strict": false,
"skills": [
"./skills/SKILL.md"
]
}

View File

@@ -1,323 +0,0 @@
---
name: Requirements Clarity
description: Clarify ambiguous requirements through focused dialogue before implementation. Use when requirements are unclear, features are complex (>2 days), or involve cross-team coordination. Ask two core questions - Why? (YAGNI check) and Simpler? (KISS check) - to ensure clarity before coding.
---
# Requirements Clarity Skill
## Description
Automatically transforms vague requirements into actionable PRDs through systematic clarification with a 100-point scoring system.
## Activation
Auto-activate when detecting vague requirements:
1. **Vague Feature Requests**
- User says: "add login feature", "implement payment", "create dashboard"
- Missing: How, with what technology, what constraints?
2. **Missing Technical Context**
- No technology stack mentioned
- No integration points identified
- No performance/security constraints
3. **Incomplete Specifications**
- No acceptance criteria
- No success metrics
- No edge cases considered
- No error handling mentioned
4. **Ambiguous Scope**
- Unclear boundaries ("user management" - what exactly?)
- No distinction between MVP and future enhancements
- Missing "what's NOT included"
**Do NOT activate when**:
- Specific file paths mentioned (e.g., "auth.go:45")
- Code snippets included
- Existing functions/classes referenced
- Bug fixes with clear reproduction steps
## Core Principles
1. **Systematic Questioning**
- Ask focused, specific questions
- One category at a time (2-3 questions per round)
- Build on previous answers
- Avoid overwhelming users
2. **Quality-Driven Iteration**
- Continuously assess clarity score (0-100)
- Identify gaps systematically
- Iterate until ≥ 90 points
- Document all clarification rounds
3. **Actionable Output**
- Generate concrete specifications
- Include measurable acceptance criteria
- Provide executable phases
- Enable direct implementation
## Clarification Process
### Step 1: Initial Requirement Analysis
**Input**: User's requirement description
**Tasks**:
1. Parse and understand core requirement
2. Generate feature name (kebab-case format)
3. Determine document version (default `1.0` unless user specifies otherwise)
4. Ensure `./docs/prds/` exists for PRD output
5. Perform initial clarity assessment (0-100)
**Assessment Rubric**:
```
Functional Clarity: /30 points
- Clear inputs/outputs: 10 pts
- User interaction defined: 10 pts
- Success criteria stated: 10 pts
Technical Specificity: /25 points
- Technology stack mentioned: 8 pts
- Integration points identified: 8 pts
- Constraints specified: 9 pts
Implementation Completeness: /25 points
- Edge cases considered: 8 pts
- Error handling mentioned: 9 pts
- Data validation specified: 8 pts
Business Context: /20 points
- Problem statement clear: 7 pts
- Target users identified: 7 pts
- Success metrics defined: 6 pts
```
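For illustration, a minimal Python sketch of how the rubric above could be tallied into a 0-100 clarity score; the boolean check names are assumptions, not part of the skill:
```python
def clarity_score(checks: dict) -> int:
    """Sum the rubric: each criterion contributes its points when the check is met."""
    rubric = {
        # Functional Clarity (30)
        "clear_inputs_outputs": 10, "user_interaction_defined": 10, "success_criteria_stated": 10,
        # Technical Specificity (25)
        "tech_stack_mentioned": 8, "integration_points_identified": 8, "constraints_specified": 9,
        # Implementation Completeness (25)
        "edge_cases_considered": 8, "error_handling_mentioned": 9, "data_validation_specified": 8,
        # Business Context (20)
        "problem_statement_clear": 7, "target_users_identified": 7, "success_metrics_defined": 6,
    }
    return sum(points for name, points in rubric.items() if checks.get(name))

# Example: only the functional aspects are clear -> 30/100
print(clarity_score({"clear_inputs_outputs": True, "user_interaction_defined": True, "success_criteria_stated": True}))
```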
**Initial Response Format**:
```markdown
I understand your requirement. Let me help you refine this specification.
**Current Clarity Score**: X/100
**Clear Aspects**:
- [List what's clear]
**Needs Clarification**:
- [List gaps]
Let me systematically clarify these points...
```
### Step 2: Gap Analysis
Identify missing information across four dimensions:
**1. Functional Scope**
- What is the core functionality?
- What are the boundaries?
- What is out of scope?
- What are edge cases?
**2. User Interaction**
- How do users interact?
- What are the inputs?
- What are the outputs?
- What are success/failure scenarios?
**3. Technical Constraints**
- Performance requirements?
- Compatibility requirements?
- Security considerations?
- Scalability needs?
**4. Business Value**
- What problem does this solve?
- Who are the target users?
- What are success metrics?
- What is the priority?
### Step 3: Interactive Clarification
**Question Strategy**:
1. Start with highest-impact gaps
2. Ask 2-3 questions per round
3. Build context progressively
4. Use user's language
5. Provide examples when helpful
**Question Format**:
```markdown
I need to clarify the following points to complete the requirements document:
1. **[Category]**: [Specific question]?
- For example: [Example if helpful]
2. **[Category]**: [Specific question]?
3. **[Category]**: [Specific question]?
Please provide your answers, and I'll continue refining the PRD.
```
**After Each User Response**:
1. Update clarity score
2. Capture new information in the working PRD outline
3. Identify remaining gaps
4. If score < 90: Continue with next round of questions
5. If score ≥ 90: Proceed to PRD generation
**Score Update Format**:
```markdown
Thank you for the additional information!
**Clarity Score Update**: X/100 → Y/100
**New Clarified Content**:
- [Summarize new information]
**Remaining Points to Clarify**:
- [List remaining gaps if score < 90]
[If score < 90: Continue with next round of questions]
[If score ≥ 90: "Perfect! I will now generate the complete PRD document..."]
```
### Step 4: PRD Generation
Once clarity score ≥ 90, generate comprehensive PRD.
**Output File**:
1. **Final PRD**: `./docs/prds/{feature_name}-v{version}-prd.md`
Use the `Write` tool to create or update this file. Derive `{version}` from the document version recorded in the PRD (default `1.0`).
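For illustration only, a minimal Python sketch of how the output path might be derived; in practice the skill writes the file with the `Write` tool, and `feature_name`/`version` come from the clarified requirement:
```python
from pathlib import Path

def prd_path(feature_name: str, version: str = "1.0") -> Path:
    """Build the PRD output path, e.g. ./docs/prds/user-login-v1.0-prd.md."""
    out_dir = Path("./docs/prds")
    out_dir.mkdir(parents=True, exist_ok=True)  # ensure ./docs/prds/ exists (Step 1, task 4)
    return out_dir / f"{feature_name}-v{version}-prd.md"

print(prd_path("user-login"))  # docs/prds/user-login-v1.0-prd.md
```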
## PRD Document Structure
```markdown
# {Feature Name} - Product Requirements Document (PRD)
## Requirements Description
### Background
- **Business Problem**: [Describe the business problem to solve]
- **Target Users**: [Target user groups]
- **Value Proposition**: [Value this feature brings]
### Feature Overview
- **Core Features**: [List of main features]
- **Feature Boundaries**: [What is and isn't included]
- **User Scenarios**: [Typical usage scenarios]
### Detailed Requirements
- **Input/Output**: [Specific input/output specifications]
- **User Interaction**: [User operation flow]
- **Data Requirements**: [Data structures and validation rules]
- **Edge Cases**: [Edge case handling]
## Design Decisions
### Technical Approach
- **Architecture Choice**: [Technical architecture decisions and rationale]
- **Key Components**: [List of main technical components]
- **Data Storage**: [Data models and storage solutions]
- **Interface Design**: [API/interface specifications]
### Constraints
- **Performance Requirements**: [Response time, throughput, etc.]
- **Compatibility**: [System compatibility requirements]
- **Security**: [Security considerations]
- **Scalability**: [Future expansion considerations]
### Risk Assessment
- **Technical Risks**: [Potential technical risks and mitigation plans]
- **Dependency Risks**: [External dependencies and alternatives]
- **Schedule Risks**: [Timeline risks and response strategies]
## Acceptance Criteria
### Functional Acceptance
- [ ] Feature 1: [Specific acceptance conditions]
- [ ] Feature 2: [Specific acceptance conditions]
- [ ] Feature 3: [Specific acceptance conditions]
### Quality Standards
- [ ] Code Quality: [Code standards and review requirements]
- [ ] Test Coverage: [Testing requirements and coverage]
- [ ] Performance Metrics: [Performance test pass criteria]
- [ ] Security Review: [Security review requirements]
### User Acceptance
- [ ] User Experience: [UX acceptance criteria]
- [ ] Documentation: [Documentation delivery requirements]
- [ ] Training Materials: [If needed, training material requirements]
## Execution Phases
### Phase 1: Preparation
**Goal**: Environment preparation and technical validation
- [ ] Task 1: [Specific task description]
- [ ] Task 2: [Specific task description]
- **Deliverables**: [Phase deliverables]
- **Time**: [Estimated time]
### Phase 2: Core Development
**Goal**: Implement core functionality
- [ ] Task 1: [Specific task description]
- [ ] Task 2: [Specific task description]
- **Deliverables**: [Phase deliverables]
- **Time**: [Estimated time]
### Phase 3: Integration & Testing
**Goal**: Integration and quality assurance
- [ ] Task 1: [Specific task description]
- [ ] Task 2: [Specific task description]
- **Deliverables**: [Phase deliverables]
- **Time**: [Estimated time]
### Phase 4: Deployment
**Goal**: Release and monitoring
- [ ] Task 1: [Specific task description]
- [ ] Task 2: [Specific task description]
- **Deliverables**: [Phase deliverables]
- **Time**: [Estimated time]
---
**Document Version**: 1.0
**Created**: {timestamp}
**Clarification Rounds**: {clarification_rounds}
**Quality Score**: {quality_score}/100
```
## Behavioral Guidelines
### DO
- Ask specific, targeted questions
- Build on previous answers
- Provide examples to guide users
- Maintain conversational tone
- Summarize clarification rounds within the PRD
- Use clear, professional English
- Generate concrete specifications
- Stay in clarification mode until score ≥ 90
### DON'T
- Ask all questions at once
- Make assumptions without confirmation
- Generate PRD before 90+ score
- Skip any required sections
- Use vague or abstract language
- Proceed without user responses
- Exit skill mode prematurely
## Success Criteria
- Clarity score ≥ 90/100
- All PRD sections complete with substance
- Acceptance criteria checklistable (using `- [ ]` format)
- Execution phases actionable with concrete tasks
- User approves final PRD
- Ready for development handoff

View File

@@ -1,332 +0,0 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.8"
# dependencies = []
# ///
"""
Codex CLI wrapper with cross-platform support and session management.
**FIXED**: Auto-detect long inputs and use stdin mode to avoid shell argument issues.
Usage:
New session: uv run codex.py "task" [workdir]
Stdin mode: uv run codex.py - [workdir]
Resume: uv run codex.py resume <session_id> "task" [workdir]
Resume stdin: uv run codex.py resume <session_id> - [workdir]
Alternative: python3 codex.py "task"
Direct exec: ./codex.py "task"
Model configuration: Set CODEX_MODEL environment variable (default: gpt-5.1-codex)
"""
import subprocess
import json
import sys
import os
from typing import Optional
DEFAULT_MODEL = os.environ.get('CODEX_MODEL', 'gpt-5.1-codex')
DEFAULT_WORKDIR = '.'
DEFAULT_TIMEOUT = 7200 # 2 hours in seconds
FORCE_KILL_DELAY = 5
def log_error(message: str):
"""输出错误信息到 stderr"""
sys.stderr.write(f"ERROR: {message}\n")
def log_warn(message: str):
"""输出警告信息到 stderr"""
sys.stderr.write(f"WARN: {message}\n")
def log_info(message: str):
"""输出信息到 stderr"""
sys.stderr.write(f"INFO: {message}\n")
def resolve_timeout() -> int:
"""解析超时配置(秒)"""
raw = os.environ.get('CODEX_TIMEOUT', '')
if not raw:
return DEFAULT_TIMEOUT
try:
parsed = int(raw)
if parsed <= 0:
log_warn(f"Invalid CODEX_TIMEOUT '{raw}', falling back to {DEFAULT_TIMEOUT}s")
return DEFAULT_TIMEOUT
# The env var may be given in milliseconds; values above 10000 are converted to seconds
return parsed // 1000 if parsed > 10000 else parsed
except ValueError:
log_warn(f"Invalid CODEX_TIMEOUT '{raw}', falling back to {DEFAULT_TIMEOUT}s")
return DEFAULT_TIMEOUT
def normalize_text(text) -> Optional[str]:
"""规范化文本:字符串或字符串数组"""
if isinstance(text, str):
return text
if isinstance(text, list):
return ''.join(text)
return None
def parse_args():
"""解析命令行参数"""
if len(sys.argv) < 2:
log_error('Task required')
sys.exit(1)
# Detect resume mode
if sys.argv[1] == 'resume':
if len(sys.argv) < 4:
log_error('Resume mode requires: resume <session_id> <task>')
sys.exit(1)
task_arg = sys.argv[3]
return {
'mode': 'resume',
'session_id': sys.argv[2],
'task': task_arg,
'explicit_stdin': task_arg == '-',
'workdir': sys.argv[4] if len(sys.argv) > 4 else DEFAULT_WORKDIR,
}
task_arg = sys.argv[1]
return {
'mode': 'new',
'task': task_arg,
'explicit_stdin': task_arg == '-',
'workdir': sys.argv[2] if len(sys.argv) > 2 else DEFAULT_WORKDIR,
}
def read_piped_task() -> Optional[str]:
"""
Read the task text from stdin:
- If stdin is a pipe (not a tty) and has content, return the string that was read
- Otherwise return None
"""
stdin = sys.stdin
if stdin is None or stdin.isatty():
log_info("Stdin is tty or None, skipping pipe read")
return None
log_info("Reading from stdin pipe...")
data = stdin.read()
if not data:
log_info("Stdin pipe returned empty data")
return None
log_info(f"Read {len(data)} bytes from stdin pipe")
return data
def should_stream_via_stdin(task_text: str, piped: bool) -> bool:
"""
Decide whether to pass the task via stdin:
- there is piped input
- the text contains a newline
- the text contains a backslash
- the text is longer than 800 characters
"""
if piped:
return True
if '\n' in task_text:
return True
if '\\' in task_text:
return True
if len(task_text) > 800:
return True
return False
def build_codex_args(params: dict, target_arg: str) -> list:
"""
Build the codex CLI arguments
Args:
params: parameter dict
target_arg: the final argument passed to codex ('-' or the literal task text)
"""
if params['mode'] == 'resume':
return [
'codex', 'e',
'-m', DEFAULT_MODEL,
'--skip-git-repo-check',
'--json',
'resume',
params['session_id'],
target_arg
]
else:
base_args = [
'codex', 'e',
'-m', DEFAULT_MODEL,
'--dangerously-bypass-approvals-and-sandbox',
'--skip-git-repo-check',
'-C', params['workdir'],
'--json',
target_arg
]
return base_args
def run_codex_process(codex_args, task_text: str, use_stdin: bool, timeout_sec: int):
"""
Start the codex subprocess, handle stdin / JSON line output and errors, and return (last_agent_message, thread_id) on success.
Failure paths handle logging and exit codes.
"""
thread_id: Optional[str] = None
last_agent_message: Optional[str] = None
process: Optional[subprocess.Popen] = None
try:
# Start the codex subprocess (text-mode pipes)
log_info(f"Starting codex with args: {' '.join(codex_args[:5])}...")
process = subprocess.Popen(
codex_args,
stdin=subprocess.PIPE if use_stdin else None,
stdout=subprocess.PIPE,
stderr=sys.stderr,
text=True,
bufsize=1,
)
log_info(f"Process started with PID: {process.pid}")
# In stdin mode, write the task to stdin and then close it
if use_stdin and process.stdin is not None:
log_info(f"Writing {len(task_text)} chars to stdin...")
process.stdin.write(task_text)
process.stdin.flush() # force a flush so large tasks don't deadlock
process.stdin.close()
log_info("Stdin closed")
# Parse the JSON output line by line
if process.stdout is None:
log_error('Codex stdout pipe not available')
sys.exit(1)
log_info("Reading stdout...")
for line in process.stdout:
line = line.strip()
if not line:
continue
try:
event = json.loads(line)
# Capture the thread_id
if event.get('type') == 'thread.started':
thread_id = event.get('thread_id')
# Capture the agent_message
if (event.get('type') == 'item.completed' and
event.get('item', {}).get('type') == 'agent_message'):
text = normalize_text(event['item'].get('text'))
if text:
last_agent_message = text
except json.JSONDecodeError:
log_warn(f"Failed to parse line: {line}")
# Wait for the process to exit and check the return code
returncode = process.wait(timeout=timeout_sec)
if returncode != 0:
log_error(f'Codex exited with status {returncode}')
sys.exit(returncode)
if not last_agent_message:
log_error('Codex completed without agent_message output')
sys.exit(1)
return last_agent_message, thread_id
except subprocess.TimeoutExpired:
log_error('Codex execution timeout')
if process is not None:
process.kill()
try:
process.wait(timeout=FORCE_KILL_DELAY)
except subprocess.TimeoutExpired:
pass
sys.exit(124)
except FileNotFoundError:
log_error("codex command not found in PATH")
sys.exit(127)
except KeyboardInterrupt:
log_error("Codex interrupted by user")
if process is not None:
process.terminate()
try:
process.wait(timeout=FORCE_KILL_DELAY)
except subprocess.TimeoutExpired:
process.kill()
sys.exit(130)
def main():
log_info("Script started")
params = parse_args()
log_info(f"Parsed args: mode={params['mode']}, task_len={len(params['task'])}")
timeout_sec = resolve_timeout()
log_info(f"Timeout: {timeout_sec}s")
explicit_stdin = params.get('explicit_stdin', False)
if explicit_stdin:
log_info("Explicit stdin mode: reading task from stdin")
task_text = sys.stdin.read()
if not task_text:
log_error("Explicit stdin mode requires task input from stdin")
sys.exit(1)
piped = not sys.stdin.isatty()
else:
piped_task = read_piped_task()
piped = piped_task is not None
task_text = piped_task if piped else params['task']
use_stdin = explicit_stdin or should_stream_via_stdin(task_text, piped)
if use_stdin:
reasons = []
if piped:
reasons.append('piped input')
if explicit_stdin:
reasons.append('explicit "-"')
if '\n' in task_text:
reasons.append('newline')
if '\\' in task_text:
reasons.append('backslash')
if len(task_text) > 800:
reasons.append('length>800')
if reasons:
log_warn(f"Using stdin mode for task due to: {', '.join(reasons)}")
target_arg = '-' if use_stdin else params['task']
codex_args = build_codex_args(params, target_arg)
log_info('codex running...')
last_agent_message, thread_id = run_codex_process(
codex_args=codex_args,
task_text=task_text,
use_stdin=use_stdin,
timeout_sec=timeout_sec,
)
# Emit the agent_message
sys.stdout.write(f"{last_agent_message}\n")
# Emit the session_id (if present)
if thread_id:
sys.stdout.write(f"\n---\nSESSION_ID: {thread_id}\n")
sys.exit(0)
if __name__ == '__main__':
main()