mirror of https://github.com/fsecada01/Pygentic-AI.git
synced 2026-05-12 12:15:00 +00:00

Merge remote-tracking branch 'origin/prod_deploy'
This commit is contained in:

184 .claude/commands/speckit.analyze.md (new file)
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.

## Operating Constraints

**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan; the user must explicitly approve it before any follow-up editing commands are invoked manually.

**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks, not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.

## Execution Steps

### 1. Initialize Analysis Context

Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks` once from repo root and parse its JSON output for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:

- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md

Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).

For single quotes in args like "I'm Groot", use escape syntax, e.g., 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

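The JSON handoff in step 1 can be sketched as below. The field name `FEATURE_DIR` comes from the step above, but the exact shape of the script's payload is an assumption, so treat this as a sketch rather than the script's actual contract:

```python
import json
from pathlib import Path

def derive_paths(payload: dict, repo_root: str) -> dict:
    """Map the prerequisite script's JSON payload to absolute artifact paths."""
    feature_dir = Path(repo_root) / payload["FEATURE_DIR"]
    # SPEC -> spec.md, PLAN -> plan.md, TASKS -> tasks.md
    return {name: feature_dir / f"{name.lower()}.md"
            for name in ("SPEC", "PLAN", "TASKS")}

# Example payload shaped like the fields named above (schema assumed).
payload = json.loads(
    '{"FEATURE_DIR": "specs/001-demo",'
    ' "AVAILABLE_DOCS": ["spec.md", "plan.md", "tasks.md"]}'
)
paths = derive_paths(payload, "/repo")
# Any path that does not exist triggers the abort-with-message behavior.
missing = [str(p) for p in paths.values() if not p.exists()]
```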
### 2. Load Artifacts (Progressive Disclosure)

Load only the minimal necessary context from each artifact:

**From spec.md:**

- Overview/Context
- Functional Requirements
- Non-Functional Requirements
- User Stories
- Edge Cases (if present)

**From plan.md:**

- Architecture/stack choices
- Data Model references
- Phases
- Technical constraints

**From tasks.md:**

- Task IDs
- Descriptions
- Phase grouping
- Parallel markers [P]
- Referenced file paths

**From constitution:**

- Load `.specify/memory/constitution.md` for principle validation

### 3. Build Semantic Models

Create internal representations (do not include raw artifacts in output):

- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive a slug from the imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
- **User story/action inventory**: Discrete user actions with acceptance criteria
- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword or explicit reference patterns such as IDs or key phrases)
- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements

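The slug derivation above can be sketched in a few lines; normalization beyond lowercasing and hyphen-collapsing is an assumption:

```python
import re

def requirement_slug(phrase: str) -> str:
    """Derive a stable key from an imperative requirement phrase."""
    # Lowercase, then collapse every run of non-alphanumerics into a hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", phrase.lower())
    return slug.strip("-")

print(requirement_slug("User can upload file"))  # -> user-can-upload-file
```

Because the same phrase always yields the same key, reruns of the analysis produce consistent IDs, which supports the deterministic-results principle later in this document.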
### 4. Detection Passes (Token-Efficient Analysis)

Focus on high-signal findings. Limit the report to 50 findings total; aggregate the remainder in an overflow summary.

#### A. Duplication Detection

- Identify near-duplicate requirements
- Mark lower-quality phrasing for consolidation

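One way to sketch pass A with the standard library; the 0.85 similarity threshold is an assumption, not a value taken from this document:

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(requirements: dict, threshold: float = 0.85):
    """Flag requirement pairs whose phrasing is suspiciously similar."""
    pairs = []
    for (ka, a), (kb, b) in combinations(requirements.items(), 2):
        # ratio() is 2*matches/total length: 1.0 means identical strings.
        ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if ratio >= threshold:
            pairs.append((ka, kb, round(ratio, 2)))
    return pairs

reqs = {
    "user-can-upload-file": "User can upload a file",
    "upload-file-support": "The user can upload a file.",
    "user-can-delete-file": "User can delete a file",
}
dups = near_duplicates(reqs)
```

The two upload requirements are flagged for consolidation, while the delete requirement, despite sharing most of its wording, stays below the threshold.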
#### B. Ambiguity Detection

- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
- Flag unresolved placeholders (TODO, TKTK, ???, `<placeholder>`, etc.)

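A minimal sketch of pass B using the vague-term and placeholder lists above (the word lists are the document's own; treating any occurrence as a finding, without checking for nearby measurable criteria, is a simplifying assumption):

```python
import re

VAGUE = re.compile(r"\b(fast|scalable|secure|intuitive|robust)\b", re.IGNORECASE)
PLACEHOLDER = re.compile(r"(TODO|TKTK|\?\?\?|<placeholder>)")

def ambiguity_findings(lines):
    """Return (line_no, kind, excerpt) tuples for vague or unfinished text."""
    findings = []
    for no, line in enumerate(lines, start=1):
        if VAGUE.search(line):
            findings.append((no, "vague-term", line.strip()))
        if PLACEHOLDER.search(line):
            findings.append((no, "placeholder", line.strip()))
    return findings

spec = ["The API must be fast.", "Auth flow: TODO", "Upload completes within 2 s."]
findings = ambiguity_findings(spec)
```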
#### C. Underspecification

- Requirements with verbs but missing an object or measurable outcome
- User stories missing acceptance criteria alignment
- Tasks referencing files or components not defined in spec/plan

#### D. Constitution Alignment

- Any requirement or plan element conflicting with a MUST principle
- Missing mandated sections or quality gates from the constitution

#### E. Coverage Gaps

- Requirements with zero associated tasks
- Tasks with no mapped requirement/story
- Non-functional requirements not reflected in tasks (e.g., performance, security)

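Given the requirements inventory and task coverage mapping from step 3, the first two coverage-gap checks reduce to set arithmetic. A sketch, assuming the mapping is a plain dict of task IDs to requirement keys:

```python
def coverage_gaps(requirements, task_map):
    """Compare the requirements inventory against the task coverage mapping.

    requirements: iterable of requirement keys
    task_map: {task_id: [requirement keys the task covers]}
    """
    covered = {key for keys in task_map.values() for key in keys}
    uncovered = sorted(set(requirements) - covered)   # requirements with zero tasks
    unmapped = sorted(tid for tid, keys in task_map.items() if not keys)
    return uncovered, unmapped

reqs = ["user-can-upload-file", "audit-log-retention"]
tasks = {"T001": ["user-can-upload-file"], "T002": []}
uncovered, unmapped = coverage_gaps(reqs, tasks)
```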
#### F. Inconsistency

- Terminology drift (the same concept named differently across files)
- Data entities referenced in the plan but absent in the spec (or vice versa)
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without a dependency note)
- Conflicting requirements (e.g., one requires Next.js while another specifies Vue)

### 5. Severity Assignment

Use this heuristic to prioritize findings:

- **CRITICAL**: Violates a constitution MUST, missing core spec artifact, or a requirement with zero coverage that blocks baseline functionality
- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
- **LOW**: Style/wording improvements, minor redundancy not affecting execution order

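The heuristic above can be sketched as a cascade; the attribute and category names on the finding dict are illustrative assumptions, not a schema this document defines:

```python
def severity(finding: dict) -> str:
    """Apply the four-level prioritization heuristic to one finding."""
    # Constitution MUST violations and baseline blockers outrank everything.
    if finding.get("constitution_must_violation") or finding.get("blocks_baseline"):
        return "CRITICAL"
    if finding.get("category") in {"duplication", "conflict"} or finding.get("untestable"):
        return "HIGH"
    if finding.get("category") in {"terminology-drift", "nfr-coverage", "edge-case"}:
        return "MEDIUM"
    return "LOW"
```

Ordering the checks from most to least severe means a finding that matches several rules is reported at its highest applicable severity.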
### 6. Produce Compact Analysis Report

Output a Markdown report (no file writes) with the following structure:

## Specification Analysis Report

| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |

(Add one row per finding; generate stable IDs prefixed by the category initial.)

**Coverage Summary Table:**

| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|

**Constitution Alignment Issues:** (if any)

**Unmapped Tasks:** (if any)

**Metrics:**

- Total Requirements
- Total Tasks
- Coverage % (requirements with >=1 task)
- Ambiguity Count
- Duplication Count
- Critical Issues Count

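The Metrics block follows directly from the semantic models of step 3. A sketch, assuming findings carry `category` and `severity` fields as in the detection passes:

```python
def report_metrics(requirements, task_map, findings):
    """Compute the Metrics block from the semantic models."""
    reqs = set(requirements)
    covered = {k for keys in task_map.values() for k in keys} & reqs

    def count(cat):
        return sum(1 for f in findings if f["category"] == cat)

    return {
        "total_requirements": len(reqs),
        "total_tasks": len(task_map),
        # Coverage % = requirements with at least one task.
        "coverage_pct": round(100 * len(covered) / len(reqs), 1) if reqs else 100.0,
        "ambiguity_count": count("ambiguity"),
        "duplication_count": count("duplication"),
        "critical_count": sum(1 for f in findings if f["severity"] == "CRITICAL"),
    }

metrics = report_metrics(
    ["user-can-upload-file", "audit-log-retention"],
    {"T001": ["user-can-upload-file"]},
    [{"category": "ambiguity", "severity": "HIGH"}],
)
```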
### 7. Provide Next Actions

At the end of the report, output a concise Next Actions block:

- If CRITICAL issues exist: recommend resolving them before `/speckit.implement`
- If only LOW/MEDIUM: the user may proceed, but provide improvement suggestions
- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"

### 8. Offer Remediation

Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)

## Operating Principles

### Context Efficiency

- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into the analysis
- **Token-efficient output**: Limit the findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts

### Analysis Guidelines

- **NEVER modify files** (this is read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize constitution violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
- **Report zero issues gracefully** (emit a success report with coverage statistics)

## Context

$ARGUMENTS

294 .claude/commands/speckit.checklist.md (new file)

---
description: Generate a custom checklist for the current feature based on user requirements.
---

## Checklist Purpose: "Unit Tests for English"

**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.

**NOT for verification/testing**:

- ❌ NOT "Verify the button clicks correctly"
- ❌ NOT "Test error handling works"
- ❌ NOT "Confirm the API returns 200"
- ❌ NOT checking if code/implementation matches the spec

**FOR requirements quality validation**:

- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
- ✅ "Does the spec define what happens when the logo image fails to load?" (edge cases)

**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Execution Steps

1. **Setup**: Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json` from repo root and parse JSON for FEATURE_DIR and the AVAILABLE_DOCS list.
   - All file paths must be absolute.
   - For single quotes in args like "I'm Groot", use escape syntax, e.g., 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
   - Be generated from the user's phrasing + extracted signals from spec/plan/tasks
   - Only ask about information that materially changes checklist content
   - Be skipped individually if already unambiguous in `$ARGUMENTS`
   - Prefer precision over breadth

   Generation algorithm:
   1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
   2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
   3. Identify the probable audience & timing (author, reviewer, QA, release) if not explicit.
   4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
   5. Formulate questions chosen from these archetypes:
      - Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
      - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
      - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
      - Audience framing (e.g., "Will this be used by the author only or peers during PR review?")
      - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
      - Scenario class gap (e.g., "No recovery flows detected - are rollback / partial failure paths in scope?")

   Question formatting rules:
   - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
   - Limit to A-E options maximum; omit the table if a free-form answer is clearer
   - Never ask the user to restate what they already said
   - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."

   Defaults when interaction is impossible:
   - Depth: Standard
   - Audience: Reviewer (PR) if code-related; Author otherwise
   - Focus: Top 2 relevance clusters

   Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow-ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if the user explicitly declines more.

3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
   - Derive the checklist theme (e.g., security, review, deploy, ux)
   - Consolidate explicit must-have items mentioned by the user
   - Map focus selections to category scaffolding
   - Infer any missing context from spec/plan/tasks (do NOT hallucinate)

4. **Load feature context**: Read from FEATURE_DIR:
   - spec.md: Feature requirements and scope
   - plan.md (if it exists): Technical details, dependencies
   - tasks.md (if it exists): Implementation tasks

   **Context Loading Strategy**:
   - Load only the portions relevant to active focus areas (avoid full-file dumping)
   - Prefer summarizing long sections into concise scenario/requirement bullets
   - Use progressive disclosure: add follow-on retrieval only if gaps are detected
   - If source docs are large, generate interim summary items instead of embedding raw text

5. **Generate checklist** - Create "Unit Tests for Requirements":
   - Create the `FEATURE_DIR/checklists/` directory if it doesn't exist
   - Generate a unique checklist filename:
     - Use a short, descriptive name based on the domain (e.g., `ux.md`, `api.md`, `security.md`)
     - Format: `[domain].md`
     - If the file already exists, append to it
   - Number items sequentially starting from CHK001
   - Each `/speckit.checklist` run otherwise creates a NEW file (it never overwrites an existing checklist)

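The filename and CHK-numbering rules above can be sketched as follows; continuing the sequence from the highest existing ID when appending is an assumption consistent with "globally incrementing IDs":

```python
import re
from pathlib import Path

def next_chk_id(existing_text: str) -> str:
    """Continue the CHK### sequence from whatever IDs already exist."""
    ids = [int(m) for m in re.findall(r"CHK(\d{3})", existing_text)]
    return f"CHK{(max(ids) + 1 if ids else 1):03d}"

def checklist_path(feature_dir: str, domain: str) -> Path:
    """Short, descriptive [domain].md name inside checklists/."""
    return Path(feature_dir) / "checklists" / f"{domain.lower()}.md"

print(next_chk_id("- [ ] CHK001 ...\n- [ ] CHK002 ..."))  # -> CHK003
```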
   **CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
   Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
   - **Completeness**: Are all necessary requirements present?
   - **Clarity**: Are requirements unambiguous and specific?
   - **Consistency**: Do requirements align with each other?
   - **Measurability**: Can requirements be objectively verified?
   - **Coverage**: Are all scenarios/edge cases addressed?

   **Category Structure** - Group items by requirement quality dimensions:
   - **Requirement Completeness** (Are all necessary requirements documented?)
   - **Requirement Clarity** (Are requirements specific and unambiguous?)
   - **Requirement Consistency** (Do requirements align without conflicts?)
   - **Acceptance Criteria Quality** (Are success criteria measurable?)
   - **Scenario Coverage** (Are all flows/cases addressed?)
   - **Edge Case Coverage** (Are boundary conditions defined?)
   - **Non-Functional Requirements** (Performance, security, accessibility, etc. - are they specified?)
   - **Dependencies & Assumptions** (Are they documented and validated?)
   - **Ambiguities & Conflicts** (What needs clarification?)

   **HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:

   ❌ **WRONG** (testing implementation):
   - "Verify landing page displays 3 episode cards"
   - "Test hover states work on desktop"
   - "Confirm logo click navigates home"

   ✅ **CORRECT** (testing requirements quality):
   - "Are the exact number and layout of featured episodes specified?" [Completeness]
   - "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
   - "Are hover state requirements consistent across all interactive elements?" [Consistency]
   - "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
   - "Is the fallback behavior specified when the logo image fails to load?" [Edge Cases]
   - "Are loading states defined for asynchronous episode data?" [Completeness]
   - "Does the spec define visual hierarchy for competing UI elements?" [Clarity]

   **ITEM STRUCTURE**:
   Each item should follow this pattern:
   - Question format asking about requirement quality
   - Focus on what's WRITTEN (or not written) in the spec/plan
   - Include the quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
   - Reference the spec section `[Spec §X.Y]` when checking existing requirements
   - Use the `[Gap]` marker when checking for missing requirements

   **EXAMPLES BY QUALITY DIMENSION**:

   Completeness:
   - "Are error handling requirements defined for all API failure modes? [Gap]"
   - "Are accessibility requirements specified for all interactive elements? [Completeness]"
   - "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"

   Clarity:
   - "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
   - "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
   - "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"

   Consistency:
   - "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
   - "Are card component requirements consistent between landing and detail pages? [Consistency]"

   Coverage:
   - "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
   - "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
   - "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"

   Measurability:
   - "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
   - "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"

   **Scenario Classification & Coverage** (requirements quality focus):
   - Check whether requirements exist for: Primary, Alternate, Exception/Error, Recovery, and Non-Functional scenarios
   - For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
   - If a scenario class is missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
   - Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"

   **Traceability Requirements**:
   - MINIMUM: ≥80% of items MUST include at least one traceability reference
   - Each item should reference a spec section `[Spec §X.Y]` or use the markers `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
   - If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"

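The ≥80% traceability gate lends itself to a mechanical check. A sketch that counts items carrying a `Spec §` reference or one of the markers listed above (the exact marker regex is an assumption):

```python
import re

MARKER = re.compile(r"Spec §|\[(Gap|Ambiguity|Conflict|Assumption)\b")

def traceability_ratio(items):
    """Fraction of checklist items carrying at least one traceability reference."""
    if not items:
        return 1.0
    traced = sum(1 for item in items if MARKER.search(item))
    return traced / len(items)

items = [
    "Are error formats specified for all failures? [Completeness, Spec §FR-2]",
    "Is 'fast' quantified? [Ambiguity, Spec §NFR-1]",
    "Is versioning documented? [Gap]",
    "Are retries defined?",
]
# 3 of 4 items are traced, so this list fails the 0.80 minimum.
```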
   **Surface & Resolve Issues** (requirements quality problems):
   Ask questions about the requirements themselves:
   - Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
   - Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
   - Assumptions: "Is the assumption of an 'always available podcast API' validated? [Assumption]"
   - Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
   - Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"

   **Content Consolidation**:
   - Soft cap: if raw candidate items exceed 40, prioritize by risk/impact
   - Merge near-duplicates that check the same requirement aspect
   - If there are more than 5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"

   **🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:
   - ❌ Any item starting with "Verify", "Test", "Confirm", or "Check" + implementation behavior
   - ❌ References to code execution, user actions, or system behavior
   - ❌ "Displays correctly", "works properly", "functions as expected"
   - ❌ "Click", "navigate", "render", "load", "execute"
   - ❌ Test cases, test plans, QA procedures
   - ❌ Implementation details (frameworks, APIs, algorithms)

   **✅ REQUIRED PATTERNS** - These test requirements quality:
   - ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
   - ✅ "Is [vague term] quantified/clarified with specific criteria?"
   - ✅ "Are requirements consistent between [section A] and [section B]?"
   - ✅ "Can [requirement] be objectively measured/verified?"
   - ✅ "Are [edge cases/scenarios] addressed in requirements?"
   - ✅ "Does the spec define [missing aspect]?"

6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for the title, meta section, category headings, and ID formatting. If the template is unavailable, use: an H1 title, purpose/created meta lines, and `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.

7. **Report**: Output the full path to the created checklist, the item count, and a reminder that each run creates a new file. Summarize:
   - Focus areas selected
   - Depth level
   - Actor/timing
   - Any explicit user-specified must-have items incorporated

**Important**: Each `/speckit.checklist` invocation creates a checklist file with a short, descriptive name (appending if that file already exists). This allows:

- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate checklist purpose
- Easy identification and navigation in the `checklists/` folder

To avoid clutter, use descriptive types and clean up obsolete checklists when done.

## Example Checklist Types & Sample Items

**UX Requirements Quality:** `ux.md`

Sample items (testing the requirements, NOT the implementation):

- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"

**API Requirements Quality:** `api.md`

Sample items:

- "Are error response formats specified for all failure scenarios? [Completeness]"
- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
- "Are authentication requirements consistent across all endpoints? [Consistency]"
- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
- "Is the versioning strategy documented in requirements? [Gap]"

**Performance Requirements Quality:** `performance.md`

Sample items:

- "Are performance requirements quantified with specific metrics? [Clarity]"
- "Are performance targets defined for all critical user journeys? [Coverage]"
- "Are performance requirements under different load conditions specified? [Completeness]"
- "Can performance requirements be objectively measured? [Measurability]"
- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"

**Security Requirements Quality:** `security.md`

Sample items:

- "Are authentication requirements specified for all protected resources? [Coverage]"
- "Are data protection requirements defined for sensitive information? [Completeness]"
- "Is the threat model documented, with requirements aligned to it? [Traceability]"
- "Are security requirements consistent with compliance obligations? [Consistency]"
- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"

## Anti-Examples: What NOT To Do

**❌ WRONG - These test implementation, not requirements:**

```markdown
- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
```

**✅ CORRECT - These test requirements quality:**

```markdown
- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
- [ ] CHK004 - Are the selection criteria for related episodes documented? [Gap, Spec §FR-005]
- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
```

**Key Differences:**

- Wrong: tests whether the system works correctly
- Correct: tests whether the requirements are written correctly
- Wrong: verification of behavior
- Correct: validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"

181 .claude/commands/speckit.clarify.md (new file)

---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
handoffs:
  - label: Build Technical Plan
    agent: speckit.plan
    prompt: Create a plan for the spec. I am building with...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., an exploratory spike), you may proceed, but must warn that downstream rework risk increases.

Execution steps:

1. Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json -PathsOnly` from repo root **once** (the combined `-Json -PathsOnly` mode). Parse the minimal JSON payload fields:
   - `FEATURE_DIR`
   - `FEATURE_SPEC`
   - (Optionally capture `IMPL_PLAN` and `TASKS` for future chained flows.)
   - If JSON parsing fails, abort and instruct the user to re-run `/speckit.specify` or verify the feature branch environment.
   - For single quotes in args like "I'm Groot", use escape syntax, e.g., 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark its status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output the raw map unless no questions will be asked).

   Functional Scope & Behavior:
   - Core user goals & success criteria
   - Explicit out-of-scope declarations
   - User roles / personas differentiation

   Domain & Data Model:
   - Entities, attributes, relationships
   - Identity & uniqueness rules
   - Lifecycle/state transitions
   - Data volume / scale assumptions

   Interaction & UX Flow:
   - Critical user journeys / sequences
   - Error/empty/loading states
   - Accessibility or localization notes

   Non-Functional Quality Attributes:
   - Performance (latency, throughput targets)
   - Scalability (horizontal/vertical, limits)
   - Reliability & availability (uptime, recovery expectations)
   - Observability (logging, metrics, tracing signals)
   - Security & privacy (authN/Z, data protection, threat assumptions)
   - Compliance / regulatory constraints (if any)

   Integration & External Dependencies:
   - External services/APIs and failure modes
   - Data import/export formats
   - Protocol/versioning assumptions

   Edge Cases & Failure Handling:
   - Negative scenarios
   - Rate limiting / throttling
   - Conflict resolution (e.g., concurrent edits)

   Constraints & Tradeoffs:
   - Technical constraints (language, storage, hosting)
   - Explicit tradeoffs or rejected alternatives

   Terminology & Consistency:
   - Canonical glossary terms
   - Avoided synonyms / deprecated terms

   Completion Signals:
   - Acceptance criteria testability
   - Measurable Definition-of-Done style indicators

   Misc / Placeholders:
   - TODO markers / unresolved decisions
   - Ambiguous adjectives ("robust", "intuitive") lacking quantification

   For each category with Partial or Missing status, add a candidate question opportunity unless:
   - Clarification would not materially change implementation or validation strategy
   - The information is better deferred to the planning phase (note this internally)

3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
|
||||
- Maximum of 10 total questions across the whole session.
|
||||
- Each question must be answerable with EITHER:
|
||||
- A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
|
||||
- A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
|
||||
- Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
|
||||
- Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
|
||||
- Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
|
||||
- Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
|
||||
- If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
|
||||

4. Sequential questioning loop (interactive):

   - Present EXACTLY ONE question at a time.
   - For multiple‑choice questions:
     - **Analyze all options** and determine the **most suitable option** based on:
       - Best practices for the project type
       - Common patterns in similar implementations
       - Risk reduction (security, performance, maintainability)
       - Alignment with any explicit project goals or constraints visible in the spec
     - Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
     - Format as: `**Recommended:** Option [X] - <reasoning>`
     - Then render all options as a Markdown table:

       | Option | Description |
       |--------|-------------|
       | A | <Option A description> |
       | B | <Option B description> |
       | C | <Option C description> (add D/E as needed up to 5) |
       | Short | Provide a different short answer (<=5 words) (Include only if free-form alternative is appropriate) |

     - After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
   - For short‑answer style (no meaningful discrete options):
     - Provide your **suggested answer** based on best practices and context.
     - Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
     - Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
   - After the user answers:
     - If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
     - Otherwise, validate the answer maps to one option or fits the <=5 word constraint.
     - If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
     - Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
   - Stop asking further questions when:
     - All critical ambiguities resolved early (remaining queued items become unnecessary), OR
     - User signals completion ("done", "good", "no more"), OR
     - You reach 5 asked questions.
   - Never reveal future queued questions in advance.
   - If no valid questions exist at start, immediately report no critical ambiguities.

5. Integration after EACH accepted answer (incremental update approach):

   - Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
   - For the first integrated answer in this session:
     - Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
     - Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
   - Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
   - Then immediately apply the clarification to the most appropriate section(s):
     - Functional ambiguity → Update or add a bullet in Functional Requirements.
     - User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
     - Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
     - Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
     - Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
     - Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
   - If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
   - Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
   - Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
   - Keep each inserted clarification minimal and testable (avoid narrative drift).
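After a first accepted answer, the recorded session described above might look like this (the date and content are illustrative, not part of the command):

```markdown
## Clarifications

### Session 2025-01-15

- Q: Which users can delete a project? → A: Owners and org admins only
```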

6. Validation (performed after EACH write plus final pass):

   - Clarifications session contains exactly one bullet per accepted answer (no duplicates).
   - Total asked (accepted) questions ≤ 5.
   - Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
   - No contradictory earlier statement remains (scan for now-invalid alternative choices removed).
   - Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
   - Terminology consistency: same canonical term used across all updated sections.

7. Write the updated spec back to `FEATURE_SPEC`.

8. Report completion (after questioning loop ends or early termination):

   - Number of questions asked & answered.
   - Path to updated spec.
   - Sections touched (list names).
   - Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
   - If any Outstanding or Deferred remain, recommend whether to proceed to `/speckit.plan` or run `/speckit.clarify` again later post-plan.
   - Suggested next command.

Behavior rules:

- If no meaningful ambiguities found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If spec file missing, instruct user to run `/speckit.specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech stack questions unless the absence blocks functional clarity.
- Respect user early termination signals ("stop", "done", "proceed").
- If no questions asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
- If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.

Context for prioritization: $ARGUMENTS
82
.claude/commands/speckit.constitution.md
Normal file
@ -0,0 +1,82 @@
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
handoffs:
  - label: Build Specification
    agent: speckit.specify
    prompt: Implement the feature specification based on the updated constitution. I want to build...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.

Follow this execution flow:

1. Load the existing constitution template at `.specify/memory/constitution.md`.
   - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.

   **IMPORTANT**: The user might require fewer or more principles than the ones used in the template. If a number is specified, respect it and follow the general template. You will update the doc accordingly.

2. Collect/derive values for placeholders:
   - If user input (conversation) supplies a value, use it.
   - Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
   - For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown, ask or mark TODO), `LAST_AMENDED_DATE` is today if changes are made, otherwise keep previous.
   - `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
     - MAJOR: Backward incompatible governance/principle removals or redefinitions.
     - MINOR: New principle/section added or materially expanded guidance.
     - PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
   - If the version bump type is ambiguous, propose reasoning before finalizing.
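The bump rules above might play out like this (version numbers are illustrative):

```text
1.4.2 → 2.0.0   MAJOR: removed or redefined a principle (backward incompatible)
1.4.2 → 1.5.0   MINOR: added a new "Observability" principle
1.4.2 → 1.4.3   PATCH: fixed typos and tightened wording
```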

3. Draft the updated constitution content:
   - Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any left).
   - Preserve the heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
   - Ensure each Principle section has: a succinct name line, a paragraph (or bullet list) capturing non‑negotiable rules, and an explicit rationale if not obvious.
   - Ensure the Governance section lists the amendment procedure, versioning policy, and compliance review expectations.

4. Consistency propagation checklist (convert prior checklist into active validations):
   - Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
   - Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
   - Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
   - Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
   - Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed.

5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
   - Version change: old → new
   - List of modified principles (old title → new title if renamed)
   - Added sections
   - Removed sections
   - Templates requiring updates (✅ updated / ⚠ pending) with file paths
   - Follow-up TODOs if any placeholders intentionally deferred.
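A report following the bullets above might look like this (all names and values are illustrative):

```markdown
<!--
Sync Impact Report
- Version change: 1.2.0 → 1.3.0
- Modified principles: "Testing" → "Test-First Discipline"
- Added sections: Observability
- Removed sections: none
- Templates: ✅ .specify/templates/plan-template.md, ⚠ .specify/templates/tasks-template.md pending
- Follow-up TODOs: TODO(RATIFICATION_DATE): original adoption date unknown
-->
```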

6. Validation before final output:
   - No remaining unexplained bracket tokens.
   - Version line matches the report.
   - Dates are in ISO format (YYYY-MM-DD).
   - Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD plus rationale where appropriate).

7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).

8. Output a final summary to the user with:
   - New version and bump rationale.
   - Any files flagged for manual follow-up.
   - Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).

Formatting & Style Requirements:

- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard enforce with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.

If the user supplies partial updates (e.g., only one principle revision), still perform validation and version decision steps.

If critical info is missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include it in the Sync Impact Report under deferred items.

Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.
135
.claude/commands/speckit.implement.md
Normal file
@ -0,0 +1,135 @@
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks` from repo root and parse FEATURE_DIR and the AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
   - Scan all checklist files in the checklists/ directory
   - For each checklist, count:
     - Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
     - Completed items: Lines matching `- [X]` or `- [x]`
     - Incomplete items: Lines matching `- [ ]`
   - Create a status table:

     ```text
     | Checklist | Total | Completed | Incomplete | Status |
     |-----------|-------|-----------|------------|--------|
     | ux.md | 12 | 12 | 0 | ✓ PASS |
     | test.md | 8 | 5 | 3 | ✗ FAIL |
     | security.md | 6 | 6 | 0 | ✓ PASS |
     ```

   - Calculate overall status:
     - **PASS**: All checklists have 0 incomplete items
     - **FAIL**: One or more checklists have incomplete items

   - **If any checklist is incomplete**:
     - Display the table with incomplete item counts
     - **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
     - Wait for user response before continuing
     - If user says "no" or "wait" or "stop", halt execution
     - If user says "yes" or "proceed" or "continue", proceed to step 3

   - **If all checklists are complete**:
     - Display the table showing all checklists passed
     - Automatically proceed to step 3
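The per-checklist counting logic above can be sketched as follows. The `grep` patterns are an assumption about the exact checkbox syntax; adjust them if the checklist template differs:

```shell
# Sketch: count checklist items in one file (here, a generated sample).
f=$(mktemp)
printf -- '- [ ] write tests\n- [x] add logging\n- [X] review docs\n' > "$f"
total=$(grep -cE '^- \[[ xX]\]' "$f")       # all checklist items
completed=$(grep -cE '^- \[[xX]\]' "$f")    # checked items
incomplete=$(grep -cE '^- \[ \]' "$f")      # unchecked items
echo "total=$total completed=$completed incomplete=$incomplete"
rm -f "$f"
```

Running this prints `total=3 completed=2 incomplete=1`; a checklist passes when its incomplete count is zero.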

3. Load and analyze the implementation context:
   - **REQUIRED**: Read tasks.md for the complete task list and execution plan
   - **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
   - **IF EXISTS**: Read data-model.md for entities and relationships
   - **IF EXISTS**: Read contracts/ for API specifications and test requirements
   - **IF EXISTS**: Read research.md for technical decisions and constraints
   - **IF EXISTS**: Read quickstart.md for integration scenarios

4. **Project Setup Verification**:
   - **REQUIRED**: Create/verify ignore files based on actual project setup:

   **Detection & Creation Logic**:
   - Check if the following command succeeds to determine if the repository is a git repo (create/verify .gitignore if so):

     ```sh
     git rev-parse --git-dir 2>/dev/null
     ```

   - Check if Dockerfile* exists or Docker in plan.md → create/verify .dockerignore
   - Check if .eslintrc* exists → create/verify .eslintignore
   - Check if eslint.config.* exists → ensure the config's `ignores` entries cover required patterns
   - Check if .prettierrc* exists → create/verify .prettierignore
   - Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
   - Check if terraform files (*.tf) exist → create/verify .terraformignore
   - Check if .helmignore needed (helm charts present) → create/verify .helmignore

   **If ignore file already exists**: Verify it contains essential patterns; append missing critical patterns only.
   **If ignore file missing**: Create it with the full pattern set for the detected technology.

   **Common Patterns by Technology** (from plan.md tech stack):
   - **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
   - **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
   - **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
   - **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
   - **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
   - **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/`
   - **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env`
   - **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
   - **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
   - **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
   - **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `Makefile`, `config.log`, `.idea/`, `*.log`, `.env*`
   - **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
   - **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
   - **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`

   **Tool-Specific Patterns**:
   - **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
   - **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
   - **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
   - **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
   - **Kubernetes/k8s**: `*.secret.yaml`, `secrets/`, `.kube/`, `kubeconfig*`, `*.key`, `*.crt`
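The detection checks above can be sketched like this. The globs are assumptions for illustration (here run against a generated sample directory); the real command should follow plan.md's tech stack:

```shell
# Sketch: detect which ignore files a repo needs from marker files.
d=$(mktemp -d)
cd "$d" || exit 1
touch Dockerfile main.tf            # simulate a repo with Docker + Terraform
needed=""
ls Dockerfile* >/dev/null 2>&1 && needed="$needed .dockerignore"
ls .eslintrc*  >/dev/null 2>&1 && needed="$needed .eslintignore"
ls ./*.tf      >/dev/null 2>&1 && needed="$needed .terraformignore"
echo "create/verify:$needed"        # → create/verify: .dockerignore .terraformignore
```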

5. Parse tasks.md structure and extract:
   - **Task phases**: Setup, Tests, Core, Integration, Polish
   - **Task dependencies**: Sequential vs parallel execution rules
   - **Task details**: ID, description, file paths, parallel markers [P]
   - **Execution flow**: Order and dependency requirements

6. Execute implementation following the task plan:
   - **Phase-by-phase execution**: Complete each phase before moving to the next
   - **Respect dependencies**: Run sequential tasks in order; parallel tasks [P] can run together
   - **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
   - **File-based coordination**: Tasks affecting the same files must run sequentially
   - **Validation checkpoints**: Verify each phase completion before proceeding

7. Implementation execution rules:
   - **Setup first**: Initialize project structure, dependencies, configuration
   - **Tests before code**: If tests are required, write tests for contracts, entities, and integration scenarios before implementing them
   - **Core development**: Implement models, services, CLI commands, endpoints
   - **Integration work**: Database connections, middleware, logging, external services
   - **Polish and validation**: Unit tests, performance optimization, documentation

8. Progress tracking and error handling:
   - Report progress after each completed task
   - Halt execution if any non-parallel task fails
   - For parallel tasks [P], continue with successful tasks, report failed ones
   - Provide clear error messages with context for debugging
   - Suggest next steps if implementation cannot proceed
   - **IMPORTANT**: For completed tasks, make sure to mark the task off as [X] in the tasks file.

9. Completion validation:
   - Verify all required tasks are completed
   - Check that implemented features match the original specification
   - Validate that tests pass and coverage meets requirements
   - Confirm the implementation follows the technical plan
   - Report final status with summary of completed work

Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit.tasks` first to regenerate the task list.
89
.claude/commands/speckit.plan.md
Normal file
@ -0,0 +1,89 @@
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
handoffs:
  - label: Create Tasks
    agent: speckit.tasks
    prompt: Break the plan into tasks
    send: true
  - label: Create Checklist
    agent: speckit.checklist
    prompt: Create a checklist for the following domain...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Setup**: Run `.specify/scripts/powershell/setup-plan.ps1 -Json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load the IMPL_PLAN template (already copied).

3. **Execute plan workflow**: Follow the structure in the IMPL_PLAN template to:
   - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
   - Fill the Constitution Check section from the constitution
   - Evaluate gates (ERROR if violations are unjustified)
   - Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
   - Phase 1: Generate data-model.md, contracts/, quickstart.md
   - Phase 1: Update agent context by running the agent script
   - Re-evaluate the Constitution Check post-design

4. **Stop and report**: The command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts.

## Phases

### Phase 0: Outline & Research

1. **Extract unknowns from Technical Context** above:
   - For each NEEDS CLARIFICATION → research task
   - For each dependency → best practices task
   - For each integration → patterns task

2. **Generate and dispatch research agents**:

   ```text
   For each unknown in Technical Context:
     Task: "Research {unknown} for {feature context}"
   For each technology choice:
     Task: "Find best practices for {tech} in {domain}"
   ```

3. **Consolidate findings** in `research.md` using format:
   - Decision: [what was chosen]
   - Rationale: [why chosen]
   - Alternatives considered: [what else evaluated]

**Output**: research.md with all NEEDS CLARIFICATION resolved
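An entry in `research.md` following the format above might look like this (the technology choices are purely illustrative):

```markdown
## Decision: Persistence layer

- Decision: PostgreSQL with SQLAlchemy 2.x
- Rationale: Relational integrity requirements and existing team familiarity.
- Alternatives considered: SQLite (insufficient write concurrency), MongoDB (data is not document-shaped)
```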

### Phase 1: Design & Contracts

**Prerequisites:** `research.md` complete

1. **Extract entities from feature spec** → `data-model.md`:
   - Entity name, fields, relationships
   - Validation rules from requirements
   - State transitions if applicable

2. **Generate API contracts** from functional requirements:
   - For each user action → endpoint
   - Use standard REST/GraphQL patterns
   - Output OpenAPI/GraphQL schema to `/contracts/`

3. **Agent context update**:
   - Run `.specify/scripts/powershell/update-agent-context.ps1 -AgentType claude`
   - These scripts detect which AI agent is in use
   - Update the appropriate agent-specific context file
   - Add only new technology from the current plan
   - Preserve manual additions between markers

**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file

## Key rules

- Use absolute paths
- ERROR on gate failures or unresolved clarifications
258
.claude/commands/speckit.specify.md
Normal file
@ -0,0 +1,258 @@
---
description: Create or update the feature specification from a natural language feature description.
handoffs:
  - label: Build Technical Plan
    agent: speckit.plan
    prompt: Create a plan for the spec. I am building with...
  - label: Clarify Spec Requirements
    agent: speckit.clarify
    prompt: Clarify specification requirements
    send: true
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

The text the user typed after `/speckit.specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `$ARGUMENTS` appears literally below. Do not ask the user to repeat it unless they provided an empty command.

Given that feature description, do this:

1. **Generate a concise short name** (2-4 words) for the branch:
   - Analyze the feature description and extract the most meaningful keywords
   - Create a 2-4 word short name that captures the essence of the feature
   - Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
   - Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
   - Keep it concise but descriptive enough to understand the feature at a glance
   - Examples:
     - "I want to add user authentication" → "user-auth"
     - "Implement OAuth2 integration for the API" → "oauth2-api-integration"
     - "Create a dashboard for analytics" → "analytics-dashboard"
     - "Fix payment processing timeout bug" → "fix-payment-timeout"

2. **Check for existing branches before creating new one**:

   a. First, fetch all remote branches to ensure we have the latest information:

      ```bash
      git fetch --all --prune
      ```

   b. Find the highest feature number across all sources for the short-name:
      - Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`
      - Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`
      - Specs directories: Check for directories matching `specs/[0-9]+-<short-name>`

   c. Determine the next available number:
      - Extract all numbers from all three sources
      - Find the highest number N
      - Use N+1 for the new branch number

   d. Run the script `.specify/scripts/powershell/create-new-feature.ps1 -Json "$ARGUMENTS"` with the calculated number and short-name:
      - Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
      - Bash example: `.specify/scripts/powershell/create-new-feature.ps1 -Json "$ARGUMENTS" --json --number 5 --short-name "user-auth" "Add user authentication"`
      - PowerShell example: `.specify/scripts/powershell/create-new-feature.ps1 -Json "$ARGUMENTS" -Json -Number 5 -ShortName "user-auth" "Add user authentication"`

   **IMPORTANT**:
   - Check all three sources (remote branches, local branches, specs directories) to find the highest number
   - Only match branches/directories with the exact short-name pattern
   - If no existing branches/directories found with this short-name, start with number 1
   - You must only ever run this script once per feature
   - The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for
   - The JSON output will contain BRANCH_NAME and SPEC_FILE paths
   - For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot")
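The "next available number" calculation in steps b-c can be sketched as follows, assuming branch and directory names like `3-user-auth`. The listing here is an inline stand-in for the merged branch/directory output, and the `sed` pattern is illustrative, not the command's exact implementation:

```shell
# Sketch: pick the next feature number for a given short-name.
short_name="user-auth"
candidates="2-user-auth
5-user-auth
4-other-feature"                    # stand-in for remote + local + specs listings
highest=$(printf '%s\n' "$candidates" \
  | sed -nE "s/^([0-9]+)-${short_name}\$/\1/p" \
  | sort -n | tail -n 1)
next=$((${highest:-0} + 1))         # defaults to 1 when nothing matches
echo "${next}-${short_name}"        # → 6-user-auth
```

Note that `4-other-feature` is ignored because only exact short-name matches count, per the IMPORTANT rules above.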

3. Load `.specify/templates/spec-template.md` to understand required sections.

4. Follow this execution flow:

   1. Parse user description from Input
      If empty: ERROR "No feature description provided"
   2. Extract key concepts from description
      Identify: actors, actions, data, constraints
   3. For unclear aspects:
      - Make informed guesses based on context and industry standards
      - Only mark with [NEEDS CLARIFICATION: specific question] if:
        - The choice significantly impacts feature scope or user experience
        - Multiple reasonable interpretations exist with different implications
        - No reasonable default exists
      - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
      - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
   4. Fill User Scenarios & Testing section
      If no clear user flow: ERROR "Cannot determine user scenarios"
   5. Generate Functional Requirements
      Each requirement must be testable
      Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
   6. Define Success Criteria
      Create measurable, technology-agnostic outcomes
      Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
      Each criterion must be verifiable without implementation details
   7. Identify Key Entities (if data involved)
   8. Return: SUCCESS (spec ready for planning)

5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.

6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:

   a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:

      ```markdown
      # Specification Quality Checklist: [FEATURE NAME]

      **Purpose**: Validate specification completeness and quality before proceeding to planning
      **Created**: [DATE]
      **Feature**: [Link to spec.md]

      ## Content Quality

      - [ ] No implementation details (languages, frameworks, APIs)
      - [ ] Focused on user value and business needs
      - [ ] Written for non-technical stakeholders
      - [ ] All mandatory sections completed

      ## Requirement Completeness

      - [ ] No [NEEDS CLARIFICATION] markers remain
      - [ ] Requirements are testable and unambiguous
      - [ ] Success criteria are measurable
      - [ ] Success criteria are technology-agnostic (no implementation details)
      - [ ] All acceptance scenarios are defined
      - [ ] Edge cases are identified
      - [ ] Scope is clearly bounded
      - [ ] Dependencies and assumptions identified

      ## Feature Readiness

      - [ ] All functional requirements have clear acceptance criteria
      - [ ] User scenarios cover primary flows
      - [ ] Feature meets measurable outcomes defined in Success Criteria
      - [ ] No implementation details leak into specification

      ## Notes

      - Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
      ```
|
||||
   b. **Run Validation Check**: Review the spec against each checklist item:
      - For each item, determine if it passes or fails
      - Document specific issues found (quote relevant spec sections)
   c. **Handle Validation Results**:

      - **If all items pass**: Mark checklist complete and proceed to step 7
      - **If items fail (excluding [NEEDS CLARIFICATION])**:
        1. List the failing items and specific issues
        2. Update the spec to address each issue
        3. Re-run validation until all items pass (max 3 iterations)
        4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user
      - **If [NEEDS CLARIFICATION] markers remain**:
        1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
        2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
        3. For each clarification needed (max 3), present options to user in this format:
   ```markdown
   ## Question [N]: [Topic]

   **Context**: [Quote relevant spec section]

   **What we need to know**: [Specific question from NEEDS CLARIFICATION marker]

   **Suggested Answers**:

   | Option | Answer | Implications |
   |--------|--------|--------------|
   | A | [First suggested answer] | [What this means for the feature] |
   | B | [Second suggested answer] | [What this means for the feature] |
   | C | [Third suggested answer] | [What this means for the feature] |
   | Custom | Provide your own answer | [Explain how to provide custom input] |

   **Your choice**: _[Wait for user response]_
   ```
        4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
           - Use consistent spacing with pipes aligned
           - Each cell should have spaces around content: `| Content |` not `|Content|`
           - Header separator must have at least 3 dashes: `|--------|`
           - Test that the table renders correctly in markdown preview
        5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
        6. Present all questions together before waiting for responses
        7. Wait for user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
        8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
        9. Re-run validation after all clarifications are resolved
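As a rough illustration, the combined response format from step 7 ("Q1: A, Q2: Custom - [details], Q3: B") can be parsed mechanically. This sketch is not part of the command itself; the function name and splitting strategy are illustrative assumptions:

```python
import re


def parse_choices(response: str) -> dict[str, str]:
    """Split a combined reply such as 'Q1: A, Q2: Custom - details, Q3: B'
    into a {question: answer} mapping. Custom answers may contain commas,
    so we split on the 'Qn:' labels rather than on commas."""
    parts = re.split(r",?\s*\b(Q\d+):\s*", response)
    # parts[0] is any leading text; then labels and answers alternate
    labels, answers = parts[1::2], parts[2::2]
    return {label: answer.strip() for label, answer in zip(labels, answers)}
```

With the example reply above, `parse_choices` keeps the full custom answer intact even though it contains spaces and punctuation.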
   d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status

7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).

**NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.
## General Guidelines

### Quick Guidelines

- Focus on **WHAT** users need and **WHY**.
- Avoid **HOW** to implement (no tech stack, APIs, code structure).
- Write for business stakeholders, not developers.
- DO NOT create any checklists that are embedded in the spec. That will be a separate command.
### Section Requirements

- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave as "N/A")
### For AI Generation

When creating this spec from a user prompt:

1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
2. **Document assumptions**: Record reasonable defaults in the Assumptions section
3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:
   - Significantly impact feature scope or user experience
   - Have multiple reasonable interpretations with different implications
   - Lack any reasonable default
4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
6. **Common areas needing clarification** (only if no reasonable default exists):
   - Feature scope and boundaries (include/exclude specific use cases)
   - User types and permissions (if multiple conflicting interpretations possible)
   - Security/compliance requirements (when legally/financially significant)
**Examples of reasonable defaults** (don't ask about these):

- Data retention: Industry-standard practices for the domain
- Performance targets: Standard web/mobile app expectations unless specified
- Error handling: User-friendly messages with appropriate fallbacks
- Authentication method: Standard session-based or OAuth2 for web apps
- Integration patterns: RESTful APIs unless specified otherwise
### Success Criteria Guidelines

Success criteria must be:

1. **Measurable**: Include specific metrics (time, percentage, count, rate)
2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
3. **User-focused**: Describe outcomes from user/business perspective, not system internals
4. **Verifiable**: Can be tested/validated without knowing implementation details

**Good examples**:

- "Users can complete checkout in under 3 minutes"
- "System supports 10,000 concurrent users"
- "95% of searches return results in under 1 second"
- "Task completion rate improves by 40%"

**Bad examples** (implementation-focused):

- "API response time is under 200ms" (too technical, use "Users see results instantly")
- "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)
137
.claude/commands/speckit.tasks.md
Normal file
@ -0,0 +1,137 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
handoffs:
  - label: Analyze For Consistency
    agent: speckit.analyze
    prompt: Run a project analysis for consistency
    send: true
  - label: Implement Project
    agent: speckit.implement
    prompt: Start the implementation in phases
    send: true
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline
1. **Setup**: Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g., 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Load design documents**: Read from FEATURE_DIR:
   - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
   - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
   - Note: Not all projects have all documents. Generate tasks based on what's available.
3. **Execute task generation workflow**:
   - Load plan.md and extract tech stack, libraries, project structure
   - Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
   - If data-model.md exists: Extract entities and map to user stories
   - If contracts/ exists: Map endpoints to user stories
   - If research.md exists: Extract decisions for setup tasks
   - Generate tasks organized by user story (see Task Generation Rules below)
   - Generate dependency graph showing user story completion order
   - Create parallel execution examples per user story
   - Validate task completeness (each user story has all needed tasks, independently testable)
4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as structure, fill with:
   - Correct feature name from plan.md
   - Phase 1: Setup tasks (project initialization)
   - Phase 2: Foundational tasks (blocking prerequisites for all user stories)
   - Phase 3+: One phase per user story (in priority order from spec.md)
   - Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
   - Final Phase: Polish & cross-cutting concerns
   - All tasks must follow the strict checklist format (see Task Generation Rules below)
   - Clear file paths for each task
   - Dependencies section showing story completion order
   - Parallel execution examples per story
   - Implementation strategy section (MVP first, incremental delivery)
5. **Report**: Output path to generated tasks.md and summary:
   - Total task count
   - Task count per user story
   - Parallel opportunities identified
   - Independent test criteria for each story
   - Suggested MVP scope (typically just User Story 1)
   - Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)

Context for task generation: $ARGUMENTS

The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.

## Task Generation Rules

**CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.

**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach.
### Checklist Format (REQUIRED)

Every task MUST strictly follow this format:

```text
- [ ] [TaskID] [P?] [Story?] Description with file path
```

**Format Components**:

1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox)
2. **Task ID**: Sequential number (T001, T002, T003...) in execution order
3. **[P] marker**: Include ONLY if task is parallelizable (different files, no dependencies on incomplete tasks)
4. **[Story] label**: REQUIRED for user story phase tasks only
   - Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
   - Setup phase: NO story label
   - Foundational phase: NO story label
   - User Story phases: MUST have story label
   - Polish phase: NO story label
5. **Description**: Clear action with exact file path

**Examples**:

- ✅ CORRECT: `- [ ] T001 Create project structure per implementation plan`
- ✅ CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py`
- ✅ CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py`
- ✅ CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py`
- ❌ WRONG: `- [ ] Create User model` (missing ID and Story label)
- ❌ WRONG: `T001 [US1] Create model` (missing checkbox)
- ❌ WRONG: `- [ ] [US1] Create User model` (missing Task ID)
- ❌ WRONG: `- [ ] T001 [US1] Create model` (missing file path)
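The strict line format above lends itself to a mechanical check. The following is an illustrative sketch, not part of the command; note a regex cannot verify that the description really contains a file path, so that rule still needs a separate check:

```python
import re

# One task per line, strictly: "- [ ] T001 [P] [US1] Description with file path"
# ([P] and [US#] are optional; the order of the markers is fixed)
TASK_RE = re.compile(
    r"^- \[ \] "        # checkbox
    r"T\d{3} "          # sequential task ID
    r"(?:\[P\] )?"      # optional parallelizable marker
    r"(?:\[US\d+\] )?"  # optional user-story label
    r"\S.*$"            # non-empty description
)


def is_valid_task(line: str) -> bool:
    return bool(TASK_RE.match(line))
```

Run against the CORRECT/WRONG examples above, the first three WRONG lines fail the pattern, while the missing-file-path case passes the regex and must be caught by review.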
### Task Organization

1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
   - Each user story (P1, P2, P3...) gets its own phase
   - Map all related components to their story:
     - Models needed for that story
     - Services needed for that story
     - Endpoints/UI needed for that story
     - If tests requested: Tests specific to that story
   - Mark story dependencies (most stories should be independent)

2. **From Contracts**:
   - Map each contract/endpoint to the user story it serves
   - If tests requested: Each contract → contract test task [P] before implementation in that story's phase

3. **From Data Model**:
   - Map each entity to the user story(ies) that need it
   - If entity serves multiple stories: Put in earliest story or Setup phase
   - Relationships → service layer tasks in appropriate story phase

4. **From Setup/Infrastructure**:
   - Shared infrastructure → Setup phase (Phase 1)
   - Foundational/blocking tasks → Foundational phase (Phase 2)
   - Story-specific setup → within that story's phase
### Phase Structure

- **Phase 1**: Setup (project initialization)
- **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories)
- **Phase 3+**: User Stories in priority order (P1, P2, P3...)
  - Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
  - Each phase should be a complete, independently testable increment
- **Final Phase**: Polish & Cross-Cutting Concerns
30
.claude/commands/speckit.taskstoissues.md
Normal file
@ -0,0 +1,30 @@
---
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
tools: ['github/github-mcp-server/issue_write']
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline
1. Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g., 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
1. From the executed script, extract the path to **tasks**.
1. Get the Git remote by running:

   ```bash
   git config --get remote.origin.url
   ```

   > [!CAUTION]
   > ONLY PROCEED TO NEXT STEPS IF THE REMOTE IS A GITHUB URL

1. For each task in the list, use the GitHub MCP server to create a new issue in the repository that is representative of the Git remote.

   > [!CAUTION]
   > UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL
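The GitHub-remote gate above amounts to matching the remote URL and extracting the owner/repo pair the issues should go to. A hedged sketch of that check (function name and regex are illustrative, not part of the command):

```python
import re

# Accepts both HTTPS and SSH GitHub remotes, with or without a ".git" suffix
GITHUB_RE = re.compile(
    r"^(?:https://github\.com/|git@github\.com:)"
    r"(?P<owner>[^/]+)/(?P<repo>[^/]+?)(?:\.git)?$"
)


def github_owner_repo(remote_url: str):
    """Return (owner, repo) if the remote is a GitHub URL, else None."""
    m = GITHUB_RE.match(remote_url.strip())
    return (m["owner"], m["repo"]) if m else None
```

A non-GitHub remote yields `None`, which is the signal to stop before creating any issues.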
403
.claude/system-prompt.md
Normal file
@ -0,0 +1,403 @@
# Pygentic-AI Multi-Agent Orchestration System

## Core Identity

You are an AI assistant with multi-agent orchestration capabilities working on the **Pygentic-AI** project. You dynamically activate domain-specific personas and coordinate with MCP servers to deliver comprehensive solutions.

---

## Multi-Persona System

Activate the appropriate persona(s) based on the task context. You may combine multiple personas for complex tasks.
### 🏗️ Architect Persona

**Activation**: System design, architecture decisions, tech stack evaluation, scaling concerns

**Expertise**:
- FastAPI application architecture and async patterns
- Celery distributed task queue design
- Database schema design and optimization
- AI agent orchestration and tool use patterns
- Microservices and API design
- Performance optimization and caching strategies

**Patterns**:
- Always consider scalability and maintainability
- Prefer async/await patterns for I/O operations
- Use Pydantic for data validation
- Design for testability and observability
- Follow 12-factor app principles

---
### 🎨 Frontend Persona

**Activation**: UI/UX, styling, accessibility, user interactions

**Expertise**:
- SCSS modular architecture (7-1 pattern)
- BEM naming conventions
- Jinjax component-based templating
- HTMX progressive enhancement
- WCAG 2.1 AA accessibility compliance
- Responsive design with mobile-first approach
- CSS custom properties for theming

**Patterns**:
- Maintain WCAG 2.1 AA accessibility standards
- Use semantic HTML and ARIA attributes
- Compile SCSS with `just scss` after changes
- Test keyboard navigation and screen readers
- Follow established design system in `_variables.scss`
- Progressive enhancement over graceful degradation

**Key Files**:
- `src/frontend/scss/_*.scss` - SCSS partials
- `src/frontend/templates/` - Jinjax components
- `src/frontend/static/js/app.js` - Client-side logic

---
### ⚙️ Backend Persona

**Activation**: API endpoints, business logic, database, AI agents, task processing

**Expertise**:
- FastAPI routing and dependency injection
- Celery task definitions and workflows
- SQLAlchemy models and queries
- AI agent development (Claude, GPT)
- Tool-augmented generation patterns
- Async task orchestration
- Error handling and logging

**Patterns**:
- Use async endpoints for I/O-bound operations
- Implement proper error handling with structured responses
- Log important events with proper severity levels
- Validate all inputs with Pydantic models
- Use Celery for long-running tasks (>5 seconds)
- Implement retry logic for external API calls

**Key Files**:
- `src/backend/core/` - Business logic and AI agents
- `src/backend/server/` - FastAPI routes
- `src/backend/db/` - Database models
- `src/backend/settings/` - Configuration management

---
### 🔒 Security Persona

**Activation**: Authentication, authorization, secrets, input validation, security vulnerabilities

**Expertise**:
- Secrets management and environment variables
- Input validation and sanitization
- SQL injection prevention
- XSS and CSRF protection
- Secure API design
- Dependency vulnerability scanning
- HTTPS and certificate management

**Patterns**:
- Never commit secrets to version control
- Use `.env.example` as template
- Validate and sanitize all user inputs
- Use parameterized queries (SQLAlchemy handles this)
- Implement rate limiting on public endpoints
- Run `just security` for vulnerability scanning
- Follow OWASP Top 10 guidelines

**Security Checklist**:
- [ ] No hardcoded credentials
- [ ] Input validation with Pydantic
- [ ] SQL injection protection via ORM
- [ ] XSS prevention in templates
- [ ] HTTPS enforced in production
- [ ] Environment-based secrets management

---
### 🚀 DevOps Persona

**Activation**: Docker, CI/CD, deployment, monitoring, infrastructure

**Expertise**:
- Docker multi-stage builds
- Docker Compose orchestration
- GitHub Actions CI/CD pipelines
- Komodo deployment workflows
- Traefik reverse proxy configuration
- Health checks and monitoring
- Resource limits and optimization

**Patterns**:
- Use environment variables for configuration
- Implement health checks in all services
- Tag images with branch + date
- Use `.justfile` for consistent commands
- Monitor resource usage and set limits
- Implement graceful shutdown handling

**Key Files**:
- `Dockerfile` - Multi-stage image build
- `compose.yaml` - Service orchestration
- `.github/workflows/` - CI/CD pipelines
- `justfile` - Development workflows

---
## MCP Server Integration

Route tasks to appropriate MCP servers based on complexity and domain:

### Sequential MCP

**Use for**: Complex multi-step analysis, systematic execution planning
- Long-running workflows
- Multi-dependency task chains
- Architectural planning sessions

### Context7 MCP

**Use for**: Framework-specific patterns, best practices
- FastAPI patterns
- Celery task patterns
- SQLAlchemy optimization

### Magic MCP

**Use for**: UI/UX coordination, design system tasks
- Component design
- Accessibility improvements
- CSS architecture

### Playwright MCP

**Use for**: End-to-end testing, browser automation
- User flow testing
- Visual regression testing
- Performance testing

### Morphllm MCP

**Use for**: Large-scale transformations, pattern-based optimization
- Code refactoring at scale
- Migration scripts
- Bulk updates

### Serena MCP

**Use for**: Cross-session persistence, project memory
- Task tracking across sessions
- Long-term project goals
- Architecture decision records

---
## Task Coordination Patterns

### Simple Tasks (Single Persona)

1. Activate appropriate persona
2. Execute task with domain expertise
3. Validate results
4. Update relevant documentation

### Complex Tasks (Multi-Persona)

1. **Analyze**: Break down into domain-specific subtasks
2. **Delegate**: Activate relevant personas
3. **Coordinate**: Execute in proper sequence
4. **Integrate**: Combine results
5. **Validate**: Cross-domain quality checks

### Example: Adding Authentication

```text
🏗️ Architect: Design auth architecture, choose strategy
🔒 Security: Implement secure password hashing, session management
⚙️ Backend: Create FastAPI auth endpoints and middleware
🎨 Frontend: Build login/logout UI components
🚀 DevOps: Update Docker secrets, environment variables
```

---
## Workflow Automation (justfile)

Always use `justfile` recipes instead of raw commands:

### Development

```bash
just setup        # Initialize project
just dev          # Start dev server
just celery       # Start Celery worker
just scss-watch   # Auto-compile SCSS
```

### Building & Testing

```bash
just build [tag]  # Build Docker image
just test         # Run test suite
just lint         # Run linters
just check        # All quality checks
```

### Deployment

```bash
just deploy [tag] # Deploy with specific tag
just deploy-dev   # Deploy dev environment
just deploy-main  # Deploy production
```

### Docker Operations

```bash
just up           # Start services
just down         # Stop services
just logs-web     # View web logs
just health       # Check service health
just clean        # Remove containers
```

---
## Code Quality Standards

### Python (Backend)

- **Formatter**: Black
- **Linter**: Ruff
- **Type Hints**: Required for public APIs
- **Docstrings**: Google style for complex functions
- **Testing**: pytest with >80% coverage
- **Security**: Bandit for vulnerability scanning

### SCSS (Frontend)

- **Architecture**: 7-1 pattern with partials
- **Naming**: BEM convention
- **Compilation**: `just scss` or `just scss-watch`
- **Variables**: Use CSS custom properties for theming
- **Accessibility**: WCAG 2.1 AA compliance

### JavaScript

- **Style**: ES6+ vanilla JavaScript
- **Framework**: Minimal - prefer HTMX for dynamic updates
- **Formatting**: js-beautify via pre-commit hooks
- **Modules**: ES modules where appropriate

---
## Git Workflow

### Commit Message Format

```text
<type>: <subject>

<body>

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
```

**Types**: `feat`, `fix`, `refactor`, `docs`, `style`, `test`, `chore`
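Subject lines in this format can be linted mechanically, for example in a pre-commit hook. A small illustrative sketch (not an actual hook in this repo):

```python
import re

# Allowed commit types from the convention above
TYPES = ("feat", "fix", "refactor", "docs", "style", "test", "chore")
SUBJECT_RE = re.compile(rf"^(?:{'|'.join(TYPES)}): \S.*$")


def valid_subject(line: str) -> bool:
    """Check the first line of a commit message against '<type>: <subject>'."""
    return bool(SUBJECT_RE.match(line))
```

The check rejects both unknown types and a missing space after the colon.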
### Branch Strategy

- `main` - Production
- `dev_deploy` - Development/Staging
- `feature/*` - Feature development
- `*_deploy` - Auto-deploy branches

---
## Decision Framework

When making decisions, consider:

1. **Architecture**: Does it scale? Is it maintainable?
2. **Security**: Are there vulnerabilities? Proper validation?
3. **Performance**: Will it handle load? Proper caching?
4. **User Experience**: Is it accessible? Intuitive?
5. **Operations**: Easy to deploy? Monitor? Debug?
6. **Testing**: Is it testable? Covered by tests?

---
## Communication Patterns

### When Responding

1. **Identify** the domain(s) involved
2. **Activate** appropriate persona(s)
3. **Explain** the approach
4. **Execute** the task
5. **Validate** the results
6. **Document** key decisions

### When Proposing Changes

- Explain the "why" before the "how"
- Consider impact across all domains
- Provide migration path if breaking changes
- Reference justfile commands where applicable

---
## Project-Specific Patterns

### AI Agent Development

- Use Anthropic SDK for Claude integration
- Implement tool use for Reddit intelligence
- Validate outputs with Pydantic models
- Handle rate limits and retries
- Log all AI interactions for debugging

### Async Task Processing

- Celery tasks for operations >5 seconds
- Redis as message broker
- Store results in database
- Implement progress updates via polling
- Graceful failure handling with retries
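The retry guidance above (retries for external API calls, graceful failure handling) typically uses exponential backoff with jitter. A minimal stdlib sketch of that pattern, independent of the project's actual Celery configuration:

```python
import random
import time


def retry_with_backoff(fn, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(); on failure, retry with exponential backoff plus jitter.

    The `sleep` parameter is injectable so tests can skip real waiting.
    Re-raises the last exception once max_retries is exhausted.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise
            # delays grow 1s, 2s, 4s... with up to 0.5s of random jitter
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

In Celery itself this responsibility usually sits on the task via its retry options; the helper above just makes the backoff arithmetic explicit.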
### Frontend Interactivity

- HTMX for dynamic updates
- Progressive enhancement approach
- Client-side validation + server-side validation
- Loading states and error handling
- Accessibility-first implementation

---
## Common Scenarios

### Scenario: Add New Feature

1. 🏗️ **Architect**: Design feature architecture
2. ⚙️ **Backend**: Implement API endpoints and logic
3. 🎨 **Frontend**: Create UI components
4. 🔒 **Security**: Validate inputs and permissions
5. 🚀 **DevOps**: Update deployment configs if needed
6. ✅ Run `just test` and `just check`

### Scenario: Fix Bug

1. Identify affected domain(s)
2. Activate relevant persona(s)
3. Reproduce issue
4. Implement fix
5. Add regression test
6. Validate across domains

### Scenario: Improve Performance

1. 🏗️ **Architect**: Identify bottlenecks
2. ⚙️ **Backend**: Optimize queries, add caching
3. 🎨 **Frontend**: Minimize assets, lazy loading
4. 🚀 **DevOps**: Resource tuning, monitoring

---
## Initialization

When starting a new session:

1. Review CLAUDE.md for current project state
2. Check `justfile` for available commands
3. Verify environment with `just check-env`
4. Understand current branch and deployment status
5. Activate appropriate persona(s) for the task

---

## Remember

- **Use justfile**: Always prefer `just` commands over raw Docker/npm
- **Multi-persona**: Complex tasks require coordination across domains
- **Quality first**: Run `just check` before committing
- **Accessibility**: WCAG 2.1 AA is non-negotiable
- **Security**: Never commit secrets, always validate inputs
- **Documentation**: Update CLAUDE.md when architecture changes

---

**You are ready to assist with any aspect of the Pygentic-AI project!**
59
.env.example
Normal file
@ -0,0 +1,59 @@
# OpenAI Configuration
OPENAI_API_KEY=sk-proj-your-openai-api-key-here
OPENAI_MODEL=gpt-4o-mini
# OPENAI_MODEL=gpt-4o

# Anthropic Configuration
ANTHROPIC_API_KEY=sk-ant-your-anthropic-api-key-here

# Security
HTTPS_ONLY=False
SECRET_KEY=your-secret-key-here

# Local Database Configuration
LOCAL_DB_UN=pygentic_ai
LOCAL_DB_PW=your-local-db-password
LOCAL_DB_DB=fsecada_server
LOCAL_DB_HOST=localhost
LOCAL_DB_PORT=5433

# Cloud Database Configuration
CLOUD_DB_UN=pygentic_ai
CLOUD_DB_PW=your-cloud-db-password
CLOUD_DB_DB=fsecada_server
CLOUD_DB_HOST=pgserver.example.com
CLOUD_DB_PORT=5433

# Database Configuration
SQL_DIALECT=postgresql

# Application Environment
DEBUG=True
SERVER_ENV=dev

# Reddit API Configuration (for intelligence gathering)
REDDIT_MAX_INSIGHTS=5
REDDIT_MAX_INSIGHT_LENGTH=400
REDDIT_USER_AGENT=your-reddit-user-agent
REDDIT_CLIENT_ID=your-reddit-client-id
REDDIT_CLIENT_SECRET=your-reddit-client-secret
REDDIT_SUBREDDIT=python,kubernetes,devops,rust,selfhosted,algorithms,typescript,programming,webdev,strategy,DigitalMarketing,Entrepreneur

# Celery Configuration
CELERY_PORT=5052
FLOWER_USERNAME=admin
FLOWER_PASSWORD=your-flower-password-here

# Server Configuration
PORT=5051
INTERNAL_PORT=5051
WORKERS=4
TIMEOUT=120

# Docker Configuration (for compose.yaml)
IMAGE_TAG=main-latest
MEMORY_LIMIT=1024mb
MEMORY_RESERVATION=512mb
CELERY_MEMORY_LIMIT=512mb
CELERY_MEMORY_RESERVATION=256mb
DOCKER_HOST_IP=192.168.99.85
22
.github/workflows/docker-image.yml
vendored
@ -19,18 +19,18 @@ jobs:
    steps:
      - name: Get current date
        id: date
-       run: echo "::set-output name=date::$(date +'%Y-%m-%d')"
+       run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
      - name: Extract branch name
        shell: bash
        run: echo "branch=${GITHUB_HEAD_REF:-${GITHUB_REF#refs/heads/}}" >> $GITHUB_OUTPUT
        id: extract_branch
      - name: Set SSH Agent
        uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Sanitize branch name for Docker tag
        shell: bash
        run: echo "sanitized_branch=$(echo '${{ steps.extract_branch.outputs.branch }}' | sed 's/\//-/g')" >> $GITHUB_OUTPUT
        id: sanitize_branch
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
-     - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v3
        with:
@ -38,9 +38,9 @@
      - name: Build the Docker image
        uses: docker/build-push-action@v6
        with:
          ssh: |
            default=${{ env.SSH_AUTH_SOCK }}
          build-args: |
            GIT_BRANCH=${{ steps.extract_branch.outputs.branch }}
          context: .
          push: true
-         tags: s3docker.francissecada.com/ranked_jobs:${{ steps.extract_branch.outputs.branch }}.${{ steps.date.outputs.date }}
+         tags: |
+           s3docker.francissecada.com/pygentic_ai:${{ steps.sanitize_branch.outputs.sanitized_branch }}.${{ steps.date.outputs.date }}
+           s3docker.francissecada.com/pygentic_ai:${{ steps.sanitize_branch.outputs.sanitized_branch }}-latest
          pull: true
42 .github/workflows/komodo-deploy.yml (vendored, new file)
@@ -0,0 +1,42 @@
name: Komodo Deployment Trigger

on:
  workflow_run:
    workflows: ["Docker Image CI"]
    types:
      - completed
    branches: [ "**_deploy", "main" ]

jobs:
  trigger_komodo_deployment:
    runs-on: ubuntu-latest
    # Only run if the 'Docker Image CI' workflow was successful
    if: github.event.workflow_run.conclusion == 'success'
    steps:
      - name: Extract branch name from triggering workflow
        id: branch
        run: echo "BRANCH_NAME=$(echo ${{ github.event.workflow_run.head_branch }} | sed 's/\//-/g')" >> $GITHUB_OUTPUT

      - name: Trigger Komodo Stack Deployment
        env:
          # The webhook URL is constructed from secrets
          KOMODO_WEBHOOK_URL: "https://${{ secrets.KOMODO_HOST }}/listener/github/stack/${{ secrets.KOMODO_STACK_ID_OR_NAME }}/deploy"
        run: |
          # Get the branch name from the previous step's output
          branch_name="${{ steps.branch.outputs.BRANCH_NAME }}"

          # Construct a JSON payload containing the branch name
          # This tells Komodo which specific branch/tag to deploy
          payload=$(printf '{"ref":"%s"}' "$branch_name")

          # Generate the signature based on the payload
          signature=$(echo -n "$payload" | openssl dgst -sha256 -hmac "${{ secrets.KOMODO_WEBHOOK_SECRET }}" | sed 's/^.* //')

          # Send the request to the Komodo webhook with the payload
          echo "Triggering Pygentic-AI deployment for branch: $branch_name"
          curl -X POST \
            -H "Content-Type: application/json" \
            -H "X-Hub-Signature-256: sha256=$signature" \
            -d "$payload" \
            --fail-with-body \
            "${KOMODO_WEBHOOK_URL}"
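The signature the workflow computes with `openssl dgst -sha256 -hmac` is a standard GitHub-style HMAC-SHA256 webhook signature. A stdlib sketch of the same computation — the secret and payload values here are illustrative:

```python
import hashlib
import hmac
import json

def sign_payload(secret: str, payload: str) -> str:
    """Hex HMAC-SHA256 of the payload, as sent in X-Hub-Signature-256."""
    digest = hmac.new(secret.encode(), payload.encode(), hashlib.sha256)
    return "sha256=" + digest.hexdigest()

# Mirrors the workflow's printf '{"ref":"%s"}' payload construction.
payload = json.dumps({"ref": "prod_deploy"}, separators=(",", ":"))
signature = sign_payload("example-webhook-secret", payload)
print(signature)  # sha256= prefix followed by 64 hex characters
```

A receiver verifies the request by recomputing the digest over the raw body with its copy of the secret and comparing via `hmac.compare_digest`.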
@@ -1 +0,0 @@
-3.13.1
183 .specify/memory/constitution.md (new file)
@@ -0,0 +1,183 @@
<!--
Sync Impact Report:
Version: 1.0.0 (Initial ratification)
Modified Principles: N/A (initial version)

Added Sections:
- Core Principles (6 principles: TDD, API-First, Async, Observability, Type Safety, Security)
- Development Standards (Code Quality, Testing Requirements, Architecture Patterns)
- Quality Assurance (Pre-Commit, Code Review, Deployment Gates)
- Governance (Amendment Process, Versioning, Compliance, Complexity Justification)

Removed Sections: None (initial version)

Templates Updated:
✅ plan-template.md - Constitution Check section expanded with all 6 principles as checkboxes
✅ spec-template.md - Added Non-Functional Requirements section with constitution-mandated NFRs
✅ tasks-template.md - Updated all phases with constitution-aligned task structure:
  - Phase 1: Added pre-commit hooks, security scanning
  - Phase 2: Expanded with constitution-mandated infrastructure (28 tasks covering all principles)
  - User Story phases: Added TDD enforcement, type safety, observability, security tasks
  - Phase N: Added constitution compliance verification checklist

Command Files: No command files exist yet in .specify/templates/commands/

Follow-up TODOs: None - All templates synchronized with constitution v1.0.0
-->

# Pygentic-AI Constitution

## Core Principles

### I. Test-Driven Development (NON-NEGOTIABLE)

Tests MUST be written before implementation. The TDD cycle is strictly enforced:

1. Write tests that define expected behavior
2. Verify tests fail (Red)
3. Implement minimal code to pass tests (Green)
4. Refactor while maintaining passing tests (Refactor)

**Rationale**: TDD ensures code correctness, maintainability, and prevents regressions. Pre-written tests serve as living documentation and enable confident refactoring.
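As a minimal illustration of the Red–Green cycle above — the `slugify` function is a hypothetical example, not code from the project:

```python
# Red: the test is written first; it fails because slugify does not exist yet.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Green: the minimal implementation that makes the test pass.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

test_slugify_replaces_spaces_with_hyphens()  # now passes
```

The Refactor step would then improve the implementation (e.g. collapsing repeated whitespace) while this test keeps passing.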
### II. API-First Design

All features MUST expose well-defined APIs with clear contracts. API specifications MUST be documented before implementation, including:

- Request/response schemas using Pydantic models
- Error handling and status codes
- Authentication/authorization requirements
- Rate limiting and performance characteristics

**Rationale**: API-first design enables parallel development of frontend/backend, facilitates integration testing, and ensures consistent interfaces across the system.
### III. Asynchronous Architecture

The system MUST leverage asynchronous patterns for I/O-bound operations:

- FastAPI async endpoints for web APIs
- Celery for long-running background tasks
- Redis for caching and task queuing
- Async database operations with asyncpg/aiomysql

**Rationale**: AI model invocations and external API calls are I/O-bound. Async patterns maximize throughput, reduce latency, and improve resource utilization for concurrent requests.
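A stdlib sketch of why async helps for I/O-bound work: three simulated 0.1-second calls complete in roughly 0.1 seconds total rather than 0.3. The `fetch` coroutine is a stand-in for a model invocation or external API call:

```python
import asyncio
import time

async def fetch(name: str) -> str:
    # Stand-in for an I/O-bound call (model inference, external API).
    await asyncio.sleep(0.1)
    return f"{name}: done"

async def main() -> list[str]:
    # All three "requests" run concurrently on one event loop.
    return await asyncio.gather(fetch("a"), fetch("b"), fetch("c"))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results)  # → ['a: done', 'b: done', 'c: done']
print(f"{elapsed:.2f}s")  # roughly 0.1s, not 0.3s
```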
### IV. Observability and Monitoring

Every component MUST implement comprehensive observability:

- Structured logging with contextual information (logfire, loguru)
- OpenTelemetry instrumentation for distributed tracing
- Metrics collection for performance monitoring (Prometheus)
- Error tracking with detailed context

**Rationale**: AI systems are inherently complex and opaque. Observability is essential for debugging, performance optimization, and understanding system behavior in production.
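A stdlib sketch of structured, contextual logging in the spirit of this principle — logfire and loguru provide richer versions of the same idea; the logger name and `request_id` field are illustrative:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object with contextual fields."""
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
            "request_id": getattr(record, "request_id", None),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("pygentic")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("analysis started", extra={"request_id": "req-123"})
# emits: {"level": "INFO", "message": "analysis started", "request_id": "req-123"}
```

Because each line is machine-parseable, log aggregators can filter and correlate by `request_id` across services.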
### V. Type Safety and Validation

Code MUST be type-annotated and validated at runtime:

- Pydantic models for all data structures
- Type hints for all function signatures
- Runtime validation at API boundaries
- Static type checking with mypy (when enabled)

**Rationale**: Type safety catches errors early, improves IDE support, serves as documentation, and prevents invalid data from propagating through the system.
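The constitution mandates Pydantic models; as a stdlib-only sketch of the same construct-time-validation idea, using a dataclass — the `AnalysisRequest` model and its fields are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AnalysisRequest:
    """Validated at construction time, in the spirit of a Pydantic model."""
    url: str
    max_insights: int = 5

    def __post_init__(self) -> None:
        if not self.url.startswith(("http://", "https://")):
            raise ValueError(f"url must be http(s), got: {self.url!r}")
        if not 1 <= self.max_insights <= 50:
            raise ValueError("max_insights must be between 1 and 50")

req = AnalysisRequest(url="https://example.com", max_insights=3)
print(req.max_insights)  # → 3

try:
    AnalysisRequest(url="ftp://example.com")
except ValueError as exc:
    print("rejected:", exc)
```

Pydantic adds type coercion, nested models, and JSON-schema generation on top of this, which is why the API boundary uses it.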
### VI. Security by Default

Security MUST be built into every layer:

- No credentials in code (use environment variables)
- Input validation and sanitization at all entry points
- Authentication and authorization for all protected endpoints
- SQL injection prevention through parameterized queries
- Regular security audits (Bandit for Python)

**Rationale**: AI services handle sensitive data and API keys. A single security vulnerability can compromise the entire system and user data.
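To illustrate the parameterized-queries point with stdlib `sqlite3` (the table and data are illustrative; the project itself targets PostgreSQL/MySQL drivers, which use the same placeholder mechanism):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# Untrusted input that would subvert a string-formatted query:
user_input = "alice' OR '1'='1"

# Parameterized query: the driver binds the value; it is never parsed as SQL.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # → [] — the injection string matches no row

rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", ("alice",)
).fetchall()
print(rows)  # → [('alice',)]
```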
## Development Standards

### Code Quality

- **Formatting**: Black (line length: 80), isort for imports
- **Linting**: Ruff with enabled rules (E, F, B), max complexity: 10
- **Documentation**: Docstrings for all public APIs, inline comments for complex logic
- **Error Handling**: Explicit exception handling with meaningful error messages
- **Dependencies**: Use `uv` for dependency management, pin versions in requirements

### Testing Requirements

- **Unit Tests**: Test individual functions/methods in isolation
- **Integration Tests**: Test component interactions (API + database, service + external APIs)
- **Contract Tests**: Verify API contracts match specifications
- **Coverage**: Aim for >80% code coverage, 100% for critical paths
- **Test Independence**: Tests MUST be independent and runnable in any order

### Architecture Patterns

- **Service Layer**: Business logic isolated from API/database layers
- **Repository Pattern**: Database access abstracted behind repositories (using FastCRUD)
- **Dependency Injection**: FastAPI's dependency injection for services and configurations
- **Background Tasks**: Long-running operations delegated to Celery workers
- **Caching Strategy**: Redis caching for expensive operations (AI model results, API responses)

## Quality Assurance

### Pre-Commit Requirements

All commits MUST pass:

1. Code formatting (Black, isort)
2. Linting (Ruff)
3. Type checking (if enabled)
4. Security scan (Bandit)
5. All tests passing

Use pre-commit hooks to enforce these checks automatically.

### Code Review Standards

All changes MUST be reviewed before merge:

- Verify tests exist and pass
- Check for security vulnerabilities
- Validate error handling and edge cases
- Ensure observability (logging, metrics)
- Confirm documentation is updated
- Validate performance impact for critical paths

### Deployment Gates

Production deployments MUST satisfy:

- All tests passing in CI/CD pipeline
- Security scan clean (no high/critical vulnerabilities)
- Performance benchmarks within acceptable thresholds
- Database migrations tested and reversible
- Rollback plan documented

## Governance

### Amendment Process

Constitution amendments require:

1. Proposal with clear rationale and impact assessment
2. Discussion period (minimum 3 days for minor, 7 days for major changes)
3. Team consensus or majority vote
4. Documentation of decision and migration plan
5. Version bump according to semantic versioning

### Versioning Policy

- **MAJOR**: Backward-incompatible changes to core principles or mandatory requirements
- **MINOR**: New principles added or existing principles materially expanded
- **PATCH**: Clarifications, wording improvements, non-semantic refinements

### Compliance

- All code reviews MUST verify compliance with this constitution
- Violations MUST be justified with documented rationale
- Technical debt MUST be tracked and addressed systematically
- Constitution supersedes all other development practices

### Complexity Justification

Any deviation from YAGNI (You Aren't Gonna Need It) MUST be justified:

- Document the specific problem being solved
- Explain why simpler alternatives are insufficient
- Provide measurable success criteria
- Plan for future simplification if possible

**Version**: 1.0.0 | **Ratified**: 2025-01-16 | **Last Amended**: 2026-02-02
148 .specify/scripts/powershell/check-prerequisites.ps1 (new file)
@@ -0,0 +1,148 @@
#!/usr/bin/env pwsh

# Consolidated prerequisite checking script (PowerShell)
#
# This script provides unified prerequisite checking for Spec-Driven Development workflow.
# It replaces the functionality previously spread across multiple scripts.
#
# Usage: ./check-prerequisites.ps1 [OPTIONS]
#
# OPTIONS:
#   -Json           Output in JSON format
#   -RequireTasks   Require tasks.md to exist (for implementation phase)
#   -IncludeTasks   Include tasks.md in AVAILABLE_DOCS list
#   -PathsOnly      Only output path variables (no validation)
#   -Help, -h       Show help message

[CmdletBinding()]
param(
    [switch]$Json,
    [switch]$RequireTasks,
    [switch]$IncludeTasks,
    [switch]$PathsOnly,
    [switch]$Help
)

$ErrorActionPreference = 'Stop'

# Show help if requested
if ($Help) {
    Write-Output @"
Usage: check-prerequisites.ps1 [OPTIONS]

Consolidated prerequisite checking for Spec-Driven Development workflow.

OPTIONS:
  -Json           Output in JSON format
  -RequireTasks   Require tasks.md to exist (for implementation phase)
  -IncludeTasks   Include tasks.md in AVAILABLE_DOCS list
  -PathsOnly      Only output path variables (no prerequisite validation)
  -Help, -h       Show this help message

EXAMPLES:
  # Check task prerequisites (plan.md required)
  .\check-prerequisites.ps1 -Json

  # Check implementation prerequisites (plan.md + tasks.md required)
  .\check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks

  # Get feature paths only (no validation)
  .\check-prerequisites.ps1 -PathsOnly
"@
    exit 0
}

# Source common functions
. "$PSScriptRoot/common.ps1"

# Get feature paths and validate branch
$paths = Get-FeaturePathsEnv

if (-not (Test-FeatureBranch -Branch $paths.CURRENT_BRANCH -HasGit:$paths.HAS_GIT)) {
    exit 1
}

# If paths-only mode, output paths and exit (support combined -Json -PathsOnly)
if ($PathsOnly) {
    if ($Json) {
        [PSCustomObject]@{
            REPO_ROOT    = $paths.REPO_ROOT
            BRANCH       = $paths.CURRENT_BRANCH
            FEATURE_DIR  = $paths.FEATURE_DIR
            FEATURE_SPEC = $paths.FEATURE_SPEC
            IMPL_PLAN    = $paths.IMPL_PLAN
            TASKS        = $paths.TASKS
        } | ConvertTo-Json -Compress
    } else {
        Write-Output "REPO_ROOT: $($paths.REPO_ROOT)"
        Write-Output "BRANCH: $($paths.CURRENT_BRANCH)"
        Write-Output "FEATURE_DIR: $($paths.FEATURE_DIR)"
        Write-Output "FEATURE_SPEC: $($paths.FEATURE_SPEC)"
        Write-Output "IMPL_PLAN: $($paths.IMPL_PLAN)"
        Write-Output "TASKS: $($paths.TASKS)"
    }
    exit 0
}

# Validate required directories and files
if (-not (Test-Path $paths.FEATURE_DIR -PathType Container)) {
    Write-Output "ERROR: Feature directory not found: $($paths.FEATURE_DIR)"
    Write-Output "Run /speckit.specify first to create the feature structure."
    exit 1
}

if (-not (Test-Path $paths.IMPL_PLAN -PathType Leaf)) {
    Write-Output "ERROR: plan.md not found in $($paths.FEATURE_DIR)"
    Write-Output "Run /speckit.plan first to create the implementation plan."
    exit 1
}

# Check for tasks.md if required
if ($RequireTasks -and -not (Test-Path $paths.TASKS -PathType Leaf)) {
    Write-Output "ERROR: tasks.md not found in $($paths.FEATURE_DIR)"
    Write-Output "Run /speckit.tasks first to create the task list."
    exit 1
}

# Build list of available documents
$docs = @()

# Always check these optional docs
if (Test-Path $paths.RESEARCH) { $docs += 'research.md' }
if (Test-Path $paths.DATA_MODEL) { $docs += 'data-model.md' }

# Check contracts directory (only if it exists and has files)
if ((Test-Path $paths.CONTRACTS_DIR) -and (Get-ChildItem -Path $paths.CONTRACTS_DIR -ErrorAction SilentlyContinue | Select-Object -First 1)) {
    $docs += 'contracts/'
}

if (Test-Path $paths.QUICKSTART) { $docs += 'quickstart.md' }

# Include tasks.md if requested and it exists
if ($IncludeTasks -and (Test-Path $paths.TASKS)) {
    $docs += 'tasks.md'
}

# Output results
if ($Json) {
    # JSON output
    [PSCustomObject]@{
        FEATURE_DIR    = $paths.FEATURE_DIR
        AVAILABLE_DOCS = $docs
    } | ConvertTo-Json -Compress
} else {
    # Text output
    Write-Output "FEATURE_DIR:$($paths.FEATURE_DIR)"
    Write-Output "AVAILABLE_DOCS:"

    # Show status of each potential document
    Test-FileExists -Path $paths.RESEARCH -Description 'research.md' | Out-Null
    Test-FileExists -Path $paths.DATA_MODEL -Description 'data-model.md' | Out-Null
    Test-DirHasFiles -Path $paths.CONTRACTS_DIR -Description 'contracts/' | Out-Null
    Test-FileExists -Path $paths.QUICKSTART -Description 'quickstart.md' | Out-Null

    if ($IncludeTasks) {
        Test-FileExists -Path $paths.TASKS -Description 'tasks.md' | Out-Null
    }
}
137 .specify/scripts/powershell/common.ps1 (new file)
@@ -0,0 +1,137 @@
#!/usr/bin/env pwsh
# Common PowerShell functions analogous to common.sh

function Get-RepoRoot {
    try {
        $result = git rev-parse --show-toplevel 2>$null
        if ($LASTEXITCODE -eq 0) {
            return $result
        }
    } catch {
        # Git command failed
    }

    # Fall back to script location for non-git repos
    return (Resolve-Path (Join-Path $PSScriptRoot "../../..")).Path
}

function Get-CurrentBranch {
    # First check if SPECIFY_FEATURE environment variable is set
    if ($env:SPECIFY_FEATURE) {
        return $env:SPECIFY_FEATURE
    }

    # Then check git if available
    try {
        $result = git rev-parse --abbrev-ref HEAD 2>$null
        if ($LASTEXITCODE -eq 0) {
            return $result
        }
    } catch {
        # Git command failed
    }

    # For non-git repos, try to find the latest feature directory
    $repoRoot = Get-RepoRoot
    $specsDir = Join-Path $repoRoot "specs"

    if (Test-Path $specsDir) {
        $latestFeature = ""
        $highest = 0

        Get-ChildItem -Path $specsDir -Directory | ForEach-Object {
            if ($_.Name -match '^(\d{3})-') {
                $num = [int]$matches[1]
                if ($num -gt $highest) {
                    $highest = $num
                    $latestFeature = $_.Name
                }
            }
        }

        if ($latestFeature) {
            return $latestFeature
        }
    }

    # Final fallback
    return "main"
}

function Test-HasGit {
    try {
        git rev-parse --show-toplevel 2>$null | Out-Null
        return ($LASTEXITCODE -eq 0)
    } catch {
        return $false
    }
}

function Test-FeatureBranch {
    param(
        [string]$Branch,
        [bool]$HasGit = $true
    )

    # For non-git repos, we can't enforce branch naming but still provide output
    if (-not $HasGit) {
        Write-Warning "[specify] Warning: Git repository not detected; skipped branch validation"
        return $true
    }

    if ($Branch -notmatch '^[0-9]{3}-') {
        Write-Output "ERROR: Not on a feature branch. Current branch: $Branch"
        Write-Output "Feature branches should be named like: 001-feature-name"
        return $false
    }
    return $true
}

function Get-FeatureDir {
    param([string]$RepoRoot, [string]$Branch)
    Join-Path $RepoRoot "specs/$Branch"
}

function Get-FeaturePathsEnv {
    $repoRoot = Get-RepoRoot
    $currentBranch = Get-CurrentBranch
    $hasGit = Test-HasGit
    $featureDir = Get-FeatureDir -RepoRoot $repoRoot -Branch $currentBranch

    [PSCustomObject]@{
        REPO_ROOT      = $repoRoot
        CURRENT_BRANCH = $currentBranch
        HAS_GIT        = $hasGit
        FEATURE_DIR    = $featureDir
        FEATURE_SPEC   = Join-Path $featureDir 'spec.md'
        IMPL_PLAN      = Join-Path $featureDir 'plan.md'
        TASKS          = Join-Path $featureDir 'tasks.md'
        RESEARCH       = Join-Path $featureDir 'research.md'
        DATA_MODEL     = Join-Path $featureDir 'data-model.md'
        QUICKSTART     = Join-Path $featureDir 'quickstart.md'
        CONTRACTS_DIR  = Join-Path $featureDir 'contracts'
    }
}

function Test-FileExists {
    param([string]$Path, [string]$Description)
    if (Test-Path -Path $Path -PathType Leaf) {
        Write-Output "  ✓ $Description"
        return $true
    } else {
        Write-Output "  ✗ $Description"
        return $false
    }
}

function Test-DirHasFiles {
    param([string]$Path, [string]$Description)
    if ((Test-Path -Path $Path -PathType Container) -and (Get-ChildItem -Path $Path -ErrorAction SilentlyContinue | Where-Object { -not $_.PSIsContainer } | Select-Object -First 1)) {
        Write-Output "  ✓ $Description"
        return $true
    } else {
        Write-Output "  ✗ $Description"
        return $false
    }
}
283 .specify/scripts/powershell/create-new-feature.ps1 (new file)
@@ -0,0 +1,283 @@
#!/usr/bin/env pwsh
# Create a new feature
[CmdletBinding()]
param(
    [switch]$Json,
    [string]$ShortName,
    [int]$Number = 0,
    [switch]$Help,
    [Parameter(ValueFromRemainingArguments = $true)]
    [string[]]$FeatureDescription
)
$ErrorActionPreference = 'Stop'

# Show help if requested
if ($Help) {
    Write-Host "Usage: ./create-new-feature.ps1 [-Json] [-ShortName <name>] [-Number N] <feature description>"
    Write-Host ""
    Write-Host "Options:"
    Write-Host "  -Json              Output in JSON format"
    Write-Host "  -ShortName <name>  Provide a custom short name (2-4 words) for the branch"
    Write-Host "  -Number N          Specify branch number manually (overrides auto-detection)"
    Write-Host "  -Help              Show this help message"
    Write-Host ""
    Write-Host "Examples:"
    Write-Host "  ./create-new-feature.ps1 'Add user authentication system' -ShortName 'user-auth'"
    Write-Host "  ./create-new-feature.ps1 'Implement OAuth2 integration for API'"
    exit 0
}

# Check if feature description provided
if (-not $FeatureDescription -or $FeatureDescription.Count -eq 0) {
    Write-Error "Usage: ./create-new-feature.ps1 [-Json] [-ShortName <name>] <feature description>"
    exit 1
}

$featureDesc = ($FeatureDescription -join ' ').Trim()

# Resolve repository root. Prefer git information when available, but fall back
# to searching for repository markers so the workflow still functions in repositories that
# were initialized with --no-git.
function Find-RepositoryRoot {
    param(
        [string]$StartDir,
        [string[]]$Markers = @('.git', '.specify')
    )
    $current = Resolve-Path $StartDir
    while ($true) {
        foreach ($marker in $Markers) {
            if (Test-Path (Join-Path $current $marker)) {
                return $current
            }
        }
        $parent = Split-Path $current -Parent
        if ($parent -eq $current) {
            # Reached filesystem root without finding markers
            return $null
        }
        $current = $parent
    }
}

function Get-HighestNumberFromSpecs {
    param([string]$SpecsDir)

    $highest = 0
    if (Test-Path $SpecsDir) {
        Get-ChildItem -Path $SpecsDir -Directory | ForEach-Object {
            if ($_.Name -match '^(\d+)') {
                $num = [int]$matches[1]
                if ($num -gt $highest) { $highest = $num }
            }
        }
    }
    return $highest
}

function Get-HighestNumberFromBranches {
    param()

    $highest = 0
    try {
        $branches = git branch -a 2>$null
        if ($LASTEXITCODE -eq 0) {
            foreach ($branch in $branches) {
                # Clean branch name: remove leading markers and remote prefixes
                $cleanBranch = $branch.Trim() -replace '^\*?\s+', '' -replace '^remotes/[^/]+/', ''

                # Extract feature number if branch matches pattern ###-*
                if ($cleanBranch -match '^(\d+)-') {
                    $num = [int]$matches[1]
                    if ($num -gt $highest) { $highest = $num }
                }
            }
        }
    } catch {
        # If git command fails, return 0
        Write-Verbose "Could not check Git branches: $_"
    }
    return $highest
}

function Get-NextBranchNumber {
    param(
        [string]$SpecsDir
    )

    # Fetch all remotes to get latest branch info (suppress errors if no remotes)
    try {
        git fetch --all --prune 2>$null | Out-Null
    } catch {
        # Ignore fetch errors
    }

    # Get highest number from ALL branches (not just matching short name)
    $highestBranch = Get-HighestNumberFromBranches

    # Get highest number from ALL specs (not just matching short name)
    $highestSpec = Get-HighestNumberFromSpecs -SpecsDir $SpecsDir

    # Take the maximum of both
    $maxNum = [Math]::Max($highestBranch, $highestSpec)

    # Return next number
    return $maxNum + 1
}

function ConvertTo-CleanBranchName {
    param([string]$Name)

    return $Name.ToLower() -replace '[^a-z0-9]', '-' -replace '-{2,}', '-' -replace '^-', '' -replace '-$', ''
}

$fallbackRoot = (Find-RepositoryRoot -StartDir $PSScriptRoot)
if (-not $fallbackRoot) {
    Write-Error "Error: Could not determine repository root. Please run this script from within the repository."
    exit 1
}

try {
    $repoRoot = git rev-parse --show-toplevel 2>$null
    if ($LASTEXITCODE -eq 0) {
        $hasGit = $true
    } else {
        throw "Git not available"
    }
} catch {
    $repoRoot = $fallbackRoot
    $hasGit = $false
}

Set-Location $repoRoot

$specsDir = Join-Path $repoRoot 'specs'
New-Item -ItemType Directory -Path $specsDir -Force | Out-Null

# Function to generate branch name with stop word filtering and length filtering
function Get-BranchName {
    param([string]$Description)

    # Common stop words to filter out
    $stopWords = @(
        'i', 'a', 'an', 'the', 'to', 'for', 'of', 'in', 'on', 'at', 'by', 'with', 'from',
        'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had',
        'do', 'does', 'did', 'will', 'would', 'should', 'could', 'can', 'may', 'might', 'must', 'shall',
        'this', 'that', 'these', 'those', 'my', 'your', 'our', 'their',
        'want', 'need', 'add', 'get', 'set'
    )

    # Convert to lowercase and extract words (alphanumeric only)
    $cleanName = $Description.ToLower() -replace '[^a-z0-9\s]', ' '
    $words = $cleanName -split '\s+' | Where-Object { $_ }

    # Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in original)
    $meaningfulWords = @()
    foreach ($word in $words) {
        # Skip stop words
        if ($stopWords -contains $word) { continue }

        # Keep words that are length >= 3 OR appear as uppercase in original (likely acronyms)
        if ($word.Length -ge 3) {
            $meaningfulWords += $word
        } elseif ($Description -match "\b$($word.ToUpper())\b") {
            # Keep short words if they appear as uppercase in original (likely acronyms)
            $meaningfulWords += $word
        }
    }

    # If we have meaningful words, use first 3-4 of them
    if ($meaningfulWords.Count -gt 0) {
        $maxWords = if ($meaningfulWords.Count -eq 4) { 4 } else { 3 }
        $result = ($meaningfulWords | Select-Object -First $maxWords) -join '-'
        return $result
    } else {
        # Fallback to original logic if no meaningful words found
        $result = ConvertTo-CleanBranchName -Name $Description
        $fallbackWords = ($result -split '-') | Where-Object { $_ } | Select-Object -First 3
        return [string]::Join('-', $fallbackWords)
    }
}

# Generate branch name
if ($ShortName) {
    # Use provided short name, just clean it up
    $branchSuffix = ConvertTo-CleanBranchName -Name $ShortName
} else {
    # Generate from description with smart filtering
    $branchSuffix = Get-BranchName -Description $featureDesc
}

# Determine branch number
if ($Number -eq 0) {
    if ($hasGit) {
        # Check existing branches on remotes
        $Number = Get-NextBranchNumber -SpecsDir $specsDir
    } else {
        # Fall back to local directory check
        $Number = (Get-HighestNumberFromSpecs -SpecsDir $specsDir) + 1
    }
}

$featureNum = ('{0:000}' -f $Number)
$branchName = "$featureNum-$branchSuffix"

# GitHub enforces a 244-byte limit on branch names
# Validate and truncate if necessary
$maxBranchLength = 244
if ($branchName.Length -gt $maxBranchLength) {
    # Calculate how much we need to trim from suffix
    # Account for: feature number (3) + hyphen (1) = 4 chars
    $maxSuffixLength = $maxBranchLength - 4

    # Truncate suffix
    $truncatedSuffix = $branchSuffix.Substring(0, [Math]::Min($branchSuffix.Length, $maxSuffixLength))
    # Remove trailing hyphen if truncation created one
    $truncatedSuffix = $truncatedSuffix -replace '-$', ''

    $originalBranchName = $branchName
    $branchName = "$featureNum-$truncatedSuffix"

    Write-Warning "[specify] Branch name exceeded GitHub's 244-byte limit"
    Write-Warning "[specify] Original: $originalBranchName ($($originalBranchName.Length) bytes)"
    Write-Warning "[specify] Truncated to: $branchName ($($branchName.Length) bytes)"
}

if ($hasGit) {
    try {
        git checkout -b $branchName | Out-Null
    } catch {
        Write-Warning "Failed to create git branch: $branchName"
    }
} else {
    Write-Warning "[specify] Warning: Git repository not detected; skipped branch creation for $branchName"
}

$featureDir = Join-Path $specsDir $branchName
New-Item -ItemType Directory -Path $featureDir -Force | Out-Null

$template = Join-Path $repoRoot '.specify/templates/spec-template.md'
$specFile = Join-Path $featureDir 'spec.md'
if (Test-Path $template) {
    Copy-Item $template $specFile -Force
} else {
    New-Item -ItemType File -Path $specFile | Out-Null
}

# Set the SPECIFY_FEATURE environment variable for the current session
$env:SPECIFY_FEATURE = $branchName

if ($Json) {
    $obj = [PSCustomObject]@{
        BRANCH_NAME = $branchName
        SPEC_FILE   = $specFile
        FEATURE_NUM = $featureNum
        HAS_GIT     = $hasGit
    }
    $obj | ConvertTo-Json -Compress
} else {
    Write-Output "BRANCH_NAME: $branchName"
    Write-Output "SPEC_FILE: $specFile"
    Write-Output "FEATURE_NUM: $featureNum"
    Write-Output "HAS_GIT: $hasGit"
    Write-Output "SPECIFY_FEATURE environment variable set to: $branchName"
}
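The core of the branch-naming logic in `Get-BranchName` above — lowercase, strip stop words and very short words, join the first few meaningful words with hyphens — can be sketched in Python for comparison. This is an illustrative re-implementation with a reduced stop-word list, not code from the repository:

```python
import re

# Reduced stop-word list for illustration; the script's list is longer.
STOP_WORDS = {
    "i", "a", "an", "the", "to", "for", "of", "in", "on", "at", "by", "with",
    "from", "want", "need", "add", "get", "set", "this", "that", "my", "your",
}

def branch_suffix(description: str, max_words: int = 3) -> str:
    """Lowercase, drop stop words and words under 3 chars, join with '-'."""
    words = re.sub(r"[^a-z0-9\s]", " ", description.lower()).split()
    meaningful = [w for w in words if w not in STOP_WORDS and len(w) >= 3]
    return "-".join(meaningful[:max_words])

print(branch_suffix("Add user authentication system"))
# → user-authentication-system
```

Prefixed with a zero-padded feature number (`'{0:000}' -f $Number` in the script), this yields branch names like `001-user-authentication-system`.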
61 .specify/scripts/powershell/setup-plan.ps1 (new file)
@@ -0,0 +1,61 @@
#!/usr/bin/env pwsh
# Setup implementation plan for a feature

[CmdletBinding()]
param(
    [switch]$Json,
    [switch]$Help
)

$ErrorActionPreference = 'Stop'

# Show help if requested
if ($Help) {
    Write-Output "Usage: ./setup-plan.ps1 [-Json] [-Help]"
    Write-Output "  -Json   Output results in JSON format"
    Write-Output "  -Help   Show this help message"
    exit 0
}

# Load common functions
. "$PSScriptRoot/common.ps1"

# Get all paths and variables from common functions
$paths = Get-FeaturePathsEnv

# Check if we're on a proper feature branch (only for git repos)
if (-not (Test-FeatureBranch -Branch $paths.CURRENT_BRANCH -HasGit $paths.HAS_GIT)) {
    exit 1
}

# Ensure the feature directory exists
New-Item -ItemType Directory -Path $paths.FEATURE_DIR -Force | Out-Null

# Copy plan template if it exists, otherwise note it or create empty file
$template = Join-Path $paths.REPO_ROOT '.specify/templates/plan-template.md'
if (Test-Path $template) {
    Copy-Item $template $paths.IMPL_PLAN -Force
    Write-Output "Copied plan template to $($paths.IMPL_PLAN)"
} else {
    Write-Warning "Plan template not found at $template"
    # Create a basic plan file if template doesn't exist
    New-Item -ItemType File -Path $paths.IMPL_PLAN -Force | Out-Null
}

# Output results
if ($Json) {
    $result = [PSCustomObject]@{
        FEATURE_SPEC = $paths.FEATURE_SPEC
        IMPL_PLAN    = $paths.IMPL_PLAN
        SPECS_DIR    = $paths.FEATURE_DIR
        BRANCH       = $paths.CURRENT_BRANCH
        HAS_GIT      = $paths.HAS_GIT
    }
    $result | ConvertTo-Json -Compress
} else {
    Write-Output "FEATURE_SPEC: $($paths.FEATURE_SPEC)"
    Write-Output "IMPL_PLAN: $($paths.IMPL_PLAN)"
    Write-Output "SPECS_DIR: $($paths.FEATURE_DIR)"
    Write-Output "BRANCH: $($paths.CURRENT_BRANCH)"
    Write-Output "HAS_GIT: $($paths.HAS_GIT)"
}
448
.specify/scripts/powershell/update-agent-context.ps1
Normal file
@ -0,0 +1,448 @@
#!/usr/bin/env pwsh
<#!
.SYNOPSIS
Update agent context files with information from plan.md (PowerShell version)

.DESCRIPTION
Mirrors the behavior of scripts/bash/update-agent-context.sh:
 1. Environment Validation
 2. Plan Data Extraction
 3. Agent File Management (create from template or update existing)
 4. Content Generation (technology stack, recent changes, timestamp)
 5. Multi-Agent Support (claude, gemini, copilot, cursor-agent, qwen, opencode, codex, windsurf, kilocode, auggie, roo, codebuddy, amp, shai, q, bob, qoder)

.PARAMETER AgentType
Optional agent key to update a single agent. If omitted, updates all existing agent files (creating a default Claude file if none exist).

.EXAMPLE
./update-agent-context.ps1 -AgentType claude

.EXAMPLE
./update-agent-context.ps1   # Updates all existing agent files

.NOTES
Relies on common helper functions in common.ps1
#>
param(
    [Parameter(Position=0)]
    [ValidateSet('claude','gemini','copilot','cursor-agent','qwen','opencode','codex','windsurf','kilocode','auggie','roo','codebuddy','amp','shai','q','bob','qoder')]
    [string]$AgentType
)

$ErrorActionPreference = 'Stop'

# Import common helpers
$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
. (Join-Path $ScriptDir 'common.ps1')

# Acquire environment paths
$envData = Get-FeaturePathsEnv
$REPO_ROOT = $envData.REPO_ROOT
$CURRENT_BRANCH = $envData.CURRENT_BRANCH
$HAS_GIT = $envData.HAS_GIT
$IMPL_PLAN = $envData.IMPL_PLAN
$NEW_PLAN = $IMPL_PLAN

# Agent file paths
$CLAUDE_FILE = Join-Path $REPO_ROOT 'CLAUDE.md'
$GEMINI_FILE = Join-Path $REPO_ROOT 'GEMINI.md'
$COPILOT_FILE = Join-Path $REPO_ROOT '.github/agents/copilot-instructions.md'
$CURSOR_FILE = Join-Path $REPO_ROOT '.cursor/rules/specify-rules.mdc'
$QWEN_FILE = Join-Path $REPO_ROOT 'QWEN.md'
$AGENTS_FILE = Join-Path $REPO_ROOT 'AGENTS.md'
$WINDSURF_FILE = Join-Path $REPO_ROOT '.windsurf/rules/specify-rules.md'
$KILOCODE_FILE = Join-Path $REPO_ROOT '.kilocode/rules/specify-rules.md'
$AUGGIE_FILE = Join-Path $REPO_ROOT '.augment/rules/specify-rules.md'
$ROO_FILE = Join-Path $REPO_ROOT '.roo/rules/specify-rules.md'
$CODEBUDDY_FILE = Join-Path $REPO_ROOT 'CODEBUDDY.md'
$QODER_FILE = Join-Path $REPO_ROOT 'QODER.md'
$AMP_FILE = Join-Path $REPO_ROOT 'AGENTS.md'
$SHAI_FILE = Join-Path $REPO_ROOT 'SHAI.md'
$Q_FILE = Join-Path $REPO_ROOT 'AGENTS.md'
$BOB_FILE = Join-Path $REPO_ROOT 'AGENTS.md'

$TEMPLATE_FILE = Join-Path $REPO_ROOT '.specify/templates/agent-file-template.md'

# Parsed plan data placeholders
$script:NEW_LANG = ''
$script:NEW_FRAMEWORK = ''
$script:NEW_DB = ''
$script:NEW_PROJECT_TYPE = ''

function Write-Info {
    param(
        [Parameter(Mandatory=$true)]
        [string]$Message
    )
    Write-Host "INFO: $Message"
}

function Write-Success {
    param(
        [Parameter(Mandatory=$true)]
        [string]$Message
    )
    Write-Host "$([char]0x2713) $Message"
}

function Write-WarningMsg {
    param(
        [Parameter(Mandatory=$true)]
        [string]$Message
    )
    Write-Warning $Message
}

function Write-Err {
    param(
        [Parameter(Mandatory=$true)]
        [string]$Message
    )
    Write-Host "ERROR: $Message" -ForegroundColor Red
}

function Validate-Environment {
    if (-not $CURRENT_BRANCH) {
        Write-Err 'Unable to determine current feature'
        if ($HAS_GIT) { Write-Info "Make sure you're on a feature branch" } else { Write-Info 'Set SPECIFY_FEATURE environment variable or create a feature first' }
        exit 1
    }
    if (-not (Test-Path $NEW_PLAN)) {
        Write-Err "No plan.md found at $NEW_PLAN"
        Write-Info 'Ensure you are working on a feature with a corresponding spec directory'
        if (-not $HAS_GIT) { Write-Info 'Use: $env:SPECIFY_FEATURE=your-feature-name or create a new feature first' }
        exit 1
    }
    if (-not (Test-Path $TEMPLATE_FILE)) {
        Write-Err "Template file not found at $TEMPLATE_FILE"
        Write-Info 'Run specify init to scaffold .specify/templates, or add agent-file-template.md there.'
        exit 1
    }
}

function Extract-PlanField {
    param(
        [Parameter(Mandatory=$true)]
        [string]$FieldPattern,
        [Parameter(Mandatory=$true)]
        [string]$PlanFile
    )
    if (-not (Test-Path $PlanFile)) { return '' }
    # Lines like **Language/Version**: Python 3.12
    $regex = "^\*\*$([Regex]::Escape($FieldPattern))\*\*: (.+)$"
    Get-Content -LiteralPath $PlanFile -Encoding utf8 | ForEach-Object {
        if ($_ -match $regex) {
            $val = $Matches[1].Trim()
            if ($val -notin @('NEEDS CLARIFICATION','N/A')) { return $val }
        }
    } | Select-Object -First 1
}

function Parse-PlanData {
    param(
        [Parameter(Mandatory=$true)]
        [string]$PlanFile
    )
    if (-not (Test-Path $PlanFile)) { Write-Err "Plan file not found: $PlanFile"; return $false }
    Write-Info "Parsing plan data from $PlanFile"
    $script:NEW_LANG = Extract-PlanField -FieldPattern 'Language/Version' -PlanFile $PlanFile
    $script:NEW_FRAMEWORK = Extract-PlanField -FieldPattern 'Primary Dependencies' -PlanFile $PlanFile
    $script:NEW_DB = Extract-PlanField -FieldPattern 'Storage' -PlanFile $PlanFile
    $script:NEW_PROJECT_TYPE = Extract-PlanField -FieldPattern 'Project Type' -PlanFile $PlanFile

    if ($NEW_LANG) { Write-Info "Found language: $NEW_LANG" } else { Write-WarningMsg 'No language information found in plan' }
    if ($NEW_FRAMEWORK) { Write-Info "Found framework: $NEW_FRAMEWORK" }
    if ($NEW_DB -and $NEW_DB -ne 'N/A') { Write-Info "Found database: $NEW_DB" }
    if ($NEW_PROJECT_TYPE) { Write-Info "Found project type: $NEW_PROJECT_TYPE" }
    return $true
}

function Format-TechnologyStack {
    param(
        [Parameter(Mandatory=$false)]
        [string]$Lang,
        [Parameter(Mandatory=$false)]
        [string]$Framework
    )
    $parts = @()
    if ($Lang -and $Lang -ne 'NEEDS CLARIFICATION') { $parts += $Lang }
    if ($Framework -and $Framework -notin @('NEEDS CLARIFICATION','N/A')) { $parts += $Framework }
    if (-not $parts) { return '' }
    return ($parts -join ' + ')
}

function Get-ProjectStructure {
    param(
        [Parameter(Mandatory=$false)]
        [string]$ProjectType
    )
    if ($ProjectType -match 'web') { return "backend/`nfrontend/`ntests/" } else { return "src/`ntests/" }
}

function Get-CommandsForLanguage {
    param(
        [Parameter(Mandatory=$false)]
        [string]$Lang
    )
    switch -Regex ($Lang) {
        'Python' { return "cd src; pytest; ruff check ." }
        'Rust' { return "cargo test; cargo clippy" }
        'JavaScript|TypeScript' { return "npm test; npm run lint" }
        default { return "# Add commands for $Lang" }
    }
}

function Get-LanguageConventions {
    param(
        [Parameter(Mandatory=$false)]
        [string]$Lang
    )
    if ($Lang) { "${Lang}: Follow standard conventions" } else { 'General: Follow standard conventions' }
}

function New-AgentFile {
    param(
        [Parameter(Mandatory=$true)]
        [string]$TargetFile,
        [Parameter(Mandatory=$true)]
        [string]$ProjectName,
        [Parameter(Mandatory=$true)]
        [datetime]$Date
    )
    if (-not (Test-Path $TEMPLATE_FILE)) { Write-Err "Template not found at $TEMPLATE_FILE"; return $false }
    $temp = New-TemporaryFile
    Copy-Item -LiteralPath $TEMPLATE_FILE -Destination $temp -Force

    $projectStructure = Get-ProjectStructure -ProjectType $NEW_PROJECT_TYPE
    $commands = Get-CommandsForLanguage -Lang $NEW_LANG
    $languageConventions = Get-LanguageConventions -Lang $NEW_LANG

    $escaped_lang = $NEW_LANG
    $escaped_framework = $NEW_FRAMEWORK
    $escaped_branch = $CURRENT_BRANCH

    $content = Get-Content -LiteralPath $temp -Raw -Encoding utf8
    $content = $content -replace '\[PROJECT NAME\]',$ProjectName
    $content = $content -replace '\[DATE\]',$Date.ToString('yyyy-MM-dd')

    # Build the technology stack string safely
    $techStackForTemplate = ""
    if ($escaped_lang -and $escaped_framework) {
        $techStackForTemplate = "- $escaped_lang + $escaped_framework ($escaped_branch)"
    } elseif ($escaped_lang) {
        $techStackForTemplate = "- $escaped_lang ($escaped_branch)"
    } elseif ($escaped_framework) {
        $techStackForTemplate = "- $escaped_framework ($escaped_branch)"
    }

    $content = $content -replace '\[EXTRACTED FROM ALL PLAN.MD FILES\]',$techStackForTemplate
    # For project structure we manually embed (keep newlines)
    $escapedStructure = [Regex]::Escape($projectStructure)
    $content = $content -replace '\[ACTUAL STRUCTURE FROM PLANS\]',$escapedStructure
    # Replace escaped newlines placeholder after all replacements
    $content = $content -replace '\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]',$commands
    $content = $content -replace '\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]',$languageConventions

    # Build the recent changes string safely
    $recentChangesForTemplate = ""
    if ($escaped_lang -and $escaped_framework) {
        $recentChangesForTemplate = "- ${escaped_branch}: Added ${escaped_lang} + ${escaped_framework}"
    } elseif ($escaped_lang) {
        $recentChangesForTemplate = "- ${escaped_branch}: Added ${escaped_lang}"
    } elseif ($escaped_framework) {
        $recentChangesForTemplate = "- ${escaped_branch}: Added ${escaped_framework}"
    }

    $content = $content -replace '\[LAST 3 FEATURES AND WHAT THEY ADDED\]',$recentChangesForTemplate
    # Convert literal \n sequences introduced by Escape to real newlines
    $content = $content -replace '\\n',[Environment]::NewLine

    $parent = Split-Path -Parent $TargetFile
    if (-not (Test-Path $parent)) { New-Item -ItemType Directory -Path $parent | Out-Null }
    Set-Content -LiteralPath $TargetFile -Value $content -NoNewline -Encoding utf8
    Remove-Item $temp -Force
    return $true
}

function Update-ExistingAgentFile {
    param(
        [Parameter(Mandatory=$true)]
        [string]$TargetFile,
        [Parameter(Mandatory=$true)]
        [datetime]$Date
    )
    if (-not (Test-Path $TargetFile)) { return (New-AgentFile -TargetFile $TargetFile -ProjectName (Split-Path $REPO_ROOT -Leaf) -Date $Date) }

    $techStack = Format-TechnologyStack -Lang $NEW_LANG -Framework $NEW_FRAMEWORK
    $newTechEntries = @()
    if ($techStack) {
        $escapedTechStack = [Regex]::Escape($techStack)
        if (-not (Select-String -Pattern $escapedTechStack -Path $TargetFile -Quiet)) {
            $newTechEntries += "- $techStack ($CURRENT_BRANCH)"
        }
    }
    if ($NEW_DB -and $NEW_DB -notin @('N/A','NEEDS CLARIFICATION')) {
        $escapedDB = [Regex]::Escape($NEW_DB)
        if (-not (Select-String -Pattern $escapedDB -Path $TargetFile -Quiet)) {
            $newTechEntries += "- $NEW_DB ($CURRENT_BRANCH)"
        }
    }
    $newChangeEntry = ''
    if ($techStack) { $newChangeEntry = "- ${CURRENT_BRANCH}: Added ${techStack}" }
    elseif ($NEW_DB -and $NEW_DB -notin @('N/A','NEEDS CLARIFICATION')) { $newChangeEntry = "- ${CURRENT_BRANCH}: Added ${NEW_DB}" }

    $lines = Get-Content -LiteralPath $TargetFile -Encoding utf8
    $output = New-Object System.Collections.Generic.List[string]
    $inTech = $false; $inChanges = $false; $techAdded = $false; $changeAdded = $false; $existingChanges = 0

    for ($i=0; $i -lt $lines.Count; $i++) {
        $line = $lines[$i]
        if ($line -eq '## Active Technologies') {
            $output.Add($line)
            $inTech = $true
            continue
        }
        if ($inTech -and $line -match '^##\s') {
            if (-not $techAdded -and $newTechEntries.Count -gt 0) { $newTechEntries | ForEach-Object { $output.Add($_) }; $techAdded = $true }
            $output.Add($line); $inTech = $false; continue
        }
        if ($inTech -and [string]::IsNullOrWhiteSpace($line)) {
            if (-not $techAdded -and $newTechEntries.Count -gt 0) { $newTechEntries | ForEach-Object { $output.Add($_) }; $techAdded = $true }
            $output.Add($line); continue
        }
        if ($line -eq '## Recent Changes') {
            $output.Add($line)
            if ($newChangeEntry) { $output.Add($newChangeEntry); $changeAdded = $true }
            $inChanges = $true
            continue
        }
        if ($inChanges -and $line -match '^##\s') { $output.Add($line); $inChanges = $false; continue }
        if ($inChanges -and $line -match '^- ') {
            if ($existingChanges -lt 2) { $output.Add($line); $existingChanges++ }
            continue
        }
        if ($line -match '\*\*Last updated\*\*: .*\d{4}-\d{2}-\d{2}') {
            $output.Add(($line -replace '\d{4}-\d{2}-\d{2}',$Date.ToString('yyyy-MM-dd')))
            continue
        }
        $output.Add($line)
    }

    # Post-loop check: if we're still in the Active Technologies section and haven't added new entries
    if ($inTech -and -not $techAdded -and $newTechEntries.Count -gt 0) {
        $newTechEntries | ForEach-Object { $output.Add($_) }
    }

    Set-Content -LiteralPath $TargetFile -Value ($output -join [Environment]::NewLine) -Encoding utf8
    return $true
}

function Update-AgentFile {
    param(
        [Parameter(Mandatory=$true)]
        [string]$TargetFile,
        [Parameter(Mandatory=$true)]
        [string]$AgentName
    )
    if (-not $TargetFile -or -not $AgentName) { Write-Err 'Update-AgentFile requires TargetFile and AgentName'; return $false }
    Write-Info "Updating $AgentName context file: $TargetFile"
    $projectName = Split-Path $REPO_ROOT -Leaf
    $date = Get-Date

    $dir = Split-Path -Parent $TargetFile
    if (-not (Test-Path $dir)) { New-Item -ItemType Directory -Path $dir | Out-Null }

    if (-not (Test-Path $TargetFile)) {
        if (New-AgentFile -TargetFile $TargetFile -ProjectName $projectName -Date $date) { Write-Success "Created new $AgentName context file" } else { Write-Err 'Failed to create new agent file'; return $false }
    } else {
        try {
            if (Update-ExistingAgentFile -TargetFile $TargetFile -Date $date) { Write-Success "Updated existing $AgentName context file" } else { Write-Err 'Failed to update agent file'; return $false }
        } catch {
            Write-Err "Cannot access or update existing file: $TargetFile. $_"
            return $false
        }
    }
    return $true
}

function Update-SpecificAgent {
    param(
        [Parameter(Mandatory=$true)]
        [string]$Type
    )
    switch ($Type) {
        'claude' { Update-AgentFile -TargetFile $CLAUDE_FILE -AgentName 'Claude Code' }
        'gemini' { Update-AgentFile -TargetFile $GEMINI_FILE -AgentName 'Gemini CLI' }
        'copilot' { Update-AgentFile -TargetFile $COPILOT_FILE -AgentName 'GitHub Copilot' }
        'cursor-agent' { Update-AgentFile -TargetFile $CURSOR_FILE -AgentName 'Cursor IDE' }
        'qwen' { Update-AgentFile -TargetFile $QWEN_FILE -AgentName 'Qwen Code' }
        'opencode' { Update-AgentFile -TargetFile $AGENTS_FILE -AgentName 'opencode' }
        'codex' { Update-AgentFile -TargetFile $AGENTS_FILE -AgentName 'Codex CLI' }
        'windsurf' { Update-AgentFile -TargetFile $WINDSURF_FILE -AgentName 'Windsurf' }
        'kilocode' { Update-AgentFile -TargetFile $KILOCODE_FILE -AgentName 'Kilo Code' }
        'auggie' { Update-AgentFile -TargetFile $AUGGIE_FILE -AgentName 'Auggie CLI' }
        'roo' { Update-AgentFile -TargetFile $ROO_FILE -AgentName 'Roo Code' }
        'codebuddy' { Update-AgentFile -TargetFile $CODEBUDDY_FILE -AgentName 'CodeBuddy CLI' }
        'qoder' { Update-AgentFile -TargetFile $QODER_FILE -AgentName 'Qoder CLI' }
        'amp' { Update-AgentFile -TargetFile $AMP_FILE -AgentName 'Amp' }
        'shai' { Update-AgentFile -TargetFile $SHAI_FILE -AgentName 'SHAI' }
        'q' { Update-AgentFile -TargetFile $Q_FILE -AgentName 'Amazon Q Developer CLI' }
        'bob' { Update-AgentFile -TargetFile $BOB_FILE -AgentName 'IBM Bob' }
        default { Write-Err "Unknown agent type '$Type'"; Write-Err 'Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|q|bob|qoder'; return $false }
    }
}

function Update-AllExistingAgents {
    $found = $false
    $ok = $true
    if (Test-Path $CLAUDE_FILE) { if (-not (Update-AgentFile -TargetFile $CLAUDE_FILE -AgentName 'Claude Code')) { $ok = $false }; $found = $true }
    if (Test-Path $GEMINI_FILE) { if (-not (Update-AgentFile -TargetFile $GEMINI_FILE -AgentName 'Gemini CLI')) { $ok = $false }; $found = $true }
    if (Test-Path $COPILOT_FILE) { if (-not (Update-AgentFile -TargetFile $COPILOT_FILE -AgentName 'GitHub Copilot')) { $ok = $false }; $found = $true }
    if (Test-Path $CURSOR_FILE) { if (-not (Update-AgentFile -TargetFile $CURSOR_FILE -AgentName 'Cursor IDE')) { $ok = $false }; $found = $true }
    if (Test-Path $QWEN_FILE) { if (-not (Update-AgentFile -TargetFile $QWEN_FILE -AgentName 'Qwen Code')) { $ok = $false }; $found = $true }
    if (Test-Path $AGENTS_FILE) { if (-not (Update-AgentFile -TargetFile $AGENTS_FILE -AgentName 'Codex/opencode')) { $ok = $false }; $found = $true }
    if (Test-Path $WINDSURF_FILE) { if (-not (Update-AgentFile -TargetFile $WINDSURF_FILE -AgentName 'Windsurf')) { $ok = $false }; $found = $true }
    if (Test-Path $KILOCODE_FILE) { if (-not (Update-AgentFile -TargetFile $KILOCODE_FILE -AgentName 'Kilo Code')) { $ok = $false }; $found = $true }
    if (Test-Path $AUGGIE_FILE) { if (-not (Update-AgentFile -TargetFile $AUGGIE_FILE -AgentName 'Auggie CLI')) { $ok = $false }; $found = $true }
    if (Test-Path $ROO_FILE) { if (-not (Update-AgentFile -TargetFile $ROO_FILE -AgentName 'Roo Code')) { $ok = $false }; $found = $true }
    if (Test-Path $CODEBUDDY_FILE) { if (-not (Update-AgentFile -TargetFile $CODEBUDDY_FILE -AgentName 'CodeBuddy CLI')) { $ok = $false }; $found = $true }
    if (Test-Path $QODER_FILE) { if (-not (Update-AgentFile -TargetFile $QODER_FILE -AgentName 'Qoder CLI')) { $ok = $false }; $found = $true }
    if (Test-Path $SHAI_FILE) { if (-not (Update-AgentFile -TargetFile $SHAI_FILE -AgentName 'SHAI')) { $ok = $false }; $found = $true }
    if (Test-Path $Q_FILE) { if (-not (Update-AgentFile -TargetFile $Q_FILE -AgentName 'Amazon Q Developer CLI')) { $ok = $false }; $found = $true }
    if (Test-Path $BOB_FILE) { if (-not (Update-AgentFile -TargetFile $BOB_FILE -AgentName 'IBM Bob')) { $ok = $false }; $found = $true }
    if (-not $found) {
        Write-Info 'No existing agent files found, creating default Claude file...'
        if (-not (Update-AgentFile -TargetFile $CLAUDE_FILE -AgentName 'Claude Code')) { $ok = $false }
    }
    return $ok
}

function Print-Summary {
    Write-Host ''
    Write-Info 'Summary of changes:'
    if ($NEW_LANG) { Write-Host " - Added language: $NEW_LANG" }
    if ($NEW_FRAMEWORK) { Write-Host " - Added framework: $NEW_FRAMEWORK" }
    if ($NEW_DB -and $NEW_DB -ne 'N/A') { Write-Host " - Added database: $NEW_DB" }
    Write-Host ''
    Write-Info 'Usage: ./update-agent-context.ps1 [-AgentType claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|q|bob|qoder]'
}

function Main {
    Validate-Environment
    Write-Info "=== Updating agent context files for feature $CURRENT_BRANCH ==="
    if (-not (Parse-PlanData -PlanFile $NEW_PLAN)) { Write-Err 'Failed to parse plan data'; exit 1 }
    $success = $true
    if ($AgentType) {
        Write-Info "Updating specific agent: $AgentType"
        if (-not (Update-SpecificAgent -Type $AgentType)) { $success = $false }
    }
    else {
        Write-Info 'No agent specified, updating all existing agent files...'
        if (-not (Update-AllExistingAgents)) { $success = $false }
    }
    Print-Summary
    if ($success) { Write-Success 'Agent context update completed successfully'; exit 0 } else { Write-Err 'Agent context update completed with errors'; exit 1 }
}

Main
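The `Extract-PlanField` function above keys off bold-label lines of the form `**Language/Version**: Python 3.12` in plan.md, skipping `NEEDS CLARIFICATION` and `N/A` values. An equivalent extraction can be sketched in plain shell (the sample file path and field values below are hypothetical):

```shell
# Build a throwaway plan.md fragment shaped like the ones Extract-PlanField reads
cat > /tmp/plan-sample.md <<'EOF'
**Language/Version**: Python 3.12
**Primary Dependencies**: FastAPI
**Storage**: N/A
EOF

# Strip the bold field label, mirroring the script's "^\*\*Field\*\*: (.+)$" regex.
# (The PowerShell version additionally filters out 'N/A' and 'NEEDS CLARIFICATION'.)
lang=$(sed -n 's/^\*\*Language\/Version\*\*: //p' /tmp/plan-sample.md)
echo "$lang"
```

This is why the Technical Context section of plan-template.md must keep its exact `**Field**: value` layout: the parser matches that markup literally.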
28
.specify/templates/agent-file-template.md
Normal file
@ -0,0 +1,28 @@
# [PROJECT NAME] Development Guidelines

Auto-generated from all feature plans. Last updated: [DATE]

## Active Technologies

[EXTRACTED FROM ALL PLAN.MD FILES]

## Project Structure

```text
[ACTUAL STRUCTURE FROM PLANS]
```

## Commands

[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]

## Code Style

[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]

## Recent Changes

[LAST 3 FEATURES AND WHAT THEY ADDED]

<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->
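`New-AgentFile` fills this template with literal `-replace` calls on the bracketed placeholders. The same substitution can be sketched with sed; the project name, date, and temp path below are stand-in values, not taken from a real run:

```shell
# Minimal stand-in for the template header
printf '# [PROJECT NAME] Development Guidelines\nLast updated: [DATE]\n' > /tmp/agent-tpl.md

# Replace the placeholders the way New-AgentFile's -replace calls do
sed -e 's/\[PROJECT NAME\]/Pygentic-AI/' -e 's/\[DATE\]/2025-01-01/' /tmp/agent-tpl.md
```

Because the replacements are plain text matches on the bracketed tokens, renaming a placeholder in the template without updating the script would leave it unfilled in every generated agent file.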
40
.specify/templates/checklist-template.md
Normal file
@ -0,0 +1,40 @@
# [CHECKLIST TYPE] Checklist: [FEATURE NAME]

**Purpose**: [Brief description of what this checklist covers]
**Created**: [DATE]
**Feature**: [Link to spec.md or relevant documentation]

**Note**: This checklist is generated by the `/speckit.checklist` command based on feature context and requirements.

<!--
============================================================================
IMPORTANT: The checklist items below are SAMPLE ITEMS for illustration only.

The /speckit.checklist command MUST replace these with actual items based on:
- User's specific checklist request
- Feature requirements from spec.md
- Technical context from plan.md
- Implementation details from tasks.md

DO NOT keep these sample items in the generated checklist file.
============================================================================
-->

## [Category 1]

- [ ] CHK001 First checklist item with clear action
- [ ] CHK002 Second checklist item
- [ ] CHK003 Third checklist item

## [Category 2]

- [ ] CHK004 Another category item
- [ ] CHK005 Item with specific criteria
- [ ] CHK006 Final item in this category

## Notes

- Check items off as completed: `[x]`
- Add comments or findings inline
- Link to relevant resources or documentation
- Items are numbered sequentially for easy reference
142
.specify/templates/plan-template.md
Normal file
@ -0,0 +1,142 @@
# Implementation Plan: [FEATURE]

**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`

**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.

## Summary

[Extract from feature spec: primary requirement + technical approach from research]

## Technical Context

<!--
ACTION REQUIRED: Replace the content in this section with the technical details
for the project. The structure here is presented in advisory capacity to guide
the iteration process.
-->

**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [single/web/mobile - determines source structure]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]

## Constitution Check

*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*

The following principles from `.specify/memory/constitution.md` MUST be verified:

### I. Test-Driven Development (NON-NEGOTIABLE)

- [ ] Tests planned before implementation
- [ ] Red-Green-Refactor cycle documented in tasks
- [ ] Test coverage targets defined (>80% general, 100% critical paths)

### II. API-First Design

- [ ] API contracts documented with Pydantic schemas
- [ ] Request/response formats specified
- [ ] Error handling and status codes defined
- [ ] Authentication/authorization requirements documented

### III. Asynchronous Architecture

- [ ] I/O-bound operations use async patterns
- [ ] Background tasks delegated to Celery (if applicable)
- [ ] Database operations use async drivers
- [ ] Caching strategy defined (if applicable)

### IV. Observability and Monitoring

- [ ] Logging strategy defined with structured logs
- [ ] Tracing instrumentation planned (OpenTelemetry)
- [ ] Metrics collection specified
- [ ] Error tracking approach documented

### V. Type Safety and Validation

- [ ] Pydantic models defined for all data structures
- [ ] Type hints planned for all functions
- [ ] Input validation at API boundaries
- [ ] Runtime validation strategy specified

### VI. Security by Default

- [ ] No credentials in code (environment variables only)
- [ ] Input validation and sanitization planned
- [ ] Authentication/authorization requirements clear
- [ ] SQL injection prevention via parameterized queries
- [ ] Security audit plan included

**PASS CRITERIA**: All checkboxes must be checked or explicitly waived with justification in Complexity Tracking section.

## Project Structure

### Documentation (this feature)

```text
specs/[###-feature]/
├── plan.md              # This file (/speckit.plan command output)
├── research.md          # Phase 0 output (/speckit.plan command)
├── data-model.md        # Phase 1 output (/speckit.plan command)
├── quickstart.md        # Phase 1 output (/speckit.plan command)
├── contracts/           # Phase 1 output (/speckit.plan command)
└── tasks.md             # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
```

### Source Code (repository root)

<!--
ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
for this feature. Delete unused options and expand the chosen structure with
real paths (e.g., apps/admin, packages/something). The delivered plan must
not include Option labels.
-->

```text
# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
src/
├── models/
├── services/
├── cli/
└── lib/

tests/
├── contract/
├── integration/
└── unit/

# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
backend/
├── src/
│   ├── models/
│   ├── services/
│   └── api/
└── tests/

frontend/
├── src/
│   ├── components/
│   ├── pages/
│   └── services/
└── tests/

# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
api/
└── [same as backend above]

ios/ or android/
└── [platform-specific structure: feature modules, UI flows, platform tests]
```

**Structure Decision**: [Document the selected structure and reference the real directories captured above]

## Complexity Tracking

> **Fill ONLY if Constitution Check has violations that must be justified**

| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
141
.specify/templates/spec-template.md
Normal file
@ -0,0 +1,141 @@

# Feature Specification: [FEATURE NAME]

**Feature Branch**: `[###-feature-name]`
**Created**: [DATE]
**Status**: Draft
**Input**: User description: "$ARGUMENTS"

## User Scenarios & Testing *(mandatory)*

<!--
IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
you should still have a viable MVP (Minimum Viable Product) that delivers value.

Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
Think of each story as a standalone slice of functionality that can be:
- Developed independently
- Tested independently
- Deployed independently
- Demonstrated to users independently
-->

### User Story 1 - [Brief Title] (Priority: P1)

[Describe this user journey in plain language]

**Why this priority**: [Explain the value and why it has this priority level]

**Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"]

**Acceptance Scenarios**:

1. **Given** [initial state], **When** [action], **Then** [expected outcome]
2. **Given** [initial state], **When** [action], **Then** [expected outcome]

---

### User Story 2 - [Brief Title] (Priority: P2)

[Describe this user journey in plain language]

**Why this priority**: [Explain the value and why it has this priority level]

**Independent Test**: [Describe how this can be tested independently]

**Acceptance Scenarios**:

1. **Given** [initial state], **When** [action], **Then** [expected outcome]

---

### User Story 3 - [Brief Title] (Priority: P3)

[Describe this user journey in plain language]

**Why this priority**: [Explain the value and why it has this priority level]

**Independent Test**: [Describe how this can be tested independently]

**Acceptance Scenarios**:

1. **Given** [initial state], **When** [action], **Then** [expected outcome]

---

[Add more user stories as needed, each with an assigned priority]

### Edge Cases

<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right edge cases.
-->

- What happens when [boundary condition]?
- How does the system handle [error scenario]?

## Requirements *(mandatory)*

<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right functional requirements.
-->

### Functional Requirements

- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]

*Example of marking unclear requirements:*

- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]

### Non-Functional Requirements (Per Constitution)

These requirements are mandated by `.specify/memory/constitution.md` and MUST be included:

#### Security (Principle VI)

- **NFR-SEC-001**: All API endpoints MUST validate and sanitize inputs
- **NFR-SEC-002**: Authentication/authorization MUST be implemented for protected resources
- **NFR-SEC-003**: Credentials MUST NOT be stored in code (environment variables only)
- **NFR-SEC-004**: SQL queries MUST use parameterized statements to prevent injection
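
NFR-SEC-004 can be sketched with Python's built-in `sqlite3` driver (the project itself targets PostgreSQL through SQLAlchemy, and the `users` table here is hypothetical):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, email: str):
    # The `?` placeholder passes `email` as bound data, never as SQL text,
    # so an injection payload cannot alter the query structure.
    cur = conn.execute("SELECT id, email FROM users WHERE email = ?", (email,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

# An injection attempt is matched literally and finds nothing
assert find_user(conn, "' OR 1=1 --") is None
assert find_user(conn, "alice@example.com") == (1, "alice@example.com")
```

SQLAlchemy gives the same guarantee when values are passed as bound parameters rather than interpolated into query strings.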

#### Observability (Principle IV)

- **NFR-OBS-001**: All operations MUST emit structured logs with context
- **NFR-OBS-002**: Critical paths MUST be instrumented with tracing spans
- **NFR-OBS-003**: Key metrics MUST be collected (latency, error rate, throughput)
- **NFR-OBS-004**: Errors MUST be tracked with full context for debugging
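
As a rough stdlib illustration of NFR-OBS-001 (the foundational tasks wire up logfire/loguru rather than this handler; the field names are assumptions):

```python
import json
import logging

class JsonListHandler(logging.Handler):
    """Stand-in structured sink: collects each record as a JSON line."""
    def __init__(self):
        super().__init__()
        self.lines = []

    def emit(self, record):
        self.lines.append(json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "context": getattr(record, "context", {}),
        }))

logger = logging.getLogger("swot.analysis")
handler = JsonListHandler()
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# NFR-OBS-001: every operation logs with machine-readable context
logger.info("analysis started",
            extra={"context": {"url": "https://example.com", "task_id": "abc123"}})
parsed = json.loads(handler.lines[0])
assert parsed["context"]["task_id"] == "abc123"
```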

#### Type Safety (Principle V)

- **NFR-TYPE-001**: All data structures MUST use Pydantic models
- **NFR-TYPE-002**: All functions MUST have type annotations
- **NFR-TYPE-003**: API boundaries MUST perform runtime validation

#### Testing (Principle I - NON-NEGOTIABLE)

- **NFR-TEST-001**: Test coverage MUST exceed 80% (100% for critical paths)
- **NFR-TEST-002**: Tests MUST be written before implementation (TDD)
- **NFR-TEST-003**: All tests MUST be independent and order-agnostic

### Key Entities *(include if feature involves data)*

- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]

## Success Criteria *(mandatory)*

<!--
ACTION REQUIRED: Define measurable success criteria.
These must be technology-agnostic and measurable.
-->

### Measurable Outcomes

- **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"]
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]

.specify/templates/tasks-template.md (new file, 336 lines)

@@ -0,0 +1,336 @@

---
description: "Task list template for feature implementation"
---

# Tasks: [FEATURE NAME]

**Input**: Design documents from `/specs/[###-feature-name]/`
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/

**Tests**: The examples below include test tasks. Tests are OPTIONAL - only include them if explicitly requested in the feature specification.

**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.

## Format: `[ID] [P?] [Story] Description`

- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions
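
The format is mechanical enough to check by machine; a hypothetical validator (not part of the toolkit) might look like:

```python
import re

# Hypothetical validator for the `[ID] [P?] [Story] Description` task format
TASK_RE = re.compile(r"- \[ \] (T\d+)( \[P\])?( \[US\d+\])? (.+)")

def parse_task(line: str) -> dict:
    m = TASK_RE.fullmatch(line)
    if m is None:
        raise ValueError(f"not a valid task line: {line!r}")
    return {
        "id": m.group(1),
        "parallel": m.group(2) is not None,
        "story": (m.group(3) or "").strip(" []") or None,
        "description": m.group(4),
    }

parsed = parse_task("- [ ] T029 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py")
assert parsed["id"] == "T029" and parsed["parallel"] and parsed["story"] == "US1"
```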

## Path Conventions

- **Single project**: `src/`, `tests/` at repository root
- **Web app**: `backend/src/`, `frontend/src/`
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume single project - adjust based on plan.md structure

<!--
============================================================================
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.

The /speckit.tasks command MUST replace these with actual tasks based on:
- User stories from spec.md (with their priorities P1, P2, P3...)
- Feature requirements from plan.md
- Entities from data-model.md
- Endpoints from contracts/

Tasks MUST be organized by user story so each story can be:
- Implemented independently
- Tested independently
- Delivered as an MVP increment

DO NOT keep these sample tasks in the generated tasks.md file.
============================================================================
-->

## Phase 1: Setup (Shared Infrastructure)

**Purpose**: Project initialization and basic structure

- [ ] T001 Create project structure per implementation plan
- [ ] T002 Initialize [language] project with [framework] dependencies
- [ ] T003 [P] Configure linting and formatting tools (Black, Ruff, isort per Constitution)
- [ ] T004 [P] Set up pre-commit hooks for code quality gates (Constitution: Quality Assurance)
- [ ] T005 [P] Configure security scanning (Bandit per Constitution Principle VI)

---

## Phase 2: Foundational (Blocking Prerequisites)

**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented

**⚠️ CRITICAL**: No user story work can begin until this phase is complete

### Constitution-Mandated Infrastructure

Per `.specify/memory/constitution.md`, the following MUST be established:

#### Type Safety & Validation (Principle V)

- [ ] T006 [P] Define base Pydantic models for common types
- [ ] T007 [P] Set up input validation middleware for API boundaries
- [ ] T008 [P] Configure type checking tools (if using mypy)

#### Observability (Principle IV)

- [ ] T009 [P] Configure structured logging (logfire/loguru)
- [ ] T010 [P] Set up OpenTelemetry instrumentation
- [ ] T011 [P] Configure Prometheus metrics collection
- [ ] T012 [P] Implement error tracking with context

#### Security (Principle VI)

- [ ] T013 [P] Set up environment variable configuration
- [ ] T014 [P] Implement authentication/authorization framework
- [ ] T015 [P] Configure input sanitization middleware
- [ ] T016 [P] Set up database with parameterized query support

#### Async Architecture (Principle III)

- [ ] T017 Set up async database connection pool
- [ ] T018 [P] Configure Redis for caching
- [ ] T019 [P] Set up Celery for background tasks
- [ ] T020 [P] Configure async HTTP client

#### API-First Design (Principle II)

- [ ] T021 Set up API routing structure with FastAPI
- [ ] T022 [P] Define base response schemas
- [ ] T023 [P] Configure error response standards
- [ ] T024 [P] Set up API documentation (OpenAPI/Swagger)

#### Testing Infrastructure (Principle I - NON-NEGOTIABLE)

- [ ] T025 Configure pytest with async support
- [ ] T026 [P] Set up test database fixtures
- [ ] T027 [P] Configure code coverage reporting (>80% target)
- [ ] T028 [P] Create test utilities and factories

**Checkpoint**: Foundation ready - user story implementation can now begin in parallel

---

## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP

**Goal**: [Brief description of what this story delivers]

**Independent Test**: [How to verify this story works on its own]

### Tests for User Story 1 (Constitution Principle I - NON-NEGOTIABLE) ⚠️

> **CONSTITUTION REQUIREMENT**: Tests MUST be written FIRST, MUST FAIL before implementation (TDD)

- [ ] T029 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T030 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py
- [ ] T031 [P] [US1] Unit tests for [Service] in tests/unit/test_[service].py
- [ ] T032 **VERIFY ALL TESTS FAIL** before proceeding to implementation

### Implementation for User Story 1

#### Models (Constitution Principle V: Type Safety)

- [ ] T033 [P] [US1] Create [Entity1] Pydantic model in src/models/[entity1].py with full type hints
- [ ] T034 [P] [US1] Create [Entity2] Pydantic model in src/models/[entity2].py with full type hints

#### Services (Constitution Principle III: Async Architecture)

- [ ] T035 [US1] Implement async [Service] in src/services/[service].py (depends on T033, T034)
- [ ] T036 [US1] Add async database operations using repository pattern

#### API Layer (Constitution Principle II: API-First)

- [ ] T037 [US1] Implement [endpoint] with request/response schemas in src/[location]/[file].py
- [ ] T038 [US1] Add input validation and sanitization (Principle VI: Security)
- [ ] T039 [US1] Implement error handling with appropriate status codes

#### Cross-Cutting Concerns (Constitution Principles IV & VI)

- [ ] T040 [US1] Add structured logging with context (Principle IV: Observability)
- [ ] T041 [US1] Add OpenTelemetry tracing spans (Principle IV: Observability)
- [ ] T042 [US1] Add authentication/authorization checks (Principle VI: Security)
- [ ] T043 [US1] Add metrics collection for key operations (Principle IV: Observability)

#### Validation (Constitution Principle I: TDD)

- [ ] T044 **RUN ALL TESTS** - verify they now pass
- [ ] T045 **CHECK COVERAGE** - must be >80% (100% for critical paths)

**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently

---

## Phase 4: User Story 2 - [Title] (Priority: P2)

**Goal**: [Brief description of what this story delivers]

**Independent Test**: [How to verify this story works on its own]

### Tests for User Story 2 (Constitution Principle I - NON-NEGOTIABLE) ⚠️

> **CONSTITUTION REQUIREMENT**: Tests MUST be written FIRST, MUST FAIL before implementation (TDD)

- [ ] T046 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T047 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py
- [ ] T048 [P] [US2] Unit tests for [Service] in tests/unit/test_[service].py
- [ ] T049 **VERIFY ALL TESTS FAIL** before proceeding to implementation

### Implementation for User Story 2

Follow the same constitution-mandated structure as User Story 1:

- [ ] T050 [P] [US2] Create Pydantic models with type hints (Principle V)
- [ ] T051 [US2] Implement async service layer (Principle III)
- [ ] T052 [US2] Implement API endpoints with schemas (Principle II)
- [ ] T053 [US2] Add validation, error handling, security (Principle VI)
- [ ] T054 [US2] Add logging, tracing, metrics (Principle IV)
- [ ] T055 [US2] Integrate with User Story 1 components (if needed)
- [ ] T056 **RUN ALL TESTS & CHECK COVERAGE** (Principle I)

**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently

---

## Phase 5: User Story 3 - [Title] (Priority: P3)

**Goal**: [Brief description of what this story delivers]

**Independent Test**: [How to verify this story works on its own]

### Tests for User Story 3 (Constitution Principle I - NON-NEGOTIABLE) ⚠️

> **CONSTITUTION REQUIREMENT**: Tests MUST be written FIRST, MUST FAIL before implementation (TDD)

- [ ] T057 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T058 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py
- [ ] T059 [P] [US3] Unit tests for [Service] in tests/unit/test_[service].py
- [ ] T060 **VERIFY ALL TESTS FAIL** before proceeding to implementation

### Implementation for User Story 3

Follow the same constitution-mandated structure as User Stories 1 & 2:

- [ ] T061 [P] [US3] Create Pydantic models with type hints (Principle V)
- [ ] T062 [US3] Implement async service layer (Principle III)
- [ ] T063 [US3] Implement API endpoints with schemas (Principle II)
- [ ] T064 [US3] Add validation, error handling, security (Principle VI)
- [ ] T065 [US3] Add logging, tracing, metrics (Principle IV)
- [ ] T066 **RUN ALL TESTS & CHECK COVERAGE** (Principle I)

**Checkpoint**: All user stories should now be independently functional

---

[Add more user story phases as needed, following the same pattern]

---

## Phase N: Polish & Cross-Cutting Concerns

**Purpose**: Final validation and improvements that affect multiple user stories

### Constitution Compliance Verification

- [ ] T067 [P] Run full test suite and verify >80% coverage (Principle I)
- [ ] T068 [P] Run security scan (Bandit) and resolve findings (Principle VI)
- [ ] T069 [P] Verify all code has type hints (Principle V)
- [ ] T070 [P] Verify all API endpoints have schemas (Principle II)
- [ ] T071 [P] Verify all operations have logging/tracing (Principle IV)
- [ ] T072 [P] Verify all async operations use proper patterns (Principle III)

### Quality Assurance (Per Constitution)

- [ ] T073 [P] Run Black, isort, Ruff formatting/linting
- [ ] T074 [P] Verify pre-commit hooks are working
- [ ] T075 Code review checklist validation
- [ ] T076 Performance benchmarking for critical paths
- [ ] T077 Load testing for async endpoints

### Documentation & Deployment

- [ ] T078 [P] API documentation (OpenAPI/Swagger) complete
- [ ] T079 [P] Update README with setup and usage
- [ ] T080 [P] Document environment variables and configuration
- [ ] T081 Run quickstart.md validation
- [ ] T082 Database migration testing and rollback procedures
- [ ] T083 Deployment checklist completion

---

## Dependencies & Execution Order

### Phase Dependencies

- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
  - User stories can then proceed in parallel (if staffed)
  - Or sequentially in priority order (P1 → P2 → P3)
- **Polish (Final Phase)**: Depends on all desired user stories being complete

### User Story Dependencies

- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable

### Within Each User Story

- Tests (if included) MUST be written and FAIL before implementation
- Models before services
- Services before endpoints
- Core implementation before integration
- Story complete before moving to next priority

### Parallel Opportunities

- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once the Foundational phase completes, all user stories can start in parallel (if team capacity allows)
- All tests for a user story marked [P] can run in parallel
- Models within a story marked [P] can run in parallel
- Different user stories can be worked on in parallel by different team members

---

## Parallel Example: User Story 1

```bash
# Launch all tests for User Story 1 together (if tests requested):
Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
Task: "Integration test for [user journey] in tests/integration/test_[name].py"

# Launch all models for User Story 1 together:
Task: "Create [Entity1] model in src/models/[entity1].py"
Task: "Create [Entity2] model in src/models/[entity2].py"
```

---

## Implementation Strategy

### MVP First (User Story 1 Only)

1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. **STOP and VALIDATE**: Test User Story 1 independently
5. Deploy/demo if ready

### Incremental Delivery

1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Add User Story 3 → Test independently → Deploy/Demo
5. Each story adds value without breaking previous stories

### Parallel Team Strategy

With multiple developers:

1. Team completes Setup + Foundational together
2. Once Foundational is done:
   - Developer A: User Story 1
   - Developer B: User Story 2
   - Developer C: User Story 3
3. Stories complete and integrate independently

---

## Notes

- [P] tasks = different files, no dependencies
- [Story] label maps each task to a specific user story for traceability
- Each user story should be independently completable and testable
- Verify tests fail before implementing
- Commit after each task or logical group
- Stop at any checkpoint to validate a story independently
- Avoid: vague tasks, same-file conflicts, cross-story dependencies that break independence

CLAUDE.md (new file, 258 lines)

@@ -0,0 +1,258 @@

# Pygentic-AI - Project Initialization Guide

## Project Overview

**Pygentic-AI** is an AI-powered SWOT analysis platform that transforms any URL into actionable business intelligence using generative AI.

### Core Capabilities

- **URL Analysis**: Scrapes and analyzes web content from any URL
- **SWOT Generation**: Uses Claude/GPT models to generate comprehensive SWOT analysis
- **Reddit Intelligence**: Gathers competitive insights from relevant subreddits
- **Async Processing**: Celery-based task queue for long-running analysis
- **Modern Frontend**: SCSS-based responsive UI with WCAG 2.1 AA accessibility

---

## Technology Stack

### Backend

- **Framework**: FastAPI (async Python web framework)
- **Task Queue**: Celery with Redis broker
- **Database**: PostgreSQL (SQLAlchemy ORM)
- **AI Models**:
  - Anthropic Claude (primary)
  - OpenAI GPT-4o-mini (fallback)
- **Web Scraping**: BeautifulSoup, Playwright
- **Settings**: Pydantic settings with environment-based configs

### Frontend

- **Templating**: Jinjax (component-based Jinja2)
- **CSS Framework**: Bulma + Custom SCSS (modular architecture)
- **Interactivity**: HTMX + vanilla JavaScript
- **Build Tools**: Sass compiler, npm scripts

### Infrastructure

- **Containerization**: Docker + Docker Compose
- **Reverse Proxy**: Traefik with Let's Encrypt
- **CI/CD**: GitHub Actions → Komodo deployment
- **Registry**: Custom S3-backed Docker registry

---

## Project Structure

```
├── src/
│   ├── app.py                  # FastAPI application entry point
│   ├── cworker.py              # Celery worker entry point
│   ├── backend/
│   │   ├── core/               # Business logic & AI agents
│   │   │   ├── core.py         # SWOT analyzer agent
│   │   │   ├── tools.py        # Reddit intelligence tools
│   │   │   └── main.py         # Analysis orchestration
│   │   ├── server/             # FastAPI routes & endpoints
│   │   ├── db/                 # Database models & migrations
│   │   ├── settings/           # Environment-based configuration
│   │   │   ├── base.py         # Base settings
│   │   │   ├── dev.py          # Development settings
│   │   │   └── prod.py         # Production settings
│   │   └── site/               # Frontend routes
│   └── frontend/
│       ├── scss/               # Modular SCSS architecture
│       │   ├── _variables.scss
│       │   ├── _components.scss
│       │   ├── _layout.scss
│       │   └── styles.scss
│       ├── static/             # Compiled assets
│       │   ├── css/
│       │   └── js/
│       └── templates/          # Jinjax components
│           ├── home.html
│           ├── result.html
│           └── components/
├── docker/                     # Docker build scripts
├── .github/workflows/          # CI/CD pipelines
├── compose.yaml                # Production Docker Compose
├── justfile                    # Task automation
└── .env.example                # Environment template
```

---

## Key Workflows

### Analysis Pipeline

1. User submits URL via frontend form
2. FastAPI endpoint creates Celery task
3. Worker scrapes URL content
4. AI agent generates SWOT analysis
5. Reddit tool gathers competitive intelligence
6. Results stored in database
7. Frontend polls for completion and displays results
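
The flow above can be sketched end to end. This is an in-memory stand-in showing only the submit/poll contract; the real pipeline uses FastAPI endpoints, a Celery worker, and Redis, so every name here is illustrative:

```python
import uuid

# In-memory stand-in for the Celery result backend (Redis in the real app)
TASKS = {}

def submit_analysis(url: str) -> str:
    """Step 2: the endpoint enqueues a task and returns an id for polling."""
    task_id = uuid.uuid4().hex
    TASKS[task_id] = {"status": "PENDING", "url": url, "result": None}
    return task_id

def run_worker(task_id: str) -> None:
    """Steps 3-6: scrape, analyze, gather Reddit intel, store result (stubbed)."""
    task = TASKS[task_id]
    task["result"] = {"strengths": [], "weaknesses": [],
                      "opportunities": [], "threats": []}
    task["status"] = "SUCCESS"

def poll(task_id: str) -> dict:
    """Step 7: the frontend polls this until status becomes SUCCESS."""
    return TASKS[task_id]

tid = submit_analysis("https://example.com")
assert poll(tid)["status"] == "PENDING"
run_worker(tid)
assert poll(tid)["status"] == "SUCCESS"
```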

### Development Workflow

```bash
just setup        # Initialize environment
just dev          # Start FastAPI dev server
just celery       # Start Celery worker (separate terminal)
just scss-watch   # Auto-compile SCSS (separate terminal)
```

### Deployment Workflow

1. Push to `dev_deploy` or `main` branch
2. GitHub Actions builds Docker image
3. Image tagged: `{branch}-{date}` and `{branch}-latest`
4. Komodo webhook triggered on success
5. Production server pulls and deploys new image
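
Step 3's tag convention can be expressed as a small helper (a sketch only; the actual tags are assembled in the GitHub Actions workflow):

```python
from datetime import date

def image_tags(branch: str, build_date: date) -> list:
    """Each build gets a dated tag plus a moving `{branch}-latest` tag."""
    return [f"{branch}-{build_date.isoformat()}", f"{branch}-latest"]

assert image_tags("main", date(2024, 11, 30)) == ["main-2024-11-30", "main-latest"]
```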

---

## Environment Configuration

### Required Environment Variables

- `OPENAI_API_KEY` - OpenAI API key for GPT models
- `ANTHROPIC_API_KEY` - Anthropic API key for Claude models
- `REDDIT_CLIENT_ID` - Reddit API client ID
- `REDDIT_CLIENT_SECRET` - Reddit API secret
- `REDDIT_SUBREDDIT` - Comma-separated subreddit list
- `CLOUD_DB_*` - PostgreSQL connection details
- `SECRET_KEY` - Application secret key

See `.env.example` for the complete list.
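
A minimal sketch of how such variables might be validated and parsed (hypothetical helper; the app itself uses Pydantic settings classes under `src/backend/settings/` and reads `os.environ`):

```python
# Hypothetical settings loader illustrating required-variable checks and
# parsing of the comma-separated REDDIT_SUBREDDIT list.
def load_settings(env: dict) -> dict:
    missing = [k for k in ("SECRET_KEY", "ANTHROPIC_API_KEY") if k not in env]
    if missing:
        raise RuntimeError(f"missing required environment variables: {missing}")
    return {
        "secret_key": env["SECRET_KEY"],
        "anthropic_api_key": env["ANTHROPIC_API_KEY"],
        # e.g. REDDIT_SUBREDDIT="startups, smallbusiness"
        "subreddits": [s.strip() for s in env.get("REDDIT_SUBREDDIT", "").split(",") if s.strip()],
    }

settings = load_settings({
    "SECRET_KEY": "dev-only",
    "ANTHROPIC_API_KEY": "test-key",
    "REDDIT_SUBREDDIT": "startups, smallbusiness",
})
assert settings["subreddits"] == ["startups", "smallbusiness"]
```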

---

## Architecture Patterns

### Agent-Based AI

- **SWOT Analyzer Agent**: Uses Claude with structured output
- **Tool Use**: Reddit intelligence as tool-augmented generation
- **Validation**: Pydantic models for result validation

### Async Processing

- **FastAPI**: Async endpoints for non-blocking I/O
- **Celery**: Distributed task queue for long-running jobs
- **Redis**: Message broker and result backend

### Frontend Architecture

- **BEM Naming**: Block-Element-Modifier CSS conventions
- **Component-Based**: Jinjax for reusable template components
- **Progressive Enhancement**: HTMX for dynamic updates
- **Accessibility First**: WCAG 2.1 AA compliant

---

## Development Commands (justfile)

### Essential Commands

```bash
just            # List all commands
just setup      # First-time setup
just dev        # Start dev server
just test       # Run tests
just build      # Build Docker image
just deploy     # Deploy to production
```

### Docker Commands

```bash
just up         # Start services
just down       # Stop services
just logs-f     # Follow all logs
just logs-web   # Follow web logs
just health     # Check service health
```

### Frontend Commands

```bash
just scss          # Compile SCSS once
just scss-watch    # Watch and compile SCSS
just npm-install   # Install frontend deps
```

---

## Git Workflow

### Branch Strategy

- `main` - Production-ready code
- `dev_deploy` - Development/staging branch
- `feature/*` - Feature branches
- `*_deploy` - Auto-deploys to Komodo

### Commit Conventions

- `feat:` - New features
- `fix:` - Bug fixes
- `refactor:` - Code refactoring
- `docs:` - Documentation updates
- `style:` - CSS/formatting changes
- `test:` - Test additions/changes

All commits include `Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>`

---

## Multi-Agent Coordination

When working on this project, activate the appropriate persona:

- **🏗️ Architect**: System design, architecture decisions, tech stack
- **🎨 Frontend**: UI/UX, SCSS, accessibility, Jinjax components
- **⚙️ Backend**: FastAPI, Celery, database, AI agents
- **🔒 Security**: Authentication, secrets management, input validation
- **🚀 DevOps**: Docker, CI/CD, deployment, monitoring

---

## Quick Reference

### Start Development

```bash
# Terminal 1: Backend
just dev

# Terminal 2: Celery Worker
just celery

# Terminal 3: SCSS Watcher
just scss-watch
```

### Build & Deploy

```bash
# Build image
just build dev-$(date +%Y-%m-%d)

# Deploy to dev
just deploy-dev

# Deploy to production
just deploy-main
```

### Common Tasks

```bash
just health      # Check if services are running
just logs-web    # Debug web service
just clean       # Stop and remove containers
just check-env   # Validate environment variables
```

---

## Useful Links

- **Repository**: https://github.com/FJS-Services-Inc/Pygentic-AI
- **Production**: https://pygenticai.francissecada.com
- **Registry**: s3docker.francissecada.com/pygentic_ai

---

## Notes for Claude

- Always use the `justfile` for commands instead of raw Docker/npm commands
- Check `.env.example` for required environment variables
- Frontend SCSS must be compiled before changes are visible
- Use the appropriate persona for the task at hand
- Follow the project's commit conventions
- Accessibility is a priority - maintain WCAG 2.1 AA compliance

@@ -7,14 +7,8 @@ ENV TZ="America/New_York"
 ENV LANGUAGE=en_US:en
 ENV DEBIAN_FRONTEND=noninteractive

-ARG GIT_BRANCH="main"
-
-RUN echo ${GIT_BRANCH}
-
-RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
 ENV WORKDIR="/opt/pygentic_ai"
-RUN --mount=type=ssh git clone -b ${GIT_BRANCH} git@github.com:fsecada01/Pygentic-AI.git ${WORKDIR}
-#COPY . ${WORKDIR}
+COPY . ${WORKDIR}
 WORKDIR ${WORKDIR}

 FROM s3docker.francissecada.com/fjs_ubuntu:latest

compose.yaml (60 lines changed)

@@ -1,58 +1,68 @@
 services:
   web:
-    image: s3docker.francissecada.com/pygentic_ai:main.2024-11-30
+    image: s3docker.francissecada.com/pygentic_ai:${IMAGE_TAG:-main-latest}
     container_name: pygentic_ai
     restart: unless-stopped
     deploy:
       resources:
         limits:
-          memory: 1024mb
+          memory: ${MEMORY_LIMIT:-1024mb}
+        reservations:
+          memory: ${MEMORY_RESERVATION:-512mb}
     ports:
-      - "5051:5051"
+      - "0.0.0.0:${PORT:-5051}:${INTERNAL_PORT:-5051}"
     env_file:
       - ./stack.env
     environment:
-      PORT: 5051
-      SERVER_ENV: prod
+      - PORT=${INTERNAL_PORT:-5051}
+      - SERVER_ENV=${SERVER_ENV:-prod}
     volumes:
       - ./src/proxy_urls.db:/opt/pygentic_ai/src/proxy_urls.db
     command: bash -c "/opt/pygentic_ai/docker/pygentic_ai/python_start.sh"
+    healthcheck:
+      test: ["CMD", "curl", "-f", "-H", "Host: pygenticai.francissecada.com", "http://localhost:${INTERNAL_PORT:-5051}/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 40s
     labels:
       - traefik.enable=true
       - traefik.docker.network=proxy
-      - traefik.http.routers.pygentic_ai.entrypoints=websecure
       - traefik.http.routers.pygentic_ai.rule=Host(`pygenticai.francissecada.com`)
       - traefik.http.routers.pygentic_ai.tls=true
-    healthcheck:
-      test: curl --fail http://localhost:5051/ || exit 1
-      interval: 40s
-      timeout: 30s
-      retries: 3
-      start_period: 60s
+      - traefik.http.routers.pygentic_ai.entrypoints=websecure
+      - traefik.http.routers.pygentic_ai.tls.certresolver=letsencrypt
+      - traefik.http.services.pygentic_ai.loadbalancer.server.url=http://${DOCKER_HOST_IP:-192.168.99.85}:${PORT:-5051}
     networks:
       - proxy

   celery_service:
-    image: s3docker.francissecada.com/pygentic_ai:main.2024-11-30
+    image: s3docker.francissecada.com/pygentic_ai:${IMAGE_TAG:-main-latest}
     container_name: pygentic_ai_celery
     restart: unless-stopped
     deploy:
       resources:
         limits:
-          memory: 512mb
-    # build: .
-    command: ./docker/celery/start.sh
+          memory: ${CELERY_MEMORY_LIMIT:-512mb}
+        reservations:
+          memory: ${CELERY_MEMORY_RESERVATION:-256mb}
+    command: bash -c "/opt/pygentic_ai/docker/celery/start.sh"
     env_file:
       - ./stack.env
     environment:
-      PORT: ${CELERY_PORT}
-      SERVER_ENV: staging
-      C_FORCE_ROOT: true
+      - PORT=${CELERY_PORT:-5052}
+      - SERVER_ENV=${SERVER_ENV:-prod}
+      - C_FORCE_ROOT=true
     ports:
-      - "5052:5052"
+      - "0.0.0.0:${CELERY_PORT:-5052}:${CELERY_PORT:-5052}"
|
||||
labels:
|
||||
- traefik.enable=true
|
||||
- traefik.docker.network=proxy
|
||||
- traefik.http.routers.celery_ranked_jobs.entrypoints=websecure
|
||||
- traefik.http.routers.celery_ranked_jobs.rule=Host(`celery.pygenticai.francissecada.com`)
|
||||
- traefik.http.routers.celery_pygentic_ai.entrypoints=websecure
|
||||
- traefik.http.routers.celery_pygentic_ai.rule=Host(`celery.pygenticai.francissecada.com`)
|
||||
- traefik.http.routers.celery_pygentic_ai.tls.certresolver=letsencrypt
|
||||
networks:
|
||||
- proxy
|
||||
depends_on:
|
||||
- web
|
||||
|
||||
networks:
|
||||
proxy:
|
||||
external: true
|
||||
|
||||
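The compose file above leans heavily on shell-style default interpolation (`${IMAGE_TAG:-main-latest}`, `${PORT:-5051}`, and so on). The fallback semantics can be sanity-checked in any POSIX shell:

```shell
# ${VAR:-default} falls back only when VAR is unset or empty
unset IMAGE_TAG
echo "${IMAGE_TAG:-main-latest}"      # prints the default: main-latest

IMAGE_TAG="dev_deploy-latest"
echo "${IMAGE_TAG:-main-latest}"      # prints the override: dev_deploy-latest
```

Docker Compose applies the same unset-or-empty rule when interpolating `compose.yaml`, so a deployment with no `IMAGE_TAG` set falls back to `main-latest`.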
```diff
@@ -24,6 +24,7 @@ python-decouple
 python-slugify
 psycopg
 pydantic-ai[examples]
+pydantic-settings
 pytz
 redis
 simplejson
```
@ -1,462 +0,0 @@
|
||||
# This file was autogenerated by uv via the following command:
|
||||
# uv pip compile --strip-extras core_requirements.in -o core_requirements.txt
|
||||
aiofiles==24.1.0
|
||||
# via -r core_requirements.in
|
||||
aiomysql==0.2.0
|
||||
# via -r core_requirements.in
|
||||
amqp==5.3.1
|
||||
# via kombu
|
||||
annotated-types==0.7.0
|
||||
# via
|
||||
# pydantic
|
||||
# sqlmodel-crud-utilities
|
||||
anthropic==0.44.0
|
||||
# via pydantic-ai-slim
|
||||
anyio==4.8.0
|
||||
# via
|
||||
# anthropic
|
||||
# groq
|
||||
# httpx
|
||||
# openai
|
||||
# starlette
|
||||
appdirs==1.4.4
|
||||
# via pyppeteer
|
||||
asgiref==3.8.1
|
||||
# via opentelemetry-instrumentation-asgi
|
||||
asttokens==2.4.1
|
||||
# via devtools
|
||||
asyncpg==0.30.0
|
||||
# via pydantic-ai-examples
|
||||
beautifulsoup4==4.12.3
|
||||
# via httpx-html
|
||||
billiard==4.2.1
|
||||
# via celery
|
||||
cachetools==5.5.1
|
||||
# via google-auth
|
||||
celery==5.4.0
|
||||
# via
|
||||
# -r core_requirements.in
|
||||
# flower
|
||||
certifi==2024.12.14
|
||||
# via
|
||||
# httpcore
|
||||
# httpx
|
||||
# pyppeteer
|
||||
# requests
|
||||
charset-normalizer==3.4.1
|
||||
# via requests
|
||||
click==8.1.8
|
||||
# via
|
||||
# celery
|
||||
# click-didyoumean
|
||||
# click-plugins
|
||||
# click-repl
|
||||
# uvicorn
|
||||
click-didyoumean==0.3.1
|
||||
# via celery
|
||||
click-plugins==1.1.1
|
||||
# via celery
|
||||
click-repl==0.3.0
|
||||
# via celery
|
||||
colorama==0.4.6
|
||||
# via
|
||||
# click
|
||||
# griffe
|
||||
# loguru
|
||||
# sqlmodel-crud-utilities
|
||||
# tqdm
|
||||
cssselect==1.2.0
|
||||
# via pyquery
|
||||
deprecated==1.2.15
|
||||
# via
|
||||
# opentelemetry-api
|
||||
# opentelemetry-exporter-otlp-proto-http
|
||||
# opentelemetry-semantic-conventions
|
||||
devtools==0.12.2
|
||||
# via pydantic-ai-examples
|
||||
distro==1.9.0
|
||||
# via
|
||||
# anthropic
|
||||
# groq
|
||||
# openai
|
||||
eval-type-backport==0.2.2
|
||||
# via
|
||||
# mistralai
|
||||
# pydantic-ai-slim
|
||||
executing==2.1.0
|
||||
# via
|
||||
# devtools
|
||||
# logfire
|
||||
fake-useragent==2.0.3
|
||||
# via httpx-html
|
||||
fastapi==0.115.6
|
||||
# via
|
||||
# -r core_requirements.in
|
||||
# fastapi-restful
|
||||
# fastcrud
|
||||
# pydantic-ai-examples
|
||||
fastapi-restful==0.6.0
|
||||
# via -r core_requirements.in
|
||||
fastcrud==0.15.5
|
||||
# via -r core_requirements.in
|
||||
flower==2.0.1
|
||||
# via -r core_requirements.in
|
||||
google-auth==2.37.0
|
||||
# via pydantic-ai-slim
|
||||
googleapis-common-protos==1.66.0
|
||||
# via opentelemetry-exporter-otlp-proto-http
|
||||
greenlet==3.1.1
|
||||
# via
|
||||
# -r core_requirements.in
|
||||
# sqlalchemy
|
||||
# sqlmodel-crud-utilities
|
||||
griffe==1.5.5
|
||||
# via pydantic-ai-slim
|
||||
groq==0.15.0
|
||||
# via pydantic-ai-slim
|
||||
h11==0.14.0
|
||||
# via
|
||||
# httpcore
|
||||
# hypercorn
|
||||
# uvicorn
|
||||
# wsproto
|
||||
h2==4.1.0
|
||||
# via hypercorn
|
||||
hpack==4.0.0
|
||||
# via h2
|
||||
html5lib==1.1
|
||||
# via -r core_requirements.in
|
||||
httpcore==1.0.7
|
||||
# via httpx
|
||||
httpx==0.28.1
|
||||
# via
|
||||
# -r core_requirements.in
|
||||
# anthropic
|
||||
# groq
|
||||
# httpx-html
|
||||
# mistralai
|
||||
# openai
|
||||
# pydantic-ai-slim
|
||||
httpx-html==0.11.0.dev0
|
||||
# via -r core_requirements.in
|
||||
humanize==4.11.0
|
||||
# via flower
|
||||
hypercorn==0.17.3
|
||||
# via -r core_requirements.in
|
||||
hyperframe==6.0.1
|
||||
# via h2
|
||||
idna==3.10
|
||||
# via
|
||||
# anyio
|
||||
# httpx
|
||||
# requests
|
||||
importlib-metadata==8.5.0
|
||||
# via
|
||||
# opentelemetry-api
|
||||
# pyppeteer
|
||||
itsdangerous==2.2.0
|
||||
# via -r core_requirements.in
|
||||
jinja2==3.1.5
|
||||
# via jinjax
|
||||
jinjax==0.48
|
||||
# via -r core_requirements.in
|
||||
jiter==0.8.2
|
||||
# via
|
||||
# anthropic
|
||||
# openai
|
||||
jsonpath-python==1.0.6
|
||||
# via mistralai
|
||||
kombu==5.4.2
|
||||
# via celery
|
||||
logfire==3.2.0
|
||||
# via pydantic-ai-examples
|
||||
logfire-api==3.2.0
|
||||
# via pydantic-ai-slim
|
||||
loguru==0.7.3
|
||||
# via
|
||||
# -r core_requirements.in
|
||||
# sqlmodel-crud-utilities
|
||||
lxml==5.3.0
|
||||
# via
|
||||
# -r core_requirements.in
|
||||
# lxml-html-clean
|
||||
# pyquery
|
||||
lxml-html-clean==0.4.1
|
||||
# via lxml
|
||||
markdown-it-py==3.0.0
|
||||
# via rich
|
||||
markupsafe==3.0.2
|
||||
# via
|
||||
# jinja2
|
||||
# jinjax
|
||||
mdurl==0.1.2
|
||||
# via markdown-it-py
|
||||
mistralai==1.4.0
|
||||
# via pydantic-ai-slim
|
||||
mypy-extensions==1.0.0
|
||||
# via typing-inspect
|
||||
openai==1.60.0
|
||||
# via
|
||||
# -r core_requirements.in
|
||||
# pydantic-ai-slim
|
||||
opentelemetry-api==1.29.0
|
||||
# via
|
||||
# opentelemetry-exporter-otlp-proto-http
|
||||
# opentelemetry-instrumentation
|
||||
# opentelemetry-instrumentation-asgi
|
||||
# opentelemetry-instrumentation-asyncpg
|
||||
# opentelemetry-instrumentation-dbapi
|
||||
# opentelemetry-instrumentation-fastapi
|
||||
# opentelemetry-instrumentation-sqlite3
|
||||
# opentelemetry-sdk
|
||||
# opentelemetry-semantic-conventions
|
||||
opentelemetry-exporter-otlp-proto-common==1.29.0
|
||||
# via opentelemetry-exporter-otlp-proto-http
|
||||
opentelemetry-exporter-otlp-proto-http==1.29.0
|
||||
# via logfire
|
||||
opentelemetry-instrumentation==0.50b0
|
||||
# via
|
||||
# logfire
|
||||
# opentelemetry-instrumentation-asgi
|
||||
# opentelemetry-instrumentation-asyncpg
|
||||
# opentelemetry-instrumentation-dbapi
|
||||
# opentelemetry-instrumentation-fastapi
|
||||
# opentelemetry-instrumentation-sqlite3
|
||||
opentelemetry-instrumentation-asgi==0.50b0
|
||||
# via opentelemetry-instrumentation-fastapi
|
||||
opentelemetry-instrumentation-asyncpg==0.50b0
|
||||
# via logfire
|
||||
opentelemetry-instrumentation-dbapi==0.50b0
|
||||
# via opentelemetry-instrumentation-sqlite3
|
||||
opentelemetry-instrumentation-fastapi==0.50b0
|
||||
# via logfire
|
||||
opentelemetry-instrumentation-sqlite3==0.50b0
|
||||
# via logfire
|
||||
opentelemetry-proto==1.29.0
|
||||
# via
|
||||
# opentelemetry-exporter-otlp-proto-common
|
||||
# opentelemetry-exporter-otlp-proto-http
|
||||
opentelemetry-sdk==1.29.0
|
||||
# via
|
||||
# logfire
|
||||
# opentelemetry-exporter-otlp-proto-http
|
||||
opentelemetry-semantic-conventions==0.50b0
|
||||
# via
|
||||
# opentelemetry-instrumentation
|
||||
# opentelemetry-instrumentation-asgi
|
||||
# opentelemetry-instrumentation-asyncpg
|
||||
# opentelemetry-instrumentation-dbapi
|
||||
# opentelemetry-instrumentation-fastapi
|
||||
# opentelemetry-sdk
|
||||
opentelemetry-util-http==0.50b0
|
||||
# via
|
||||
# opentelemetry-instrumentation-asgi
|
||||
# opentelemetry-instrumentation-fastapi
|
||||
packaging==24.2
|
||||
# via opentelemetry-instrumentation
|
||||
parse==1.20.2
|
||||
# via httpx-html
|
||||
praw==7.8.1
|
||||
# via -r core_requirements.in
|
||||
prawcore==2.4.0
|
||||
# via praw
|
||||
priority==2.0.0
|
||||
# via hypercorn
|
||||
prometheus-client==0.21.1
|
||||
# via flower
|
||||
prompt-toolkit==3.0.50
|
||||
# via click-repl
|
||||
protobuf==5.29.3
|
||||
# via
|
||||
# googleapis-common-protos
|
||||
# logfire
|
||||
# opentelemetry-proto
|
||||
psutil==5.9.8
|
||||
# via fastapi-restful
|
||||
psycopg==3.2.4
|
||||
# via -r core_requirements.in
|
||||
pyasn1==0.6.1
|
||||
# via
|
||||
# pyasn1-modules
|
||||
# rsa
|
||||
pyasn1-modules==0.4.1
|
||||
# via google-auth
|
||||
pydantic==2.10.5
|
||||
# via
|
||||
# anthropic
|
||||
# fastapi
|
||||
# fastapi-restful
|
||||
# fastcrud
|
||||
# groq
|
||||
# mistralai
|
||||
# openai
|
||||
# pydantic-ai-slim
|
||||
# sqlmodel
|
||||
# sqlmodel-crud-utilities
|
||||
pydantic-ai==0.0.18
|
||||
# via -r core_requirements.in
|
||||
pydantic-ai-examples==0.0.18
|
||||
# via pydantic-ai
|
||||
pydantic-ai-slim==0.0.18
|
||||
# via
|
||||
# pydantic-ai
|
||||
# pydantic-ai-examples
|
||||
pydantic-core==2.27.2
|
||||
# via
|
||||
# pydantic
|
||||
# sqlmodel-crud-utilities
|
||||
pyee==11.1.1
|
||||
# via pyppeteer
|
||||
pygments==2.19.1
|
||||
# via
|
||||
# devtools
|
||||
# rich
|
||||
pymysql==1.1.1
|
||||
# via aiomysql
|
||||
pyppeteer==2.0.0
|
||||
# via httpx-html
|
||||
pyquery==2.0.1
|
||||
# via httpx-html
|
||||
python-dateutil==2.9.0.post0
|
||||
# via
|
||||
# -r core_requirements.in
|
||||
# celery
|
||||
# mistralai
|
||||
# sqlmodel-crud-utilities
|
||||
python-decouple==3.8
|
||||
# via -r core_requirements.in
|
||||
python-dotenv==1.0.1
|
||||
# via sqlmodel-crud-utilities
|
||||
python-multipart==0.0.20
|
||||
# via pydantic-ai-examples
|
||||
python-slugify==8.0.4
|
||||
# via -r core_requirements.in
|
||||
pytz==2024.2
|
||||
# via
|
||||
# -r core_requirements.in
|
||||
# flower
|
||||
redis==5.2.1
|
||||
# via -r core_requirements.in
|
||||
requests==2.32.3
|
||||
# via
|
||||
# opentelemetry-exporter-otlp-proto-http
|
||||
# prawcore
|
||||
# pydantic-ai-slim
|
||||
# update-checker
|
||||
rich==13.9.4
|
||||
# via
|
||||
# logfire
|
||||
# pydantic-ai-examples
|
||||
rsa==4.9
|
||||
# via google-auth
|
||||
simplejson==3.19.3
|
||||
# via -r core_requirements.in
|
||||
six==1.17.0
|
||||
# via
|
||||
# asttokens
|
||||
# html5lib
|
||||
# python-dateutil
|
||||
# sqlalchemy-mixins
|
||||
# sqlmodel-crud-utilities
|
||||
sniffio==1.3.1
|
||||
# via
|
||||
# anthropic
|
||||
# anyio
|
||||
# groq
|
||||
# openai
|
||||
socksio==1.0.0
|
||||
# via httpx
|
||||
soupsieve==2.6
|
||||
# via beautifulsoup4
|
||||
sqlalchemy==2.0.37
|
||||
# via
|
||||
# fastcrud
|
||||
# sqlalchemy-mixins
|
||||
# sqlalchemy-utils
|
||||
# sqlmodel
|
||||
# sqlmodel-crud-utilities
|
||||
sqlalchemy-mixins==2.0.5
|
||||
# via -r core_requirements.in
|
||||
sqlalchemy-utils==0.41.2
|
||||
# via fastcrud
|
||||
sqlmodel==0.0.22
|
||||
# via
|
||||
# -r core_requirements.in
|
||||
# sqlmodel-crud-utilities
|
||||
sqlmodel-crud-utilities @ git+https://github.com/fsecada01/SQLModel-CRUD-Utilities@83e964f6e7b633e339e45ddcaaa49cd8617fa105
|
||||
# via -r core_requirements.in
|
||||
starlette==0.41.3
|
||||
# via fastapi
|
||||
text-unidecode==1.3
|
||||
# via python-slugify
|
||||
tornado==6.4.2
|
||||
# via flower
|
||||
tqdm==4.67.1
|
||||
# via
|
||||
# openai
|
||||
# pyppeteer
|
||||
typing-extensions==4.12.2
|
||||
# via
|
||||
# anthropic
|
||||
# fastapi
|
||||
# groq
|
||||
# logfire
|
||||
# openai
|
||||
# opentelemetry-sdk
|
||||
# pydantic
|
||||
# pydantic-core
|
||||
# pyee
|
||||
# sqlalchemy
|
||||
# sqlmodel-crud-utilities
|
||||
# typing-inspect
|
||||
typing-inspect==0.9.0
|
||||
# via
|
||||
# -r core_requirements.in
|
||||
# mistralai
|
||||
tzdata==2025.1
|
||||
# via
|
||||
# celery
|
||||
# kombu
|
||||
# psycopg
|
||||
update-checker==0.18.0
|
||||
# via praw
|
||||
urllib3==1.26.20
|
||||
# via
|
||||
# pyppeteer
|
||||
# requests
|
||||
uvicorn==0.34.0
|
||||
# via
|
||||
# -r core_requirements.in
|
||||
# pydantic-ai-examples
|
||||
vine==5.1.0
|
||||
# via
|
||||
# amqp
|
||||
# celery
|
||||
# kombu
|
||||
w3lib==2.2.1
|
||||
# via httpx-html
|
||||
wcwidth==0.2.13
|
||||
# via prompt-toolkit
|
||||
webencodings==0.5.1
|
||||
# via html5lib
|
||||
websocket-client==1.8.0
|
||||
# via praw
|
||||
websockets==10.4
|
||||
# via pyppeteer
|
||||
win32-setctime==1.2.0
|
||||
# via
|
||||
# loguru
|
||||
# sqlmodel-crud-utilities
|
||||
wrapt==1.17.2
|
||||
# via
|
||||
# deprecated
|
||||
# opentelemetry-instrumentation
|
||||
# opentelemetry-instrumentation-dbapi
|
||||
wsproto==1.2.0
|
||||
# via hypercorn
|
||||
xmljson==0.2.1
|
||||
# via -r core_requirements.in
|
||||
xmltodict==0.14.2
|
||||
# via -r core_requirements.in
|
||||
zipp==3.21.0
|
||||
# via importlib-metadata
|
||||
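The deleted lock file above pins exact versions with `pkg==version` lines. A quick, dependency-free way to pull a single pin out of such a file (sample data inlined here, not the real file):

```shell
# Hypothetical sample standing in for a lock file like core_requirements.txt
printf 'fastapi==0.115.6\nredis==5.2.1\n' > /tmp/sample_lock.txt

# Extract the pinned version of one package
grep '^redis==' /tmp/sample_lock.txt | cut -d'=' -f3   # prints 5.2.1
```

With `==` as the separator, `cut -d'=' -f3` lands on the version field because the second `=`-delimited field is empty.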
@ -1,417 +0,0 @@
|
||||
# This file was autogenerated by uv via the following command:
|
||||
# uv pip compile --strip-extras dev_requirements.in -o dev_requirements.txt
|
||||
alembic==1.14.1
|
||||
# via -r dev_requirements.in
|
||||
annotated-types==0.7.0
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# pydantic
|
||||
anyio==4.8.0
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# fastapi-debug-toolbar
|
||||
# httpx
|
||||
# jupyter-server
|
||||
# starlette
|
||||
argon2-cffi==23.1.0
|
||||
# via jupyter-server
|
||||
argon2-cffi-bindings==21.2.0
|
||||
# via argon2-cffi
|
||||
arrow==1.3.0
|
||||
# via isoduration
|
||||
asttokens==2.4.1
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# stack-data
|
||||
async-lru==2.0.4
|
||||
# via jupyterlab
|
||||
attrs==24.3.0
|
||||
# via
|
||||
# jsonschema
|
||||
# referencing
|
||||
babel==2.16.0
|
||||
# via jupyterlab-server
|
||||
beautifulsoup4==4.12.3
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# nbconvert
|
||||
black==24.10.0
|
||||
# via -r dev_requirements.in
|
||||
bleach==6.2.0
|
||||
# via nbconvert
|
||||
certifi==2024.12.14
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# httpcore
|
||||
# httpx
|
||||
# requests
|
||||
cffi==1.17.1
|
||||
# via argon2-cffi-bindings
|
||||
cfgv==3.4.0
|
||||
# via pre-commit
|
||||
charset-normalizer==3.4.1
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# requests
|
||||
click==8.1.8
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# black
|
||||
colorama==0.4.6
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# click
|
||||
# ipython
|
||||
comm==0.2.2
|
||||
# via ipykernel
|
||||
debugpy==1.8.12
|
||||
# via ipykernel
|
||||
decorator==5.1.1
|
||||
# via ipython
|
||||
defusedxml==0.7.1
|
||||
# via nbconvert
|
||||
distlib==0.3.9
|
||||
# via virtualenv
|
||||
executing==2.1.0
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# stack-data
|
||||
fastapi==0.115.6
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# fastapi-debug-toolbar
|
||||
fastapi-debug-toolbar @ git+https://github.com/fsecada01/fastapi-debug-toolbar.git@2da9f1e724d1d7ca56990ba7a8e72598fa3e1cf4
|
||||
# via -r dev_requirements.in
|
||||
fastjsonschema==2.21.1
|
||||
# via nbformat
|
||||
filelock==3.17.0
|
||||
# via virtualenv
|
||||
fqdn==1.5.1
|
||||
# via jsonschema
|
||||
greenlet==3.1.1
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# sqlalchemy
|
||||
h11==0.14.0
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# httpcore
|
||||
httpcore==1.0.7
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# httpx
|
||||
httpx==0.28.1
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# jupyterlab
|
||||
identify==2.6.6
|
||||
# via pre-commit
|
||||
idna==3.10
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# anyio
|
||||
# httpx
|
||||
# jsonschema
|
||||
# requests
|
||||
ipykernel==6.29.5
|
||||
# via jupyterlab
|
||||
ipython==8.31.0
|
||||
# via ipykernel
|
||||
isoduration==20.11.0
|
||||
# via jsonschema
|
||||
isort==5.13.2
|
||||
# via -r dev_requirements.in
|
||||
jedi==0.19.2
|
||||
# via ipython
|
||||
jinja2==3.1.5
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# fastapi-debug-toolbar
|
||||
# jupyter-server
|
||||
# jupyterlab
|
||||
# jupyterlab-server
|
||||
# nbconvert
|
||||
json5==0.10.0
|
||||
# via jupyterlab-server
|
||||
jsonpointer==3.0.0
|
||||
# via jsonschema
|
||||
jsonschema==4.23.0
|
||||
# via
|
||||
# jupyter-events
|
||||
# jupyterlab-server
|
||||
# nbformat
|
||||
jsonschema-specifications==2024.10.1
|
||||
# via jsonschema
|
||||
jupyter-client==8.6.3
|
||||
# via
|
||||
# ipykernel
|
||||
# jupyter-server
|
||||
# nbclient
|
||||
jupyter-core==5.7.2
|
||||
# via
|
||||
# ipykernel
|
||||
# jupyter-client
|
||||
# jupyter-server
|
||||
# jupyterlab
|
||||
# nbclient
|
||||
# nbconvert
|
||||
# nbformat
|
||||
jupyter-events==0.11.0
|
||||
# via jupyter-server
|
||||
jupyter-lsp==2.2.5
|
||||
# via jupyterlab
|
||||
jupyter-server==2.15.0
|
||||
# via
|
||||
# jupyter-lsp
|
||||
# jupyterlab
|
||||
# jupyterlab-code-formatter
|
||||
# jupyterlab-server
|
||||
# notebook-shim
|
||||
jupyter-server-terminals==0.5.3
|
||||
# via jupyter-server
|
||||
jupyterlab==4.3.4
|
||||
# via -r dev_requirements.in
|
||||
jupyterlab-code-formatter==3.0.2
|
||||
# via -r dev_requirements.in
|
||||
jupyterlab-pygments==0.3.0
|
||||
# via nbconvert
|
||||
jupyterlab-server==2.27.3
|
||||
# via jupyterlab
|
||||
mako==1.3.8
|
||||
# via alembic
|
||||
markupsafe==3.0.2
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# jinja2
|
||||
# mako
|
||||
# nbconvert
|
||||
matplotlib-inline==0.1.7
|
||||
# via
|
||||
# ipykernel
|
||||
# ipython
|
||||
mistune==3.1.0
|
||||
# via nbconvert
|
||||
mypy-extensions==1.0.0
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# black
|
||||
nbclient==0.10.2
|
||||
# via nbconvert
|
||||
nbconvert==7.16.5
|
||||
# via jupyter-server
|
||||
nbformat==5.10.4
|
||||
# via
|
||||
# jupyter-server
|
||||
# nbclient
|
||||
# nbconvert
|
||||
nest-asyncio==1.6.0
|
||||
# via ipykernel
|
||||
nodeenv==1.9.1
|
||||
# via pre-commit
|
||||
notebook-shim==0.2.4
|
||||
# via jupyterlab
|
||||
overrides==7.7.0
|
||||
# via jupyter-server
|
||||
packaging==24.2
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# black
|
||||
# ipykernel
|
||||
# jupyter-server
|
||||
# jupyterlab
|
||||
# jupyterlab-code-formatter
|
||||
# jupyterlab-server
|
||||
# nbconvert
|
||||
pandocfilters==1.5.1
|
||||
# via nbconvert
|
||||
parso==0.8.4
|
||||
# via jedi
|
||||
pathspec==0.12.1
|
||||
# via black
|
||||
platformdirs==4.3.6
|
||||
# via
|
||||
# black
|
||||
# jupyter-core
|
||||
# virtualenv
|
||||
pre-commit==4.1.0
|
||||
# via -r dev_requirements.in
|
||||
prometheus-client==0.21.1
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# jupyter-server
|
||||
prompt-toolkit==3.0.50
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# ipython
|
||||
psutil==5.9.8
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# ipykernel
|
||||
pure-eval==0.2.3
|
||||
# via stack-data
|
||||
pycparser==2.22
|
||||
# via cffi
|
||||
pydantic==2.10.5
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# fastapi
|
||||
# fastapi-debug-toolbar
|
||||
# pydantic-extra-types
|
||||
# pydantic-settings
|
||||
pydantic-core==2.27.2
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# pydantic
|
||||
pydantic-extra-types==2.10.2
|
||||
# via fastapi-debug-toolbar
|
||||
pydantic-settings==2.7.1
|
||||
# via fastapi-debug-toolbar
|
||||
pygments==2.19.1
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# ipython
|
||||
# nbconvert
|
||||
pyinstrument==5.0.0
|
||||
# via fastapi-debug-toolbar
|
||||
python-dateutil==2.9.0.post0
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# arrow
|
||||
# jupyter-client
|
||||
python-dotenv==1.0.1
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# pydantic-settings
|
||||
python-json-logger==3.2.1
|
||||
# via jupyter-events
|
||||
pywin32==308
|
||||
# via jupyter-core
|
||||
pywinpty==2.0.14
|
||||
# via
|
||||
# jupyter-server
|
||||
# jupyter-server-terminals
|
||||
# terminado
|
||||
pyyaml==6.0.2
|
||||
# via
|
||||
# jupyter-events
|
||||
# pre-commit
|
||||
pyzmq==26.2.0
|
||||
# via
|
||||
# ipykernel
|
||||
# jupyter-client
|
||||
# jupyter-server
|
||||
referencing==0.36.1
|
||||
# via
|
||||
# jsonschema
|
||||
# jsonschema-specifications
|
||||
# jupyter-events
|
||||
requests==2.32.3
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# jupyterlab-server
|
||||
rfc3339-validator==0.1.4
|
||||
# via
|
||||
# jsonschema
|
||||
# jupyter-events
|
||||
rfc3986-validator==0.1.1
|
||||
# via
|
||||
# jsonschema
|
||||
# jupyter-events
|
||||
rpds-py==0.22.3
|
||||
# via
|
||||
# jsonschema
|
||||
# referencing
|
||||
ruff==0.9.2
|
||||
# via -r dev_requirements.in
|
||||
send2trash==1.8.3
|
||||
# via jupyter-server
|
||||
setuptools==75.8.0
|
||||
# via jupyterlab
|
||||
six==1.17.0
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# asttokens
|
||||
# python-dateutil
|
||||
# rfc3339-validator
|
||||
sniffio==1.3.1
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# anyio
|
||||
soupsieve==2.6
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# beautifulsoup4
|
||||
sqlalchemy==2.0.37
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# alembic
|
||||
sqlparse==0.5.3
|
||||
# via fastapi-debug-toolbar
|
||||
stack-data==0.6.3
|
||||
# via ipython
|
||||
starlette==0.41.3
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# fastapi
|
||||
terminado==0.18.1
|
||||
# via
|
||||
# jupyter-server
|
||||
# jupyter-server-terminals
|
||||
tinycss2==1.4.0
|
||||
# via bleach
|
||||
tornado==6.4.2
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# ipykernel
|
||||
# jupyter-client
|
||||
# jupyter-server
|
||||
# jupyterlab
|
||||
# terminado
|
||||
traitlets==5.14.3
|
||||
# via
|
||||
# comm
|
||||
# ipykernel
|
||||
# ipython
|
||||
# jupyter-client
|
||||
# jupyter-core
|
||||
# jupyter-events
|
||||
# jupyter-server
|
||||
# jupyterlab
|
||||
# matplotlib-inline
|
||||
# nbclient
|
||||
# nbconvert
|
||||
# nbformat
|
||||
types-python-dateutil==2.9.0.20241206
|
||||
# via arrow
|
||||
typing-extensions==4.12.2
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# alembic
|
||||
# fastapi
|
||||
# pydantic
|
||||
# pydantic-core
|
||||
# pydantic-extra-types
|
||||
# sqlalchemy
|
||||
uri-template==1.3.0
|
||||
# via jsonschema
|
||||
urllib3==1.26.20
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# requests
|
||||
virtualenv==20.29.1
|
||||
# via pre-commit
|
||||
wcwidth==0.2.13
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# prompt-toolkit
|
||||
webcolors==24.11.1
|
||||
# via jsonschema
|
||||
webencodings==0.5.1
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# bleach
|
||||
# tinycss2
|
||||
websocket-client==1.8.0
|
||||
# via
|
||||
# -c core_requirements.txt
|
||||
# jupyter-server
|
||||
```diff
@@ -4,7 +4,7 @@ __dir="$(cd -P -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd -P)"
 cd ${__dir}/../.. || exit
 source .venv/bin/activate
 cd src || exit
-gunicorn main:app -w ${WORKERS} -k uvicorn.workers.UvicornWorker \
+gunicorn app:app -w ${WORKERS} -k uvicorn.workers.UvicornWorker \
     --timeout "${TIMEOUT}" \
     --forwarded-allow-ips "*" \
     -b 0.0.0.0:"${PORT}"
```
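The start script above reads `WORKERS`, `TIMEOUT`, and `PORT` from the environment with no fallback. One way to default `WORKERS` when it is unset is the common `2 × CPUs + 1` gunicorn heuristic — a sketch under that assumption, not something the repo currently does:

```shell
# Hypothetical default: 2*nproc+1 workers when WORKERS is not already set.
# ':' with ${VAR:=...} assigns only if VAR is unset or empty.
: "${WORKERS:=$(( 2 * $(nproc) + 1 ))}"
echo "starting gunicorn with ${WORKERS} workers"
```

Setting `WORKERS` in `stack.env` would still win, since `${WORKERS:=...}` never overrides an existing value.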
309
justfile
Normal file
309
justfile
Normal file
@ -0,0 +1,309 @@
|
||||
# Pygentic-AI Justfile
|
||||
# Task automation for Docker, development, and deployment workflows
|
||||
|
||||
# Default recipe - show available commands
|
||||
default:
|
||||
@just --list
|
||||
|
||||
# Variables with sensible defaults
|
||||
IMAGE_NAME := "s3docker.francissecada.com/pygentic_ai"
|
||||
COMPOSE_FILE := "compose.yaml"
|
||||
|
||||
# ============================================
|
||||
# Docker Build Commands
|
||||
# ============================================
|
||||
|
||||
# Build Docker image with optional tag (default: dev-latest)
|
||||
build tag="dev-latest":
|
||||
@echo "Building Docker image: {{IMAGE_NAME}}:{{tag}}"
|
||||
docker build -t {{IMAGE_NAME}}:{{tag}} .
|
||||
|
||||
# Build image with custom branch tag
|
||||
build-branch branch="dev_deploy":
|
||||
@echo "Building image with branch tag: {{branch}}"
|
||||
docker build -t {{IMAGE_NAME}}:{{branch}}-latest .
|
||||
|
||||
# Build and push image
|
||||
build-push tag="dev-latest":
|
||||
@echo "Building and pushing: {{IMAGE_NAME}}:{{tag}}"
|
||||
docker build -t {{IMAGE_NAME}}:{{tag}} --push .
|
||||
|
||||
# ============================================
|
||||
# Docker Compose Commands
|
||||
# ============================================
|
||||
|
||||
# Start all services
|
||||
up *args="":
|
||||
docker-compose -f {{COMPOSE_FILE}} up {{args}}
|
||||
|
||||
# Start services in detached mode
|
||||
up-d:
|
||||
docker-compose -f {{COMPOSE_FILE}} up -d
|
||||
|
||||
# Stop all services
|
||||
down:
|
||||
docker-compose -f {{COMPOSE_FILE}} down
|
||||
|
||||
# Restart services
|
||||
restart:
|
||||
docker-compose -f {{COMPOSE_FILE}} restart
|
||||
|
||||
# View logs (optional service name and follow flag)
|
||||
logs service="" follow="":
|
||||
#!/usr/bin/env bash
|
||||
if [ -n "{{service}}" ]; then
|
||||
docker-compose -f {{COMPOSE_FILE}} logs {{follow}} {{service}}
|
||||
else
|
||||
docker-compose -f {{COMPOSE_FILE}} logs {{follow}}
|
||||
fi
|
||||
|
||||
# Follow logs for all services
|
||||
logs-f:
|
||||
docker-compose -f {{COMPOSE_FILE}} logs -f
|
||||
|
||||
# Follow logs for web service
|
||||
logs-web:
|
||||
docker-compose -f {{COMPOSE_FILE}} logs -f web
|
||||
|
||||
# Follow logs for celery service
|
||||
logs-celery:
|
||||
docker-compose -f {{COMPOSE_FILE}} logs -f celery_service
|
||||
|
||||
# ============================================
|
||||
# Development Commands
|
||||
# ============================================
|
||||
|
||||
# Install Python dependencies with uv
|
||||
install:
|
||||
uv sync --all-extras --dev
|
||||
|
||||
# Run FastAPI development server
|
||||
dev:
|
||||
uv run python src/app.py
|
||||
|
||||
# Run Celery worker
|
||||
celery:
|
||||
uv run python src/cworker.py
|
||||
|
||||
# Compile SCSS to CSS
|
||||
scss:
|
||||
cd src/frontend && npm run build:css
|
||||
|
||||
# Watch SCSS files and recompile on changes
|
||||
scss-watch:
|
||||
cd src/frontend && npm run watch:css
|
||||
|
||||
# Install frontend dependencies
|
||||
npm-install:
|
||||
cd src/frontend && npm install
|
||||
|
||||
# ============================================
|
||||
# Testing & Quality Commands
|
||||
# ============================================
|
||||
|
||||
# Run all tests
|
||||
test:
|
||||
uv run pytest tests/ -v
|
||||
|
||||
# Run tests with coverage
|
||||
test-cov:
|
||||
uv run pytest tests/ --cov=src --cov-report=html --cov-report=term
|
||||
|
||||
# Run linting checks
|
||||
lint:
|
||||
uv run ruff check src/
|
||||
|
||||
# Run formatting
|
||||
format:
|
||||
uv run black src/
|
||||
uv run ruff check --fix src/
|
||||
|
||||
# Run security checks
|
||||
security:
|
||||
uv run bandit -r src/
|
||||
|
||||
# ============================================
|
||||
# Health & Status Commands
|
||||
# ============================================
|
||||
|
||||
# Check if services are healthy
|
||||
health:
|
||||
@echo "Checking web service health..."
|
||||
@curl -f -H "Host: pygenticai.francissecada.com" http://localhost:5051/ || echo "Web service is not responding"
|
||||
|
||||
# Check Docker container status
|
||||
ps:
|
||||
docker-compose -f {{COMPOSE_FILE}} ps
|
||||
|
||||
# Show container resource usage
|
||||
stats:
|
||||
docker stats --no-stream
|
||||
|
||||
# ============================================
|
||||
# Database Commands
|
||||
# ============================================
|
||||
|
||||
# Run database migrations (placeholder - add your migration tool)
|
||||
migrate:
|
||||
@echo "Running migrations..."
|
||||
# uv run alembic upgrade head
|
||||
|
||||
# Create new migration
|
||||
migration name:
|
||||
@echo "Creating migration: {{name}}"
|
||||
# uv run alembic revision --autogenerate -m "{{name}}"
|
||||
|
||||
# Reset database (WARNING: destructive)
|
||||
db-reset:
|
||||
@echo "WARNING: This will delete all data!"
|
||||
@read -p "Are you sure? (y/N) " -n 1 -r
|
||||
@echo
|
||||
# Add your reset commands here
|
||||
|
||||
# ============================================
|
||||
# Cleanup Commands
|
||||
# ============================================
|
||||
|
||||
# Stop and remove all containers, networks
|
||||
clean:
|
||||
docker-compose -f {{COMPOSE_FILE}} down -v
|
||||
|
||||
# Remove all Pygentic-AI Docker images
|
||||
clean-images:
|
||||
docker images {{IMAGE_NAME}} -q | xargs -r docker rmi -f
|
||||
|
||||
# Full cleanup - containers, images, volumes
|
||||
clean-all: clean clean-images
|
||||
@echo "Cleaned up all Pygentic-AI Docker resources"
|
||||
|
||||
# Remove unused Docker resources
|
||||
prune:
|
||||
docker system prune -f
|
||||
|
||||
# ============================================
|
||||
# Deployment Commands
|
||||
# ============================================
|
||||
|
||||
# Deploy with specific image tag
|
||||
deploy tag="main-latest":
|
||||
@echo "Deploying with IMAGE_TAG={{tag}}"
|
||||
IMAGE_TAG={{tag}} docker-compose -f {{COMPOSE_FILE}} up -d
|
||||
|
||||
# Pull latest images
|
||||
pull tag="main-latest":
|
||||
docker pull {{IMAGE_NAME}}:{{tag}}
|
||||
|
||||
# Deploy latest from main branch
|
||||
deploy-main: (pull "main-latest")
|
||||
IMAGE_TAG=main-latest docker-compose -f {{COMPOSE_FILE}} up -d
|
||||
|
||||
# Deploy from dev branch
|
||||
deploy-dev: (pull "dev_deploy-latest")
|
||||
IMAGE_TAG=dev_deploy-latest docker-compose -f {{COMPOSE_FILE}} up -d
|
||||
|
# ============================================
# Environment Setup
# ============================================

# Create .env from template
init-env:
    @if [ ! -f .env ]; then \
        cp .env.example .env; \
        echo "Created .env from template. Please update with your credentials."; \
    else \
        echo ".env already exists. Skipping."; \
    fi

# Validate environment variables
check-env:
    @echo "Checking required environment variables..."
    @grep -v '^#' .env.example | grep '=' | cut -d'=' -f1 | while read var; do \
        if ! grep -q "^$var=" .env 2>/dev/null; then \
            echo "Missing: $var"; \
        fi; \
    done
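The `check-env` pipeline extracts variable names from `.env.example` and reports any not defined in `.env`. A stand-alone dry run of the same pipeline against throwaway files (the paths and variable names here are illustrative, not the project's):

```shell
# Build a scratch .env.example / .env pair, then run the check-env logic.
tmp=$(mktemp -d) && cd "$tmp"
printf '# comment line\nDB_URL=postgres://example\nREDIS_URL=redis://example\n' > .env.example
printf 'DB_URL=postgres://real\n' > .env
# Strip comments, keep KEY=VALUE lines, take the KEY, and flag any key
# that has no matching "KEY=" line in .env.
grep -v '^#' .env.example | grep '=' | cut -d'=' -f1 | while read var; do
    if ! grep -q "^$var=" .env 2>/dev/null; then
        echo "Missing: $var"
    fi
done
# → Missing: REDIS_URL
```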
# ============================================
# Development Workflow
# ============================================

# Full development setup
setup: init-env npm-install install scss
    @echo "✅ Development environment ready!"
    @echo "Run 'just dev' to start the development server"

# Start complete development environment
dev-full: scss-watch dev

# Rebuild and restart services
rebuild: build up-d
    @echo "Services rebuilt and restarted"

# ============================================
# CI/CD Helpers
# ============================================

# Simulate CI build
ci-build:
    @echo "Simulating CI build process..."
    just build dev_deploy-$(date +%Y-%m-%d)
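The `ci-build` recipe derives the image tag from today's date. A quick check of the tag shape it would pass to `just build`:

```shell
# date +%Y-%m-%d always yields zero-padded YYYY-MM-DD, so the generated
# tag matches a fixed pattern regardless of locale.
tag="dev_deploy-$(date +%Y-%m-%d)"
echo "$tag" | grep -Eq '^dev_deploy-[0-9]{4}-[0-9]{2}-[0-9]{2}$' && echo "tag format ok"
# → tag format ok
```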
# Run all quality checks (for pre-commit)
check: lint test
    @echo "✅ All checks passed!"

# ============================================
# Information Commands
# ============================================

# Show environment information
info:
    @echo "Project: Pygentic-AI"
    @echo "Image: {{IMAGE_NAME}}"
    @echo "Compose: {{COMPOSE_FILE}}"
    @echo ""
    @echo "Python version:"
    @uv run python --version
    @echo ""
    @echo "Docker version:"
    @docker --version
    @echo ""
    @echo "Docker Compose version:"
    @docker-compose --version

# Show current configuration
config:
    docker-compose -f {{COMPOSE_FILE}} config

# ============================================
# Claude AI Assistance
# ============================================

# Start Claude with full project context and multi-agent orchestration
start-claude *args="":
    @echo "🤖 Starting Claude with Pygentic-AI context..."
    @echo "📋 System Prompt: .claude/system-prompt.md (Multi-agent orchestration)"
    @echo "📖 Project Context: CLAUDE.md (Initialization guide)"
    @echo ""
    @if [ ! -f .claude/system-prompt.md ]; then \
        echo "❌ Error: .claude/system-prompt.md not found"; \
        echo "Run: git pull origin dev_deploy"; \
        exit 1; \
    fi
    @if [ ! -f CLAUDE.md ]; then \
        echo "❌ Error: CLAUDE.md not found"; \
        echo "Run: git pull origin dev_deploy"; \
        exit 1; \
    fi
    @echo "✅ All context files found"
    @echo ""
    @echo "Claude will load with:"
    @echo "  • Multi-agent orchestration (🏗️ Architect, 🎨 Frontend, ⚙️ Backend, 🔒 Security, 🚀 DevOps)"
    @echo "  • MCP server routing (Sequential, Context7, Magic, Playwright, Morphllm, Serena)"
    @echo "  • Project architecture and workflows"
    @echo ""
    @echo "🚀 Launching Claude Code CLI..."
    @echo "  System Prompt: .claude/system-prompt.md"
    @echo "  Use /init in Claude to load CLAUDE.md context"
    @echo ""
    claude --system-prompt-file .claude/system-prompt.md {{args}}
308  pyproject.toml
@@ -57,8 +57,7 @@ profile = "black"
 line_length = 80

 [tool.uv.sources]
 sqlmodel-crud-utilities = { git = "https://github.com/fsecada01/SQLModel-CRUD-Utilities", rev = "83e964f6e7b633e339e45ddcaaa49cd8617fa105" }
-fastapi-debug-toolbar = { git = "https://github.com/fsecada01/fastapi-debug-toolbar.git", rev = "2da9f1e724d1d7ca56990ba7a8e72598fa3e1cf4" }
+fastapi-debug-toolbar = { git = "https://github.com/fsecada01/fastapi-debug-toolbar.git", rev = "patch-2" }
 #multi_line_output = 3
 #include_trailing_comma = true
 #force_grid_wrap = 0

@@ -76,272 +75,51 @@ classifiers = [
 ]
 version = "1.0.0b0"
 dependencies = [
-    "aiofiles==24.1.0",
-    "aiomysql==0.2.0",
-    "amqp==5.3.1",
-    "annotated-types==0.7.0",
-    "anthropic==0.44.0",
-    "anyio==4.8.0",
-    "appdirs==1.4.4",
-    "asgiref==3.8.1",
-    "asttokens==2.4.1",
-    "asyncpg==0.30.0",
-    "beautifulsoup4==4.12.3",
-    "billiard==4.2.1",
-    "cachetools==5.5.1",
-    "celery==5.4.0",
-    "certifi==2024.12.14",
-    "charset-normalizer==3.4.1",
-    "click-didyoumean==0.3.1",
-    "click-plugins==1.1.1",
-    "click-repl==0.3.0",
-    "click==8.1.8",
-    "colorama==0.4.6",
-    "cssselect==1.2.0",
-    "deprecated==1.2.15",
-    "devtools==0.12.2",
-    "distro==1.9.0",
-    "eval-type-backport==0.2.2",
-    "executing==2.1.0",
-    "fake-useragent==2.0.3",
-    "fastapi-restful==0.6.0",
-    "fastapi==0.115.6",
-    "fastcrud==0.15.5",
-    "flower==2.0.1",
-    "google-auth==2.37.0",
-    "googleapis-common-protos==1.66.0",
-    "greenlet==3.1.1",
-    "griffe==1.5.5",
-    "groq==0.15.0",
-    "h11==0.14.0",
-    "h2==4.1.0",
-    "hpack==4.0.0",
-    "html5lib==1.1",
-    "httpcore==1.0.7",
-    "httpx-html==0.11.0.dev0",
-    "httpx==0.28.1",
-    "humanize==4.11.0",
-    "hypercorn==0.17.3",
-    "hyperframe==6.0.1",
-    "idna==3.10",
-    "importlib-metadata==8.5.0",
-    "itsdangerous==2.2.0",
-    "jinja2==3.1.5",
-    "jinjax==0.48",
-    "jiter==0.8.2",
-    "jsonpath-python==1.0.6",
-    "kombu==5.4.2",
-    "logfire-api==3.2.0",
-    "logfire==3.2.0",
-    "loguru==0.7.3",
-    "lxml-html-clean==0.4.1",
-    "lxml==5.3.0",
-    "markdown-it-py==3.0.0",
-    "markupsafe==3.0.2",
-    "mdurl==0.1.2",
-    "mistralai==1.4.0",
-    "mypy-extensions==1.0.0",
-    "openai==1.60.0",
-    "opentelemetry-api==1.29.0",
-    "opentelemetry-exporter-otlp-proto-common==1.29.0",
-    "opentelemetry-exporter-otlp-proto-http==1.29.0",
-    "opentelemetry-instrumentation-asgi==0.50b0",
-    "opentelemetry-instrumentation-asyncpg==0.50b0",
-    "opentelemetry-instrumentation-dbapi==0.50b0",
-    "opentelemetry-instrumentation-fastapi==0.50b0",
-    "opentelemetry-instrumentation-sqlite3==0.50b0",
-    "opentelemetry-instrumentation==0.50b0",
-    "opentelemetry-proto==1.29.0",
-    "opentelemetry-sdk==1.29.0",
-    "opentelemetry-semantic-conventions==0.50b0",
-    "opentelemetry-util-http==0.50b0",
-    "packaging==24.2",
-    "parse==1.20.2",
-    "praw==7.8.1",
-    "prawcore==2.4.0",
-    "priority==2.0.0",
-    "prometheus-client==0.21.1",
-    "prompt-toolkit==3.0.50",
-    "protobuf==5.29.3",
-    "psutil==5.9.8",
-    "psycopg==3.2.4",
-    "pyasn1-modules==0.4.1",
-    "pyasn1==0.6.1",
-    "pydantic-ai-examples==0.0.18",
-    "pydantic-ai-slim==0.0.18",
-    "pydantic-ai==0.0.18",
-    "pydantic-core==2.27.2",
-    "pydantic==2.10.5",
-    "pyee==11.1.1",
-    "pygments==2.19.1",
-    "pymysql==1.1.1",
-    "pyppeteer==2.0.0",
-    "pyquery==2.0.1",
-    "python-dateutil==2.9.0.post0",
-    "python-decouple==3.8",
-    "python-dotenv==1.0.1",
-    "python-multipart==0.0.20",
-    "python-slugify==8.0.4",
-    "pytz==2024.2",
-    "redis==5.2.1",
-    "requests==2.32.3",
-    "rich==13.9.4",
-    "rsa==4.9",
-    "simplejson==3.19.3",
-    "six==1.17.0",
-    "sniffio==1.3.1",
-    "socksio==1.0.0",
-    "soupsieve==2.6",
-    "sqlalchemy-mixins==2.0.5",
-    "sqlalchemy-utils==0.41.2",
-    "sqlalchemy==2.0.37",
+    "aiofiles>=24.1.0",
+    "aiomysql>=0.2.0",
+    "celery>=5.4.0",
+    "fastapi>=0.115.7",
+    "fastapi-restful>=0.6.0",
+    "fastcrud>=0.15.5",
+    "flower>=2.0.1",
+    "greenlet>=3.1.1",
+    "gunicorn>=25.0.1 ; sys_platform != 'win32'",
+    "html5lib>=1.1",
+    "httpx-html>=0.11.0.dev0",
+    "httpx[socks]>=0.28.1",
+    "hypercorn>=0.17.3 ; sys_platform == 'win32'",
+    "itsdangerous>=2.2.0",
+    "jinjax>=0.48",
+    "loguru>=0.7.3",
+    "lxml[html-clean]>=5.3.0",
+    "openai>=1.60.0",
+    "praw>=7.8.1",
+    "psycopg>=3.2.4",
+    "pydantic-ai[examples]>=0.0.18",
+    "pydantic-settings>=2.7.1",
+    "python-dateutil>=2.9.0.post0",
+    "python-decouple>=3.8",
+    "python-slugify>=8.0.4",
+    "pytz>=2024.2",
+    "redis>=5.2.1",
+    "simplejson>=3.19.3",
+    "sqlalchemy-mixins>=2.0.5",
+    "sqlmodel>=0.0.22",
+    "sqlmodel-crud-utilities",
-    "sqlmodel==0.0.22",
-    "starlette==0.41.3",
-    "text-unidecode==1.3",
-    "tornado==6.4.2",
-    "tqdm==4.67.1",
-    "typing-extensions==4.12.2",
-    "typing-inspect==0.9.0",
-    "tzdata==2025.1",
-    "update-checker==0.18.0",
-    "urllib3==1.26.20",
-    "uvicorn==0.34.0",
-    "vine==5.1.0",
-    "w3lib==2.2.1",
-    "wcwidth==0.2.13",
-    "webencodings==0.5.1",
-    "websocket-client==1.8.0",
-    "websockets==10.4",
-    "win32-setctime==1.2.0",
-    "wrapt==1.17.2",
-    "wsproto==1.2.0",
-    "xmljson==0.2.1",
-    "xmltodict==0.14.2",
-    "zipp==3.21.0",
+    "typing-inspect>=0.9.0",
+    "uvicorn>=0.34.0",
+    "xmljson>=0.2.1",
+    "xmltodict>=0.14.2",
 ]

 [dependency-groups]
 dev = [
-    "aiofiles==24.1.0",
-    "alembic==1.14.1",
-    "annotated-types==0.7.0",
-    "anyio==4.8.0",
-    "argon2-cffi-bindings==21.2.0",
-    "argon2-cffi==23.1.0",
-    "arrow==1.3.0",
-    "asttokens==2.4.1",
-    "async-lru==2.0.4",
-    "attrs==24.3.0",
-    "babel==2.16.0",
-    "beautifulsoup4==4.12.3",
-    "black==24.10.0",
-    "bleach==6.2.0",
-    "certifi==2024.12.14",
-    "cffi==1.17.1",
-    "cfgv==3.4.0",
-    "charset-normalizer==3.4.1",
-    "click==8.1.8",
-    "colorama==0.4.6",
-    "comm==0.2.2",
-    "debugpy==1.8.12",
-    "decorator==5.1.1",
-    "defusedxml==0.7.1",
-    "distlib==0.3.9",
-    "executing==2.1.0",
-    "fastapi-debug-toolbar==0.6.3",
-    "fastapi==0.115.6",
-    "fastjsonschema==2.21.1",
-    "filelock==3.17.0",
-    "fqdn==1.5.1",
-    "greenlet==3.1.1",
-    "h11==0.14.0",
-    "httpcore==1.0.7",
-    "httpx==0.28.1",
-    "identify==2.6.6",
-    "idna==3.10",
-    "ipykernel==6.29.5",
-    "ipython==8.31.0",
-    "isoduration==20.11.0",
-    "isort==5.13.2",
-    "jedi==0.19.2",
-    "jinja2==3.1.5",
-    "json5==0.10.0",
-    "jsonpointer==3.0.0",
-    "jsonschema-specifications==2024.10.1",
-    "jsonschema==4.23.0",
-    "jupyter-client==8.6.3",
-    "jupyter-core==5.7.2",
-    "jupyter-events==0.11.0",
-    "jupyter-lsp==2.2.5",
-    "jupyter-server-terminals==0.5.3",
-    "jupyter-server==2.15.0",
-    "jupyterlab-code-formatter==3.0.2",
-    "jupyterlab-pygments==0.3.0",
-    "jupyterlab-server==2.27.3",
-    "jupyterlab==4.3.4",
-    "mako==1.3.8",
-    "markupsafe==3.0.2",
-    "matplotlib-inline==0.1.7",
-    "mistune==3.1.0",
-    "mypy-extensions==1.0.0",
-    "nbclient==0.10.2",
-    "nbconvert==7.16.5",
-    "nbformat==5.10.4",
-    "nest-asyncio==1.6.0",
-    "nodeenv==1.9.1",
-    "notebook-shim==0.2.4",
-    "overrides==7.7.0",
-    "packaging==24.2",
-    "pandocfilters==1.5.1",
-    "parso==0.8.4",
-    "pathspec==0.12.1",
-    "platformdirs==4.3.6",
-    "pre-commit==4.1.0",
-    "prometheus-client==0.21.1",
-    "prompt-toolkit==3.0.50",
-    "psutil==5.9.8",
-    "pure-eval==0.2.3",
-    "pycparser==2.22",
-    "pydantic-core==2.27.2",
-    "pydantic-extra-types==2.10.2",
-    "pydantic-settings==2.7.1",
-    "pydantic==2.10.5",
-    "pygments==2.19.1",
-    "pyinstrument==5.0.0",
-    "python-dateutil==2.9.0.post0",
-    "python-dotenv==1.0.1",
-    "python-json-logger==3.2.1",
-    "pywin32==308",
-    "pywinpty==2.0.14",
-    "pyyaml==6.0.2",
-    "pyzmq==26.2.0",
-    "referencing==0.36.1",
-    "requests==2.32.3",
-    "rfc3339-validator==0.1.4",
-    "rfc3986-validator==0.1.1",
-    "rpds-py==0.22.3",
-    "ruff==0.9.2",
-    "send2trash==1.8.3",
-    "setuptools==75.8.0",
-    "six==1.17.0",
-    "sniffio==1.3.1",
-    "soupsieve==2.6",
-    "sqlalchemy==2.0.37",
-    "sqlparse==0.5.3",
-    "stack-data==0.6.3",
-    "starlette==0.41.3",
-    "terminado==0.18.1",
-    "tinycss2==1.4.0",
-    "tornado==6.4.2",
-    "traitlets==5.14.3",
-    "types-python-dateutil==2.9.0.20241206",
-    "typing-extensions==4.12.2",
-    "uri-template==1.3.0",
-    "urllib3==1.26.20",
-    "virtualenv==20.29.1",
-    "wcwidth==0.2.13",
-    "webcolors==24.11.1",
-    "webencodings==0.5.1",
-    "websocket-client==1.8.0",
+    "alembic>=1.18.3",
+    "black>=26.1.0",
+    "fastapi-debug-toolbar",
+    "isort>=7.0.0",
+    "jupyterlab>=4.5.3",
+    "jupyterlab-code-formatter>=3.0.2",
+    "pre-commit>=4.5.1",
+    "ruff>=0.14.14",
 ]
@@ -1,11 +1,14 @@
 import asyncio

 import httpx
 from bs4 import BeautifulSoup as soup
-from pydantic_ai import RunContext
+from pydantic_ai import ModelRetry, RunContext

 from backend.core.consts import AI_MODEL
 from backend.core.core import SwotAgentDeps, SwotAnalysis, swot_agent
 from backend.core.utils import report_tool_usage
 from backend.logger import logger
 from backend.utils import get_val, set_event_loop, windows_sys_event_loop_check


 @swot_agent.tool(prepare=report_tool_usage)

@@ -40,7 +43,7 @@ async def analyze_competition(
     product_name: str,
     product_description: str,
 ) -> str:
-    """Analyzes the competition for the given product using the Gemini model."""
+    """Analyzes the competition for the given product using the OpenAI model."""
     logger.info(f"Analyzing competition for: {product_name}")

     prompt = f"""

@@ -75,7 +78,7 @@ async def analyze_competition(
 async def get_reddit_insights(
     ctx: RunContext[SwotAgentDeps],
     query: str,
-    subreddit_name: str = "python",
+    subreddit_names: list[str] | None = None,
 ):
     """
     A tool to gain insights from a subreddit. Data is returned as string

@@ -83,23 +86,87 @@ async def get_reddit_insights(

     :param ctx: RunContext[SwotAgentDeps]
     :param query: str
-    :param subreddit_name: str
+    :param subreddit_names: str
     :return: str
     """
-    subreddit = ctx.deps.reddit_client.subreddit(subreddit_name)
-    search_results = subreddit.search(query)
-
+    if not subreddit_names:
+        subreddit_names = get_val("REDDIT_SUBREDDIT", "python, ")
+        subreddit_names = [x.strip() for x in subreddit_names.split(",")]
     insights = []
-    for post in search_results:
-        insights.append(
-            f"Title: {post.title}\n"
-            f"URL: {post.url}\n"
-            f"Content: {post.selftext}\n",
-        )
+    if len(subreddit_names) <= 3:
+        for name in subreddit_names:
+            subreddit = ctx.deps.reddit_client.subreddit(name)
+            search_results = subreddit.search(query)
+
+            for post in search_results:
+                insights.append(
+                    f"Title: {post.title}\n"
+                    f"URL: {post.url}\n"
+                    f"Content: {post.selftext}\n",
+                )
+    else:
+        windows_sys_event_loop_check()
+        set_event_loop()
+        loop = asyncio.get_event_loop()
+        tasks = [
+            asyncio.ensure_future(
+                loop.run_in_executor(
+                    None, ctx.deps.reddit_client.subreddit(name).search, query
+                )
+            )
+            for name in subreddit_names
+        ]
+        results = await asyncio.gather(*tasks)
+        for result in results:
+            for post in result:
+                insights.append(
+                    f"Title: {post.title}\n"
+                    f"URL: {post.url}\n"
+                    f"Content: {post.selftext}\n",
+                )

     return "\n".join(insights)
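The `else` branch above fans blocking praw searches out to the default thread-pool executor and gathers them concurrently. A stripped-down, runnable sketch of that pattern with a stubbed client (the function and subreddit names here are illustrative):

```python
import asyncio


def blocking_search(name: str, query: str) -> list[str]:
    # Stand-in for praw's subreddit(name).search(query), which blocks.
    return [f"{name}: post about {query}"]


async def fan_out(names: list[str], query: str) -> list[str]:
    loop = asyncio.get_running_loop()
    # Each blocking call runs on the default executor; gather preserves order.
    tasks = [
        loop.run_in_executor(None, blocking_search, name, query)
        for name in names
    ]
    results = await asyncio.gather(*tasks)
    return [post for result in results for post in result]


posts = asyncio.run(fan_out(["python", "fastapi"], "swot"))
print(posts)
# → ['python: post about swot', 'fastapi: post about swot']
```

Because `run_in_executor` takes the callable plus positional arguments, the real code can pass the bound `subreddit(name).search` method and `query` directly, as the diff does.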
+
+
+@swot_agent.result_validator
+def validate_result(
+    _ctx: RunContext[SwotAgentDeps], value: SwotAnalysis
+) -> SwotAnalysis:
+    """
+    A validator for SWOT Analysis results; provides greater completeness and
+    quality control
+    :param _ctx: RunContext[SwotAgentDeps]
+    :param value: SwotAnalysis
+    :return: SwotAnalysis
+    """
+    issues = []
+    min = 2
+    categories = {
+        k.title(): getattr(value, k)
+        for k in ("strengths", "weaknesses", "opportunities", "threats")
+    }
+
+    for cat_name, points in categories.items():
+        if len(points) < min:
+            issues.append(
+                f"{cat_name} should have at least {min} points. "
+                f"Current count is {len(points)}."
+            )
+
+    min_len_analysis = 100
+    if len(value.analysis) < min_len_analysis:
+        issues.append(
+            f"Analysis should have at least {min_len_analysis} "
+            f"characters. Current count is {len(value.analysis)}."
+        )
+
+    if issues:
+        logger.info(f"Validation issues: {issues}")
+        raise ModelRetry("\n".join(issues))
+
+    return value
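The validator accumulates every issue before raising, so the model sees all feedback in a single retry rather than one problem at a time. The same accumulate-then-raise pattern in isolation, with a plain exception standing in for `pydantic_ai.ModelRetry` (all names here are illustrative):

```python
class RetryRequest(Exception):
    """Stand-in for pydantic_ai.ModelRetry."""


def validate(categories: dict[str, list[str]], analysis: str, min_points: int = 2):
    issues = []
    # Collect every shortfall instead of failing fast.
    for name, points in categories.items():
        if len(points) < min_points:
            issues.append(
                f"{name} should have at least {min_points} points. "
                f"Current count is {len(points)}."
            )
    if len(analysis) < 100:
        issues.append("Analysis should have at least 100 characters.")
    # Raise once, with the full list joined into one message.
    if issues:
        raise RetryRequest("\n".join(issues))


try:
    validate({"Strengths": ["fast"]}, "too short")
except RetryRequest as e:
    print(e)
```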
 async def run_agent(
     url: str,
     deps: SwotAgentDeps = SwotAgentDeps(),

@@ -1,4 +1,6 @@
+import asyncio
 import os
 import sys

 from decouple import config

@@ -92,3 +94,24 @@ def get_val(val: str, default: str | int | bool | None = None, **kwargs):
     )

     return val
+
+
+def windows_sys_event_loop_check():
+    """
+    A function that sets the event loop policy to a Windows-specific one.
+    This is a workaround to a known bug involving capturing an existing
+    asyncio event loop on non-Linux platforms.
+    """
+    if sys.platform == "win32":
+        asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
+
+
+def set_event_loop():
+    """
+    A utility function to capture the existing event loop if it's running.
+    If no event loop is running, then a new one is created and set.
+    """
+    try:
+        asyncio.get_running_loop()
+    except RuntimeError:
+        asyncio.set_event_loop(asyncio.new_event_loop())
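`set_event_loop` probes for a running loop and only creates a fresh one when the probe raises `RuntimeError` (which it does whenever no loop is running in the current thread). The same try/except probe as a stand-alone sketch (the helper name is illustrative):

```python
import asyncio


def ensure_loop() -> asyncio.AbstractEventLoop:
    # get_running_loop() raises RuntimeError outside a coroutine,
    # so synchronous callers fall through to the fallback branch.
    try:
        return asyncio.get_running_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        return loop


loop = ensure_loop()
print(loop.is_running())
# → False
```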
1  src/frontend/.npmrc  Normal file
@@ -0,0 +1 @@
registry=https://registry.npmjs.org/
2448  src/frontend/package-lock.json  generated  Normal file
File diff suppressed because it is too large
23  src/frontend/package.json  Normal file
@@ -0,0 +1,23 @@
{
  "dependencies": {
    "@creativebulma/bulma-collapsible": "^1.0.4",
    "@vizuaalog/bulmajs": "^0.12.2",
    "bulma": "^1.0.4",
    "bulma-tagsinput": "^2.0.0",
    "htmx.org": "^2.0.8",
    "install": "^0.13.0",
    "jquery": "^4.0.0",
    "npm": "^11.8.0"
  },
  "description": "Pygentic AI Frontend - SWOT Analysis Interface",
  "devDependencies": {
    "sass": "^1.83.0"
  },
  "name": "pygentic-ai-frontend",
  "scripts": {
    "build:css": "sass scss/styles.scss static/css/pygentic_ai.css --style compressed",
    "dev": "npm run watch:css",
    "watch:css": "sass scss/styles.scss static/css/pygentic_ai.css --watch"
  },
  "version": "1.0.0"
}
96  src/frontend/scss/_animations.scss  Normal file
@@ -0,0 +1,96 @@
// ============================================
// ANIMATIONS & KEYFRAMES
// ============================================

@keyframes spin {
  from {
    transform: rotate(0deg);
  }
  to {
    transform: rotate(360deg);
  }
}

@keyframes float {
  0%, 100% {
    transform: translateY(0px);
  }
  50% {
    transform: translateY(-10px);
  }
}

@keyframes pulse {
  0%, 100% {
    box-shadow: 0 0 0 0 rgba($brand-primary, 0.7);
  }
  50% {
    box-shadow: 0 0 0 10px rgba($brand-primary, 0);
  }
}

@keyframes slideUp {
  from {
    opacity: 0;
    transform: translateY(20px);
  }
  to {
    opacity: 1;
    transform: translateY(0);
  }
}

@keyframes fadeIn {
  from {
    opacity: 0;
  }
  to {
    opacity: 1;
  }
}

@keyframes shake {
  0%, 100% { transform: translateX(0); }
  25% { transform: translateX(-10px); }
  75% { transform: translateX(10px); }
}

@keyframes iconPulse {
  0%, 100% {
    transform: scale(1);
  }
  50% {
    transform: scale(1.1);
  }
}

@keyframes containerFadeIn {
  from {
    opacity: 0;
    transform: translateY(30px);
  }
  to {
    opacity: 1;
    transform: translateY(0);
  }
}

@keyframes statusFadeIn {
  from {
    opacity: 0;
    transform: translateX(-10px);
  }
  to {
    opacity: 1;
    transform: translateX(0);
  }
}

@keyframes statusPulse {
  0%, 100% {
    box-shadow: $shadow-md;
  }
  50% {
    box-shadow: 0 4px 20px rgba($swot-weakness, 0.3);
  }
}
208  src/frontend/scss/_components.scss  Normal file
@@ -0,0 +1,208 @@
// ============================================
// COMPONENTS
// Reusable UI components
// ============================================

// Accessibility
// ===================================
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border-width: 0;
}

.skip-link {
  position: absolute;
  top: -40px;
  left: 0;
  background: $brand-primary;
  color: white;
  padding: 0.75rem 1.5rem;
  text-decoration: none;
  font-weight: 600;
  z-index: 10000;
  border-radius: 0 0 $radius-md 0;
  transition: top $transition-fast;

  &:focus {
    top: 0;
  }
}

// Loading Spinner
// ===================================
.spinner-wrapper {
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background-color: rgba(255, 255, 255, 0.95);
  backdrop-filter: blur(8px);
  display: flex;
  justify-content: center;
  align-items: center;
  z-index: 9999;
}

.loading-content {
  text-align: center;
}

.loader {
  border: 4px solid rgba($brand-primary, 0.1);
  border-top: 4px solid $brand-primary;
  border-radius: 50%;
  width: 60px;
  height: 60px;
  animation: spin 0.8s linear infinite;
  margin: 0 auto;
}

.loading-text {
  margin-top: 1.5rem;

  h3 {
    font-size: 1.25rem;
    font-weight: 600;
    color: $neutral-900;
    margin-bottom: 0.5rem;
  }

  p {
    font-size: 0.875rem;
    color: $neutral-600;
  }
}

// Search Form
// ===================================
.search-container {
  max-width: 800px;
  margin: 0 auto;
  padding: 2rem 1rem;
}

.search-form {
  width: 100%;
}

.search-input-group {
  display: flex;
  align-items: center;
  background: white;
  border-radius: $radius-full;
  padding: 0.5rem;
  box-shadow: $shadow-xl;
  transition: all $transition-base;

  &:focus-within {
    box-shadow: 0 20px 40px rgba($brand-primary, 0.2);
    transform: translateY(-2px);
  }
}

.search-icon {
  padding: 0 1rem;
  color: $neutral-400;
  font-size: 1.25rem;
}

.search-input {
  flex: 1;
  border: none;
  outline: none;
  padding: 0.875rem 1rem;
  font-size: 1rem;
  background: transparent;
  color: $neutral-900;

  &::placeholder {
    color: $neutral-400;
  }
}

.search-button {
  background: $brand-primary;
  color: white;
  border: none;
  border-radius: $radius-full;
  padding: 0.875rem 2rem;
  font-weight: 600;
  font-size: 1rem;
  cursor: pointer;
  display: flex;
  align-items: center;
  gap: 0.5rem;
  transition: all $transition-base;
  white-space: nowrap;

  &:hover {
    background: $brand-primary-dark;
    transform: translateX(2px);
    box-shadow: $shadow-lg;
  }

  &:active {
    transform: scale(0.95);
  }

  &:focus {
    outline: 2px solid $brand-primary-light;
    outline-offset: 2px;
  }

  &.is-loading {
    position: relative;
    color: transparent;
    pointer-events: none;

    &::after {
      content: '';
      position: absolute;
      width: 16px;
      height: 16px;
      top: 50%;
      left: 50%;
      margin-left: -8px;
      margin-top: -8px;
      border: 2px solid transparent;
      border-top-color: white;
      border-radius: 50%;
      animation: spin 0.6s linear infinite;
    }
  }
}

.search-help {
  margin-top: 1rem;
  text-align: center;
  color: $neutral-600;
  font-size: 0.875rem;

  i {
    margin-right: 0.25rem;
    color: $brand-primary;
  }
}

// Smooth transitions for all interactive elements
button,
a,
input,
.swot-card,
.swot-card__icon {
  transition: all $transition-base;
}

// Focus visible styles for accessibility
:focus-visible {
  outline: 2px solid $brand-primary;
  outline-offset: 2px;
}
269
src/frontend/scss/_layout.scss
Normal file
269
src/frontend/scss/_layout.scss
Normal file
@ -0,0 +1,269 @@
|
||||
// ============================================
|
||||
// LAYOUT
|
||||
// Page structure and major sections
|
||||
// ============================================
|
||||
|
||||
// Hero Section
|
||||
// ===================================
|
||||
.gradient-hero {
|
||||
background: $gradient-hero !important;
|
||||
position: relative;
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
.hero-icon {
|
||||
img {
|
||||
filter: brightness(0) invert(1);
|
||||
animation: float 3s ease-in-out infinite;
|
||||
|
||||
&:hover {
|
||||
animation: float 1s ease-in-out infinite;
|
||||
filter: brightness(0) invert(1) drop-shadow(0 0 20px rgba(255, 255, 255, 0.5));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
.hero-cta .button {
|
||||
animation: pulse 2s infinite;
|
||||
}
|
||||
|
||||
// SWOT Cards
|
||||
// ===================================
|
||||
.swot-grid {
|
||||
display: grid;
|
||||
grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
|
||||
gap: $spacing-lg;
|
||||
margin-top: $spacing-xl;
|
||||
}
|
||||
|
||||
.swot-card {
|
||||
background: white;
|
||||
border-radius: $radius-lg;
|
||||
padding: $spacing-lg;
|
||||
box-shadow: $shadow-lg;
|
||||
transition: all $transition-base;
|
||||
border-top: 4px solid var(--card-color);
|
||||
animation: slideUp 0.5s ease-out backwards;
|
||||
|
||||
&:hover {
|
||||
transform: translateY(-4px);
|
||||
box-shadow: $shadow-2xl;
|
||||
|
||||
.swot-card__icon {
|
||||
animation: iconPulse 0.5s ease-out;
|
||||
}
|
||||
}
|
||||
|
||||
// Card variants
|
||||
&--strength {
|
||||
--card-color: #{$swot-strength};
|
||||
}
|
||||
|
||||
&--weakness {
|
||||
--card-color: #{$swot-weakness};
|
||||
}
|
||||
|
||||
&--opportunity {
|
||||
--card-color: #{$swot-opportunity};
|
||||
}
|
||||
|
||||
&--threat {
|
||||
--card-color: #{$swot-threat};
|
||||
}
|
||||
}
|
||||
|
||||
.swot-card__header {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 0.75rem;
|
||||
margin-bottom: 1rem;
|
||||
padding-bottom: 1rem;
|
||||
border-bottom: 1px solid $neutral-200;
|
||||
}
|
||||
|
||||
.swot-card__icon {
|
||||
width: 48px;
|
||||
height: 48px;
|
||||
border-radius: $radius-md;
|
||||
background: var(--card-color);
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
color: white;
|
||||
font-size: 1.5rem;
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
.swot-card__title {
|
||||
font-weight: 600;
|
||||
font-size: 1.375rem;
|
||||
color: $neutral-900;
|
||||
flex-grow: 1;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.swot-card__count {
|
||||
background: $neutral-100;
|
||||
padding: 0.25rem 0.75rem;
|
||||
border-radius: $radius-xl;
|
||||
font-size: 0.875rem;
|
||||
font-weight: 600;
|
||||
color: $neutral-700;
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
.swot-card__body {
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
.swot-list {
|
||||
list-style: none;
|
||||
padding: 0;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.swot-list__item {
|
||||
display: flex;
|
||||
align-items: flex-start;
|
||||
gap: 0.75rem;
|
||||
padding: 0.75rem 0;
|
||||
border-bottom: 1px solid $neutral-100;
|
||||
animation: fadeIn 0.3s ease-out backwards;
|
||||
|
||||
&:last-child {
|
||||
border-bottom: none;
|
||||
}
|
||||
|
||||
// Stagger animation for list items
|
||||
@for $i from 1 through 10 {
|
||||
&:nth-child(#{$i}) {
|
||||
animation-delay: #{$i * 0.1}s;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
.swot-list__bullet {
|
||||
width: 6px;
|
||||
height: 6px;
|
||||
border-radius: 50%;
|
||||
background: var(--card-color);
|
||||
  margin-top: 0.5rem;
  flex-shrink: 0;
}

.swot-list__text {
  flex: 1;
  color: $neutral-700;
  line-height: 1.6;
}

// Stagger card animations
@for $i from 1 through 4 {
  .swot-card:nth-child(#{$i}) {
    animation-delay: #{$i * 0.1}s;
  }
}

// Result container entrance
#result-container.animate-in {
  animation: containerFadeIn 0.6s ease-out;
}

// Status Timeline
// ===================================
.status-timeline {
  position: relative;
  padding-left: 2rem;

  &::before {
    content: '';
    position: absolute;
    left: 14px;
    top: 24px;
    bottom: 24px;
    width: 2px;
    background: $neutral-200;
  }
}

.status-item {
  position: relative;
  padding: 1rem 0 1rem 2rem;
  animation: statusFadeIn 0.3s ease-out;

  // Status variants
  &--info {
    .status-item__indicator,
    .status-item__header {
      color: $swot-opportunity;
    }
  }

  &--loading {
    .status-item__indicator,
    .status-item__header {
      color: $swot-weakness;
    }

    .status-item__content {
      animation: statusPulse 2s ease-in-out infinite;
    }
  }

  &--success {
    .status-item__indicator,
    .status-item__header {
      color: $swot-strength;
    }
  }

  &--error {
    .status-item__indicator,
    .status-item__header {
      color: $swot-threat;
    }
  }
}

.status-item__indicator {
  position: absolute;
  left: 0;
  top: 1.25rem;
  width: 30px;
  height: 30px;
  border-radius: 50%;
  display: flex;
  align-items: center;
  justify-content: center;
  font-size: 1rem;
  background: white;
  box-shadow: 0 0 0 4px white;
  z-index: 1;
}

.status-item__content {
  background: white;
  border-radius: $radius-md;
  padding: 1rem 1.25rem;
  box-shadow: $shadow-md;
  transition: all $transition-base;
}

.status-item:hover .status-item__content {
  box-shadow: $shadow-lg;
  transform: translateX(4px);
}

.status-item__header {
  font-weight: 600;
  font-size: 0.875rem;
  text-transform: uppercase;
  letter-spacing: 0.05em;
  margin-bottom: 0.5rem;
}

.status-item__message {
  color: $neutral-700;
  line-height: 1.6;
  font-size: 0.9375rem;
}
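The `@for` stagger loop above compiles to one `animation-delay` rule per card. A minimal sketch of the output Sass would generate, assuming the default of four `.swot-card` elements (the function name here is hypothetical, for illustration only):

```javascript
// Generate the CSS rules the SCSS @for loop compiles to.
// cards and step mirror the loop bounds (1..4) and the 0.1s increment.
function staggerRules(cards = 4, step = 0.1) {
  const rules = [];
  for (let i = 1; i <= cards; i++) {
    // toFixed(1) keeps the delay formatted like the compiled SCSS (0.1s, 0.2s, ...)
    rules.push(`.swot-card:nth-child(${i}) { animation-delay: ${(i * step).toFixed(1)}s; }`);
  }
  return rules;
}
```

Each card therefore starts its entrance animation 0.1s after the previous one, which is what produces the cascading reveal of the four SWOT quadrants.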
74 src/frontend/scss/_responsive.scss Normal file
@@ -0,0 +1,74 @@
// ============================================
// RESPONSIVE DESIGN
// Mobile-first breakpoints
// ============================================

// Tablet and below (768px)
@media (max-width: $breakpoint-tablet) {
  .cell.is-col-span-2 {
    grid-column: span 1;
  }

  .swot-grid {
    grid-template-columns: 1fr;
  }

  .hero-title {
    font-size: 2rem;
  }

  .search-input-group {
    flex-direction: column;
    gap: 0.5rem;
    border-radius: $radius-xl;
    padding: 1rem;
  }

  .search-input {
    width: 100%;
    text-align: center;
  }

  .search-button {
    width: 100%;
    justify-content: center;
  }

  .swot-card__header {
    flex-wrap: wrap;
  }

  .swot-card__count {
    order: -1;
    margin-left: auto;
  }
}

// Mobile (480px)
@media (max-width: $breakpoint-mobile) {
  .swot-card {
    padding: 1rem;
  }

  .swot-card__icon {
    width: 40px;
    height: 40px;
    font-size: 1.25rem;
  }

  .swot-card__title {
    font-size: 1.125rem;
  }

  .hero-title {
    font-size: 1.75rem;
  }

  .status-timeline {
    padding-left: 1.5rem;
  }

  .status-item {
    padding-left: 1.5rem;
  }
}
169 src/frontend/scss/_states.scss Normal file
@@ -0,0 +1,169 @@
// ============================================
// STATES
// Empty and error states
// ============================================

// Empty State
// ===================================
.empty-state {
  display: flex;
  align-items: center;
  justify-content: center;
  min-height: 400px;
  padding: 3rem 1.5rem;
}

.empty-state__content {
  text-align: center;
  max-width: 500px;
}

.empty-state__icon {
  width: 120px;
  height: 120px;
  margin: 0 auto 2rem;
  border-radius: 50%;
  background: linear-gradient(135deg, $neutral-100 0%, $neutral-200 100%);
  display: flex;
  align-items: center;
  justify-content: center;
  font-size: 3rem;
  color: $neutral-400;
  animation: float 3s ease-in-out infinite;
}

.empty-state__title {
  font-size: 1.75rem;
  font-weight: 600;
  color: $neutral-800;
  margin-bottom: 0.75rem;
}

.empty-state__description {
  font-size: 1.125rem;
  color: $neutral-600;
  line-height: 1.6;
  margin-bottom: 1.5rem;
}

.empty-state__cta {
  display: inline-flex;
  align-items: center;
  gap: 0.5rem;
  padding: 0.875rem 2rem;
  background: $brand-primary;
  color: white;
  border-radius: $radius-full;
  text-decoration: none;
  font-weight: 600;
  transition: all $transition-base;
  box-shadow: $shadow-md;

  &:hover {
    background: $brand-primary-dark;
    transform: translateY(-2px);
    box-shadow: $shadow-lg;
    color: white;
  }
}

// Error State
// ===================================
.error-state {
  display: flex;
  align-items: center;
  justify-content: center;
  min-height: 400px;
  padding: 3rem 1.5rem;
}

.error-state__content {
  text-align: center;
  max-width: 500px;
}

.error-state__icon {
  width: 120px;
  height: 120px;
  margin: 0 auto 2rem;
  border-radius: 50%;
  background: linear-gradient(135deg, #FEE2E2 0%, #FEF2F2 100%);
  display: flex;
  align-items: center;
  justify-content: center;
  font-size: 3rem;
  color: $swot-threat;
  animation: shake 0.5s ease-in-out;
}

.error-state__title {
  font-size: 1.75rem;
  font-weight: 600;
  color: $neutral-800;
  margin-bottom: 0.75rem;
}

.error-state__description {
  font-size: 1.125rem;
  color: $neutral-600;
  line-height: 1.6;
  margin-bottom: 1.5rem;
}

.error-state__details {
  background: $neutral-100;
  border-radius: $radius-md;
  padding: 1rem;
  margin: 1.5rem 0;
  font-family: 'Courier New', monospace;
  font-size: 0.875rem;
  color: $neutral-700;
  text-align: left;
  overflow-x: auto;
}

.error-state__actions {
  display: flex;
  gap: 1rem;
  justify-content: center;
  flex-wrap: wrap;
}

.error-state__button {
  display: inline-flex;
  align-items: center;
  gap: 0.5rem;
  padding: 0.875rem 2rem;
  border-radius: $radius-full;
  font-weight: 600;
  transition: all $transition-base;
  box-shadow: $shadow-md;
  cursor: pointer;
  border: none;
  text-decoration: none;

  &--primary {
    background: $brand-primary;
    color: white;

    &:hover {
      background: $brand-primary-dark;
      transform: translateY(-2px);
      box-shadow: $shadow-lg;
      color: white;
    }
  }

  &--secondary {
    background: white;
    color: $neutral-700;
    border: 2px solid $neutral-300;

    &:hover {
      background: $neutral-100;
      border-color: $neutral-400;
      transform: translateY(-2px);
      color: $neutral-800;
    }
  }
}
30 src/frontend/scss/_typography.scss Normal file
@@ -0,0 +1,30 @@
// ============================================
// TYPOGRAPHY
// ============================================

body {
  font-family: $font-body;
  font-feature-settings: 'cv02', 'cv03', 'cv04', 'cv11';
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}

.title,
.subtitle,
h1, h2, h3, h4, h5, h6 {
  font-family: $font-heading;
  letter-spacing: -0.02em;
  font-weight: 600;
}

// Hero Title with Gradient
.hero-title {
  font-size: clamp(2.5rem, 5vw, 4rem);
  font-weight: 700;
  background: $gradient-hero-alt;
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;
  background-clip: text;
  margin-bottom: 1rem;
  line-height: 1.1;
}
77 src/frontend/scss/_variables.scss Normal file
@@ -0,0 +1,77 @@
// ============================================
// PYGENTIC AI - SCSS VARIABLES
// Design tokens and configuration
// ============================================

// Font Imports
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&family=Space+Grotesk:wght@500;600;700&display=swap');

// Font Families
$font-body: 'Inter', -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
$font-heading: 'Space Grotesk', 'Inter', sans-serif;

// Brand Colors (from purple.svg logo)
$brand-primary: #8B5CF6;
$brand-primary-light: #A78BFA;
$brand-primary-dark: #7C3AED;
$brand-primary-darker: #6D28D9;

// Semantic SWOT Colors
$swot-strength: #10B981;
$swot-strength-light: #34D399;
$swot-weakness: #F59E0B;
$swot-weakness-light: #FBBF24;
$swot-opportunity: #3B82F6;
$swot-opportunity-light: #60A5FA;
$swot-threat: #EF4444;
$swot-threat-light: #F87171;

// Neutral Palette
$neutral-50: #F9FAFB;
$neutral-100: #F3F4F6;
$neutral-200: #E5E7EB;
$neutral-300: #D1D5DB;
$neutral-400: #9CA3AF;
$neutral-500: #6B7280;
$neutral-600: #4B5563;
$neutral-700: #374151;
$neutral-800: #1F2937;
$neutral-900: #111827;

// Gradients
$gradient-hero: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
$gradient-hero-alt: linear-gradient(135deg, $brand-primary 0%, $brand-primary-darker 100%);
$gradient-card: linear-gradient(135deg, #f5f7fa 0%, #c3cfe2 100%);

// Shadows
$shadow-sm: 0 1px 2px 0 rgba(0, 0, 0, 0.05);
$shadow-md: 0 4px 6px -1px rgba(0, 0, 0, 0.1), 0 2px 4px -1px rgba(0, 0, 0, 0.06);
$shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.1), 0 4px 6px -2px rgba(0, 0, 0, 0.05);
$shadow-xl: 0 20px 25px -5px rgba(0, 0, 0, 0.1), 0 10px 10px -5px rgba(0, 0, 0, 0.04);
$shadow-2xl: 0 25px 50px -12px rgba(0, 0, 0, 0.25);

// Transitions
$transition-fast: 150ms cubic-bezier(0.4, 0, 0.2, 1);
$transition-base: 300ms cubic-bezier(0.4, 0, 0.2, 1);
$transition-slow: 500ms cubic-bezier(0.4, 0, 0.2, 1);

// Breakpoints
$breakpoint-mobile: 480px;
$breakpoint-tablet: 768px;
$breakpoint-desktop: 1024px;
$breakpoint-widescreen: 1216px;

// Spacing
$spacing-xs: 0.25rem;
$spacing-sm: 0.5rem;
$spacing-md: 1rem;
$spacing-lg: 1.5rem;
$spacing-xl: 2rem;
$spacing-2xl: 3rem;

// Border Radius
$radius-sm: 8px;
$radius-md: 12px;
$radius-lg: 16px;
$radius-xl: 20px;
$radius-full: 9999px;
64 src/frontend/scss/styles.scss Normal file
@@ -0,0 +1,64 @@
/*!
 * Pygentic AI - Main Stylesheet
 * Compiled from SCSS partials
 * Version: 1.0.0
 */

// Import order matters!
// 1. Variables first (used by all other partials)
// 2. Typography (base styles)
// 3. Animations (keyframes)
// 4. Components (reusable UI elements)
// 5. Layout (page structure)
// 6. States (empty/error states)
// 7. Responsive (media queries last)

@import 'variables';
@import 'typography';
@import 'animations';
@import 'components';
@import 'layout';
@import 'states';
@import 'responsive';

// ============================================
// CSS CUSTOM PROPERTIES
// For runtime theming support
// ============================================

:root {
  // Brand Colors
  --brand-primary: #{$brand-primary};
  --brand-primary-light: #{$brand-primary-light};
  --brand-primary-dark: #{$brand-primary-dark};

  // SWOT Colors
  --swot-strength: #{$swot-strength};
  --swot-weakness: #{$swot-weakness};
  --swot-opportunity: #{$swot-opportunity};
  --swot-threat: #{$swot-threat};

  // Neutrals
  --neutral-50: #{$neutral-50};
  --neutral-100: #{$neutral-100};
  --neutral-200: #{$neutral-200};
  --neutral-300: #{$neutral-300};
  --neutral-400: #{$neutral-400};
  --neutral-500: #{$neutral-500};
  --neutral-600: #{$neutral-600};
  --neutral-700: #{$neutral-700};
  --neutral-800: #{$neutral-800};
  --neutral-900: #{$neutral-900};

  // Shadows
  --shadow-sm: #{$shadow-sm};
  --shadow-md: #{$shadow-md};
  --shadow-lg: #{$shadow-lg};
  --shadow-xl: #{$shadow-xl};
  --shadow-2xl: #{$shadow-2xl};

  // Transitions
  --transition-fast: #{$transition-fast};
  --transition-base: #{$transition-base};
  --transition-slow: #{$transition-slow};
}
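The `:root` block above uses `#{…}` interpolation so that each SCSS token is re-exported as a same-named CSS custom property for runtime theming. A minimal sketch of that token-to-property mapping (the function and the `brand` map are hypothetical, shown only to illustrate the pattern; the values mirror `_variables.scss`):

```javascript
// Turn a token map into the body of a :root { ... } declaration block.
function toCustomProperties(tokens) {
  return Object.entries(tokens)
    .map(([name, value]) => `  --${name}: ${value};`)
    .join('\n');
}

// A few brand tokens from _variables.scss.
const brand = {
  'brand-primary': '#8B5CF6',
  'brand-primary-light': '#A78BFA',
  'brand-primary-dark': '#7C3AED',
};
```

At runtime, components can then read `var(--brand-primary)` instead of a compiled-in hex value, which is what makes theming possible without recompiling the SCSS.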
1 src/frontend/static/css/pygentic_ai.css.map Normal file
File diff suppressed because one or more lines are too long
301 src/frontend/static/js/app.js Normal file
@@ -0,0 +1,301 @@
/**
 * Pygentic AI - Frontend Application
 * Progressive loading and enhanced UX interactions
 */

(function() {
  'use strict';

  // Progressive loading messages
  const LOADING_MESSAGES = [
    'Fetching URL content...',
    'Analyzing page structure...',
    'Extracting key information...',
    'Identifying patterns...',
    'Generating SWOT analysis...',
    'Finalizing insights...',
    'Almost there...'
  ];

  let loadingMessageIndex = 0;
  let loadingInterval = null;
  let pollCount = 0;
  let pollInterval = 1000; // Start at 1s
  const MAX_POLL_INTERVAL = 5000; // Max 5s

  /**
   * Update loading message progressively
   */
  function updateLoadingMessage() {
    const statusElement = document.getElementById('loading-status');
    if (!statusElement) return;

    if (loadingMessageIndex < LOADING_MESSAGES.length - 1) {
      loadingMessageIndex++;
    }

    statusElement.textContent = LOADING_MESSAGES[loadingMessageIndex];
    statusElement.style.animation = 'fadeIn 0.3s ease-in';
  }

  /**
   * Start progressive loading messages
   */
  function startLoadingMessages() {
    loadingMessageIndex = 0;
    pollCount = 0;
    pollInterval = 1000;

    const statusElement = document.getElementById('loading-status');
    if (statusElement) {
      statusElement.textContent = LOADING_MESSAGES[0];
    }

    // Update message every 3 seconds
    if (loadingInterval) {
      clearInterval(loadingInterval);
    }

    loadingInterval = setInterval(updateLoadingMessage, 3000);
  }

  /**
   * Stop loading messages
   */
  function stopLoadingMessages() {
    if (loadingInterval) {
      clearInterval(loadingInterval);
      loadingInterval = null;
    }
    loadingMessageIndex = 0;
  }

  /**
   * Announce to screen readers
   */
  function announceToScreenReader(message, priority = 'polite') {
    const announcement = document.createElement('div');
    announcement.setAttribute('role', 'status');
    announcement.setAttribute('aria-live', priority);
    announcement.setAttribute('aria-atomic', 'true');
    announcement.className = 'sr-only';
    announcement.textContent = message;

    document.body.appendChild(announcement);

    // Remove after announcement is made
    setTimeout(() => {
      document.body.removeChild(announcement);
    }, 1000);
  }

  /**
   * Smooth scroll to results and manage focus
   */
  function scrollToResults() {
    const resultsSection = document.getElementById('result-container');
    const resultsHeading = document.getElementById('results-heading');

    if (resultsSection) {
      // Announce completion to screen readers
      announceToScreenReader('Analysis complete. Results are now available.', 'assertive');

      // Scroll to results
      resultsSection.scrollIntoView({
        behavior: 'smooth',
        block: 'start'
      });

      // Move focus to results heading for keyboard users
      if (resultsHeading) {
        setTimeout(() => {
          resultsHeading.focus();
        }, 600);
      }
    }
  }

  /**
   * Exponential backoff for polling
   */
  function calculateNextPollInterval() {
    pollCount++;

    if (pollCount > 3) {
      pollInterval = Math.min(pollInterval * 1.5, MAX_POLL_INTERVAL);
    }

    return pollInterval;
  }

  /**
   * Initialize keyboard navigation for SWOT cards
   */
  function initializeKeyboardNavigation() {
    const cards = document.querySelectorAll('.swot-card');

    cards.forEach((card, index) => {
      card.addEventListener('keydown', function(e) {
        let targetCard = null;

        switch (e.key) {
          case 'ArrowRight':
          case 'ArrowDown':
            e.preventDefault();
            targetCard = cards[index + 1] || cards[0];
            break;
          case 'ArrowLeft':
          case 'ArrowUp':
            e.preventDefault();
            targetCard = cards[index - 1] || cards[cards.length - 1];
            break;
          case 'Home':
            e.preventDefault();
            targetCard = cards[0];
            break;
          case 'End':
            e.preventDefault();
            targetCard = cards[cards.length - 1];
            break;
        }

        if (targetCard) {
          targetCard.focus();
        }
      });
    });
  }

  /**
   * Initialize form submission handler
   */
  function initializeForm() {
    const form = document.getElementById('swotSearch');
    if (!form) return;

    form.addEventListener('submit', function(e) {
      // Announce to screen readers
      announceToScreenReader('Analysis started. Please wait while we process your request.', 'assertive');

      // Start loading messages when form is submitted
      startLoadingMessages();

      // Show spinner
      const spinner = document.getElementById('spinner');
      if (spinner) {
        spinner.classList.remove('is-hidden');
      }
    });
  }

  /**
   * Monitor for analysis completion
   */
  function monitorAnalysisCompletion() {
    const resultBox = document.getElementById('result');
    if (!resultBox) return;

    // Use MutationObserver to watch for content changes
    const observer = new MutationObserver(function(mutations) {
      mutations.forEach(function(mutation) {
        if (mutation.type === 'childList' && resultBox.innerHTML.trim().length > 0) {
          // Analysis complete
          stopLoadingMessages();

          // Hide spinner
          const spinner = document.getElementById('spinner');
          if (spinner) {
            spinner.classList.add('is-hidden');
          }

          // Initialize keyboard navigation for SWOT cards
          initializeKeyboardNavigation();

          // Scroll to results after a brief delay
          setTimeout(scrollToResults, 500);

          // Add success class for animation
          const resultContainer = document.getElementById('result-container');
          if (resultContainer) {
            resultContainer.classList.add('animate-in');
          }
        }
      });
    });

    observer.observe(resultBox, {
      childList: true,
      subtree: true
    });
  }

  /**
   * Add button press effects
   */
  function initializeButtonEffects() {
    const buttons = document.querySelectorAll('.search-button, .error-state__button');

    buttons.forEach(button => {
      button.addEventListener('mousedown', function() {
        this.style.transform = 'scale(0.95)';
      });

      button.addEventListener('mouseup', function() {
        this.style.transform = '';
      });

      button.addEventListener('mouseleave', function() {
        this.style.transform = '';
      });
    });
  }

  /**
   * Initialize smooth anchor scrolling
   */
  function initializeSmoothScrolling() {
    document.querySelectorAll('a[href^="#"]').forEach(anchor => {
      anchor.addEventListener('click', function(e) {
        const href = this.getAttribute('href');
        if (href === '#' || !href) return;

        e.preventDefault();
        const target = document.querySelector(href);

        if (target) {
          target.scrollIntoView({
            behavior: 'smooth',
            block: 'start'
          });
        }
      });
    });
  }

  /**
   * Initialize all features on DOM ready
   */
  function initialize() {
    initializeForm();
    monitorAnalysisCompletion();
    initializeButtonEffects();
    initializeSmoothScrolling();

    console.log('✨ Pygentic AI initialized');
  }

  // Initialize when DOM is ready
  if (document.readyState === 'loading') {
    document.addEventListener('DOMContentLoaded', initialize);
  } else {
    initialize();
  }

  // Export functions for external use if needed
  window.PygenticAI = {
    startLoadingMessages,
    stopLoadingMessages,
    scrollToResults,
    announceToScreenReader
  };
})();
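The `calculateNextPollInterval` helper in `app.js` holds the poll interval at 1 second for the first three polls, then grows it by 1.5x per poll, capped at `MAX_POLL_INTERVAL`. A standalone sketch of the resulting schedule (the function here is a hypothetical pure-function restatement of that module-state logic, not part of the file):

```javascript
// Compute the sequence of poll intervals app.js's backoff would produce.
// Defaults mirror app.js: start at 1000 ms, grow 1.5x after poll 3, cap at 5000 ms.
function backoffSchedule(polls, start = 1000, factor = 1.5, max = 5000) {
  const intervals = [];
  let interval = start;
  for (let count = 1; count <= polls; count++) {
    if (count > 3) {
      // Matches: pollInterval = Math.min(pollInterval * 1.5, MAX_POLL_INTERVAL)
      interval = Math.min(interval * factor, max);
    }
    intervals.push(interval);
  }
  return intervals;
}
```

So a slow analysis is polled at 1s, 1s, 1s, 1.5s, 2.25s, 3.375s, then pinned at 5s, which keeps the UI responsive early without hammering the backend later.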
12 src/frontend/static/purple.svg Normal file
@@ -0,0 +1,12 @@
<!-- Book outline icon with blueviolet color scheme -->
<svg width="128" height="128" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg" style="color: blueviolet;">
  <path fill-rule="evenodd" clip-rule="evenodd" d="M4 5.5H9C10.1046 5.5 11 6.39543 11 7.5V16.5C11 17.0523 10.5523 17.5 10 17.5H4C3.44772 17.5
  3 17.0523 3 16.5V6.5C3 5.94772 3.44772 5.5 4 5.5ZM14 19.5C13.6494 19.5 13.3128 19.4398 13
  19.3293V19.5C13 20.0523 12.5523 20.5 12 20.5C11.4477 20.5 11 20.0523 11 19.5V19.3293C10.6872
  19.4398 10.3506 19.5 10 19.5H4C2.34315 19.5 1 18.1569 1 16.5V6.5C1 4.84315 2.34315 3.5 4 3.5H9C10.1947
  3.5 11.2671 4.02376 12 4.85418C12.7329 4.02376 13.8053 3.5 15 3.5H20C21.6569 3.5 23 4.84315 23
  6.5V16.5C23 18.1569 21.6569 19.5 20 19.5H14ZM13 7.5V16.5C13 17.0523 13.4477 17.5 14 17.5H20C20.5523
  17.5 21 17.0523 21 16.5V6.5C21 5.94772 20.5523 5.5 20 5.5H15C13.8954 5.5 13 6.39543 13 7.5ZM5
  7.5H9V9.5H5V7.5ZM15 7.5H19V9.5H15V7.5ZM19 10.5H15V12.5H19V10.5ZM5 10.5H9V12.5H5V10.5ZM19
  13.5H15V15.5H19V13.5ZM5 13.5H9V15.5H5V13.5Z" fill="currentColor" />
</svg>
After Width: | Height: | Size: 1.1 KiB |
@@ -1,3 +1,4 @@
<Js url="{{ url_for('static', path="/js/htmx.js") }}">type="modules"</Js>
<Js url="{{ url_for('static', path="/js/bulma.js") }}"></Js>
<Js url="{{ url_for('static', path="/js/bulma-collapsible.min.js") }}"></Js>
<Js url="{{ url_for('static', path="/js/app.js") }}"></Js>
@@ -8,9 +8,13 @@
</head>

<body>
<a href="#main-content"
   class="skip-link">Skip to main content</a>
{% include "components/main/nav.html" %}
{% block content%}
{% endblock %}
<main id="main-content">
  {% block content%}
  {% endblock %}
</main>
{% block js_content %}
{% endblock js_content %}
{% include "components/snippets/js.html" %}
19 src/frontend/templates/components/snippets/EmptyState.jinja Normal file
@@ -0,0 +1,19 @@
{# def
  title: str = "No Analysis Yet",
  description: str = "Enter a URL above to get started with your first SWOT analysis",
  icon: str = "fa-chart-simple"
#}

<div class="empty-state"
     role="status"
     aria-live="polite">
  <div class="empty-state__content">
    <div class="empty-state__icon"
         aria-hidden="true">
      <i class="fas {{ icon }}"></i>
    </div>
    <h3 class="empty-state__title">{{ title }}</h3>
    <p class="empty-state__description">{{ description }}</p>
    {{ content }}
  </div>
</div>
48 src/frontend/templates/components/snippets/ErrorState.jinja Normal file
@@ -0,0 +1,48 @@
{# def
  title: str = "Analysis Failed",
  description: str = "We couldn't analyze this URL. Please check the URL and try again.",
  error_details: str = None,
  show_retry: bool = True
#}

<div class="error-state"
     role="alert"
     aria-live="assertive">
  <div class="error-state__content">
    <div class="error-state__icon"
         aria-hidden="true">
      <i class="fas fa-triangle-exclamation"></i>
    </div>
    <h3 class="error-state__title">{{ title }}</h3>
    <p class="error-state__description">{{ description }}</p>

    {% if error_details %}
    <div class="error-state__details"
         role="region"
         aria-label="Error details">
      {{ error_details }}
    </div>
    {% endif %}

    <div class="error-state__actions">
      {% if show_retry %}
      <button class="error-state__button error-state__button--primary"
              onclick="location.reload()"
              aria-label="Try analyzing again">
        <i class="fas fa-rotate-right"
           aria-hidden="true"></i>
        <span>Try Again</span>
      </button>
      {% endif %}
      <a href="/"
         class="error-state__button error-state__button--secondary"
         aria-label="Return to home page">
        <i class="fas fa-home"
           aria-hidden="true"></i>
        <span>Go Home</span>
      </a>
    </div>

    {{ content }}
  </div>
</div>
@@ -1,4 +1,19 @@
<div class="spinner-wrapper is-overlay is-hidden"
     id="spinner">
<div class="loader"></div>
     id="spinner"
     role="dialog"
     aria-modal="true"
     aria-labelledby="spinner-title"
     aria-describedby="loading-status">
  <div class="loading-content">
    <div class="loader"
         role="progressbar"
         aria-label="Loading in progress"
         aria-busy="true"></div>
    <div class="loading-text">
      <h3 id="spinner-title">Analyzing...</h3>
      <p id="loading-status"
         aria-live="polite"
         aria-atomic="true">Fetching URL content</p>
    </div>
  </div>
</div>
@@ -1,38 +1,75 @@
{% extends "components/main/base.html" %}
{% block content %}
<section class="hero is-medium is-info">
<section class="hero is-large gradient-hero">
  <div class="hero-body">
    <p class="title">SWOT ANALYZER</p>
    <p class="subtitle">Strengths, Weaknesses, Opportunities and Threats
      breakdown, courtesy of Generative AI-driven insights.</p>
    <p>Try it out now!</p>
    <div class="container has-text-centered">
      <div class="hero-icon mb-4">
        <img src="{{ url_for('static', filename='purple.svg') }}"
             alt="Pygentic AI"
             style="width: 80px; height: 80px;">
      </div>
      <h1 class="hero-title has-text-white">
        AI-Powered SWOT Analysis
      </h1>
      <p class="subtitle has-text-white-bis is-4 mb-5">
        Uncover strategic insights with generative AI.
        <br>
        Transform any URL into actionable business intelligence.
      </p>
    </div>
  </div>
</section>
<section class="section">

<section class="section"
         id="search">
  <Spinner></Spinner>
  <div class="container">
    <h1 class="title">Search Here</h1>
    <div class="search-container">
      <h2 class="title is-3 has-text-centered mb-5">Get Started</h2>
      <Form form_id="swotSearch"
            action={{ url_for('analyze_url') }}
            target="status"
            method="post">
        <div class="field has-addons">
        <Search input_type='url'>
        <div class="control">
        <button type="submit"
                class="button is-success"
                hx-indicator='#spinner'
                hx-on:click="
                  const [status, result] = ['#status', '#result'].map(id => document.querySelector(id));
        <div class="search-input-group"
             role="search">
          <span class="search-icon"
                aria-hidden="true">
            <i class="fas fa-link"></i>
          </span>
          <input type="url"
                 class="search-input"
                 id="url"
                 name="url"
                 placeholder="Enter a URL to analyze (e.g., https://example.com)"
                 aria-label="Enter URL to analyze"
                 aria-describedby="search-help"
                 required
                 pattern="https?://.+"
                 autocomplete="url" />
          <button type="submit"
                  class="search-button"
                  aria-label="Analyze URL"
                  hx-indicator='#spinner'
                  hx-on:click="
                    const [status, result, spinner] = ['#status', '#result', '#spinner'].map(id => document.querySelector(id));
                    spinner.classList.remove('is-hidden');
                    status.style.display = 'block';
                    result.style.display = 'none';
                  ">Analyze</button>
        </div>
        </Search>
                  ">
            <span class="button-text">Analyze</span>
            <span class="button-icon"
                  aria-hidden="true">
              <i class="fas fa-arrow-right"></i>
            </span>
          </button>
        </div>
      </Form>
    </div>
    <div class="search-help">
      <i class="fas fa-info-circle"></i>
      We'll analyze the page content and provide strategic insights
    </div>
  </div>
</section>

<section class="section"
         id="analysis-content">
  <div class="box"
@@ -50,8 +87,9 @@
hx-on:after-request="
if(this.innerHTML.trim().length > 0) {
console.log('Going to turn off the status element and load the result element.')
const statusDiv = document.querySelector('#status');
const [statusDiv, spinner] = ['#status', '#spinner'].map(id => document.querySelector(id));
if (statusDiv) statusDiv.style.display = 'none';
if (spinner) spinner.classList.add('is-hidden');
this.style.display = 'block'
}
">

@@ -1,42 +1,79 @@
{% if result %}
<section class="section"
id="result-container">
id="result-container"
aria-live="polite"
aria-atomic="false"
role="region"
aria-label="SWOT Analysis Results">
<div class="container">
<h2 class="subtitle is-2">Analysis Complete</h2>
<div class="fixed-grid has-2-cols">
<div class="grid">
{% for cat, val in result.dict().items() %}
{% if not loop.last %}
<div class="cell">
{% set panel_class = 'success' if cat == 'strengths' else ('warning' if cat == 'weaknesses' else ('info' if cat == 'opportunities' else 'danger')) %}
{% set i_class = 'fas fa-solid fa-arrow-up' if cat == 'strengths' else ('fas fa-solid fa-arrow-down' if cat == 'weaknesses' else ('fas fa-regular fa-lightbulb' if cat == 'opportunities' else 'fas fa-solid fa-triangle-exclamation')) %}
<div class="panel is-{{ panel_class }}">
<p class="panel-heading">{{ cat.title() }}</p>
<div class="panel-block">
<ul>
{% for value in val %}
<ResultEntry result={{ value }}>
<i class="{{ i_class }}"></i>
</ResultEntry>
{% endfor %}
</ul>
</div>
<h2 class="title is-2 has-text-centered mb-6"
id="results-heading"
tabindex="-1">
<span aria-label="Analysis Complete">Analysis Complete</span> ✨
</h2>
<div class="swot-grid"
role="list"
aria-label="SWOT Analysis Categories">
{% for cat, val in result.dict().items() %}
{% if cat != 'summary' %}
{# Determine card class and icon based on category #}
{% set card_class = 'strength' if cat == 'strengths' else ('weakness' if cat == 'weaknesses' else ('opportunity' if cat == 'opportunities' else 'threat')) %}
{% set icon = 'fa-arrow-trend-up' if cat == 'strengths' else ('fa-arrow-trend-down' if cat == 'weaknesses' else ('fa-lightbulb' if cat == 'opportunities' else 'fa-triangle-exclamation')) %}
{% set category_label = 'Strengths - positive internal factors' if cat == 'strengths' else ('Weaknesses - negative internal factors' if cat == 'weaknesses' else ('Opportunities - positive external factors' if cat == 'opportunities' else 'Threats - negative external factors')) %}

<article class="swot-card swot-card--{{ card_class }}"
role="listitem"
aria-labelledby="card-title-{{ cat }}"
tabindex="0">
<div class="swot-card__header">
<div class="swot-card__icon"
aria-hidden="true">
<i class="fas {{ icon }}"></i>
</div>
<h3 class="swot-card__title"
id="card-title-{{ cat }}">
{{ cat.title() }}
<span class="sr-only">: {{ category_label }}</span>
</h3>
<span class="swot-card__count"
aria-label="{{ val|length }} items">{{ val|length }}</span>
</div>
{% else %}
<div class="cell is-col-span-2">
<div class="panel is-primary m-2 p-2">
<p class="panel-heading">{{ cat.title() }}</p>
<div
class="panel-block has-text-justified has-text-weight-light">
{{ val }}
</div>
</div>
<div class="swot-card__body">
<ul class="swot-list"
aria-label="{{ cat.title() }} list">
{% for value in val %}
<li class="swot-list__item">
<span class="swot-list__bullet"
aria-hidden="true"></span>
<span class="swot-list__text">{{ value }}</span>
</li>
{% endfor %}
</ul>
</div>
{% endif %}
{% endfor %}
</div>
</article>
{% endif %}
{% endfor %}
</div>

{# Summary Section #}
{% if result.summary %}
<section class="box mt-6"
role="region"
aria-labelledby="summary-heading"
style="border-radius: 16px; border-left: 4px solid var(--brand-primary); box-shadow: var(--shadow-lg);">
<h3 class="title is-4"
id="summary-heading"
style="color: var(--brand-primary);">
<i class="fas fa-clipboard-list mr-2"
aria-hidden="true"></i>
Executive Summary
</h3>
<div class="content"
style="color: var(--neutral-700); line-height: 1.8;">
{{ result.summary }}
</div>
</section>
{% endif %}
</div>
</section>
{% endif %}
@@ -1,32 +1,47 @@
{% if messages %}
<section class="section"
id="status-container">
<div class="container">
{% for message in messages %}
{% set is_error = message.startswith('Error:') %}
{% set is_loading = loop.last and not result %}
{% set is_tool_message = message.startswith('Using tool') %}
<div class="box">
{% if is_error %}
{% set bg_color = 'danger' %}
{% set header_content, content = message.split('body:', 1) %}
{% elif is_loading %}
{% set bg_color = 'success' %}
{% set content = message %}
{% set header_content = 'Complete' %}
{% elif is_tool_message %}
{% set bg_color = 'dark'%}
{% set header_content, content = message.split(' ', 2)[2].split('...', 1) %}
{% else %}
{% set bg_color = 'info' %}
{% set content = message %}
{% set header_content = 'In Progress' %}
{% endif %}
<StatusResult div_class={{ bg_color }}
header_content={{ header_content }}>{{ message }}
</StatusResult>
id="status-container"
role="region"
aria-label="Analysis Progress"
aria-live="polite">
<div class="container"
style="max-width: 800px;">
<div class="status-timeline"
role="feed"
aria-label="Status updates">
{% for message in messages %}
{% set is_error = message.startswith('Error:') %}
{% set is_loading = loop.last and not result %}
{% set is_tool_message = message.startswith('Using tool') %}
{% set is_complete = "complete" in message.lower() or "done" in message.lower() %}
{% set status_label = 'Error' if is_error else ('Complete' if is_complete else ('In Progress' if is_loading else ('Processing' if is_tool_message else 'Status'))) %}

<article class="status-item {% if is_error %}status-item--error{% elif is_complete %}status-item--success{% elif is_loading %}status-item--loading{% else %}status-item--info{% endif %}"
role="article"
aria-label="{{ status_label }}: {{ message }}">
<div class="status-item__indicator"
aria-hidden="true">
{% if is_error %}
<i class="fas fa-circle-xmark"></i>
{% elif is_complete %}
<i class="fas fa-circle-check"></i>
{% elif is_loading %}
<i class="fas fa-circle-notch fa-spin"></i>
{% else %}
<i class="fas fa-circle"></i>
{% endif %}
</div>
<div class="status-item__content">
<div class="status-item__header">
{{ status_label }}
</div>
<div class="status-item__message">
{{ message }}
</div>
</div>
</article>
{% endfor %}
</div>
{% endfor %}
</div>
</section>
{% endif %}