⬡ Spekts
"When perspectives align, specs emerge."
Spekts (formerly Chorus / Vibe-to-Spec) is an Idea Maturation Engine. It transforms raw, high-level product ideas into concrete, mathematically precise, and reviewable technical specifications through a multi-agent orchestration pipeline.
⚡ Overview
When you have an idea, you don't always know what the edge cases are, what product decisions to make, or how it translates into code. Spekts orchestrates a team of specialized AI agents that debate, explore, and harmonize your ideas into actionable specs.
graph LR
User([Raw Idea]) --> Explorer
Explorer[🧭 Explorer] -- "Extracts Intent & Ambiguity" --> Architect
Architect[🏗️ Architect] -- "Proposes Directions" --> Critic
Critic[🔬 Critic] -- "Stress-tests Options" --> Mediator
Mediator[⚖️ Mediator] -- "Synthesizes final spec" --> Spec([📄 Final Spec Artifact])
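The flow above can be sketched as plain function composition. This is a toy illustration only: the return shapes are invented, and the real engine wires these stages as LangGraph agent nodes rather than bare functions.

```python
# Toy sketch of the four-stage pipeline; all return shapes here are
# illustrative, not Spekts' actual artifact schemas.
def explorer(idea: str) -> dict:
    # Extracts intent and surfaces ambiguities.
    return {"intent": idea, "ambiguities": ["export format?"]}

def architect(exploration: dict) -> list:
    # Proposes candidate directions grounded in the extracted intent.
    return [{"direction": "CLI-first MVP", "basis": exploration["intent"]}]

def critic(options: list) -> list:
    # Stress-tests each option by attaching risks.
    return [{**o, "risks": ["scope creep"]} for o in options]

def mediator(critiqued: list) -> dict:
    # Synthesizes the surviving option into a final spec artifact.
    return {"spec": critiqued[0]["direction"], "risks": critiqued[0]["risks"]}

spec = mediator(critic(architect(explorer("Organize receipts, export CSV"))))
print(spec)
```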
🏗️ Architecture Principles
- Typed Artifacts: Conversations aren't the source of truth. Structured Pydantic/SQLModel artifacts are.
- Minimal State: LangGraph only holds canonical objects, stage statuses, and decisions.
- Task-based LLM Routing: Fast models handle lightweight extraction, while reasoning models (e.g., GPT-4 / Opus) handle critique and synthesis using a LiteLLM adapter.
- Day 0 Persistence: SQLite locally stores runs, artifact histories, and paused human-in-the-loop (HITL) review states.
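The "typed artifacts" principle can be illustrated with a small model. This sketch uses stdlib dataclasses to stay self-contained; Spekts itself uses Pydantic/SQLModel, and the class and field names below are invented for illustration.

```python
from dataclasses import dataclass, field, asdict
import json

# Stdlib stand-in for the typed-artifact idea. Spekts' real models are
# Pydantic/SQLModel classes; these names are illustrative only.
@dataclass
class Decision:
    question: str
    choice: str
    rationale: str = ""

@dataclass
class IdeaSpec:
    title: str
    problem: str
    decisions: list = field(default_factory=list)

spec = IdeaSpec(title="Receipt organizer", problem="Receipts are scattered")
spec.decisions.append(Decision(question="Export format?", choice="CSV"))

# The serialized artifact, not the chat transcript, is what gets persisted
# and reviewed.
artifact = json.dumps(asdict(spec))
print(artifact)
```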
🚦 Production Notes
The current Railway deployment is intentionally conservative:
- Single service, single worker/process.
- Keep concurrency low while SQLite is the persistence layer.
- Persistent data lives in the Railway Volume mounted at /app/data.
- Production env vars live on the Railway service itself: SPEKTS_API_KEY, SPEKTS_REQUIRE_AUTH=true, SPEKTS_DB_URL=sqlite:////app/data/spekts.db, SPEKTS_CHECKPOINT_DB_PATH=/app/data/spekts_checkpoints.db, plus provider keys such as OPENROUTER_API_KEY.
- Auth is enforced only when SPEKTS_REQUIRE_AUTH=true. Setting SPEKTS_API_KEY alone no longer turns auth on implicitly.
- The app now emits baseline browser hardening headers (Content-Security-Policy, X-Frame-Options, X-Content-Type-Options, Referrer-Policy, Permissions-Policy) on every response.
- The LangGraph SQLite checkpointer is cached per process instead of opening a fresh connection on every graph compilation.
- Use Railway Deployments to redeploy after config or image changes, and rollback from the last healthy deployment if needed.
- Move to PostgreSQL when you need more than one replica, sustained concurrency, or you start hitting SQLite lock contention.
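The per-process checkpointer caching mentioned above can be sketched with a memoized factory. This is an assumption-laden illustration: the real code uses LangGraph's SQLite checkpointer, and the function name and path here are hypothetical.

```python
import sqlite3
from functools import lru_cache

# Hypothetical sketch of per-process caching for the checkpoint DB handle.
# The real code caches a LangGraph SQLite checkpointer, not a raw
# sqlite3 connection; the function name and path are illustrative.
@lru_cache(maxsize=1)
def get_checkpoint_conn(db_path: str = ":memory:") -> sqlite3.Connection:
    # check_same_thread=False lets a single worker reuse one handle;
    # keep concurrency low while SQLite is the persistence layer.
    return sqlite3.connect(db_path, check_same_thread=False)

a = get_checkpoint_conn()
b = get_checkpoint_conn()
assert a is b  # same cached handle, no fresh open per graph compilation
```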
🚀 Quick Start
1. Requirements
- Python >= 3.12
- An API key for your LLM provider (OpenAI, Anthropic, Gemini, etc.) set up in an .env file (see .env.example).
2. Installation
Clone the repo and install it locally with its dev dependencies:
git clone git@github.com:tapiucao/spekts.git
cd spekts
python3 -m venv venv
source venv/bin/activate
pip install -e ".[dev]"
3. Run the Web Server
Start the API and Web UI (FastAPI via Uvicorn):
uvicorn web.app:app --reload
Then visit: 👉 http://127.0.0.1:8000/
The Web UI lets you:
- Submit raw ideas or paste context.
- Run complete generations (full) or just scopes (idea_spec).
- View Project Spec and Implementation Spec side by side.
- Navigate the checkpoints and review the multi-agent history.
- Compare saved revisions side by side.
- Safely render spec/history data in the browser without trusting raw HTML from user notes or model output.
Frontend notes:
- Hero and brand assets are in web/static/branding/; the production theme is soft-carbon.
- Pre-run state shows only the input form and an empty-state preview. All workstation controls (tabs, downloads, status meta, review panels) appear progressively once a run exists.
- Pipeline progress is shown in plain English — internal stage identifiers are not exposed to users.
💻 CLI Usage
You can completely bypass the UI and use the rich CLI mode to trigger Spekts pipelines or inspect past runs.
Run a new pipeline:
python cli.py run --mode idea_spec "A tool that organizes receipts and exports CSV"
Inspect a persisted run:
python cli.py inspect --run-id 1
python cli.py --output json inspect --run-id 1
Resume a paused Human-In-The-Loop (HITL) run:
python cli.py resume --run-id 1 --decision steer --notes "Tighten scope and simplify the MVP."
Inspect what each agent is focusing on:
python cli.py inspect-skills
🔌 API Reference
The Spekts engine exposes HTTP endpoints that you can consume from other applications.
Create a run:
curl -X POST http://127.0.0.1:8000/api/runs \
-H "Content-Type: application/json" \
-d '{"mode":"idea_spec","idea":"A tool that organizes receipts and exports CSV"}'
Resume a paused run:
curl -X POST http://127.0.0.1:8000/api/runs/1/resume \
-H "Content-Type: application/json" \
-d '{"decision":"steer","notes":"Keep the scope narrower and reduce moving parts."}'
Download Artifacts: Spekts persists specs, which you can download as clean Markdown or JSON directly:
curl -OJ http://127.0.0.1:8000/api/runs/1/download/project-spec.md
If auth is enabled:
curl -X POST http://127.0.0.1:8000/api/runs \
-H "Authorization: Bearer $SPEKTS_API_KEY" \
-H "Content-Type: application/json" \
-d '{"mode":"idea_spec","idea":"A tool that organizes receipts and exports CSV"}'
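The curl calls above translate directly to a small Python client. This is a minimal stdlib sketch under stated assumptions: the base URL matches the local server, and the Authorization header is only needed when SPEKTS_REQUIRE_AUTH=true.

```python
import json
import os
import urllib.request

# Assumed local base URL; adjust for your deployment.
BASE = "http://127.0.0.1:8000"

def build_create_run_request(mode: str, idea: str) -> urllib.request.Request:
    """Build (but do not send) a POST /api/runs request."""
    headers = {"Content-Type": "application/json"}
    api_key = os.environ.get("SPEKTS_API_KEY")
    if api_key:  # only required when SPEKTS_REQUIRE_AUTH=true
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({"mode": mode, "idea": idea}).encode()
    return urllib.request.Request(
        f"{BASE}/api/runs", data=body, headers=headers, method="POST"
    )

req = build_create_run_request(
    "idea_spec", "A tool that organizes receipts and exports CSV"
)
# urllib.request.urlopen(req) would submit the run against a live server.
print(req.get_method(), req.full_url)
```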
🛠️ Development & Testing
Ensure you have your virtual environment active and run:
pytest -q
(Tests cover the runner, CLI, LiteLLM routing, retry behavior, the SQLite checkpoint DB, LangGraph agents, auth/security headers, and Web UI contracts.)
For a complete step-by-step local testing run, including how human-in-the-loop state works internally, refer to docs/local-testing.md.
Note: llm/routing.py forces LITELLM_LOCAL_MODEL_COST_MAP=true so that LiteLLM loads its bundled model cost map instead of fetching it remotely, keeping test runs fast and offline-friendly.
📄 License
This repository is available under the MIT License. Check the LICENSE file for more details.