# Build Trilemma — llms-full compressed context # Canonical URL: https://build.trilemma.foundation # Sources are concatenated in deterministic order. Do not rely on headings alone for citations—verify against the canonical site URLs. ===== Agents hub (docs/agents/index.md) ===== **`/agents`** is the web control plane for scaffolding and validating microproducts. Read this hub first, then use the static artifacts for machine ingestion. The site root **`/`** is an audience chooser; humans typically start from [What is a microproduct?](/docs/intro/what-is-a-microproduct). ## Start here - Root agent file (plain text): [`/AGENTS.md`](pathname:///AGENTS.md) - Full compressed context bundle: [`/llms-full.txt`](pathname:///llms-full.txt) - Product schema JSON: [`/schemas/product.schema.json`](pathname:///schemas/product.schema.json) - Machine-readable registry JSON: [`/registry.json`](pathname:///registry.json) ## Workflows on this site 1. Pick an [archetype](/archetypes) closest to your user decision. 2. Choose a starter from [templates](/templates) (repo `product-templates/`). 3. Author the microproduct contracts in [Standards](/standards). 4. Register the shipped product metadata in [`/registry.json`](pathname:///registry.json) when ready for broader discovery. See also the human playbook starting at [What is a microproduct?](/docs/intro/what-is-a-microproduct). ===== Archetype: agentic-research-product (docs/archetypes/agentic-research-product.md) ===== ## Use when Open-ended questions need iterative retrieval, tooling, and synthesis, yet answers must remain attributable. 
## Common users - Strategy teams scouting markets - Policy analysts assembling briefs - Engineers doing incident archaeology ## Minimal MVP - Planner with capped tool budget - Citation-required answers - Human checkpoint before irreversible actions - Session transcripts for auditing ## Required files - `product.yaml` - `product-brief.md` - `data-contract.md` - `evaluation.md` - `demo.md` ## Evaluation - Groundedness audits - Time saved vs purely manual paths - Policy violation rate under red teaming ## Reference products - Internal research copilots with hardened toolkits ## Common mistakes - Unguarded autonomous posting or purchasing - Flattened context without provenance graphs - No kill switch for expensive toolchains ## Agent prompt You are building agentic research software. Establish allowed tools, spend caps, human approvals, citation rules, logging, then implement the shortest reliable investigation loop. ===== Archetype: alerting-monitoring-product (docs/archetypes/alerting-monitoring-product.md) ===== ## Use when Operations depend on catching anomalies, SLO breaches, or policy violations before customers do. ## Common users - SRE and platform teams - Fraud operations - Clinical safety monitors ## Minimal MVP - Signal ingestion with ack/escalation states - Alert rules with tunable thresholds - Runbook links per alert type - Noise metrics (MTTA/MTTR) ## Required files - `product.yaml` - `product-brief.md` - `data-contract.md` - `evaluation.md` - `demo.md` ## Evaluation - Precision vs alert fatigue - Time-to-detect critical incidents - False negative postmortems ## Reference products - Observability bridges with human runbooks ## Common mistakes - Firing unstructured pages with no remediation - Alert storms without suppression policies - Missing ownership metadata for paging ## Agent prompt You are building an alerting microproduct. 
Enumerate severity classes, escalation trees, suppression windows, then wire the minimum pipeline that emits actionable notices tied to documented runbooks. ===== Archetype: benchmark-evaluation-product (docs/archetypes/benchmark-evaluation-product.md) ===== ## Use when Buyers must pick among vendors, models, or strategies using apples-to-apples evidence. ## Common users - ML platform leads - Procurement science teams - Competition governance boards ## Minimal MVP - Frozen evaluation dataset(s) with licenses noted - Runner that logs hardware + software fingerprint - Leaderboard with confidence intervals - Regression tests when baselines move ## Required files - `product.yaml` - `product-brief.md` - `data-contract.md` - `evaluation.md` - `demo.md` ## Evaluation - Statistical significance on primary metric - Robustness sweeps (noise, latency) - Reproducibility score (rerun variance) ## Reference products - GLUE-style leaderboards adapted to private data ## Common mistakes - Leaderboards without versioned data - Cherry-picked slices - Ignoring cost-to-serve ## Agent prompt You are building a benchmark microproduct. Freeze datasets, document licensing, script the harness, and only then publish comparative tables with uncertainty. ===== Archetype: data-to-decision-tool (docs/archetypes/data-to-decision-tool.md) ===== ## Use when Operational teams must choose among a constrained set of actions using curated facts (not exploratory analytics). 
## Common users - Analysts bridging policy and spreadsheets - Field operators executing playbooks - Compliance reviewers auditing outcomes ## Minimal MVP - Dataset intake with validation rules - Decision matrix or rules engine - Transparent rationale / citations to source rows - Immutable decision log - Rollback or appeal workflow ## Required files - `product.yaml` - `product-brief.md` - `data-contract.md` - `evaluation.md` - `demo.md` ## Evaluation - Decision latency vs manual baseline - Override rate and override reasons - Data freshness violations - Drift when upstream schemas change ## Reference products - Internal policy copilots with signed evidence trails ## Common mistakes - Shipping another dashboard without default actions - Hiding provenance when models assist ranking - Skipping negative tests for malformed inputs ## Agent prompt You are building a data-to-decision microproduct. Identify the enforcing policy, quantify acceptable latency, enumerate every regulated field, then implement the smallest flow that renders a justified action with citations and logging. ===== Archetype: forecasting-product (docs/archetypes/forecasting-product.md) ===== ## Use when Teams must rehearse futures (demand, risk, capex) with explicit confidence language. 
## Common users - Finance partners - Supply planners - Climate / operations researchers ## Minimal MVP - Baseline statistical or ML forecaster with backtests - Scenario knobs (macros, outages) - Visualized prediction intervals - Data freeze + reproducible run ID ## Required files - `product.yaml` - `product-brief.md` - `data-contract.md` - `evaluation.md` - `demo.md` ## Evaluation - CRPS / quantile losses when probabilistic - Business-weighted errors on tail events - Stability across refresh cadences ## Reference products - Internal planning consoles with audited scenarios ## Common mistakes - Stating point estimates without uncertainty - Omitted linkage between drivers and forecasts - Unversioned preprocessing pipelines ## Agent prompt You are building a forecasting microproduct. Document drivers, horizons, cadence, and evaluation metrics before tuning models—ship the reproducible forecasting bundle first. ===== Archetypes overview (docs/archetypes/index.md) ===== Archetypes anchor design reviews, prompt packs, and telemetry plans. Pick one before opening a template—the combination encodes most product risk. 
| Archetype | When to reach for it | | --- | --- | | [Data → decision tool](./data-to-decision-tool.md) | High-stakes spreadsheets need governed automation | | [Ranking recommendation engine](./ranking-recommendation-engine.md) | Users must sift large sets with explainable stacks | | [Forecasting product](./forecasting-product.md) | Scenario planning dominates the workflow | | [Risk scoring product](./risk-scoring-product.md) | Prioritizing assets, liabilities, queues | | [Alerting monitoring product](./alerting-monitoring-product.md) | Latency-safe guardrails trump dashboards | | [Search discovery product](./search-discovery-product.md) | Retrieval quality is the primary deliverable | | [Benchmark evaluation product](./benchmark-evaluation-product.md) | Competing approaches need consistent metrics | | [Simulation backtesting product](./simulation-backtesting-product.md) | Counterfactuals or strategy paths feed decisions | | [Workflow automation product](./workflow-automation-product.md) | Repeatable human-in-the-loop flows | | [Agentic research product](./agentic-research-product.md) | Multi-step investigation with governance | Pair each archetype with the matching starter in [/templates](/templates) and quantify maturity via [/standards/maturity-model](/standards/maturity-model). ===== Archetype: ranking-recommendation-engine (docs/archetypes/ranking-recommendation-engine.md) ===== ## Use when Users must prioritize entities (jobs, securities, incidents) with interchangeable scoring knobs. 
## Common users - Talent teams - Merchandisers - Security triage desks ## Minimal MVP - Feature store or tabular ingestion - Scoring pipeline with versioning - Explain-this-rank UX - Human override + feedback capture ## Required files - `product.yaml` - `product-brief.md` - `data-contract.md` - `evaluation.md` - `demo.md` ## Evaluation - NDCG / precision@k when labels exist - Fairness slices when regulated - Stability when features refresh ## Reference products - OddsFox-style ranking surfaces (see showcase) ## Common mistakes - Black-box scores without counterfactuals - Ignoring cold-start coverage - Optimizing offline metrics that ignore business guardrails ## Agent prompt You are building a ranking microproduct. First define the user task, label availability, and failure costs for false positives vs false negatives. Only then implement scoring, explanations, and feedback capture. ===== Archetype: risk-scoring-product (docs/archetypes/risk-scoring-product.md) ===== ## Use when The product helps users prioritize entities, cases, assets, decisions, or actions based on risk. ## Common users - Analysts - Operators - Reviewers - Researchers - Decision-makers ## Minimal MVP - Input form or dataset - Risk scoring logic - Explanation layer - Output score - Suggested action - Basic audit trail ## Required files - `product.yaml` - `product-brief.md` - `data-contract.md` - `evaluation.md` - `demo.md` ## Evaluation - Calibration - Precision/recall - False-negative cost - Human usefulness - Explainability - Stability over time ## Reference products - SurgRisk ## Common mistakes - Producing a score with no recommended action - Hiding assumptions - Ignoring false negatives - Optimizing only for model accuracy - Failing to explain the result ## Agent prompt You are building a risk scoring microproduct. First identify the target user, the entity being scored, the consequence of false positives and false negatives, and the action the user should take from the score. 
Then build the smallest version that produces a score, explanation, and recommended next step. ===== Archetype: search-discovery-product (docs/archetypes/search-discovery-product.md) ===== ## Use when Knowledge workers must discover needles in large unstructured corpora with guardrails. ## Common users - Support engineers deflecting tickets - Researchers hunting literature - Regulators probing filings ## Minimal MVP - Index + refresh cadence documented - Query UI with facets / filters - Snippet grounding for every hit - Judging harness for relevance ## Required files - `product.yaml` - `product-brief.md` - `data-contract.md` - `evaluation.md` - `demo.md` ## Evaluation - Offline nDCG / ERR with human judgments - Latency envelopes (p95) - Coverage for tail queries ## Reference products - Internal enterprise search portals ## Common mistakes - Returning generative blobs without citations - Stale embeddings without alerting - Skipping multilingual or security redaction needs ## Agent prompt You are building a search microproduct. Decide corpus boundaries, freshness SLAs, and citation requirements prior to fiddling with vector databases. ===== Archetype: simulation-backtesting-product (docs/archetypes/simulation-backtesting-product.md) ===== ## Use when Strategies must be exercised across historical regimes with transparent assumptions—common in allocation, lending, robotics, trading. 
## Common users - Quant researchers validating policies - Product strategists forecasting outcomes - University capstone cohorts benchmarking ideas ## Minimal MVP - Scenario configuration surface - Deterministic simulator with seed control - Performance analytics + drawdown summaries - Exportable run artifacts ## Required files - `product.yaml` - `product-brief.md` - `data-contract.md` - `evaluation.md` - `demo.md` ## Evaluation - Out-of-sample stability - Sensitivity to parameter priors - Runtime cost envelopes ## Reference products - Stacking Sats ## Common mistakes - Silent look-ahead bias - Omitting friction (fees, latency, liquidity) - Toy simulators unrelated to production constraints ## Agent prompt You are building a simulation microproduct. Document data windows, lookahead controls, friction models, random seeds, and success metrics prior to widening feature scope. ===== Archetype: workflow-automation-product (docs/archetypes/workflow-automation-product.md) ===== ## Use when Operational playbooks bundle approvals, integrations, retries, or compensating transactions. ## Common users - Customer onboarding teams - Loan operations - IT service desks ## Minimal MVP - Step graph with explicit states - Human task surfaces with SLAs - Idempotent side effects + dead-letter handling - Traceability export ## Required files - `product.yaml` - `product-brief.md` - `data-contract.md` - `evaluation.md` - `demo.md` ## Evaluation - End-to-end cycle time - Stuck-state rate - Rework percentage after automation ## Reference products - BPMN-inspired internal copilots ## Common mistakes - Chat-only flows with no durable state machine - Missing rollback when third parties fail - Secrets embedded in prompts ## Agent prompt You are building a workflow microproduct. Model states, triggers, human gates, and compensating actions before wiring integrations. ===== Contribution workflow (docs/contribute/how-to-contribute.md) ===== ## Contribution Workflow 1. 
Pick a section and copy the correct template from `templates/`. 2. Add required frontmatter and content. 3. Run `npm run check` locally. 4. Open a PR and complete the checklist. 5. Committee members review, request changes if needed, and merge. ## Review Expectations - Clear structure - Actionable information - Correct metadata ===== Mission (docs/core/intro/mission.md) ===== The Microproduct Lab exists to help technical builders create tools that deliver real utility. ## Objectives - Publish practical build knowledge, shaping best practices for building microproducts. - Lower contribution barriers for domain experts. - Empower curious readers to become active builders. ## Community Model Anyone can contribute through pull requests. Committee members review and curate quality for publication. Propose an improvement: [https://github.com/TrilemmaFoundation/microproduct-lab/pulls](https://github.com/TrilemmaFoundation/microproduct-lab/pulls) ===== Microproduct definition (docs/core/intro/what-is-a-microproduct.mdx) ===== A microproduct is a focused app that turns data into usable tools and real utility. ## Core Characteristics - Solves one high-value problem clearly. - Uses data to drive outcomes, not dashboards alone. - Ships quickly and iterates based on real usage. ## Why This Matters LLMs and agentic workflows reduce build friction. The opportunity is to empower technical talent to create practical, high-impact tools faster. ## Your path on this site Follow the same operational sequence humans and agents use when turning an idea into a registry-ready product: 1. **[Define the folder contract](/standards/folder-contract)** — Anchor the problem statement, artifacts, and local agent instructions. 2. **[Pick an archetype](/archetypes)** — Choose the nearest product pattern before implementation starts. 3. **[Clone a starter](/templates)** — Fork the closest template from `product-templates`. 4. 
**[Validate and register](/agents)** — Run validation, review outputs, and publish registry metadata. **Browse** [`registry.json`](pathname:///registry.json) for active products and maturity, **[Archetypes](/archetypes)** for patterns, **[Templates](/templates)** for starters, and **[Standards](/standards)** for contracts. If you are automating the work, use the **[agents hub](/agents)** for machine-readable entrypoints. ===== Showcase summaries (docs/showcase/microproducts.md) ===== This table is the canonical MVP showcase format. | Name | Description | Team | Link | | --- | --- | --- | --- | | Stacking Sats | Provides optimal Bitcoin accumulation strategies for retail and institutional investors. | 2 | [stackingsats.org](https://stackingsats.org) | | OddsFox | Makes prediction market insights usable for real-world decisions. | 2 | [oddsfox.io](https://oddsfox.io) | | HonestRoles | LLM-powered job board built by technical talent for technical talent. | 2 | [honestroles.com](https://honestroles.com) | | SurgRisk | Pre-operative risk analytics platform predicting surgical complexity, resource utilization, and discharge risk before scheduling. | 4 | [surgrisk.org](https://surgrisk.org) | To submit an entry, open a PR and update this table directly. Propose an improvement: [https://github.com/TrilemmaFoundation/microproduct-lab/pulls](https://github.com/TrilemmaFoundation/microproduct-lab/pulls) ===== Standard folder contract (docs/standards/folder-contract.md) ===== Every microproduct—even experimental prototypes—is easier to shepherd when it conforms to one predictable repo shape. 
```text
products/
  <product-id>/
    README.md
    AGENTS.md
    product.yaml
    product-brief.md
    architecture.md
    data-contract.md
    evaluation.md
    roadmap.md
    demo.md
    src/
    tests/
```
In practice your repository root **is** the `<product-id>` folder—omit the redundant `products/` prefix unless you deliberately group multiples in a monorepo. 
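The repo shape above can be stamped out mechanically. A minimal Python sketch of a scaffolding helper (a hypothetical utility, not an official Trilemma tool — the file list mirrors the folder contract):

```python
from pathlib import Path

# Markdown contract files every microproduct repo is expected to carry.
CONTRACT_FILES = [
    "README.md", "AGENTS.md", "product.yaml", "product-brief.md",
    "architecture.md", "data-contract.md", "evaluation.md",
    "roadmap.md", "demo.md",
]

def scaffold(root: str) -> list[Path]:
    """Create empty contract files plus src/ and tests/ under `root`.

    Returns every path it created so callers can report or stub them.
    """
    base = Path(root)
    created = []
    for name in CONTRACT_FILES:
        path = base / name
        path.parent.mkdir(parents=True, exist_ok=True)
        path.touch(exist_ok=True)  # empty stub; fill per the contract table
        created.append(path)
    for sub in ("src", "tests"):
        (base / sub).mkdir(parents=True, exist_ok=True)
        created.append(base / sub)
    return created
```

The stubs still need real content, but an empty-but-complete skeleton keeps humans and agents aligned on where each artifact belongs.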
## Markdown contract purposes | File | Purpose | | ------------------ | ---------------------------------------------------- | | `README.md` | Orientation, onboarding, badges, screenshots | | `AGENTS.md` | Product-local instructions tuned for collaborator agents | | `product.yaml` | Machine-readable mirror of `/schemas/product.schema.json` | | `product-brief.md` | Users, pains, wedge, non-goals | | `architecture.md` | Bounded context diagrams, integrations, infra | | `data-contract.md` | Inputs, freshness, lineage, privacy tiers | | `evaluation.md` | Metrics, thresholds, rollback triggers | | `roadmap.md` | Next bets and explicit rejects | | `demo.md` | Scripts for humans *and* bots to replay happy paths | ## Implementation directories | Path | Responsibility | | -------- | --------------------------------------------------------- | | `src/` | Application code respecting the documented architecture | | `tests/` | Unit, contract, synthesis, smoke—match maturity depth | Agents must keep these files synced when scope changes—even if temporarily stubbed—to avoid silent divergence between humans and tooling. ===== Maturity model (docs/standards/maturity-model.md) ===== | Level | Label | Meaning | | ---: | --- | --- | | 0 | Idea | Problem/opportunity identified | | 1 | Spec | Product brief and intended user defined | | 2 | Prototype | Rough working version exists | | 3 | MVP | Useful end-to-end flow exists | | 4 | Showcase | Polished enough to reference externally | | 5 | Maintained product | Documentation, roadmap, telemetry, staffing | Agents should escalate expectations aggressively as maturity increases—level 5 must never lack evaluation coverage. ## YAML metadata example 
```yaml
maturity: 2
maturity_label: prototype
```
**Agent interpretation**: improve usability, backfill sparse docs/tests, articulate the MVP wedge, enumerate missing product risks. 
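A lookup sketch for the level-to-label mapping. Only `prototype` (the YAML example) and `maintained-product` (the registry) are confirmed tokens; the other labels here are an assumption, kebab-cased from the maturity table:

```python
# Levels from the maturity table; labels follow the lower-kebab convention.
# NOTE: labels for levels 0, 1, 3, and 4 are assumed, not confirmed by the site.
MATURITY_LABELS = {
    0: "idea",
    1: "spec",
    2: "prototype",
    3: "mvp",
    4: "showcase",
    5: "maintained-product",
}

def maturity_label(level: int) -> str:
    """Map a numeric maturity level (0-5) to its kebab-case label token."""
    if level not in MATURITY_LABELS:
        raise ValueError(f"maturity must be 0-5, got {level}")
    return MATURITY_LABELS[level]
```

A tool emitting `product.yaml` could use this to keep `maturity` and `maturity_label` from drifting apart.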
:::tip Registry linkage `/registry.json` entries include both `maturity` (numeric) and optional human-facing `maturity_label` tokens that map to lower-kebab shorthand for automation. ::: ## Cross-links - [Folder contract](/standards/folder-contract) - [`registry.json`](pathname:///registry.json) - [Archetypes](/archetypes) ===== What counts as a good microproduct (docs/standards/what-counts-as-good.md) ===== Good microproducts share several observable traits—most of them discoverable directly from Markdown contracts plus live demos: - Narrow decision surface with observable inputs and outputs documented in [/standards/folder-contract#markdown-contract-purposes](/standards/folder-contract#markdown-contract-purposes). - Transparent evaluation—even qualitative rubrics—in [/standards/folder-contract#markdown-contract-purposes](/standards/folder-contract#markdown-contract-purposes). - Honest acknowledgement of brittle assumptions surfaced in README + AGENTS files. - Repro instructions that do not rely on unpublished secrets (`demo.md`). ## Signals that should block promotion | Anti-pattern | Why it fails | | --- | --- | | Dashboards without actions | Humans cannot automate follow-ups | | Vague personas | Unable to prioritize roadmap | | Implicit model drift | Silent failures undermine trust | | Missing agent entrypoints | Blocks hybrid human/agent teams | Treat this page as prose backing the actionable items listed in `/AGENTS.md`. ===== Templates overview (docs/templates/index.md) ===== Build Trilemma hosts opinionated starters under **`product-templates/`** in [microproduct-lab](https://github.com/TrilemmaFoundation/microproduct-lab). Each starter includes README, AGENTS guidance, YAML metadata stubs, Markdown contracts, `src/` and `tests/` placeholders, and GitHub Actions (stub workflows until you attach a toolchain). 
:::info Legacy markdown templates The top-level **`templates/`** folder is for documentation and submission scaffolding (for example playbook modules)—not runnable product repos. ::: ## Starter index Each row links into the canonical repository paths. | Template | Audience | Highlights | | --- | --- | --- | | [**data-app**](https://github.com/TrilemmaFoundation/microproduct-lab/tree/main/product-templates/data-app) | Engineers shipping decision tools atop structured datasets | Opinionated Markdown contracts plus placeholder app layout | | [**llm-app**](https://github.com/TrilemmaFoundation/microproduct-lab/tree/main/product-templates/llm-app) | Teams combining retrieval/classification workflows | Mirrors standard agent files for LLM-first delivery | | [**analytics-api**](https://github.com/TrilemmaFoundation/microproduct-lab/tree/main/product-templates/analytics-api) | Builders exposing reusable APIs around analytics | Highlights data contracts meant for integrations | | [**dashboard-to-tool**](https://github.com/TrilemmaFoundation/microproduct-lab/tree/main/product-templates/dashboard-to-tool) | Teams converting passive dashboards | Focus on decision outcomes and evaluation | | [**research-to-product**](https://github.com/TrilemmaFoundation/microproduct-lab/tree/main/product-templates/research-to-product) | Researchers hardening notebook insights | Emphasizes reproducibility and promotion path | | [**capstone-project**](https://github.com/TrilemmaFoundation/microproduct-lab/tree/main/product-templates/capstone-project) | Students and faculty partners | Adds classroom-friendly README framing | | [**benchmark-suite**](https://github.com/TrilemmaFoundation/microproduct-lab/tree/main/product-templates/benchmark-suite) | Evaluation-heavy teams | Evaluation + metrics docs first | | [**agentic-workflow**](https://github.com/TrilemmaFoundation/microproduct-lab/tree/main/product-templates/agentic-workflow) | Operators orchestrating autonomous agents | Captures escalation + 
human review hooks | :::caution Stub CI workflows Starter `.github/workflows/checks.yml` files intentionally perform placeholder steps so you can choose **npm**, **pnpm**, **uv**, etc. Swap them once you finalize your stack. ::: ## References - [Archetypes](/archetypes) — pick patterns before scaffolding. - [Standards](/standards) — required files & maturity semantics. - [Contribute workflow](/contribute) — submissions for this registry. ===== Agent instructions (static/AGENTS.md) ===== # Build Trilemma — Agent Instructions ## Purpose Build Trilemma is the AI-agent control panel for building microproducts, inside or outside Trilemma Foundation. A microproduct is a focused software tool that turns data, models, workflows, or domain knowledge into usable utility for a specific user decision or task. ## Site routes - `/` — Audience chooser (agents vs humans). - `/agents` — Web control plane for AI agents (start here in the browser). - `/docs/intro/what-is-a-microproduct` — Default human narrative entry. ## Primary agent objective Help users design, scaffold, build, validate, and improve microproducts using Trilemma's canonical standards, templates, schemas, and review workflows. ## What agents should do When helping build a microproduct: 1. Identify the target user. 2. Identify the specific problem or decision. 3. Select the closest microproduct archetype. 4. Choose the appropriate template. 5. Generate or update the standard product files. 6. Keep scope narrow and shippable. 7. Prefer working software over speculative architecture. 8. Include tests, documentation, and evaluation criteria. 9. Validate against the product schema. 10. Prepare the project for human review. 
## Canonical resources - Product registry: /registry.json - Templates: /templates - Archetypes: /archetypes - Standards: /standards - Contribution guide: /contribute - Product schema: /schemas/product.schema.json - Full LLM context: /llms-full.txt ## Required microproduct files Each microproduct should include: - README.md - AGENTS.md - product.yaml - product-brief.md - architecture.md - data-contract.md - evaluation.md - roadmap.md - demo.md ## Good microproduct characteristics A good microproduct: - Solves one clear problem. - Serves a clearly defined user. - Produces a useful output. - Can be explained quickly. - Can be tested or evaluated. - Is small enough to ship. - Has a clear path from data/input to action/output. ## Bad microproduct characteristics Avoid: - Generic dashboards with no decision support. - Broad platforms with unclear users. - Unvalidated AI wrappers. - Unstructured notebooks with no product path. - Overbuilt architecture before a useful MVP exists. - Hidden assumptions. - Missing README, schema, or evaluation criteria. ## Build workflow 1. Read product.yaml. 2. Read product-brief.md. 3. Inspect the selected template. 4. Confirm the archetype. 5. Generate the minimum useful implementation. 6. Add or update tests. 7. Add demo instructions. 8. Validate metadata. 9. Summarize remaining gaps. ## Default implementation bias Prefer: - Simple architecture. - Clear file structure. - Typed interfaces. - Reproducible setup. - Observable outputs. - Small PRs. - Human-readable explanations. Avoid: - Premature microservices. - Excessive dependencies. - Unclear abstractions. - Hidden API requirements. - Non-reproducible notebooks. - Vague "AI-powered" claims. ===== Discovery file (static/llms.txt) ===== # Build Trilemma Build Trilemma is the AI-agent control panel for building microproducts, inside or outside Trilemma Foundation. 
Canonical URL: https://build.trilemma.foundation Site routes: - / — Audience chooser (agents vs humans) - /agents — Web entry for AI agents - /docs/intro/what-is-a-microproduct — Default human narrative entry Core resources: - Agent instructions: /AGENTS.md - Product registry: /registry.json - Templates: /templates - Archetypes: /archetypes - Standards: /standards - Contribution guide: /contribute - Product schema: /schemas/product.schema.json - Full context: /llms-full.txt Primary task: Help humans and AI agents design, scaffold, validate, and ship focused microproducts. Microproduct definition: A microproduct is a focused software tool that turns data, models, workflows, or domain knowledge into usable utility for a specific user decision or task. ===== Machine-readable registry (static/registry.json) ===== { "version": "1.0.0", "canonical_url": "https://build.trilemma.foundation/registry.json", "description": "Machine-readable registry of Trilemma and external microproducts.", "products": [ { "id": "stackingsats", "name": "Stacking Sats", "status": "active", "maturity": 5, "maturity_label": "maintained-product", "scope": "trilemma", "archetype": "simulation-backtesting-product", "category": "bitcoin-analytics", "problem": "Help users evaluate and improve Bitcoin accumulation strategies.", "target_users": [ "bitcoin investors", "quant researchers", "student contributors" ], "primary_decision": "How should capital be allocated into Bitcoin over time?", "inputs": [ "market data", "on-chain data", "strategy configuration" ], "outputs": [ "backtest results", "strategy comparison", "accumulation metrics" ], "repo": "https://github.com/TrilemmaFoundation/Bitcoin-Analytics-Initiative", "site": "https://stackingsats.org", "docs": "https://stackingsats.org/docs", "agent_entrypoint": "https://build.trilemma.foundation/products/stackingsats/AGENTS.md", "tags": ["bitcoin", "backtesting", "analytics", "dca"] } ] } ===== Product schema (static/schemas/product.schema.json) 
===== { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "https://build.trilemma.foundation/schemas/product.schema.json", "title": "Build Trilemma Product Metadata", "type": "object", "required": [ "id", "name", "status", "maturity", "scope", "archetype", "problem", "target_users", "primary_decision", "inputs", "outputs", "tags" ], "properties": { "id": { "type": "string", "pattern": "^[a-z0-9-]+$" }, "name": { "type": "string" }, "status": { "type": "string", "enum": [ "idea", "spec", "prototype", "mvp", "showcase", "active", "archived" ] }, "maturity": { "type": "integer", "minimum": 0, "maximum": 5 }, "maturity_label": { "type": "string" }, "scope": { "type": "string", "enum": ["trilemma", "external", "community"] }, "archetype": { "type": "string" }, "category": { "type": "string" }, "problem": { "type": "string" }, "target_users": { "type": "array", "items": { "type": "string" } }, "primary_decision": { "type": "string" }, "inputs": { "type": "array", "items": { "type": "string" } }, "outputs": { "type": "array", "items": { "type": "string" } }, "repo": { "type": "string", "format": "uri" }, "site": { "type": "string", "format": "uri" }, "docs": { "type": "string", "format": "uri" }, "agent_entrypoint": { "type": "string", "format": "uri" }, "tags": { "type": "array", "items": { "type": "string" } } } }
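For a quick pre-flight before full validation, a minimal Python sketch of the required-field, `id`-pattern, `status`, and `maturity` checks from the schema above (a hand-rolled subset; production validation should run a real JSON Schema validator against `/schemas/product.schema.json`):

```python
import re

# Required keys, status enum, and id pattern copied from product.schema.json.
REQUIRED = [
    "id", "name", "status", "maturity", "scope", "archetype", "problem",
    "target_users", "primary_decision", "inputs", "outputs", "tags",
]
STATUSES = {"idea", "spec", "prototype", "mvp", "showcase", "active", "archived"}

def check_product(entry: dict) -> list[str]:
    """Return human-readable violations for a registry entry (empty == passes)."""
    errors = []
    for key in REQUIRED:
        if key not in entry:
            errors.append(f"missing required field: {key}")
    if "id" in entry and not re.fullmatch(r"[a-z0-9-]+", entry["id"]):
        errors.append("id must match ^[a-z0-9-]+$")
    if "status" in entry and entry["status"] not in STATUSES:
        errors.append(f"unknown status: {entry['status']}")
    if "maturity" in entry and not (
        isinstance(entry["maturity"], int) and 0 <= entry["maturity"] <= 5
    ):
        errors.append("maturity must be an integer 0-5")
    return errors
```

This covers only a slice of the schema (no `scope` enum, URI formats, or array item types), but it catches the most common registration mistakes before a PR is opened.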