Helix is a persistent intelligence layer that sits between you and your AI assistant — learning from every session the way biology learns from experience, without rewriting a single line of its own code.
The model has no access to the context that makes you productive: the patterns you've built dozens of times, the naming conventions you've settled on, the architectural decisions you made last week. Every session begins at zero.
You re-describe your stack every session. The AI asks how your FastAPI services fit together. Again.
The AI proposes solutions that contradict conventions established three weeks ago.
Long sessions lose early decisions. Short sessions have none to lose.
The work done yesterday doesn't make today faster. Every session has the same ceiling.
Biological epigenetics describes how an organism's experience leaves marks on gene expression without changing the underlying DNA. A cell exposed to repeated stress develops methylation patterns that make it respond faster next time. The organism adapts by annotating, not rewriting.
Helix applies this principle to AI infrastructure. Every record has two regions: the content itself, never rewritten, and the marks layered on top of it.
Every code block is tracked for frequency. Patterns that appear repeatedly are promoted to the permanent library. No human decision required.
→ DNA methylation in biology

Coding style observed across sessions builds a personalized style guide. The system learns your naming patterns, your indentation, your architectural preferences.
→ Histone modification

High-frequency phrases are tracked, scored by token savings, and promoted to a personal symbol dictionary. Your vocabulary compresses your own context.
→ Chromatin accessibility maps

After enough LLM classifications of the same language, the scanner generates its own rules — and stops paying for API calls. It learned the pattern itself.
→ Epigenetic memory across generations

Unresolved risks persist across sessions. A flagged decision that was never resolved stays active and resurfaces when relevant context appears again.
→ Stress-induced marks that sensitize

Structural fingerprints that match across sessions auto-register as templates. The system generates diversity from proven segments — no curation needed.
→ V(D)J recombination

Five biological components. One unified intelligence layer.
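Helix's internals aren't shown here, but frequency-based promotion can be sketched. The class name, the normalization step, and the promotion threshold below are all assumptions for illustration, not Helix's actual implementation:

```python
from collections import Counter
import hashlib

PROMOTION_THRESHOLD = 3  # hypothetical: sightings before a pattern is promoted

class FrequencyTracker:
    """Counts structural fingerprints of code blocks; promotes repeats."""

    def __init__(self, threshold: int = PROMOTION_THRESHOLD):
        self.threshold = threshold
        self.counts = Counter()   # fingerprint -> times seen
        self.library = {}         # fingerprint -> promoted template source

    @staticmethod
    def fingerprint(block: str) -> str:
        # Normalize whitespace so formatting changes don't split the count.
        normalized = " ".join(block.split())
        return hashlib.sha256(normalized.encode()).hexdigest()[:16]

    def observe(self, block: str) -> bool:
        """Record one sighting; return True if the block was just promoted."""
        fp = self.fingerprint(block)
        self.counts[fp] += 1
        if self.counts[fp] >= self.threshold and fp not in self.library:
            self.library[fp] = block  # promoted: no human decision required
            return True
        return False
```

Frequency is the only signal: nothing is curated, a pattern simply crosses the threshold and enters the library.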
The Atom Pool doesn't contain templates someone wrote. It contains templates that emerged automatically from observing what was built. Frequency is the signal. What you do repeatedly is what you care about.
Individual code blocks, function signatures, config entries. 2–15 tokens. The base vocabulary of your infrastructure.
Groups of atoms that combine into working subsystems — auth middleware, DB modules, health checks. 15–100 tokens.
Full MCP servers, service stacks, API routers assembled from molecules. 100+ tokens. Ready to instantiate.
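The three tiers compose naturally as nested records. A minimal sketch, assuming simple dataclasses; the text names only atoms and molecules, so `Organism` is a placeholder label for the third tier:

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    """Individual code block, signature, or config entry (~2-15 tokens)."""
    name: str
    source: str

@dataclass
class Molecule:
    """Atoms combined into a working subsystem (~15-100 tokens)."""
    name: str
    atoms: list = field(default_factory=list)

    def render(self) -> str:
        return "\n".join(a.source for a in self.atoms)

@dataclass
class Organism:  # placeholder name: the third tier is unnamed in the text
    """Full server or service stack assembled from molecules (100+ tokens)."""
    name: str
    molecules: list = field(default_factory=list)

    def instantiate(self) -> str:
        return "\n\n".join(m.render() for m in self.molecules)
```

Instantiating a full stack is then just rendering the tree from the leaves up.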
Every user message flows through Membrane, fans into four parallel processing channels, converges at Context Builder, passes through compression and injection — then the LLM response loops back to start the cycle again.
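The cycle above can be sketched as one async pass. Everything here is a stand-in: the channel callables, the `build_context` and `compress` hooks, and the LLM call are hypothetical interfaces, not Helix's API:

```python
import asyncio

async def run_cycle(message, channels, build_context, compress, call_llm):
    """One pass of the pipeline: fan out, converge, compress, inject, respond."""
    # Fan the message into the parallel processing channels (four in Helix).
    results = await asyncio.gather(*(ch(message) for ch in channels))
    context = build_context(results)          # converge at Context Builder
    prompt = compress(context) + "\n" + message  # compression, then injection
    response = await call_llm(prompt)         # response feeds the next cycle
    return response
```

The response loops back as input to the next cycle's channels, which is where the learning accumulates.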
Structural analysis of any code in 100+ languages. Runs on every file at near-zero cost. Identifies structure without understanding meaning.
When Tier 1 needs deeper understanding, Haiku classifies the code block — purpose, type, quality, reuse potential. Only runs when Tier 1 is insufficient.
After N Tier 2 analyses of the same language, the system generates its own rules. Those heuristics run before the LLM on future analyses — eliminating the API call entirely.
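The tier escalation can be sketched as a cascade that trains itself out of the paid path. The sample threshold and the toy majority-label induction below are assumptions; Helix's actual rule generation is not described:

```python
from collections import Counter

N_SAMPLES_TO_LEARN = 5  # hypothetical: Tier 2 analyses before rule induction

class TieredClassifier:
    """Tier 2 LLM calls train Tier 3 heuristics that eventually replace them."""

    def __init__(self, call_llm):
        self.call_llm = call_llm   # Tier 2: paid classification (e.g. Haiku)
        self.samples = {}          # language -> [(block, label), ...]
        self.rules = {}            # language -> learned Tier 3 heuristic

    def classify(self, language: str, block: str) -> str:
        if language in self.rules:             # Tier 3 heuristic: no API call
            return self.rules[language](block)
        label = self.call_llm(block)           # fall back to the LLM
        self.samples.setdefault(language, []).append((block, label))
        if len(self.samples[language]) >= N_SAMPLES_TO_LEARN:
            self.rules[language] = self._induce_rule(language)
        return label

    def _induce_rule(self, language: str):
        # Toy induction: predict the majority label seen so far.
        labels = [label for _, label in self.samples[language]]
        majority = Counter(labels).most_common(1)[0][0]
        return lambda block: majority
```

After the threshold, classification for that language costs nothing: the learned rule intercepts every future call.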
High-frequency phrases are promoted to symbols in your personal dictionary. Stored context carries the compressed form; each symbol expands back to the full phrase at injection, so the LLM always receives it intact. Token savings compound with every session.
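A symbol dictionary of this kind is a pair of lookup tables. The `§`-prefixed symbol scheme below is an assumption for illustration:

```python
class SymbolDictionary:
    """Maps high-frequency phrases to short symbols and back."""

    def __init__(self):
        self.phrase_to_symbol = {}
        self.symbol_to_phrase = {}

    def promote(self, phrase: str) -> str:
        """Assign the next symbol to a high-frequency phrase."""
        symbol = f"§{len(self.phrase_to_symbol)}"  # hypothetical symbol scheme
        self.phrase_to_symbol[phrase] = symbol
        self.symbol_to_phrase[symbol] = phrase
        return symbol

    def compress(self, text: str) -> str:
        """Replace known phrases with their symbols (stored form)."""
        for phrase, symbol in self.phrase_to_symbol.items():
            text = text.replace(phrase, symbol)
        return text

    def expand(self, text: str) -> str:
        """Restore full phrases before the text reaches the LLM."""
        for symbol, phrase in self.symbol_to_phrase.items():
            text = text.replace(symbol, phrase)
        return text
```

Because compression is a lossless substitution, expansion always recovers the exact original text.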
Helix is in active development. We're looking for early builders, researchers, and teams who want to compound their AI workflows before everyone else does.