Our Large Behavioural Model creates a digital twin of every user. When our agents call your API, they carry deep human context — turning generic requests into precise, personal actions.
Rule-based engines match "users who bought X also bought Y". They know what you did, never who you are. Half the time, recommendations miss; users ignore them; churn follows.
The missing layer isn't more data. It's behavioural understanding.
We build a secure, private behavioural twin for each user. It runs locally, predicting their state, intent, and receptivity in real time.
How we turn chaos into understanding, entirely on-device.
No raw data ever leaves the phone.
State updates every 200ms.
Wearables, apps, IoT devices pipe raw signals — biometrics, tap patterns, scroll behaviour — collected passively with consent.
A specialised tokenizer converts continuous signals into discrete behavioural tokens — like sound waves into musical notation.
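As a minimal sketch of what such a tokenizer might do (the real vocabulary and binning scheme are not described here; `tokenize_signal` and its parameters are illustrative), uniform quantization maps each continuous sample to a discrete token id:

```python
def tokenize_signal(samples, n_bins=256, lo=-1.0, hi=1.0):
    """Illustrative behavioural tokenizer: clip each continuous sample
    to [lo, hi] and map it to one of n_bins integer token ids."""
    tokens = []
    for x in samples:
        x = max(lo, min(hi, x))           # clip out-of-range readings
        scaled = (x - lo) / (hi - lo)     # normalise to [0, 1]
        tokens.append(min(int(scaled * n_bins), n_bins - 1))
    return tokens

# Example: a normalised scroll-velocity trace becomes a token sequence.
print(tokenize_signal([-1.2, -0.5, 0.0, 0.5, 1.0]))  # -> [0, 64, 128, 192, 255]
```

Once signals are discrete tokens, the same sequence-modelling machinery used for text applies directly.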
A 200B-parameter Mixture-of-Experts model predicts and contrasts possible futures via masked and contrastive learning.
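The contrastive half of that objective can be sketched in a few lines. This InfoNCE-style loss (a standard formulation, not this system's published training code) scores the embedding of the true next state against sampled alternative futures:

```python
import math

def info_nce(anchor, positive, negatives, temp=0.1):
    """InfoNCE-style contrastive loss: negative log softmax probability
    of the true future (positive) among sampled alternative futures."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    logits = [dot(anchor, positive) / temp]
    logits += [dot(anchor, n) / temp for n in negatives]
    m = max(logits)                                  # numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]

# The loss is small when the anchor and positive embeddings align.
aligned = info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
misaligned = info_nce([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
print(aligned < misaligned)
```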
Per-user LoRA layers generated in 30 seconds — a personalised behavioural engine that fits in your pocket.
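A per-user LoRA adapter is cheap because it never touches the frozen base weights. A toy sketch (tiny matrices, not real model dimensions) of merging a low-rank update into a weight matrix:

```python
def matmul(A, B):
    """Plain-Python matrix product for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def apply_lora(W, A, B, alpha=1.0):
    """Effective weight = W + alpha * (B @ A): the frozen base weight
    plus a per-user low-rank update of rank len(A)."""
    delta = matmul(B, A)
    return [[w + alpha * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Rank-1 update to a 2x2 base weight.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]          # r x d_in
B = [[1.0], [1.0]]        # d_out x r
print(apply_lora(W, A, B, alpha=0.5))  # -> [[1.5, 1.0], [0.5, 2.0]]
```

Only `A` and `B` are personalised, which is why the adapter is small enough to generate quickly and store on a phone.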
The agent calls your API with a behaviourally-enriched request — same endpoint, profoundly different context.
Every outcome refines the twin. The system learns from the consequences of its own recommendations.
Click any agent in the orbital diagram to explore what it does, what APIs it needs, and how it transforms outcomes.
The emerging standard for agent-to-service communication. Bi-directional context sharing with the behavioural twin.
{
  "jsonrpc": "2.0",
  "id": 1,
"method": "tools/call",
"params": {
"name": "food_service.recommend",
"arguments": {
"behavioural_context": {
"state_vector": [0.42, 0.81, 0.18],
"cognitive_load": "low",
"decision_style": "deliberate",
"peak_receptivity": true,
"confidence": 0.94
},
"preferences": {
"nutrition": "high_protein",
"max_options": 3
}
}
}
}

Mobile chips (NPUs) are finally powerful enough to run quantized 7B models locally. No cloud latency. No privacy paradox.
1M+ context windows allow us to feed a month of behavioural history into a single inference pass. The "vibe" is now mathematical.
APIs are moving from serving humans (UI) to serving Agents (JSON). Your API needs to learn how to speak "Agency".
We operate on a "Zero-Knowledge Proof" of intent. The twin runs on the user's device. We only train the base model.
The model runs on-device. No PII leaves the phone.
Only noisy gradients are shared for base model training.
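"Noisy gradients" usually means DP-SGD-style clipping plus Gaussian noise; the sketch below assumes that mechanism (the function name and parameter values are illustrative, not our actual training configuration):

```python
import math
import random

def privatize_gradient(grad, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Bound a per-user gradient to an L2 norm, then add Gaussian noise
    scaled to that bound before it leaves the device (DP-SGD style)."""
    rng = rng or random.Random()
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    sigma = noise_mult * clip_norm
    return [g + rng.gauss(0.0, sigma) for g in clipped]

# A large raw gradient is bounded before any noise is added.
noiseless = privatize_gradient([3.0, 4.0], clip_norm=1.0, noise_mult=0.0)
print(noiseless)  # clipped to unit norm: [0.6, 0.8]
```

Clipping bounds any single user's influence on the base model; the noise makes the shared update statistically deniable.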
Our base models and tokenizer are open source on HuggingFace.
We are selecting 10 high-volume API partners for our Q3 2026 pilot program. Integrate the twin, remove the friction.
We analyze your API surface to identify "human moments" — endpoints that benefit from context.
We map our behavioural tokens to your API parameters (e.g., `impulse_score` → `recommendation_weight`).
You add a single middleware to handle the `X-Behavioural-Context` header or MCP tool call.
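The middleware step might look like the following framework-agnostic sketch. The header name comes from this guide; the request shape, handler, and parameter mapping are hypothetical:

```python
import json

def behavioural_middleware(handler):
    """Parse the X-Behavioural-Context header into a dict and hand it
    to the wrapped endpoint alongside the original request."""
    def wrapped(request):
        raw = request.get("headers", {}).get("X-Behavioural-Context")
        context = json.loads(raw) if raw else None
        return handler(request, behavioural_context=context)
    return wrapped

@behavioural_middleware
def recommend(request, behavioural_context=None):
    """Toy endpoint: map a behavioural signal onto an API parameter."""
    weight = 1.0
    if behavioural_context and behavioural_context.get("peak_receptivity"):
        weight *= behavioural_context.get("confidence", 1.0)
    return {"recommendation_weight": weight}

ctx = json.dumps({"peak_receptivity": True, "confidence": 0.94})
print(recommend({"headers": {"X-Behavioural-Context": ctx}}))
```

Endpoints that ignore the extra argument keep working unchanged, which is what makes this a single-middleware integration rather than an API redesign.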