Dots-In Research Labs

Modelling the human, not the text.

We build the Large Behavioural Model (LBM) — the first foundation-model architecture to treat human behaviour as a first-class computational primitive.

Mission & Vision
Why we exist.
Language has LLMs. Vision has ViTs. The physical world has JEPA. Human behaviour has no foundation model. We're here to change that.
Mission
Build the world's first foundational AI that understands human behaviour — not through clicks and keystrokes, but through the causal, temporal, multi-dimensional structure of how humans actually function.
Vision
A world where every AI system — from healthcare to education, from productivity to mental health — is powered by a shared behavioural intelligence layer that truly understands the human it serves.
Modality | Input | Architecture | Status
Language | Tokens | GPT, Claude, LLaMA | ✓ Solved
Vision | Pixels | ViTs, CLIP, DINOv2 | ✓ Solved
Physical World | Actions / Physics | JEPA, Genie | ◐ Emerging
Audio | Spectrograms | Whisper, AudioPaLM | ✓ Solved
Human Behaviour | Behavioural State Vectors | LBM | ◉ We're building it
Ongoing Research
What we're working on.
Six core research streams, each addressing an unsolved problem in modelling human behaviour at scale.
01
Behavioural State Vectors
The 240-dimensional continuous representation of a human's current state — cognitive, emotional, biological, motivational — updated in real time.
Active
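As a rough illustration of what a behavioural state vector might look like, here is a minimal sketch: the exponential-moving-average update rule and the decay rate are assumptions made for illustration, not the published LBM formulation.

```python
import numpy as np

# Hypothetical sketch of a Behavioural State Vector (BSV). Only the
# 240-dimensional size comes from the description above; the EMA
# update and decay rate are illustrative assumptions.
BSV_DIM = 240

class BehaviouralStateVector:
    def __init__(self, decay: float = 0.9):
        # One slot per cognitive / emotional / biological / motivational dim.
        self.state = np.zeros(BSV_DIM)
        self.decay = decay

    def update(self, observation: np.ndarray) -> np.ndarray:
        # Real-time update: old state decays, the new signal folds in.
        self.state = self.decay * self.state + (1 - self.decay) * observation
        return self.state

bsv = BehaviouralStateVector()
bsv.update(np.ones(BSV_DIM))
print(round(bsv.state[0], 3))  # 0.1 after one unit observation
```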
02
Causal Behavioural Graphs
Directed causal graphs capturing why behaviour happens — 230M+ validated edges across 8 discovery methods.
Active
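A toy version of a causal behavioural graph can show what "root-cause inference" over directed edges means; the nodes and edge strengths below are fabricated for illustration, not entries from the 230M+ edge graph described above.

```python
from collections import defaultdict

# Toy directed causal graph: edges and strengths are illustrative
# assumptions, not validated LBM data.
class CausalGraph:
    def __init__(self):
        self.parents = defaultdict(list)  # effect -> [(cause, strength)]

    def add_edge(self, cause, effect, strength):
        self.parents[effect].append((cause, strength))

    def root_causes(self, effect, visited=None):
        # Walk cause links upward until only parentless nodes remain.
        if visited is None:
            visited = set()
        roots = set()
        for cause, _ in self.parents.get(effect, []):
            if cause in visited:
                continue
            visited.add(cause)
            if cause in self.parents:
                roots |= self.root_causes(cause, visited)
            else:
                roots.add(cause)
        return roots

g = CausalGraph()
g.add_edge("late caffeine", "poor sleep", 0.7)
g.add_edge("poor sleep", "low focus", 0.8)
print(g.root_causes("low focus"))  # {'late caffeine'}
```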
03
Episodic Memory Architecture
Multi-scale memory compressing 15-minute episodes and 15-year developmental arcs into a unified retrieval system.
In Progress
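Multi-scale compression can be sketched with a toy episodic store that reads the same raw trace at different resolutions; the window sizes and mean-pooling "compression" are illustrative assumptions, not the LBM memory design.

```python
from statistics import mean

# Toy multi-scale episodic memory: one raw trace, read back compressed
# into windows of different sizes (stand-ins for minutes vs years).
class EpisodicMemory:
    def __init__(self):
        self.raw = []

    def write(self, value: float):
        self.raw.append(value)

    def read(self, scale: int):
        # Compress the trace into windows of `scale` events each.
        return [mean(self.raw[i:i + scale])
                for i in range(0, len(self.raw), scale)]

mem = EpisodicMemory()
for v in range(16):
    mem.write(float(v))
print(mem.read(4))   # fine-grained: [1.5, 5.5, 9.5, 13.5]
print(mem.read(16))  # coarse arc:   [7.5]
```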
04
Self-Supervised Learning
7 self-supervised objectives for training on behavioural data without ground truth — next-token prediction for behaviour.
Active
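The "next-token prediction for behaviour" idea can be sketched as next-state regression on a toy sequence: the data supervises itself, no labels needed. The linear predictor below is an illustrative stand-in, not one of the seven actual objectives.

```python
import numpy as np

# Self-supervised sketch: predict state t+1 from state t on a random
# toy sequence. The 8-dim states and linear model are assumptions.
rng = np.random.default_rng(0)
states = rng.normal(size=(100, 8))         # toy behavioural sequence

X, Y = states[:-1], states[1:]             # inputs vs next states
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # fit linear next-state model
mse = float(np.mean((X @ W - Y) ** 2))     # loss needs no ground truth
print(round(mse, 3))
```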
05
Safe Reinforcement Learning
Mathematical safety guarantees for behavioural interventions. When should a model nudge, warn, or stay silent?
In Progress
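The nudge/warn/silence decision above can be sketched as a guarded policy that defaults to doing nothing when confidence is low; the thresholds are illustrative assumptions, not LBM's actual mathematical safety bounds.

```python
# Sketch of a guarded intervention policy. Action names mirror the
# nudge/warn/silence framing above; all numeric thresholds are
# illustrative assumptions.
def choose_intervention(risk: float, confidence: float,
                        min_confidence: float = 0.8) -> str:
    # Safety-first default: without enough confidence, stay silent.
    if confidence < min_confidence:
        return "silence"
    if risk > 0.7:
        return "warn"
    if risk > 0.3:
        return "nudge"
    return "silence"

print(choose_intervention(risk=0.9, confidence=0.95))  # warn
print(choose_intervention(risk=0.9, confidence=0.5))   # silence
```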
06
Population Intelligence
Scaling individual models to population-level insights without falling into Simpson's paradox — from n=1 to n=10M.
Forming
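Simpson's paradox, the trap named above, shows up even in a tiny example: every individual exhibits a positive trend, yet naive pooling flips the sign. The numbers below are fabricated purely for illustration.

```python
# Two toy users: within each, more exposure tracks better outcomes,
# but the users sit at different baselines, so pooling reverses it.
groups = {
    "user_a": [(1, 10), (2, 11), (3, 12)],   # (exposure, outcome)
    "user_b": [(8, 2), (9, 3), (10, 4)],
}

def slope(pairs):
    # Ordinary least-squares slope of outcome on exposure.
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    num = sum((x - mx) * (y - my) for x, y in pairs)
    den = sum((x - mx) ** 2 for x, _ in pairs)
    return num / den

pooled = [p for pairs in groups.values() for p in pairs]
print([round(slope(v), 2) for v in groups.values()])  # [1.0, 1.0]
print(round(slope(pooled), 2))  # negative once naively pooled
```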
Architecture
How LBM works.
A five-layer architecture transforming raw signals into causal understanding and safe intervention.
LBM Architecture Stack
Each layer builds on the one below — from raw signal to actionable understanding.
Ingestion
Digital activity · Sleep patterns · Physiology · Environment · Social signals
Episodes
Micro (15-30m) · Macro (days-weeks) · Context windows · State inference
BSV Engine
240-dim vector · Real-time updates · Latent space · Temporal dynamics
Causal Graph
8 discovery methods · 230M+ edges · Root-cause inference · Counterfactuals
Intervention
Safe RL policy · Nudge / Warn / Silence · Safety bounds · Long-horizon RL
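The five layers can be read as a pipeline, each consuming the output of the one below. Every function body in this sketch is a placeholder that exists only to show the data flow, not LBM internals.

```python
# Illustrative skeleton of the five-layer stack; all bodies are
# placeholder assumptions, not the real implementations.
def ingest(raw_signals):       # Layer 1: collect raw multi-source signals
    return {"signals": raw_signals}

def segment_episodes(stream):  # Layer 2: cut the stream into episodes
    return [stream]

def infer_bsv(episodes):       # Layer 3: map episodes to a 240-dim state
    return [0.0] * 240

def query_causes(bsv):         # Layer 4: look up causal antecedents
    return ["example-cause"]

def decide(causes):            # Layer 5: safe intervention choice
    return "silence" if not causes else "nudge"

out = ["sleep", "activity"]
for layer in (ingest, segment_episodes, infer_bsv, query_causes, decide):
    out = layer(out)
print(out)  # nudge
```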
Built by the Dots-In founding research team.
The Team
Who's building this.
A small, focused founding team with deep expertise across AI, behavioural science, genomics, and quantitative finance.
Publications & Open Source
Our work.
Papers, proposals, and open-source artifacts from the community.
Paper
Large Behavioural-Omics Model (LBOM) — Technical Dossier
Foundational architecture notes
Paper
LBM Technical White Paper — Architecture, Formalisms & Validation
Pre-publication · Access under NDA via community onboarding
Proposal
IndiaAI Mission — National Foundational AI Model Proposal
Submitted January 2026 · Behavioural intelligence infrastructure
Open Source
Behavioural Data Standards v0.1
Community Build Workshop · June 2026
Open Source
CBG-Viz — Causal Behavioural Graph Visualisation
Community Build Workshop · June 2026
Open Source
BehavioralBench — Evaluation Benchmark for Behavioural AI
Community Build Workshop · June 2026

The room is open.

We're building the first Behavioural Foundation Model from India — not as a company project, but as a national scientific effort.

Join the community →