Causal. Not probabilistic.
Knowledge is stored as explainable causal chains, not as an opaque parameter vector. No hallucinations by design.
The neocortex is not a monolith of 175 billion parameters. It is roughly 150,000 small, repeating cortical columns, each learning locally, each growing and pruning its own connections: a web of connectivity that reshapes itself moment to moment. There is no global error signal. There is no frozen checkpoint. Memory is updated through the act of being used.
FractalBrain is built on this principle. Its compute fabric behaves closer to an FPGA, or to a cortex, than to a fixed ANN: it grows relevant connections when confronted with novel tasks, and recycles unused ones. Learning is local, temporal, and does not require differentiability. The model of the world is causal rather than statistical. The agent chooses what to sense.
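In miniature, the loop looks something like this. A toy sketch only: the unit, thresholds and update rule below are illustrative stand-ins we invented for this page, not the production substrate.

```python
class FractalUnit:
    """One locally-learning unit. No global loss, no backprop, no batches."""

    GROW_THRESHOLD = 0.8  # hypothetical novelty level that recruits new wiring
    PRUNE_AFTER = 100     # hypothetical idle steps before a link is recycled

    def __init__(self):
        self.weights = {}  # source id -> connection strength
        self.idle = {}     # source id -> steps since last co-activation

    def step(self, active_inputs, novelty):
        # Grow: a sufficiently novel input pattern recruits new connections.
        if novelty > self.GROW_THRESHOLD:
            for src in active_inputs:
                self.weights.setdefault(src, 0.1)
                self.idle.setdefault(src, 0)
        # Local Hebbian update: strengthen links that fired together just now.
        for src in list(self.weights):
            if src in active_inputs:
                self.weights[src] += 0.05 * (1.0 - self.weights[src])
                self.idle[src] = 0
            else:
                self.idle[src] += 1
        # Recycle: drop links that have gone unused for too long.
        for src in [s for s, t in self.idle.items() if t > self.PRUNE_AFTER]:
            del self.weights[src], self.idle[src]
```

Note what is absent: no loss function, no gradient, no batch. Everything the unit needs is in its own recent activity.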
We are not scaling a known thing. We are proposing a different one.
Active sensing lets the agent solve tasks with a fraction of the interactions required by conventional deep RL.
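The principle is easy to state in code. A toy sketch of uncertainty-driven sensing, with sensor names and scores invented for illustration:

```python
def choose_next_sense(uncertainty):
    """Query the single input the model is least sure about,
    instead of ingesting every observation at every step."""
    return max(uncertainty, key=uncertainty.get)

# One hypothetical step: the agent could read any of three sensors.
beliefs = {"touch": 0.10, "left_camera": 0.72, "right_camera": 0.31}
print(choose_next_sense(beliefs))  # -> 'left_camera'
```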
Unbounded temporal credit assignment over genomics and other long-range structured data, where fixed transformer context windows clip the signal.
Fractal networks run on a single CPU core. No GPUs, no data centres, no backpropagation. A significantly lower energy footprint at comparable performance.
No attention window. No fixed parameter count. No boundary between training and inference. The model grows as it is used.
Every answer traces back to an explicit, inspectable causal chain rather than an opaque parameter vector. No hallucinations by design.
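What a causal chain buys you is a trace. A deliberately tiny sketch, with placeholder facts and a store far simpler than the real one:

```python
# A toy causal store: explicit cause -> effect links, not weights.
chain = [
    ("clouds darken", "pressure drops"),
    ("pressure drops", "rain begins"),
]

def explain(effect, links):
    """Justify a prediction by walking the stored links backwards."""
    for cause, eff in links:
        if eff == effect:
            return explain(cause, links) + [f"{cause} -> {eff}"]
    return []  # a root cause: nothing upstream left to explain

print(explain("rain begins", chain))
# ['clouds darken -> pressure drops', 'pressure drops -> rain begins']
```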
Our internal language model runs on a fractal substrate. Local updates, no backpropagation, a fraction of the energy of GPU-based LLMs.
Knowledge acquired during deployment is kept indefinitely. No retraining, no forgetting.
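In code, removing the boundary between training and inference collapses everything to one entry point. A minimal sketch, with a lookup table standing in for the substrate:

```python
class ContinualModel:
    """One code path for everything: each call both predicts and learns.
    There is no fit()/eval() split and no checkpoint to freeze."""

    def __init__(self):
        self.memory = {}  # context -> most recently observed outcome

    def __call__(self, context, outcome=None):
        prediction = self.memory.get(context)
        if outcome is not None:
            self.memory[context] = outcome  # kept from now on, never reset
        return prediction

m = ContinualModel()
m("sky is grey", outcome="rain")  # deployment doubles as training
print(m("sky is grey"))           # -> 'rain', learned from a single use
```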
Four head-to-head differences. The "Fractal" column of the table is our position; the "Neural" column is the dominant paradigm we are departing from.
FractalBrain LTD is a team of industry-hardened PhDs and engineers from DeepMind, Google, IBM, CERN, and DESY. The technology is the result of over a decade of our own R&D across AI/ML, fractal theory, theoretical physics, and quantum computing.
The first cortically-inspired model. Sequence learning over a hierarchy of repeating regions.
Local, sparse, online learning at the column level. No backprop, no batches.
A unifying theory: the neocortex as a hierarchy of pattern recognisers operating in parallel.
HTM extended with attention, active sensing and hierarchical credit assignment.
Options-based planning across many cortical columns: concurrent, hierarchical, asynchronous.
One self-similar substrate. Growing topology, local Hebbian updates, causal world model — unified.
If a continually-learning substrate sounds like the problem you want to spend the next decade on, get in touch.
Join the team →