Meta Unveils TRIBE v2 AI Model That Predicts Human Brain Activity
Rongchai Wang Mar 26, 2026 13:29
Meta releases TRIBE v2, an AI foundation model trained on 700+ subjects that creates digital twins of human neural responses to visual and audio stimuli.
Meta has released TRIBE v2, a foundation model capable of predicting human brain activity in response to images, sounds, videos, and text. The TRImodal Brain Encoder (TRIBE) is the company's first AI system designed to function as a digital twin of human neural processing.
The model builds on Meta's Algonauts 2025 award-winning architecture but dramatically scales up its scope. While the original version was trained on low-resolution fMRI recordings from just four individuals, TRIBE v2 incorporates data from more than 700 healthy volunteers exposed to diverse media inputs including podcasts, videos, images, and written content.
What makes TRIBE v2 potentially significant for researchers is its zero-shot capability. The model can predict high-resolution fMRI brain activity for new subjects, different languages, and novel tasks without requiring additional training data. According to Meta, it consistently outperforms the standard encoding-model approaches used in neuroscience research.
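To make the "brain encoder" idea concrete, here is a minimal sketch of what a trimodal encoder looks like in code. Meta has not published TRIBE v2's internals in this announcement, so everything below (the class name, the additive fusion, the feature and voxel dimensions) is an illustrative assumption, not the released architecture.

```python
import torch
import torch.nn as nn

class TrimodalEncoder(nn.Module):
    """Illustrative stand-in for a TRIBE-style model: three modality
    encoders fused into a shared representation that is regressed
    onto fMRI voxel responses. All dimensions are arbitrary."""

    def __init__(self, text_dim=768, audio_dim=512, video_dim=1024,
                 fused_dim=1024, n_voxels=40000):
        super().__init__()
        # Project pre-extracted features from each modality into a shared space.
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.audio_proj = nn.Linear(audio_dim, fused_dim)
        self.video_proj = nn.Linear(video_dim, fused_dim)
        # Linear readout from the fused representation to per-voxel activity.
        self.readout = nn.Linear(fused_dim, n_voxels)

    def forward(self, text_feat, audio_feat, video_feat):
        # Simple additive fusion; a real encoder would use attention,
        # subject embeddings, and temporal modeling of the stimulus.
        fused = torch.relu(
            self.text_proj(text_feat)
            + self.audio_proj(audio_feat)
            + self.video_proj(video_feat)
        )
        return self.readout(fused)  # predicted voxel responses

model = TrimodalEncoder()
# One time point of stimulus features (batch size 1).
pred = model(torch.randn(1, 768), torch.randn(1, 512), torch.randn(1, 1024))
print(pred.shape)  # torch.Size([1, 40000])
```

The essential idea is the same regardless of architecture: map a multimodal stimulus to a vector of predicted voxel activations, so the model's output can be compared directly against a real fMRI recording.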
The practical application here is efficiency. Neuroscience experiments traditionally require human subjects for each hypothesis test—an expensive, time-consuming process with significant ethical oversight requirements. A reliable digital model of brain responses could let researchers run preliminary tests computationally before committing to human trials.
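In that workflow, the digital twin acts as a cheap pre-screen: score many candidate stimuli computationally, then bring only the most promising ones to the scanner. The sketch below reuses the hypothetical TrimodalEncoder from above; the region-of-interest mask and the top-3 selection rule are invented for illustration.

```python
# Hypothetical pre-screening loop: rank candidate stimuli by the
# predicted response in a region of interest before any human scan.
import torch

roi_mask = torch.zeros(40000, dtype=torch.bool)
roi_mask[1000:1200] = True  # invented region-of-interest voxels

candidates = [
    (torch.randn(1, 768), torch.randn(1, 512), torch.randn(1, 1024))
    for _ in range(10)  # stand-ins for 10 candidate stimuli
]

with torch.no_grad():
    scores = [
        model(t, a, v)[0, roi_mask].mean().item()  # 'model' from the sketch above
        for t, a, v in candidates
    ]

# Keep only the top 3 stimuli for an actual fMRI session.
top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:3]
print("stimuli to test in humans:", top)
```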
Meta is releasing the research paper, model weights, and code under a Creative Commons BY-NC license, which restricts use to non-commercial purposes. The company has also launched a demo website for researchers to explore the model's capabilities.
The release fits within Meta's broader push into brain-computer interface research, an area where the company has invested heavily through its Reality Labs division. While TRIBE v2 focuses on understanding brain responses rather than direct neural interfaces, the underlying research could inform future products in the AR/VR space where predicting user perception matters.
For the AI research community, the open release of a model trained on such extensive neuroimaging data provides a new tool for studying how biological neural networks process multimodal information—insights that could eventually feed back into artificial neural network design.