
This is the first edition of NeuroClip. If you're new here: we use Meta's TRIBE v2 — an AI model trained on 1,000+ hours of fMRI brain data from 720 real subjects — to predict how the human cortex responds to content. Then we share what we find.
Today we scanned a 24-second Lamborghini reel.
Why this video? It's the kind of content most brand strategists would dismiss as superficial — cars, fast cuts, loud audio. But it has millions of views. Something is working at a level deeper than aesthetics. We wanted to see what.
Here's what TRIBE v2 found.
The visual cortex fired immediately
V1 — the primary visual cortex — activated on the first frame. This is normal for any video. What's notable is how high the activation stayed throughout the entire 24 seconds. Most content shows a sharp drop-off in V1 activation after the first three seconds as the brain habituates. This reel didn't habituate. The reason: constant visual novelty. Every cut showed a different angle, a different surface, a different reflection. The brain never had a chance to settle.
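If it helps to see the mechanism, here's a toy sketch of that dynamic — purely illustrative numbers, not TRIBE v2 output. It models habituation as exponential decay and each cut as a novelty event that resets activation to its peak:

```python
# Toy model (hypothetical, not TRIBE v2 output): habituation as exponential
# decay, with each cut resetting activation back to peak.

def simulate_v1(seconds, cut_times, peak=1.0, decay=0.5):
    """Return a per-second activation trace; a cut at time t restores peak."""
    activation, trace = peak, []
    cuts = set(cut_times)
    for t in range(seconds):
        if t in cuts:
            activation = peak          # visual novelty resets habituation
        trace.append(activation)
        activation *= (1 - decay)      # the brain habituates between cuts
    return trace

static_clip = simulate_v1(24, cut_times=[0])             # one shot, no cuts
fast_cuts = simulate_v1(24, cut_times=range(0, 24, 2))   # a cut every 2 s

print(sum(static_clip) / 24)  # mean activation collapses after a few seconds
print(sum(fast_cuts) / 24)    # stays high: novelty keeps defeating decay
```

The peak and decay values are made up; the point is only the shape of the curves — without cuts the trace falls off fast, with frequent cuts it never gets the chance.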
The auditory cortex stayed engaged the whole time
Engine sounds aren't just aesthetic in car content — they're functional. They keep the auditory cortex stimulated in a way that quiet, music-only content doesn't. Combined with the visual stimulation, this creates what neuroscientists call multi-sensory integration: when multiple sensory channels fire simultaneously, the combined response can be greater than the sum of the individual responses. You can't get this from a static image with background music.
The superior temporal sulcus activated on every human frame
The STS is one of the brain's core social-processing regions. It fires when we see other people, especially faces and bodies in motion. Every time the reel showed a hand polishing the car or a person stepping into frame, the STS spiked. This is why "people in your content" advice keeps coming up in marketing playbooks — it's not just engagement-bait, it's literally activating a different cortical network.
The key insight
Most content relies on a single neural channel. Visual content for V1. Voiceover content for the auditory cortex. Text content for Broca's area. The best content stacks multiple channels simultaneously so your brain has no idle moment to disengage.
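One way to make "no idle moment" concrete — again a toy sketch with made-up per-second data, not a scan result — is to count the seconds in which no channel is active:

```python
# Toy sketch (hypothetical data, not TRIBE v2 output): an "idle moment" is
# a second in which no neural channel is firing.

def idle_seconds(*channels):
    """Count the seconds where every channel is silent (all zeros)."""
    return sum(1 for frame in zip(*channels) if not any(frame))

# Single-channel content: visuals only, with dead air between shots.
visual_only = [1, 1, 0, 1, 0, 1, 1, 0]

# Stacked content: talking, moving, and showing at the same time.
visual = [1, 1, 0, 1, 0, 1, 1, 0]
audio  = [0, 1, 1, 0, 1, 1, 0, 1]
social = [1, 0, 1, 1, 0, 0, 1, 1]

print(idle_seconds(visual_only))            # several idle seconds to drop off
print(idle_seconds(visual, audio, social))  # zero: some channel always fires
```

No single channel here is "on" the whole time — the stacked version wins because the gaps in one channel are covered by the others.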
This is also why polished brand commercials sometimes underperform raw UGC. A polished commercial often optimizes for one channel — usually visual storytelling — while a UGC creator naturally talks, moves, and shows things at the same time. Three channels firing at once beats one channel firing perfectly.
What's coming next week
We're scanning 5 more pieces of content this week — a high-budget brand ad, a low-budget UGC ad, a viral talking-head reel, a static carousel, and a meme. We'll share which one activates the most cortical area and what that tells us about what actually works.
If you have a piece of content you want scanned, just hit reply. We'll get to as many as we can.
Until next week,
NeuroClip
PS: All scans are predictions from Meta's TRIBE v2 model. It's the largest publicly available neural prediction model trained on real fMRI data, but it's still a model. We'll always be honest about what it can and can't tell us.