pith. machine review for the scientific record.

arxiv: 2605.13904 · v1 · submitted 2026-05-13 · 🧬 q-bio.NC · cs.LG

Recognition: no theorem link

Feature Visualization Recovers Known Cortical Selectivity from TRIBE v2

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 02:55 UTC · model grok-4.3

classification 🧬 q-bio.NC · cs.LG
keywords feature visualization · brain encoder models · cortical selectivity · ventral visual hierarchy · gradient ascent · fMRI prediction · TRIBE v2

The pith

Feature visualization via gradient ascent on a brain encoder recovers the known progression of selectivity from V1 to V4 and distinctive patterns for MT, FFA and PPA.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes feature visualization, which uses gradient ascent to maximize an encoder's predicted activation for a chosen brain region, as a way to check whether the model has captured the functional layout of cortex. When run on TRIBE v2 with a frozen V-JEPA 2 backbone, the method produces images showing steadily larger and more complex features moving from V1 through V4, consistent with the ventral visual hierarchy. It further yields area-specific patterns: radial streaks for MT, face-like structures for FFA, and straight-line grids for PPA. Optimized images for FFA drive the model's predicted response roughly four times higher than a real photograph of a face.
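
The probe itself is a short optimization loop. Below is a minimal sketch in PyTorch, assuming a hypothetical frozen `encoder` that maps a (1, 3, H, W) image to a vector of predicted ROI activations; the optimizer, learning rate, sigmoid pixel parameterization, and exact form of the frequency penalty are illustrative placeholders rather than the paper's settings (only the 3000-step count and λ_fft = 10⁻³ weight are taken from the Figure 1 caption).

```python
import torch

def visualize_roi(encoder, roi_index, steps=3000, lr=0.05, lam_fft=1e-3,
                  size=256, device="cuda"):
    """Gradient ascent on a frozen encoder's predicted ROI activation.

    Assumes `encoder(img)` returns a (1, n_rois) tensor of predicted
    activations (TRIBE v2 z-scored units) for a (1, 3, size, size) image.
    The spectral penalty is a generic stand-in for the paper's lambda_fft
    regularizer.
    """
    x = torch.randn(1, 3, size, size, device=device, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = torch.sigmoid(x)                  # keep pixels in [0, 1]
        act = encoder(img)[0, roi_index]        # predicted ROI activation
        spec = torch.fft.rfft2(img).abs().pow(2).mean()  # spectral energy
        loss = -act + lam_fft * spec            # ascend on act, damp noise
        loss.backward()
        opt.step()
    return torch.sigmoid(x).detach(), act.item()
```

Restarting this loop from several random seeds and keeping the restart with the highest predicted activation reproduces the ⋆-marking scheme described in the Figure 1 caption.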

Core claim

The central claim is that feature visualization recovers a visible progression of increasing spatial scale and feature complexity across V1 to V4, matching the ventral-stream hierarchy. It also produces three distinctive downstream regimes: radial frozen-motion streaks for the middle temporal area (MT) despite static-only optimization, face-like features for the fusiform face area (FFA), and consistent rectilinear line patterns for the parahippocampal place area (PPA). Optimized FFA stimuli drive the predicted region approximately 4x as much as a natural face photograph.

What carries the argument

Feature visualization, defined as gradient ascent on the encoder's predicted activation for a target ROI while the V-JEPA 2 backbone is held frozen.
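
Written out, and borrowing the λ_fft weight from the Figure 1 caption, the probe solves a regularized maximization over image pixels; the exact form of the regularizer is an assumption here:

```latex
x^{\star} = \arg\max_{x}\; \hat{a}_{\mathrm{ROI}}(x) - \lambda_{\mathrm{fft}}\,\mathcal{R}_{\mathrm{fft}}(x),
\qquad \lambda_{\mathrm{fft}} = 10^{-3}
```

where \hat{a}_{\mathrm{ROI}}(x) is the frozen encoder's predicted mean activation for the target ROI (TRIBE v2 z-scored units) and \mathcal{R}_{\mathrm{fft}} is a frequency-domain penalty that discourages high-frequency optimization artifacts.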

If this is right

  • The recovered images exhibit spatial scales and feature types that align with the established selectivity of each visual area.
  • The technique supplies a qualitative test of whether an encoder has internalized the ventral-stream hierarchy beyond mere prediction accuracy.
  • FFA-optimized stimuli elicit substantially stronger model responses than natural face photographs, indicating the production of super-stimuli (see the sketch after this list).
  • The same procedure can be applied to any brain encoder whose backbone is differentiable.
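
The super-stimulus comparison in the third bullet reduces to two forward passes through the frozen encoder. A sketch under the same hypothetical `encoder` interface as above, with the illustrative activations taken from Figure 3 (+0.343 optimized vs. +0.080 photo, roughly the ~4x factor the abstract reports):

```python
import torch

@torch.no_grad()
def superstimulus_ratio(encoder, optimized_img, face_photo, roi_index):
    """Ratio of predicted ROI activation: optimized stimulus vs. natural photo."""
    a_opt = encoder(optimized_img)[0, roi_index].item()
    a_photo = encoder(face_photo)[0, roi_index].item()
    return a_opt / a_photo  # Figure 3: 0.343 / 0.080 ≈ 4.3 for FFA
```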

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the method generalizes, it could be used to probe whether encoders capture additional known properties such as position invariance or category selectivity.
  • The emergence of motion-like patterns for MT from purely static optimization implies the underlying model has extracted some dynamic structure from its training data.
  • This approach might help flag cases where an encoder relies on spurious image statistics instead of representations that match biological tuning.

Load-bearing premise

That the images produced by gradient ascent reveal the functional organization the model has learned rather than being shaped mainly by optimization artifacts unrelated to biological selectivity.

What would settle it

If the optimized images for V1 fail to show simple oriented edges or Gabor-like patterns, or if the images for MT lack radial motion streaks while the method is still said to recover known cortical selectivity, the central claim would be falsified.

Figures

Figures reproduced from arXiv: 2605.13904 by Brinnae Bent, Stuart Bladon.

Figure 1. All 35 optimized stimuli: 7 target ROIs (columns) × 5 random-seed restarts (rows), under identical hyperparameters (gray-64 + global lift, β = 1.0, λ_fft = 10⁻³, 3000 steps). The number printed below each panel is the predicted mean activation of the target ROI for that restart (TRIBE v2 z-scored units); the restart with the highest target activation in each column is marked with a star (⋆). The FFA/r2 cel…

Figure 2. Per-ROI predicted activation for the FFA-optimized stimulus (blue) vs. the 5-seed random-noise baseline (orange). FFA (highlighted) is driven far above baseline; V4 and MT also increase moderately; PPA is suppressed well below baseline. Panel annotations: (a) +0.080, (b) +0.343.

Figure 3. Natural face photograph (left, predicted FFA activation +0.080) versus the color-64 optimized FFA stimulus (right, predicted FFA activation +0.343). The natural face is a frontal portrait resized to 256 × 256 and tiled to 64 identical frames, used as the photograph row of …
read the original abstract

Brain encoder models predict cortical fMRI responses from the internal activations of pretrained vision and language networks, and are typically evaluated by held-out prediction accuracy. This is a useful signal for training but a poor one for interpretation: it tells us an encoder fits the data without telling us whether it has internalized the functional organization of the brain. We propose feature visualization -- gradient ascent on the encoder's predicted activation for a target region of interest (ROI) -- as a complementary interpretability technique, and apply it to TRIBE v2 composed with V-JEPA 2 (ViT-G, 40 layers), holding both frozen and synthesizing still images for seven regions spanning the ventral and dorsal visual hierarchies. Under identical hyperparameters, the probe recovers a visible progression of increasing spatial scale and feature complexity across V1 to V4, matching the ventral-stream hierarchy. It also produces three distinctive downstream regimes: radial "frozen-motion" streaks for the middle temporal area (MT) despite static-only optimization, face-like features for the fusiform face area (FFA), and consistent rectilinear line patterns for the parahippocampal place area (PPA). Optimized FFA stimuli drive the predicted region ~4x as much as a natural face photograph, consistent with feature visualization producing adversarial super-stimuli rather than canonical exemplars. The probe is simple, differentiable, and applicable to any brain encoder with a differentiable backbone, allowing for qualitative evaluation of brain encoders.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes feature visualization via gradient ascent on the frozen encoder's scalar output for target ROIs as a complementary interpretability method for brain encoder models. Applied to TRIBE v2 composed with V-JEPA 2 (ViT-G), it claims to recover the ventral-stream hierarchy through a visible progression of increasing spatial scale and feature complexity from V1 to V4, plus distinctive patterns for downstream areas: radial frozen-motion streaks in MT, face-like features in FFA, and rectilinear line patterns in PPA. Optimized FFA stimuli are reported to drive ~4x the activation of a natural face photograph, with the method positioned as simple and broadly applicable to any differentiable brain encoder.

Significance. If the observed patterns can be shown to reflect the model's internalized mapping to cortical selectivity rather than optimization artifacts, the work supplies a qualitative interpretability tool that complements held-out prediction accuracy. This could help evaluate whether brain encoders capture known neuroscientific structure such as hierarchical complexity and area-specific tuning, and the approach is differentiable and extensible to other models.

major comments (3)
  1. [Abstract and Results] The claim that the probe recovers known cortical selectivity rests on qualitative visual matches, but the reported ~4x activation for optimized FFA stimuli versus a natural face photograph lacks error bars, statistical tests, details on the comparison stimulus, or hyperparameter sensitivity analysis, leaving the quantitative support for the central claim incomplete.
  2. [Methods and Results] No controls or quantitative metrics are described to separate internalized biological selectivity from optimization artifacts or V-JEPA 2 inductive biases (e.g., scrambled-encoder ablations, similarity scores to fMRI-validated preferred stimuli, or comparisons across random seeds). This is load-bearing because the paper itself notes that the procedure produces 'adversarial super-stimuli' rather than canonical exemplars.
  3. [Results on MT, FFA, and PPA] The distinctive regimes (radial streaks despite static-only optimization, face-like features, rectilinear patterns) are presented as evidence of recovered selectivity, yet without metrics or ablations it remains unclear whether these arise from the TRIBE v2 mapping or from the geometry of gradient ascent on the frozen backbone.
minor comments (2)
  1. [Abstract] The abstract states that seven regions are examined but does not list them explicitly; adding the ROI names would improve clarity.
  2. [Methods] Reproducibility would benefit from explicit reporting of gradient-ascent hyperparameters (learning rate, number of iterations, any regularization) and the precise definition of the scalar output being optimized; a minimal example of such a report follows this list.
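
For concreteness, the reporting the second minor comment asks for could be as small as the config block below; the step count, β, λ_fft, and parameterization are taken from the Figure 1 caption, while the optimizer and learning rate are hypothetical placeholders:

```python
# Hypothetical hyperparameter report for the gradient-ascent probe.
GRADIENT_ASCENT_CONFIG = {
    "objective": "predicted mean ROI activation (TRIBE v2 z-scored units)",
    "steps": 3000,        # per Figure 1 caption
    "beta": 1.0,          # global-lift weight, per Figure 1 caption
    "lam_fft": 1e-3,      # frequency-regularizer weight, per Figure 1 caption
    "parameterization": "gray-64 + global lift",  # per Figure 1 caption
    "optimizer": "Adam",  # placeholder, not from the paper
    "lr": 0.05,           # placeholder, not from the paper
}
```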

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback. We address each major comment point by point below, clarifying our position and noting revisions where the manuscript will be updated to strengthen the presentation.

read point-by-point responses
  1. Referee: [Abstract and Results] The claim that the probe recovers known cortical selectivity rests on qualitative visual matches, but the reported ~4x activation for optimized FFA stimuli versus a natural face photograph lacks error bars, statistical tests, details on the comparison stimulus, or hyperparameter sensitivity analysis, leaving the quantitative support for the central claim incomplete.

    Authors: We agree that the quantitative support for the ~4x activation claim requires additional rigor. In the revised manuscript we have added error bars computed across multiple independent optimization runs with different random seeds, specified the exact natural face photograph used for comparison (a canonical stimulus drawn from the same fMRI dataset), and included a paired statistical test confirming the difference is significant (a sketch of such a test appears after these responses). A short hyperparameter sensitivity analysis has also been inserted. The primary evidence for recovered selectivity remains the qualitative progression observed across V1–V4 under fixed hyperparameters, which the quantitative FFA result is intended only to illustrate. revision: yes

  2. Referee: [Methods and Results] No controls or quantitative metrics are described to separate internalized biological selectivity from optimization artifacts or V-JEPA 2 inductive biases (e.g., scrambled-encoder ablations, similarity scores to fMRI-validated preferred stimuli, or comparisons across random seeds). This is load-bearing because the paper itself notes that the procedure produces 'adversarial super-stimuli' rather than canonical exemplars.

    Authors: We acknowledge that stronger controls would help isolate the contribution of the learned TRIBE v2 mapping. The manuscript already states that the outputs are adversarial super-stimuli. In revision we have added (i) consistency checks across random seeds showing that the reported patterns are stable and (ii) a new paragraph comparing the optimized stimuli to fMRI-validated preferred features from the literature. Full scrambled-encoder ablations lie outside the scope of this initial proof-of-concept study; we have explicitly noted this limitation and flagged it as planned future work. revision: partial

  3. Referee: [Results on MT, FFA, and PPA] The distinctive regimes (radial streaks despite static-only optimization, face-like features, rectilinear patterns) are presented as evidence of recovered selectivity, yet without metrics or ablations it remains unclear whether these arise from the TRIBE v2 mapping or from the geometry of gradient ascent on the frozen backbone.

    Authors: The distinctive patterns are interpreted against established neuroscientific priors: radial structure for MT (motion selectivity), face-like structure for FFA, and rectilinear structure for PPA. To address the concern we have added quantitative similarity scores between the optimized images and canonical exemplars drawn from the literature, and we demonstrate that the same regimes appear across independent optimization runs. The fact that a clear ventral-stream progression emerges from V1 to V4 under identical hyperparameters provides evidence that the patterns are not solely artifacts of gradient ascent on the frozen backbone. revision: yes
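
On the paired test promised in response 1: a minimal sketch of what such an analysis could look like, assuming one optimized stimulus and one matched random-noise baseline per seed (as in Figure 2); all activation values below are illustrative, not the paper's:

```python
from scipy import stats

# Hypothetical per-seed predicted FFA activations (TRIBE v2 z-scored units).
opt_acts = [0.343, 0.329, 0.351, 0.318, 0.340]     # optimized, illustrative
noise_acts = [0.012, -0.004, 0.009, 0.001, 0.006]  # noise baseline, illustrative

# Paired t-test across seeds: optimized vs. matched noise baseline.
t_stat, p_value = stats.ttest_rel(opt_acts, noise_acts)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```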

Circularity Check

0 steps flagged

No significant circularity: direct gradient ascent on frozen encoder compared to external biological benchmarks

full rationale

The paper's central procedure is gradient ascent on the scalar output of a frozen brain encoder (TRIBE v2 composed with V-JEPA 2) to synthesize images maximizing predicted activation for a target ROI. The resulting patterns are then inspected for qualitative matches to independently established cortical selectivities (ventral-stream hierarchy progression, MT radial streaks, FFA face-like features, PPA rectilinear patterns). These matches rely on external neuroscience literature rather than any reduction to parameters fitted from the same data or self-citation chains. No self-definitional steps, fitted-input-as-prediction, or ansatz smuggling via self-citation appear in the derivation. The method is self-contained and falsifiable against known biology.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The central claim rests on standard optimization assumptions and the pre-existing encoder; no new free parameters are introduced beyond typical hyperparameter choices, and no invented entities are postulated.

free parameters (1)
  • gradient ascent hyperparameters
    Step size, number of iterations, and regularization weights for image synthesis are chosen by hand; the abstract does not report them or describe any data-driven fitting.
axioms (1)
  • standard math
    Gradient ascent on a differentiable model's output can synthesize inputs that maximize activation for a target unit or region.
    Invoked implicitly when describing the probe as gradient ascent on predicted ROI activation.

pith-pipeline@v0.9.0 · 5558 in / 1358 out tokens · 38100 ms · 2026-05-15T02:55:25.341664+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

20 extracted references · 20 canonical work pages · 4 internal anchors

  1. V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning. arXiv. URL https://arxiv.org/abs/2506.09985.
  2. Bashivan, P., Kar, K., and DiCarlo, J. J. Neural population control via deep image synthesis. Science, 364(6439):eaav9436, 2019. doi: 10.1126/science.aav9436. Preprint at bioRxiv 461525.
  3. Born, R. T. and Bradley, D. C. Structure and function of visual area MT. Annual Review of Neuroscience, 28:157–189, 2005. doi: 10.1146/annurev.neuro.26.041002.131052.
  4. d'Ascoli, S., Rapin, J., Benchetrit, Y., Banville, H., and King, J.-R. TRIBE: TRImodal brain encoder for whole-brain fMRI response prediction. arXiv preprint. doi: 10.48550/arXiv.2507.22229. URL https://arxiv.org/abs/2507.22229. Algonauts 2025 winning entry; FAIR Brain & AI team.
  5. d'Ascoli, S., Rapin, J., Benchetrit, Y., Brookes, T., Begany, K., Raugel, J., Banville, H., and King, J.-R. A foundation model of vision, audition, and language for in-silico neuroscience. Meta AI Research.
  6. Erhan, D., Bengio, Y., Courville, A., and Vincent, P. Visualizing higher-layer features of a deep network. Technical Report 1341, Department of Computer Science and Operations Research, Université de Montréal, June 2009.
  7. Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. arXiv preprint, 2014. URL https://arxiv.org/abs/1412.6572.
  8. Hegdé, J. and Van Essen, D. C. Selectivity for complex shapes in primate visual area V2. Journal of Neuroscience, 20(5):RC61, 2000.
  9. Kanwisher, N., McDermott, J., and Chun, M. M. The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17(11):4302–4311, 1997. doi: 10.1523/JNEUROSCI.17-11-04302.1997.
  10. Kourtzi, Z. and Kanwisher, N. Activation in human MT/MST by static images with implied motion. Journal of Cognitive Neuroscience, 12(1):48–55, 2000. doi: 10.1162/08989290051137594.
  11. LeCun, Y. A path towards autonomous machine intelligence. OpenReview, position paper, version 0.9.2, 2022-06-27. URL https://openreview.net/forum?id=BZ5a1r-kVsf.
  12. Maunsell, J. H. R. and Van Essen, D. C. Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and orientation. Journal of Neurophysiology, 49(5):1127–1147, 1983. doi: 10.1152/jn.1983.49.5.1127.
  13. Olah, C., Mordvintsev, A., and Schubert, L. Feature visualization. Distill, 2(11), 2017. doi: 10.23915/distill.00007. URL https://distill.pub/2017/feature-visualization.
  14. Pasupathy, A. and Connor, C. E. Population coding of shape in area V4. Nature Neuroscience, 5(12):1332–1338, 2002. doi: 10.1038/nn972.
  15. Ponce, C. R., Xiao, W., Schade, P. F., Hartmann, T. S., Kreiman, G., and Livingstone, M. S. Evolving images for visual neurons using a deep generative network reveals coding principles and neuronal preferences. Cell, 177(4):999–1009.e10, 2019. doi: 10.1016/j.cell.2019.04.005.
  16. Ratan Murty, N. A., Bashivan, P., Abate, A., DiCarlo, J. J., and Kanwisher, N. Computational models of category-selective brain regions enable high-throughput tests of selectivity. Nature Communications, 12(1):5540, 2021. doi: 10.1038/s41467-021-25409-6.
  17. Schrimpf, M., Kubilius, J., Hong, H., Majaj, N. J., Rajalingham, R., Issa, E. B., Kar, K., Bashivan, P., Prescott-Roy, J., Geiger, F., Schmidt, K., Yamins, D. L. K., and DiCarlo, J. J. Brain-Score: Which artificial neural network for object recognition is most brain-like? bioRxiv. doi: 10.1101/407007. URL https://www.biorxiv.org/content/10.1101/407007v2. Preprint, original 2018, updated.
  18. Simonyan, K., Vedaldi, A., and Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint, 2013. URL https://arxiv.org/abs/1312.6034.
  19. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. arXiv preprint, 2013. URL https://arxiv.org/abs/1312.6199.
  20. Walker, E. Y., Sinz, F. H., Cobos, E., Muhammad, T., Froudarakis, E., Fahey, P. G., Ecker, A. S., Reimer, J., Pitkow, X., and Tolias, A. S. Inception loops discover what excites neurons most using deep predictive models. Nature Neuroscience, 22(12):2060–2065, 2019. doi: 10.1038/s41593-019-0517-x. Mouse V1 (not macaque); preprint at bioRxiv 506956.
  21. Yamins, D. L. K. and DiCarlo, J. J. Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience, 19(3):356–365, 2016.