pith. machine review for the scientific record.

arxiv: 2604.22716 · v1 · submitted 2026-04-24 · 🧬 q-bio.NC

Recognition: unknown

What are the functions of primary visual cortex (V1)?

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 08:54 UTC · model grok-4.3

classification 🧬 q-bio.NC
keywords primary visual cortex · V1 · saliency map · saccade guidance · visual bottleneck · top-down feedback · eye movements · visual recognition

The pith

V1 constructs a bottom-up saliency map to guide exogenously driven saccades and initiates a processing bottleneck that limits downstream visual recognition.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper argues that the primary visual cortex V1 performs three linked roles. It builds a saliency map from retinal inputs that directs automatic eye movements toward standout locations in the scene. V1 also marks the start of a severe cut in the quantity of visual data sent forward, so higher areas receive only a thinned-out version of the input. To offset this limit, V1 answers top-down requests for extra details, mostly in the central visual field, allowing better recognition of whatever was selected. A reader would care because the account recasts vision as a narrow-channel process of choosing where to look and then seeing only that slice, instead of grasping the entire field at once.

Core claim

V1 acts as a motor cortex for exogenously guiding saccades by constructing a bottom-up saliency map of the visual field. V1 also initiates a processing bottleneck: a massive reduction of visual information begins at its output to downstream areas. Because downstream recognition is limited by this impoverished information, V1 supports ongoing recognition by providing additional information queried by top-down feedback from downstream areas, directed predominantly to central visual field representations. These V1 functions underpin a framework in which vision is mainly looking and seeing through the bottleneck: looking selects a fraction of visual information into the bottleneck, largely by saccades that center the selected contents at gaze, and seeing recognizes them.

What carries the argument

The bottom-up saliency map constructed in V1, which directs exogenous saccades and marks the onset of the information bottleneck that forces downstream areas to operate on reduced data.

If this is right

  • Saccades are guided exogenously by the saliency map built in V1 rather than by prior knowledge or top-down signals.
  • A large fraction of visual information is discarded starting at V1 outputs, so downstream recognition must work with far less data than arrives at the retina.
  • Top-down feedback to V1 supplies missing details needed for recognition, with this feedback directed mainly to representations of the central visual field.
  • Vision operates as a looking-and-seeing process: looking selects limited content via saccades that center it at gaze, and seeing recognizes the selected content using mainly peripheral and central field processing.
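The looking-and-seeing loop sketched in these implications can be made concrete with a toy model. The scene values, the local-contrast saliency rule, and the crop size below are all illustrative assumptions, not quantities from the paper:

```python
import numpy as np

# Toy "retinal image": mostly uniform, with one conspicuous item.
scene = np.full((32, 32), 0.1)
scene[20, 7] = 1.0  # a salient singleton

# "Looking": a bottom-up saliency map (here, deviation from the scene
# mean) and a winner-take-all readout pick the saccade target exogenously.
saliency = np.abs(scene - scene.mean())
target = np.unravel_index(np.argmax(saliency), saliency.shape)

# The saccade centers the selected content at gaze; "seeing" then
# operates only on a small central crop -- the bottleneck.
r, c = target
fovea = scene[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3]

print(target)      # location of the salient item
print(fovea.size)  # far fewer pixels than the full 32x32 scene
```

The point of the sketch is the ordering: selection happens on the full field via saliency, while recognition only ever receives the small selected slice.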

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Disrupting V1 would be expected to impair both reflexive shifts of gaze to conspicuous items and the refinement of recognition through feedback queries.
  • The known tuning properties of V1 neurons are repurposed to compute global scene saliency in addition to local feature detection.
  • Models of artificial vision could gain from inserting an explicit early saliency stage followed by an information bottleneck before recognition layers.
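The third extension can be illustrated with a minimal sketch of such an architecture, assuming a top-k gating layer as the bottleneck; the shapes, the gating rule, and the linear readout are invented for illustration and are not an implementation from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def saliency_stage(image):
    """Early stage: score each location by its deviation from the scene mean."""
    return np.abs(image - image.mean())

def bottleneck(features, saliency, k):
    """Keep only the k most salient activations; zero out the rest."""
    flat = saliency.ravel()
    keep = np.argsort(flat)[-k:]      # indices of the top-k salient locations
    mask = np.zeros_like(flat)
    mask[keep] = 1.0
    return features.ravel() * mask    # heavily reduced representation

# A random "image" stands in for input; a linear readout stands in for
# the recognition layers operating downstream of the bottleneck.
image = rng.random((16, 16))
sal = saliency_stage(image)
coded = bottleneck(image, sal, k=16)  # only 16 of 256 values survive

readout_weights = rng.random(coded.size)
score = float(readout_weights @ coded)

print(int(np.count_nonzero(coded)))
```

Recognition layers trained on `coded` would, like the downstream areas in the paper's framework, see only what the saliency stage selected.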

Load-bearing premise

The assumption that V1's bottom-up saliency map is the primary driver for exogenous saccade guidance and that the major reduction in visual information begins specifically at V1 output rather than earlier or in a more distributed way.

What would settle it

An experiment showing that exogenous saccades to salient stimuli continue normally after V1 inactivation, or measurements demonstrating that information content does not drop sharply between V1 inputs and V1 outputs to higher areas.
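The second kind of measurement can be illustrated with a toy information calculation. The stimulus set and the subsampling "channel" below are invented for illustration; for a deterministic code with equiprobable stimuli, I(S;R) reduces to the response entropy H(R):

```python
import numpy as np

# Four equiprobable stimuli, each coded by a 4-bit response pattern.
stimuli = {0: (0, 0, 0, 0), 1: (0, 1, 1, 0), 2: (1, 0, 0, 1), 3: (1, 1, 1, 1)}

def mutual_information(code):
    """I(S;R) in bits for equiprobable stimuli and a deterministic code."""
    responses = {}
    for s, r in code.items():
        responses.setdefault(r, []).append(s)
    # Deterministic code: I(S;R) = H(R).
    p = np.array([len(v) / len(code) for v in responses.values()])
    return float(-(p * np.log2(p)).sum())

full = mutual_information(stimuli)                                    # all 4 bits kept
reduced = mutual_information({s: r[:1] for s, r in stimuli.items()})  # keep 1 bit

print(full, reduced)  # information drops across the simulated "bottleneck"
```

A sharp drop of this kind between V1 inputs and V1 outputs, measured on real population responses, is what the bottleneck claim predicts; its absence would count against it.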

Figures

Figures reproduced from arXiv: 2604.22716 by Li Zhaoping.

Figure 1
Figure 1: Functional roles of the primary visual cortex V1. V1 first creates a bottom-up saliency … view at source ↗
Figure 2
Figure 2: V1 exogenously guides saccades. A: V1 in a network of brain areas for guiding gaze and attention. A-1: in primates, electric stimulation of retinotopic locations in V1, extrastriate cortices, parietal cortex, superior colliculus (SC), or frontal eye field (FEF) evokes saccades, but only V1 lesions abolish visually guided saccades for weeks. A-2: a view on how bottom-up saliency (computed by V1) and object … view at source ↗
Figure 3
Figure 3: … view at source ↗
read the original abstract

Although Hubel and Wiesel established decades ago how individual V1 neurons transform retinal inputs, functions of V1 as a whole are being discovered only recently. First, V1 acts as a motor cortex for exogenously guiding saccades by constructing a bottom-up saliency map of the visual field. Second, V1 initiates a processing bottleneck: a massive reduction of visual information begins at its output to downstream areas. Third, downstream recognition is limited by impoverished information, V1 supports ongoing recognition by providing additional information queried by top-down feedback from downstream areas, directed predominantly to central visual field representations. These V1 functions underpin a framework in which vision is mainly looking and seeing through the bottleneck. Looking selects a fraction of visual information into the bottleneck, largely by saccades that center selected contents at gaze. Seeing recognizes the selected contents. Looking and seeing rely mainly on processing in the peripheral and central visual fields.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper claims that primary visual cortex (V1) has three key functions: (1) acting as a motor cortex for exogenously guiding saccades by constructing a bottom-up saliency map of the visual field; (2) initiating a processing bottleneck via massive reduction of visual information at its outputs to downstream areas; and (3) supporting ongoing recognition by supplying additional information through top-down feedback from downstream areas, directed mainly to central visual field representations. These functions support an overarching framework in which vision is primarily 'looking' (selecting information via peripheral field and saccades) and 'seeing' (recognizing selected contents via central field) through the bottleneck.

Significance. If the proposed functions hold, the manuscript would offer a significant integrative reframing of V1 from a passive feature extractor to an active participant in visuomotor control and information gating. The 'looking and seeing' framework could influence models of active vision, hierarchical processing, and the interplay between bottom-up saliency and top-down feedback. It synthesizes recent discoveries into a cohesive view with potential implications for understanding visual bottlenecks and recognition limits.

major comments (2)
  1. Abstract: The central claim that 'a massive reduction of visual information begins at its output to downstream areas' and thereby limits downstream recognition is load-bearing for the bottleneck and framework arguments, yet no quantitative metrics (e.g., mutual information, bits per neuron, population decoding accuracy, or direct comparisons of task-relevant information between LGN input and V1 output) are provided to establish the localization, magnitude, or timing of this reduction relative to pre-V1 or post-V1 stages.
  2. Abstract: The synthesis of V1 functions, particularly the saliency-map role in exogenous saccade guidance, depends on external evidence and prior results without new supporting data, derivations, or controls presented in the manuscript, leaving the grounding of the three-function framework reliant on unexamined citations.
minor comments (1)
  1. Abstract: The phrasing in the third function ('Downstream recognition is limited by impoverished information, V1 supports ongoing recognition...') lacks a connecting conjunction or clause, reducing readability of the core claims.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments, which highlight opportunities to better ground our integrative framework. We address each major comment below and commit to revisions that strengthen the evidence presentation while preserving the manuscript's synthetic nature.

read point-by-point responses
  1. Referee: Abstract: The central claim that 'a massive reduction of visual information begins at its output to downstream areas' and thereby limits downstream recognition is load-bearing for the bottleneck and framework arguments, yet no quantitative metrics (e.g., mutual information, bits per neuron, population decoding accuracy, or direct comparisons of task-relevant information between LGN input and V1 output) are provided to establish the localization, magnitude, or timing of this reduction relative to pre-V1 or post-V1 stages.

    Authors: We agree that explicit quantitative grounding would improve clarity. The manuscript synthesizes literature showing information reduction at the V1 output stage (e.g., via anatomical divergence and physiological capacity limits), but does not introduce new metrics. In revision we will expand the abstract with targeted citations to information-theoretic and decoding studies and add a concise subsection summarizing key quantitative comparisons between LGN and V1 population outputs. This will localize the bottleneck more precisely without new experiments. revision: yes

  2. Referee: Abstract: The synthesis of V1 functions, particularly the saliency-map role in exogenous saccade guidance, depends on external evidence and prior results without new supporting data, derivations, or controls presented in the manuscript, leaving the grounding of the three-function framework reliant on unexamined citations.

    Authors: The paper's goal is to propose a cohesive 'looking and seeing' framework that integrates three established V1 roles rather than to report new data or derivations. Each role draws on prior empirical work that is cited in the text. To address the concern, we will revise the abstract to foreground the primary supporting references for the saliency-map function and add a brief evidence-summary paragraph or table in the main text that maps each function to its key studies. This makes the citation basis more transparent while remaining faithful to the manuscript's synthetic scope. revision: partial

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper presents three conceptual functions of V1 as recently discovered insights without any mathematical derivations, equations, fitted parameters, or explicit prediction steps that could reduce to their own inputs by construction. The saliency-map claim for saccade guidance and the bottleneck assertion are stated directly as properties of V1 rather than derived from self-referential assumptions or prior self-citations within the provided text. No load-bearing self-citation chain, ansatz smuggling, or renaming of known results is exhibited that would make the central framework equivalent to its inputs. The synthesis is therefore self-contained as a high-level organizational proposal.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claims rest on foundational knowledge from Hubel and Wiesel plus unspecified recent experimental discoveries about V1 functions; no free parameters or new entities are introduced in the abstract.

axioms (1)
  • domain assumption Hubel and Wiesel's established findings on how individual V1 neurons transform retinal inputs
    Cited as foundational knowledge established decades ago.

pith-pipeline@v0.9.0 · 5445 in / 1362 out tokens · 66420 ms · 2026-05-08T08:54:46.639348+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

70 extracted references

  1. Hubel DH, Wiesel TN: Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology 1962, 160:106–154
  2. Carandini M, Demb JB, Mante V, Tolhurst DJ, Dan Y, Olshausen BA, Gallant JL, Rust NC: Do we know what the early visual system does? Journal of Neuroscience 2005, 25:10577–10597
  3. Olshausen BA, Field DJ: What is the other 85 percent of V1 doing? In L van Hemmen & T Sejnowski (Eds) 2006, 23:182–211
  4. Attwell D, Laughlin SB: An energy budget for signaling in the grey matter of the brain. Journal of Cerebral Blood Flow & Metabolism 2001, 21:1133–1145
  5. Zhaoping L: Understanding vision: theory, models, and data. Oxford University Press 2014
  6. Li Z: Contextual influences in V1 as a basis for pop out and asymmetry in visual search. Proceedings of the National Academy of Sciences of the USA 1999, 96:10530–10535
  7. Li Z: A saliency map in primary visual cortex. Trends in Cognitive Sciences 2002, 6:9–16
  8. Zhaoping L: Feedback from higher to lower visual areas for visual recognition may be weaker in the periphery: Glimpses from the perception of brief dichoptic stimuli. Vision Research 2017, 136:32–49
  9. Zhaoping L: A new framework for understanding vision from the perspective of the primary visual cortex. Current Opinion in Neurobiology 2019, 58:1–10
  10. Klink PC, Teeuwen RR, Lorteije JA, Roelfsema PR: Inversion of pop-out for a distracting feature dimension in monkey visual cortex. Proceedings of the National Academy of Sciences 2023, 120:e2210839120
  11. Isa T, Yoshida M: Neural mechanism of blindsight in a macaque model. Neuroscience 2021, 469:138–161. •[12] Yu G, Katz L, Quaia C, Messinger A, Krauzlis R: Face-related activity in superior colliculus and temporal cortex of primates. Neuron 2024, 112:2814–22. Showed that visually responsive neurons in the superior colliculus stop responding to visual inputs whe...
  12. Zhaoping L: Attention capture by eye of origin singletons even without awareness—a hallmark of a bottom-up saliency map in the primary visual cortex. Journal of Vision 2008, 8:article 1. ••[14] Zhaoping L: Peripheral vision is mainly for looking rather than seeing. Neuroscience Research 2024, 201:18–26. An understanding of a wide array of phenomena in the periph...
  13. Zhaoping L, Zhe L: Primary visual cortex as a saliency map: A parameter-free prediction and its test by behavioral data. PLoS Computational Biology 2015, 11:e1004375
  14. Dombrowe I, Donk M, Wright H, Olivers CN, Humphreys GW: The contribution of stimulus-driven and goal-driven mechanisms to feature-based selection in patients with spatial attention deficits. Cognitive Neuropsychology 2012, 29:249–274
  15. Finlay B, Schiller P, Volman S: Quantitative studies of single-cell properties in monkey striate cortex. IV. Corticotectal cells. Journal of Neurophysiology 1976, 39:1352–1361
  16. Nowak L, Bullier J: The timing of information transfer in the visual system. In Cerebral Cortex: Extrastriate Cortex in Primate, Edited by Rockland K, Kaas J, Peters A, New York: Plenum Publishing Corporation; 1997, 205–242
  17. Cerkevich CM, Lyon DC, Balaram P, Kaas JH: Distribution of cortical neurons projecting to the superior colliculus in macaque monkeys. Eye Brain 2014, 2014:121–137
  18. Knierim J, Van Essen D: Neuronal responses to static texture patterns in area V1 of the alert macaque monkey. Journal of Neurophysiology 1992, 67:961–80
  19. Yan Y, Zhaoping L, Li W: Bottom-up saliency and top-down learning in the primary visual cortex of monkeys. Proceedings of the National Academy of Sciences 2018, 115:10499–10504
  20. Tehovnik E, Slocum W, Schiller P: Saccadic eye movements evoked by microstimulation of striate cortex. The European Journal of Neuroscience 2003, 17:870–8
  21. Jazayeri M, Lindbloom-Brown Z, Horwitz GD: Saccadic eye movements evoked by optogenetic activation of primate V1. Nature Neuroscience 2012, 15:1368–1370
  22. Westerberg JA, Schall JD, Woodman GF, Maier A: Feedforward attentional selection in sensory cortex. Nature Communications 2023, 14:article 5993
  23. Thompson KG, Hanes DP, Bichot NP, Schall JD: Perceptual and motor processing stages identified in the activity of macaque frontal eye field neurons during visual search. Journal of Neurophysiology 1996, 76:4040–4055
  24. Constantinidis C, Steinmetz MA: Neuronal responses in area 7a to multiple-stimulus displays: I. Neurons encode the location of the salient stimulus. Cerebral Cortex 2001, 11:581–591
  25. White BJ, Kan JY, Levy R, Itti L, Munoz DP: Superior colliculus encodes visual saliency before the primary visual cortex. Proceedings of the National Academy of Sciences 2017, 114:9451–9456
  26. Zhaoping L: From the optic tectum to the primary visual cortex: migration through evolution of the saliency map for exogenous attentional guidance. Current Opinion in Neurobiology 2016, 40:94–102
  27. Wu R, Xu J, Li C, Zhang Z, Lin S, Li LY, Li YT: Preference-independent saliency map in the mouse superior colliculus. Communications Biology 2025, 8:article 565
  28. Ipata AE, Gee AL, Gottlieb J, Bisley JW, Goldberg ME: LIP responses to a popout stimulus are reduced if it is overtly ignored. Nature Neuroscience 2006, 9:1071–1076
  29. Gaspelin N, Ma X, Luck SJ: Signal suppression 2.0: An updated account of attentional capture and suppression. Psychonomic Bulletin & Review 2025, 32:2648–2668
  30. Buschman TJ, Miller EK: Top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices. Science 2007, 315:1860–1862
  31. Bisley J, Goldberg M: Attention, intention, and priority in the parietal lobe. Annual Review of Neuroscience 2010, 33:1–21
  32. Zhou H, Desimone R: Feature-based attention in the frontal eye field and area V4 during visual search. Neuron 2011, 70:1205–1217
  33. Duecker K, Shapiro KL, Hanslmayr S, Griffiths BJ, Pan Y, Wolfe JM, Jensen O: Guided visual search is associated with target boosting and distractor suppression in early visual cortex. Communications Biology 2025, 8:article 912
  34. Donk M, Van Zoest W: Effects of salience are short-lived. Psychological Science 2008, 19:733–739
  35. Liu C, Liu C, Huber L, Zhaoping L, Zhang P: The superficial layers of the primary visual cortex create a saliency map that feeds forward to the parietal cortex. PLoS Biology 2025, 23:e3003159
  36. Stoll J, Thrun M, Nuthmann A, Einhäuser W: Overt attention in natural scenes: Objects dominate features. Vision Research 2015, 107:36–48
  37. Mannan S, Kennard C, Husain M: The role of visual salience in directing eye movements in visual object agnosia. Current Biology 2009, 19:R247–8
  38. Meng M, Tong F: Can attention selectively bias bistable perception? Differences between binocular rivalry and ambiguous figures. Journal of Vision 2004, 4:article 2
  39. Hsu YH, Chen CC: Eye-movement patterns for perceiving bistable figures. Journal of Vision 2025, 25:article 3. ••[42] Kim T, Pasupathy A: Neural correlates of crowding in macaque area V4. Journal of Neuroscience 2024, 44:e2260232024. Examined neural responses in monkey V4 to an object presented with or without surrounding objects, and found that shape selectivit...
  40. Zhaoping L: Contrast-reversed binocular dot-pairs in random-dot stereograms for depth perception in central visual field: Probing the dynamics of feedforward-feedback processes in visual inference. Vision Research 2021, 186:124–139
  41. Whitney D, Levi DM: Visual crowding: A fundamental limit on conscious perception and object recognition. Trends in Cognitive Sciences 2011, 15:160–168
  42. Rosenholtz R, Yu D, Keshvari S: Challenges to pooling models of crowding: Implications for visual mechanisms. Journal of Vision 2019, 19:article 15
  43. Nuthmann A: How do the regions of the visual field contribute to object search in real-world scenes? Evidence from eye movements. Journal of Experimental Psychology: Human Perception and Performance 2014, 40:342–360. ••[47] Zhaoping L: Testing the top-down feedback in the central visual field using the reversed depth illusion. iScience 2025, 28:112223. Showed ...
  44. Zhaoping L, Ackermann J: Reversed depth in anticorrelated random-dot stereograms and the central-peripheral difference in visual inference. Perception 2018, 47:531–539
  45. Tanabe S, Umeda K, Fujita I: Rejection of false matches for binocular correspondence in macaque visual cortical area V4. Journal of Neuroscience 2004, 24:8170–8180
  46. Kanizsa G: Subjective contours. Scientific American 1976, 234:48–53. •[51] Moore CM, Zheng Q, Semizer Y: Perceptual organization is limited in peripheral vision: Evidence from configural superiority. Journal of Vision 2025, 25:article 16. Showed that percepts of surfaces in 3D world arising from surface completion, 3D shape from 2D images, transparency/su...
  47. Majka P, Zhaoping L, Rosa M: A central-field focus in ventral-stream feedback to V1 in primates: theoretical prediction confirmed. Oral presentation, Vision Sciences Society Annual Meeting, May 2026
  48. Sims SA, Demirayak P, Cedotal S, Visscher KM: Frontal cortical regions associated with attention connect more strongly to central than peripheral V1. NeuroImage 2021, 238:article 118246. ••[54] Morales-Gregorio A, Kurth AC, Ito J, Kleinjohann A, Barthélemy FV, Brochier T, Grün S, van Albada SJ: Neural manifolds in V1 change with top-down signals from V4 ...
  49. Kar K, DiCarlo JJ: Fast recurrent processing via ventrolateral prefrontal cortex is needed by the primate ventral stream for robust core visual object recognition. Neuron 2021, 109:164–176
  50. Xin Y, Yan Y, Li W: A central and unified role of corticocortical feedback in parsing visual scenes. Nature Communications 2025, 16:article 6930
  51. Pizzuti A, Gulban OF, Huber L, Peters JC, Goebel R: In the brain of the beholder: bistable motion reveals mesoscopic-scale feedback modulation in V1. Brain Structure and Function 2025, 230:47
  52. Levi DM: Rethinking amblyopia 2020. Vision Research 2020, 176:118–129
  53. Hess RF: Towards a principled and efficacious approach to the treatment of amblyopia. A review. Vision Research 2025, 226:108503
  54. Lu ZL, Dosher BA: Current directions in visual perceptual learning. Nature Reviews Psychology 2022, 1:654–668
  55. Li W, Piëch V, Gilbert C: Perceptual learning and top-down influences in primary visual cortex. Nature Neuroscience 2004, 7:651–657
  56. Melcher D, Colby CL: Trans-saccadic perception. Trends in Cognitive Sciences 2008, 12:466–473. •[63] Liang J, Zhaoping L: Trans-saccadic integration for object recognition peters out with pre-saccadic object eccentricity as target-directed saccades become more saliency-driven. Vision Research 2025, 226:108500. By measuring locations and durations of the ...
  57. Kroell LM, Rolfs M: The magnitude and time course of pre-saccadic foveal prediction depend on the conspicuity of the saccade target. eLife 2025, 12:RP91236
  58. Golomb JD, Mazer JA: Visual remapping. Annual Review of Vision Science 2021, 7:257–277
  59. Williams MA, Baker CI, De Beeck HPO, Shim WM, Dang S, Triantafyllou C, Kanwisher N: Feedback of visual object information to foveal retinotopic cortex. Nature Neuroscience 2008, 11:1439–1445
  60. Knapen T, Swisher JD, Tong F, Cavanagh P: Oculomotor remapping of visual information to foveal retinotopic cortex. Frontiers in Systems Neuroscience 2016, 10:54
  61. Fan X, Wang L, Shao H, Kersten D, He S: Temporally flexible feedback signal to foveal cortex for peripheral object recognition. Proceedings of the National Academy of Sciences 2016, 113:11627–11632
  62. Oletto CM, Contemori G, Cessa R, Battaglini L, Bertamini M: Foveal masking impairs orientation discrimination of peripheral low-level stimuli. Heliyon 2026, 12:e44279
  63. Rucci M, Poletti M: Control and functions of fixational eye movements. Annual Review of Vision Science 2015, 1:499–518
  64. Wurtz RH: Corollary discharge contributions to perceptual continuity across saccades. Annual Review of Vision Science 2018, 4:215–237. •[72] Wang X, Zhang C, Yang L, Jin M, Goldberg ME, Zhang M, Qian N: Perisaccadic and attentional remapping of receptive fields in lateral intraparietal area and frontal eye fields. Cell Reports 2024, 43:113820. Showed that remap...
  65. Cavanagh P, Melcher D: Steerable autoencoders underlying remapping, spatiotopy, and visual stability. PsyArXiv 2026, osfio/preprints/psyarxiv/5cku8 v2
  66. Luck SJ, Vogel EK: Visual working memory capacity: from psychophysics and neurobiology to individual differences. Trends in Cognitive Sciences 2013, 17:391–400
  67. Yates JL, Coop SH, Sarch GH, Wu RJ, Butts DA, Rucci M, Mitchell JF: Detailed characterization of neural selectivity in free viewing primates. Nature Communications 2023, 14:article 3656
  68. Sereno MI, Huang RS: Multisensory maps in parietal cortex. Current Opinion in Neurobiology 2014, 24:39–46
  69. Vater C, Wolfe B, Rosenholtz R: Peripheral vision in real-world tasks: A systematic review. Psychonomic Bulletin & Review 2022, 29:1531–1557
  70. Zhaoping L: Peripheral and central sensation: multisensory orienting and recognition across species. Trends in Cognitive Sciences 2023, 27:539–552