pith. machine review for the scientific record.

arxiv: 2605.08564 · v1 · submitted 2026-05-08 · 💻 cs.AI · cs.CV · cs.LG


Biological Plausibility and Representational Alignment of Feedback Alignment in Convolutional Networks


Pith reviewed 2026-05-12 01:22 UTC · model grok-4.3

classification 💻 cs.AI · cs.CV · cs.LG
keywords feedback alignment · backpropagation · convolutional networks · representational similarity · biological plausibility · CIFAR-10 · learning algorithms

The pith

Modified feedback alignment succeeds in convolutional networks by converging on representations structurally similar to backpropagation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper tests five algorithms, including variants of feedback alignment and standard backpropagation, on the same convolutional architecture using the CIFAR-10 dataset. It finds that the modified feedback alignment versions reach internal representations whose geometry matches backpropagation's despite relying on different weight updates. A sympathetic reader would care because this points to representation alignment, rather than exact biological copying of the update rule, as the source of practical success in training deep networks without backpropagation. The comparison also weighs biological plausibility, interpretability, and computational cost.

Core claim

Modified FA algorithms converge on internal representations that are structurally similar to those produced by backpropagation. In particular, it appears the functional success of modified FA algorithms may be rooted in their ability to mimic the representational geometry of backpropagation, converging on similar representations despite relying on fundamentally different weight update mechanisms.
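The contrast between the two update rules is what makes this claim interesting: feedback alignment replaces the transpose of the forward weights with a fixed random feedback matrix in the backward pass. A minimal sketch on a toy two-layer network (dimensions and activation are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer net: x -> h = tanh(W1 x) -> y = W2 h.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
B2 = rng.normal(0.0, 0.1, (n_out, n_hid))   # fixed random feedback matrix (FA)

def hidden_updates(x, target):
    h = np.tanh(W1 @ x)
    e = (W2 @ h) - target                   # output error (shared by BP and FA)
    # Hidden-layer error signal: BP propagates e through W2.T,
    # FA propagates it through the fixed random B2 instead.
    dW1_bp = np.outer((W2.T @ e) * (1 - h**2), x)
    dW1_fa = np.outer((B2.T @ e) * (1 - h**2), x)
    return dW1_bp, dW1_fa

x, t = rng.normal(size=n_in), rng.normal(size=n_out)
dbp, dfa = hidden_updates(x, t)
print(dbp.shape == dfa.shape)   # True: same shape, but different update directions
```

In the FA literature, the forward weights tend to rotate over training so that the two hidden-layer signals come into partial agreement; the angle metric in Figure 3 tracks that kind of agreement.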

What carries the argument

Structural similarity of learned representations between modified feedback alignment variants and backpropagation.
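The similarity measure named in Figures 5 and 8 is CKA. A minimal linear-CKA sketch (the page does not specify the paper's exact estimator; this is the standard linear variant, with stand-in activation matrices):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices of shape (samples, features)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2   # HSIC cross term
    return cross / (np.linalg.norm(X.T @ X, "fro") *
                    np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(1)
acts_bp = rng.normal(size=(100, 32))              # stand-in "BP" activations
acts_fa = acts_bp @ np.linalg.qr(rng.normal(size=(32, 32)))[0]  # rotated copy
acts_other = rng.normal(size=(100, 32))           # unrelated activations

print(round(linear_cka(acts_bp, acts_bp), 4))     # 1.0: identical representations
print(round(linear_cka(acts_bp, acts_fa), 4))     # 1.0: CKA ignores rotations
print(linear_cka(acts_bp, acts_other) < 0.5)      # True: unrelated -> low score
```

The rotation invariance is why CKA is a natural choice here: it compares representational geometry rather than individual units, which is exactly the level at which the paper claims FA and BP converge.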

If this is right

  • Modified feedback alignment can scale to convolutional networks while retaining more biological plausibility than backpropagation.
  • Functional success depends primarily on reaching aligned representations rather than matching the precise weight update rule.
  • Different learning algorithms can produce equivalent network behavior if their final representational geometry matches.
  • Interpretability and complexity trade-offs can be evaluated by how closely each algorithm's representations align with backpropagation.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Any learning rule that enforces similar representational geometry might achieve comparable results even if it differs from both backpropagation and feedback alignment.
  • This opens the possibility that biological systems achieve effective learning through mechanisms that align representations without using explicit backprop-style error signals.
  • A direct test would involve perturbing representation similarity during training and checking whether performance collapses.
  • The finding connects to broader questions in neuroscience about whether similar computations can arise from dissimilar local rules.

Load-bearing premise

That observed structural similarity in internal representations directly accounts for why modified feedback alignment performs well.

What would settle it

Demonstrating strong performance by a modified feedback alignment algorithm whose learned representations remain dissimilar to backpropagation's on the same task.

Figures

Figures reproduced from arXiv: 2605.08564 by Jake Lance, Larry Kieu.

Figure 1. Biological Plausibility taxonomy of learning …
Figure 3. Angle between the BP gradient and the feedback …
Figure 2. Validation accuracy over 50 epochs of training.
Figure 4. Sign concordance (fraction of elements where …)
Figure 5. CKA similarity between BP and each FA variant, …
Figure 6. Top-activating test images for the three most …
Figure 7. Sign concordance for FC2 and FC3 layers during …
Figure 8. CKA similarity (BP vs. each FA variant) on all …
read the original abstract

The feedback alignment (FA) algorithm offers a biologically plausible alternative to backpropagation (BP) for training neural networks yet notably fails to scale to convolutional architectures. Modifications have been proposed to address this limitation, but at questionable cost to biological plausibility. In this paper, we evaluate five learning algorithms including modified FA and standard BP, applied to the same convolutional architecture with the CIFAR-10 dataset. We provide a tripartite comparative analysis focusing on biological plausibility, interpretability, and computational complexity. Our results indicate that modified FA algorithms converge on internal representations that are structurally similar to those produced by backpropagation. In particular, it appears the functional success of modified FA algorithms may be rooted in their ability to mimic the representational geometry of backpropagation, converging on similar representations despite relying on fundamentally different weight update mechanisms.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript evaluates five learning algorithms (including modified feedback alignment variants and standard backpropagation) on the same convolutional architecture trained on CIFAR-10. It performs a tripartite comparison of biological plausibility, interpretability, and computational complexity, concluding that the empirical success of modified FA stems from convergence to internal representations whose geometry is structurally similar to that produced by BP, despite fundamentally different weight-update rules.

Significance. If the central claim is substantiated with causal evidence, the work would help explain why certain biologically motivated rules scale to convnets and could guide design of more plausible deep-learning algorithms. The explicit focus on representational geometry rather than raw accuracy is a positive framing, but the current correlational results limit immediate theoretical or practical impact.

major comments (2)
  1. [Abstract and Results] The claim that representational similarity is the root mechanism for modified FA success (Abstract) is not supported by any ablation, intervention, or regression analysis. No evidence is given that the degree of similarity predicts accuracy once learning-rate, initialization, and modification-specific update rules are controlled for; similarity could be a byproduct rather than the explanatory factor.
  2. [Methods] Methods details are absent for the five algorithms, the precise modifications to FA, the representational-similarity metrics employed, and any statistical tests or error bars on the reported similarities. Without these, the tripartite analysis cannot be reproduced or evaluated for robustness.
minor comments (1)
  1. Clarify how 'interpretability' is assessed in the tripartite analysis and whether any quantitative metric, beyond qualitative description, is used.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment below, indicating where revisions will be made to improve clarity and reproducibility.

read point-by-point responses
  1. Referee: [Abstract and Results] The claim that representational similarity is the root mechanism for modified FA success (Abstract) is not supported by any ablation, intervention, or regression analysis. No evidence is given that the degree of similarity predicts accuracy once learning-rate, initialization, and modification-specific update rules are controlled for; similarity could be a byproduct rather than the explanatory factor.

    Authors: We agree that the manuscript reports correlational observations of representational alignment rather than causal evidence via ablations, interventions, or controlled regressions. The abstract employs cautious phrasing ('it appears' and 'may be rooted'), but we will revise the abstract, results, and discussion to explicitly characterize the findings as correlational, to avoid implying a demonstrated root mechanism, and to note that similarity could be a byproduct. We will add a limitations paragraph suggesting future causal tests. This is a partial revision focused on textual clarification rather than new experiments. revision: partial

  2. Referee: [Methods] Methods details are absent for the five algorithms, the precise modifications to FA, the representational-similarity metrics employed, and any statistical tests or error bars on the reported similarities. Without these, the tripartite analysis cannot be reproduced or evaluated for robustness.

    Authors: We acknowledge the omission and will fully expand the Methods section in the revision. This will include complete specifications of all five algorithms, the exact modifications applied to feedback alignment, the representational similarity metrics and their computation, and all statistical procedures with error bars, confidence intervals, and tests. These additions will enable reproduction and robustness evaluation. revision: yes

Circularity Check

0 steps flagged

No circularity: purely empirical comparison with no derivations or self-referential reductions

full rationale

The paper performs a direct empirical evaluation of five learning algorithms (including modified FA variants and BP) on the same convolutional architecture trained on CIFAR-10. It reports observed structural similarities in internal representations via comparative analysis of biological plausibility, interpretability, and complexity. No equations, derivations, fitted parameters renamed as predictions, or self-citations appear in the provided text that would reduce any claim to its own inputs by construction. The central observation that modified FA converges on BP-like geometry is presented as an empirical finding rather than a deductive result, rendering the analysis self-contained.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Empirical machine learning study; no explicit free parameters, axioms, or invented entities are introduced beyond standard assumptions of gradient-based training and representation similarity metrics.

pith-pipeline@v0.9.0 · 5434 in / 1101 out tokens · 38987 ms · 2026-05-12T01:22:48.466532+00:00 · methodology

