pith. machine review for the scientific record.

arxiv: 2604.20519 · v1 · submitted 2026-04-22 · ⚛️ physics.optics

Recognition: unknown

Node-reduction through Joint Optimization of Input and Readout Layers in Photonic Reservoir Equalization

Authors on Pith: no claims yet

Pith reviewed 2026-05-09 23:16 UTC · model grok-4.3

classification ⚛️ physics.optics
keywords photonic reservoir computing · optical signal equalization · joint input-output optimization · node reduction · bit error rate · memory extension · IM/DD transmission

The pith

Jointly optimizing input and readout layers halves the node count in photonic reservoirs while improving equalization performance.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines adding trainable input mappings alongside the conventional training of only the output weights in photonic reservoir computing for optical signal equalization. This joint optimization improves bit error rates by more than two orders of magnitude across short- and mid-reach IM/DD links up to 200 km, permits using half as many nodes for equivalent results, and stretches the reservoir's effective memory, yielding more than three orders of magnitude gains on memory-heavy tasks. From 16 nodes onward the method also surpasses complexity-matched feed-forward equalizers and second-order Volterra filters.

Core claim

In photonic reservoir computing for IM/DD transmission equalization up to 200 km at 28 GBd NRZ, jointly optimizing the input mapping with the readout weights delivers more than two orders of magnitude improvement in bit error rate. This approach halves the required reservoir size while preserving performance and extends the reservoir memory to produce over three orders of magnitude better results on memory-intensive tasks. Starting at 16 nodes, the optimized system outperforms both a complexity-matched FFE and a second-order Volterra filter by one to two orders of magnitude.

What carries the argument

The trainable input mapping optimized jointly with the readout layer, which augments the fixed photonic reservoir dynamics to support stronger equalization with fewer nodes.
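To make "jointly optimizing input and readout layers" operationally concrete, here is a software-only echo-state sketch in plain NumPy (an analogy, not the authors' Photontorch model): the readout is trained by ridge regression, and the input layer is additionally "trained" by searching a small grid of input scalings. Since the grid contains the baseline scaling, the jointly selected configuration can only match or improve the readout-only error. The task, reservoir size, and scaling grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 16, 2000                       # reservoir nodes, time steps

# Fixed random reservoir, rescaled to spectral radius 0.9 (echo-state regime)
R = rng.standard_normal((N, N))
R *= 0.9 / max(abs(np.linalg.eigvals(R)))
w_in = rng.standard_normal(N)         # base input mapping (fixed direction)

u = rng.choice([0.0, 1.0], size=T)    # toy NRZ-like bit stream
target = np.roll(u, 2)                # memory task: recover u(t-2)

def run_and_fit(scale):
    """Drive the reservoir with a scaled input mapping, fit the readout by
    ridge regression, and return the normalized mean-squared error."""
    x = np.zeros(N)
    states = np.empty((T, N))
    for t in range(T):
        x = np.tanh(R @ x + scale * w_in * u[t])
        states[t] = x
    w_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(N),
                            states.T @ target)
    return np.mean((states @ w_out - target) ** 2) / np.var(target)

baseline = run_and_fit(1.0)                   # readout-only training
grid = [0.1, 0.3, 1.0, 3.0, 10.0]             # crude input-layer search
joint = min(run_and_fit(s) for s in grid)     # can only match or beat baseline
```

The paper's input layer is a full trainable mapping optimized by gradient descent through the simulated optics; the grid search here is only the simplest stand-in that preserves the key guarantee of the comparison.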

If this is right

  • The reservoir network size can be halved without sacrificing equalization performance.
  • Effective memory length increases, producing over three orders of magnitude better results on memory-intensive equalization tasks.
  • Bit error rate improves by more than two orders of magnitude for short- and mid-reach IM/DD links up to 200 km.
  • From 16 nodes the method exceeds the performance of complexity-matched FFE and second-order Volterra filters by one to two orders of magnitude.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same joint-optimization idea could be tested in non-photonic reservoir systems to check whether node reduction is a general property.
  • Trainable input layers might allow online adaptation to slowly varying optical channels without retraining the full reservoir.
  • Lower node counts could translate directly into reduced optical power and smaller integrated photonic chips for high-speed links.
  • The approach might be extended to longer-haul or higher-baud-rate scenarios where memory effects dominate and current reservoirs struggle.

Load-bearing premise

Optimizing the input mapping leaves the photonic reservoir's stability and fixed internal dynamics unchanged and does not introduce hardware noise or instabilities that would hurt real performance.
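In the discrete-time echo-state picture this premise can be checked directly: the input mapping never enters the recurrent matrix, so the spectral radius (the conventional stability proxy) is untouched by input training. A minimal NumPy illustration of that invariance (an abstraction of the mathematics, not the photonic hardware, where noise and device effects could still matter):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32

# Fixed recurrent matrix, rescaled just below unit spectral radius
R = rng.standard_normal((N, N))
R *= 0.95 / max(abs(np.linalg.eigvals(R)))

w_in_before = rng.standard_normal(N)     # initial input mapping
w_in_after = 7.3 * w_in_before + 0.5     # an arbitrarily "optimized" mapping

# The stability proxy depends on R alone: changing w_in cannot move it,
# so the echo-state property survives input optimization by construction.
rho = max(abs(np.linalg.eigvals(R)))
```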

What would settle it

Running a physical photonic reservoir experiment with joint input-output optimization and observing either no BER improvement over output-only training or clear degradation from added noise or instability.

Figures

Figures reproduced from arXiv: 2604.20519 by Peter Bienstman, Ruben Van Assche, Sarah Masaad.

Figure 1. Optical communications setup. Upper and lower paths respectively show the setups with and without the photonic reservoir; the lower path is used for baseline comparisons. CW: continuous wave. OOK: on-off keying. OSNR: optical signal-to-noise ratio.
Figure 2. Schematic overview of the four-port reservoir computing architecture. An input signal u is transmitted through a trainable input layer with weights w_in and fed into a fixed, random reservoir, here configured in the four-port architecture. The reservoir states are then extracted and recombined into an output y through a trainable output layer with weights w_out.
Figure 3. Median BER versus fiber length at (a) 0 and (b) 15 dBm launch power and 30 dB receiver OSNR for an 8-node jointly optimized reservoir, an 8-node readout-only reservoir, and a 16-node readout-only reservoir. The jointly optimized 8-node variant consistently improves on the 8-node readout-only variant and, over a useful short-to-intermediate-length regime, approaches the BER range of the 16-node readout-only reservoir.
Figure 4. Median BER versus reservoir size. The jointly optimized configurations, particularly IR-all, improve much more rapidly with node count than the readout-only variants, indicating that trainable optical encoding increases the usefulness of additional reservoir nodes in this operating regime.
Figure 5. Median BER versus the number of real-valued trainable degrees of freedom for the same operating point as in …
Original abstract

Photonic reservoir computing is a machine learning paradigm in which a recurrent neural network remains fixed while only the output weights are trained. This makes it a well-suited approach for high-speed signal equalisation in optical communication systems, offering a trainable, low-power, and low-complexity solution. However, achieving strong performance typically requires relatively large network sizes, as learning is confined to the output layer. To address this, we investigate the role of trainable input mappings alongside conventional output weight optimisation. Across a range of short- and mid-reach IM/DD transmission scenarios, reaching up to 200 km for a 28 GBd NRZ signal, improvements of over two orders of magnitude in BER are achieved. This enables halving the network size while maintaining comparable performance. Furthermore, we show that this approach effectively extends the memory of the reservoir, resulting in over three orders of magnitude improvement in memory-intensive tasks. These results also show that starting at 16 nodes a performance of at least one to two magnitudes better than both a complexity matched FFE and a Volterra filter of second order are reached.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper investigates joint optimization of input mappings and readout weights in photonic reservoir computing for IM/DD optical signal equalization. It claims that this allows halving the reservoir network size while maintaining performance, extends memory capacity by over three orders of magnitude in memory-intensive tasks, and achieves 1-2 orders better performance than FFE and second-order Volterra filters starting from 16 nodes, with BER improvements over two orders in scenarios up to 200 km for 28 GBd NRZ signals.

Significance. If the results hold under rigorous verification, this could advance photonic RC by enabling smaller, more efficient equalizers for high-speed optical communications, with reported BER gains and memory extensions that outperform complexity-matched linear and nonlinear filters.

major comments (2)
  1. [Results section (transmission scenarios)] The manuscript reports quantitative BER and memory-capacity gains but provides no details on experimental vs. simulation setup, error bars, data exclusion criteria, or statistical significance testing (e.g., in the results section describing the 28 GBd NRZ scenarios up to 200 km). This is load-bearing for the central claims of two-order BER improvement and halving network size.
  2. [Reservoir model and optimization description] The claim that joint input-readout optimization extends memory by >3 orders while preserving fixed reservoir dynamics requires that the optimized input mapping leaves the echo-state property and stability unchanged. No post-optimization spectral-radius verification, Lyapunov exponent check, or noise-robustness ablation is described, which directly affects whether the reported gains are artifacts of idealized simulation.
minor comments (2)
  1. [Abstract and results] Define 'memory-intensive tasks' explicitly and state how the three-order-of-magnitude memory improvement is computed (e.g., via the standard memory capacity metric).
  2. [Comparison section] Clarify the precise complexity matching used for the FFE and Volterra baselines when claiming 1-2 orders better performance at 16 nodes.
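The "standard memory capacity metric" invoked in the first minor comment is usually Jaeger's definition: the sum over delays k of the squared correlation between u(t−k) and a linear readout trained to reconstruct it. A NumPy sketch of that computation, with reservoir parameters chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, washout = 20, 4000, 100            # nodes, samples, discarded transient

# Fixed reservoir at spectral radius 0.9 with a random input mapping
R = rng.standard_normal((N, N))
R *= 0.9 / max(abs(np.linalg.eigvals(R)))
w_in = rng.standard_normal(N)

u = rng.uniform(-1, 1, size=T)           # i.i.d. input, per Jaeger's definition
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(R @ x + w_in * u[t])
    states[t] = x

def delay_score(k):
    """Squared correlation between u(t-k) and its trained linear reconstruction."""
    X, y = states[washout:], np.roll(u, k)[washout:]
    w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
    return np.corrcoef(X @ w, y)[0, 1] ** 2

MC = sum(delay_score(k) for k in range(1, 2 * N))   # memory capacity estimate
```

Theoretically MC is bounded by the number of nodes N, which is why reporting the metric alongside the claimed three-order-of-magnitude task improvement would let readers separate memory extension from other effects.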

Simulated Author's Rebuttal

2 responses · 0 unresolved

We are grateful to the referee for their detailed and insightful comments, which have helped us identify areas where the manuscript can be improved. Below we provide a point-by-point response to the major comments.

Point-by-point responses
  1. Referee: [Results section (transmission scenarios)] The manuscript reports quantitative BER and memory-capacity gains but provides no details on experimental vs. simulation setup, error bars, data exclusion criteria, or statistical significance testing (e.g., in the results section describing the 28 GBd NRZ scenarios up to 200 km). This is load-bearing for the central claims of two-order BER improvement and halving network size.

    Authors: We agree that the manuscript would benefit from explicit methodological details to support the quantitative claims. All results are obtained via numerical simulation of the IM/DD transmission link. In the revised manuscript we will add a dedicated subsection describing the simulation framework (including the fiber propagation model), the procedure for BER estimation, how variability across realizations is quantified via error bars, confirmation that no data exclusion criteria were applied beyond standard BER computation, and the approach to assessing performance consistency across scenarios. These additions will directly substantiate the reported BER gains and the feasibility of halving the node count. revision: yes

  2. Referee: [Reservoir model and optimization description] The claim that joint input-readout optimization extends memory by >3 orders while preserving fixed reservoir dynamics requires that the optimized input mapping leaves the echo-state property and stability unchanged. No post-optimization spectral-radius verification, Lyapunov exponent check, or noise-robustness ablation is described, which directly affects whether the reported gains are artifacts of idealized simulation.

    Authors: The recurrent reservoir matrix remains completely fixed; only a static input mapping and the readout weights are optimized. We acknowledge that explicit post-optimization checks would strengthen the argument that the echo-state property is unaffected. In the revised manuscript we will add a verification that the spectral radius of the reservoir matrix is unchanged (and remains below unity) after input optimization, together with a noise-robustness ablation that evaluates performance under controlled additive noise at the reservoir input. For discrete-time reservoirs the spectral radius is the conventional metric for the echo-state property, so a full Lyapunov-exponent analysis is not required; we will clarify this rationale in the text. These additions will confirm that the observed memory extension arises from the joint optimization rather than from any alteration of reservoir stability. revision: partial
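The BER-estimation procedure promised in the first response is, in outline, Monte Carlo error counting after hard-decision detection. A toy stand-in using additive Gaussian noise in place of the actual fiber and receiver model (noise level and threshold are assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(3)
n_bits = 100_000

bits = rng.integers(0, 2, size=n_bits)           # transmitted NRZ bits (0/1)
rx = bits + rng.normal(0.0, 0.25, size=n_bits)   # assumed AWGN stand-in channel
decided = (rx > 0.5).astype(int)                 # hard decision at the midpoint
ber = np.mean(decided != bits)                   # Monte Carlo BER estimate
```

The counting itself is trivial; the substance of the referee's request is the surrounding protocol, i.e. how many bits are simulated per point, how realizations are aggregated into the reported medians, and how error bars are formed.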

Circularity Check

0 steps flagged

No circularity: empirical performance metrics are independent measured outcomes.

full rationale

The paper's central claims rest on joint optimization of input mappings and readout weights in a standard photonic reservoir computing setup, with reported BER improvements, node reduction, and memory capacity gains presented as direct simulation and transmission results across IM/DD scenarios. These quantities are not defined by or reduced to the paper's own equations or fitted parameters; the reservoir dynamics are kept fixed while external weights are trained, and performance is evaluated against external benchmarks like FFE and Volterra filters. No self-definitional loops, fitted inputs renamed as predictions, load-bearing self-citations, or ansatz smuggling appear in the derivation. The approach is self-contained against external benchmarks, yielding a normal non-finding of circularity.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The work relies on standard assumptions from reservoir computing and photonic signal processing literature with no new free parameters, axioms, or invented entities introduced in the abstract.

pith-pipeline@v0.9.0 · 5495 in / 1129 out tokens · 38646 ms · 2026-05-09T23:16:49.676381+00:00 · methodology

